{
"cells": [
{
"cell_type": "markdown",
"id": "c26ab996",
"metadata": {},
"source": [
"# Self-RAG\n",
"---\n",
"\n",
"### What is Self-RAG?\n",
"\n",
"Self-RAG reflects on the retrieved documents and generated responses, and includes a self-evaluation process to improve the quality of the generated answers.\n",
"\n",
"Original paper says Self-RAG generates special tokens, termed \"reflection tokens,\" to determine if retrieval would enhance the response, allowing for on-demand retrieval integration. \n",
"But in practice, we can ignore reflection tokens and let LLM decides if each document is relevant or not.\n",
"\n",
"Corrective RAG (CRAG) is similar to Self-RAG, but Self-RAG focuses on self-reflection and self-evaluation, while CRAG focuses on refining the entire retrieval process including web search.\n",
"\n",
"- **Self-RAG**: Trains the LLM to be self-sufficient in managing retrieval and generation processes. By generating reflection tokens, the model controls its behavior during inference, deciding when to retrieve information and how to critique and improve its own responses, leading to more accurate and contextually appropriate outputs. \n",
"- **CRAG**: Focuses on refining the retrieval process by evaluating and correcting the retrieved documents before they are used in generation. It integrates additional retrievals, such as web searches, when initial retrievals are insufficient, ensuring that the generation is based on the most relevant and accurate information available.\n",
"\n",
"**Reference**\n",
"\n",
"- [Self-RAG paper](https://arxiv.org/abs/2310.11511) "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b6458235",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from dotenv import load_dotenv\n",
"from azure_genai_utils.tracer import get_langchain_api_key, set_langsmith\n",
"\n",
"load_dotenv(override=True)\n",
"\n",
"# If you want to trace your RAG API calls, please set the tracing=True. You need to have a valid Langchain API key.\n",
"langchain_key, has_langchain_key = get_langchain_api_key()\n",
"set_langsmith(\"[RAG Innv Lab] 1_Agentic-Design-Pattern\", tracing=False)\n",
"\n",
"azure_openai_chat_deployment_name = os.getenv(\"AZURE_OPENAI_CHAT_DEPLOYMENT_NAME\")\n",
"azure_openai_embedding_deployment_name = os.getenv(\"AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME\")"
]
},
{
"cell_type": "markdown",
"id": "5ec192a9",
"metadata": {},
"source": [
"<br>\n",
"\n",
"## 🧪 Step 1. Test and Construct each module\n",
"---\n",
"\n",
"Before building the entire the graph pipeline, we will test and construct each module separately.\n",
"\n",
"- **Retrieval Grader**\n",
"- **Answer Generator**\n",
"- **Groundedness Evaluator**\n",
"- **Relevance Evaluator**\n",
"- **Question Re-writer**\n",
"\n",
"### Construct Retrieval Chain based on PDF"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4cde195c",
"metadata": {},
"outputs": [],
"source": [
"from azure_genai_utils.rag.pdf import PDFRetrievalChain\n",
"\n",
"pdf_path = \"../../../sample-docs/AutoGen-paper.pdf\"\n",
"\n",
"pdf = PDFRetrievalChain(\n",
" source_uri=[pdf_path],\n",
" loader_type=\"PDFPlumber\",\n",
" model_name=azure_openai_chat_deployment_name,\n",
" embedding_name=azure_openai_embedding_deployment_name,\n",
" chunk_size=500,\n",
" chunk_overlap=50,\n",
").create_chain()\n",
"\n",
"pdf_retriever = pdf.retriever\n",
"pdf_chain = pdf.chain\n",
"\n",
"question = \"What is AutoGen's main features?\"\n",
"docs = pdf_retriever.invoke(question)\n",
"\n",
"# Non-streaming\n",
"# results = pdf_chain.invoke({\"chat_history\": \"\", \"question\": question, \"context\": docs})\n",
"\n",
"# Streaming\n",
"for text in pdf_chain.stream(\n",
" {\"chat_history\": \"\", \"question\": question, \"context\": docs}\n",
"):\n",
" print(text, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "083d39ad",
"metadata": {},
"source": [
"### Define your LLM\n",
"\n",
"This hands-on only uses the `gpt-4o-mini`, but you can utilize multiple models in the pipeline."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18d3a8f8",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import AzureChatOpenAI\n",
"\n",
"llm = AzureChatOpenAI(model=azure_openai_chat_deployment_name, temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "65ec10e5",
"metadata": {},
"source": [
"### Question-Retrieval Grader\n",
"\n",
"Construct a retrieval grader that evaluates the relevance of the retrieved documents to the input question. The retrieval grader should take the input question and the retrieved documents as input and output a relevance score for each document.<br>\n",
"Note that the retrieval grader should be able to handle **multiple documents** as input."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5c85e4c2",
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"\n",
"class GradeDocuments(BaseModel):\n",
" \"\"\"A binary score to determine the relevance of the retrieved documents.\"\"\"\n",
"\n",
" binary_score: str = Field(\n",
" description=\"Documents are relevant to the question, 'yes' or 'no'\"\n",
" )\n",
"\n",
"\n",
"structured_llm_grader = llm.with_structured_output(GradeDocuments)\n",
"\n",
"system = \"\"\"You are a grader assessing relevance of a retrieved document to a user question. \\n \n",
" It does not need to be a stringent test. The goal is to filter out erroneous retrievals. \\n\n",
" If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant. \\n\n",
" Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.\"\"\"\n",
"\n",
"grade_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" (\"human\", \"Retrieved document: \\n\\n {document} \\n\\n User question: {question}\"),\n",
" ]\n",
")\n",
"\n",
"retrieval_grader = grade_prompt | structured_llm_grader"
]
},
{
"cell_type": "markdown",
"id": "a5d56838",
"metadata": {},
"source": [
"Test the retrieval grader. For testing, we only show the result of the a single document, not the entire document set. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2312e7a0",
"metadata": {},
"outputs": [],
"source": [
"question = \"What is AutoGen's main features?\"\n",
"docs = pdf_retriever.invoke(question)\n",
"\n",
"# Extract the page content of the second document retrieved\n",
"doc_txt = docs[1].page_content\n",
"print(retrieval_grader.invoke({\"question\": question, \"document\": doc_txt}))"
]
},
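{
"cell_type": "markdown",
"id": "a3f9c2d1",
"metadata": {},
"source": [
"To grade the entire retrieved document set rather than a single document, you can simply loop over the documents. Below is a minimal sketch that reuses the `retrieval_grader`, `docs`, and `question` defined above; the `grade_documents` node later in this notebook does the same thing:\n",
"\n",
"```python\n",
"# Grade every retrieved document against the question (illustrative sketch)\n",
"for i, d in enumerate(docs):\n",
"    score = retrieval_grader.invoke({\"question\": question, \"document\": d.page_content})\n",
"    print(f\"Document {i}: {score.binary_score}\")\n",
"```"
]
},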
{
"cell_type": "markdown",
"id": "702b7f87",
"metadata": {},
"source": [
"### Answer Generator\n",
"\n",
"Construct a LLM Generation node. This is a Naive RAG chain that generates an answer based on the retrieved documents. \n",
"\n",
"We recommend you to use more advanced RAG chain for production"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d080f695",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import load_prompt\n",
"\n",
"if has_langchain_key:\n",
" print(f\"Load prompt from LangChain Hub.\")\n",
" prompt = hub.pull(\"daekeun-ml/rag-baseline\")\n",
"else:\n",
" print(\"LANGCHAIN_API_KEY is not set. Load prompt from YAML file.\")\n",
" prompt = load_prompt(\"prompts/rag-baseline.yaml\")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(\n",
" [\n",
" f'<document><content>{doc.page_content}</content><source>{doc.metadata[\"source\"]}</source><page>{doc.metadata[\"page\"]+1}</page></document>'\n",
" for doc in docs\n",
" ]\n",
" )\n",
"\n",
"\n",
"rag_chain = prompt | llm | StrOutputParser()\n",
"generation = rag_chain.invoke({\"context\": format_docs(docs), \"question\": question})\n",
"print(generation)"
]
},
{
"cell_type": "markdown",
"id": "41d4050d",
"metadata": {},
"source": [
"### Groundedness Evaluator\n",
"\n",
"Construct a `groundedness_grader` node to evaluate the **hallucination** of the generated answer based on the retrieved documents.<br>\n",
"\n",
"`yes` means the answer is relevant to the retrieved documents, and `no` means the answer is not relevant to the retrieved documents."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2f46b9ae",
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"\n",
"class Groundednesss(BaseModel):\n",
" \"\"\"A binary score indicating whether the generated answer is grounded in the facts.\"\"\"\n",
"\n",
" binary_score: str = Field(\n",
" description=\"Answer is grounded in the facts, 'yes' or 'no'\"\n",
" )\n",
"\n",
"\n",
"structured_llm_grader = llm.with_structured_output(Groundednesss)\n",
"\n",
"system = \"\"\"You are a grader assessing whether an LLM generation is grounded in / supported by a set of retrieved facts. \\n \n",
"Give a binary score 'yes' or 'no'. 'Yes' means that the answer is grounded in / supported by the set of facts.\"\"\"\n",
"\n",
"groundedness_checking_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" (\"human\", \"Set of facts: \\n\\n {documents} \\n\\n LLM generation: {generation}\"),\n",
" ]\n",
")\n",
"groundedness_grader = groundedness_checking_prompt | structured_llm_grader"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fc4f276a",
"metadata": {},
"outputs": [],
"source": [
"groundedness_grader.invoke({\"documents\": format_docs(docs), \"generation\": generation})"
]
},
{
"cell_type": "markdown",
"id": "54c773d2",
"metadata": {},
"source": [
"### Relevance Evaluator\n",
"\n",
"Construct a `relevance_grader` node to evaluate the relevance of the generated answer to the question.<br>\n",
"`yes` means the answer is relevant to the question, and `no` means the answer is not relevant to the question."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "37e10188",
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"\n",
"class GradeAnswer(BaseModel):\n",
" \"\"\"A binary score indicating whether the question is addressed.\"\"\"\n",
"\n",
" binary_score: str = Field(\n",
" description=\"Answer addresses the question, 'yes' or 'no'\"\n",
" )\n",
"\n",
"\n",
"structured_llm_grader = llm.with_structured_output(GradeAnswer)\n",
"\n",
"system = \"\"\"You are a grader assessing whether an answer addresses / resolves a question\n",
"Give a binary score 'yes' or 'no'. Yes' means that the answer resolves the question.\"\"\"\n",
"\n",
"answer_grader_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" (\"human\", \"User question: \\n\\n {question} \\n\\n LLM generation: {generation}\"),\n",
" ]\n",
")\n",
"\n",
"answer_grader = answer_grader_prompt | structured_llm_grader"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "07c593ae",
"metadata": {},
"outputs": [],
"source": [
"answer_grader.invoke({\"question\": question, \"generation\": generation})"
]
},
{
"cell_type": "markdown",
"id": "fd05275c",
"metadata": {},
"source": [
"### Question Re-writer\n",
"\n",
"Construct a `question_rewriter` node to rewrite the question based on the retrieved documents and the generated answer."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "806f6d04",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"system = \"\"\"You a question re-writer that converts an input question to a better version that is optimized\n",
"for vectorstore retrieval. Look at the input and try to reason about the underlying semantic intent / meaning.\"\"\"\n",
"\n",
"re_write_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system),\n",
" (\n",
" \"human\",\n",
" \"Here is the initial question: \\n\\n {question} \\n Formulate an improved question.\",\n",
" ),\n",
" ]\n",
")\n",
"\n",
"question_rewriter = re_write_prompt | llm | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3e681966",
"metadata": {},
"outputs": [],
"source": [
"print(f\"[Original question] {question}\")\n",
"question_rewriter.invoke({\"question\": question})"
]
},
{
"cell_type": "markdown",
"id": "f08d720d",
"metadata": {},
"source": [
"<br>\n",
"\n",
"## 🧪 Step 2. Define the Graph\n",
"---\n",
"\n",
"### State Definition\n",
"\n",
"- `question`: Question from the user\n",
"- `generation`: Generated answer\n",
"- `documents`: Retrieved documents"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "52775711",
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"from typing_extensions import TypedDict, Annotated\n",
"\n",
"\n",
"class GraphState(TypedDict):\n",
" question: Annotated[str, \"Question\"]\n",
" generation: Annotated[str, \"LLM Generation\"]\n",
" documents: Annotated[List[str], \"Retrieved Documents\"]"
]
},
{
"cell_type": "markdown",
"id": "78a0836f",
"metadata": {},
"source": [
"### Define Nodes\n",
"\n",
"We will define the following nodes in the graph:\n",
"\n",
"- `retrieve`: Retrieve documents based on the user question.\n",
"- `grade_documents`: Generate an answer based on the retrieved documents and user question.\n",
"- `generate`: Grade documents based on their relevance to the user question.\n",
"- `rewrite_query`: Rewrite the user question to improve retrieval performance.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4cbd12b6",
"metadata": {},
"outputs": [],
"source": [
"def retrieve(state: GraphState):\n",
" \"\"\"\n",
" Retrieve documents based on the user question.\n",
" \"\"\"\n",
" print(\"==== [RETRIEVE] ====\")\n",
" question = state[\"question\"]\n",
"\n",
" documents = pdf_retriever.invoke(question)\n",
" return {\"documents\": documents}\n",
"\n",
"\n",
"def generate(state: GraphState):\n",
" \"\"\"Generate an answer based on the retrieved documents and user question.\"\"\"\n",
" print(\"==== [GENERATE] ====\")\n",
" question = state[\"question\"]\n",
" documents = state[\"documents\"]\n",
"\n",
" generation = rag_chain.invoke({\"context\": documents, \"question\": question})\n",
" return {\"generation\": generation}\n",
"\n",
"\n",
"def grade_documents(state: GraphState):\n",
" \"\"\"Grade documents based on their relevance to the user question.\"\"\"\n",
" print(\"==== [GRADE DOCUMENTS] ====\")\n",
" question = state[\"question\"]\n",
" documents = state[\"documents\"]\n",
"\n",
" filtered_docs = []\n",
" relevant_doc_count = 0\n",
"\n",
" for d in documents:\n",
" score = retrieval_grader.invoke(\n",
" {\"question\": question, \"document\": d.page_content}\n",
" )\n",
" grade = score.binary_score\n",
" if grade == \"yes\":\n",
" # Add related documents to filtered_docs\n",
" print(\"==== GRADE: DOCUMENT RELEVANT ====\")\n",
" filtered_docs.append(d)\n",
" relevant_doc_count += 1\n",
" else:\n",
" print(\"==== GRADE: DOCUMENT NOT RELEVANT ====\")\n",
" continue\n",
" return {\"documents\": filtered_docs}\n",
"\n",
"\n",
"def rewrite_query(state: GraphState):\n",
" \"\"\"Rewrite the user question to improve retrieval performance.\"\"\"\n",
" print(\"\\n==== [REWRITE QUERY] ====\\n\")\n",
" question = state[\"question\"]\n",
"\n",
" better_question = question_rewriter.invoke({\"question\": question})\n",
" return {\"question\": better_question}"
]
},
{
"cell_type": "markdown",
"id": "1ef3da3f",
"metadata": {},
"source": [
"### Define Conditional Nodes\n",
"\n",
"- `decide_to_generate`: Decide whether to generate an answer based on the retrieved documents. \n",
"- `grade_generation_v_documents_and_question`: Grade the generated answer based on its relevance to the user question and the retrieved documents."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "87d1f724",
"metadata": {},
"outputs": [],
"source": [
"def decide_to_generate(state):\n",
" \"\"\"\n",
" Assess whether to generate an answer based on the relevance of the retrieved documents to the user question\n",
" \"\"\"\n",
" print(\"==== [ASSESS GRADED DOCUMENTS] ====\")\n",
" state[\"question\"]\n",
" filtered_documents = state[\"documents\"]\n",
"\n",
" if not filtered_documents:\n",
" # If all documents are not relevant to the question, rewrite the query\n",
" print(\n",
" \"==== [DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION, REWRITE QUERY] ====\"\n",
" )\n",
" return \"rewrite_query\"\n",
" else:\n",
" # If there are relevant documents, generate an answer\n",
" print(\"==== [DECISION: GENERATE] ====\")\n",
" return \"generate\"\n",
"\n",
"\n",
"def grade_generation_v_documents_and_question(state):\n",
" \"\"\"\n",
" Grade the relevance of the generated answer to the user question and retrieved documents.\n",
" \"\"\"\n",
" print(\"==== [CHECK HALLUCINATIONS] ====\")\n",
" question = state[\"question\"]\n",
" documents = state[\"documents\"]\n",
" generation = state[\"generation\"]\n",
"\n",
" score = groundedness_grader.invoke(\n",
" {\"documents\": documents, \"generation\": generation}\n",
" )\n",
" grade = score.binary_score\n",
"\n",
" # Groundedness check\n",
" if grade == \"yes\":\n",
" print(\"==== [DECISION: GENERATION IS GROUNDED IN DOCUMENTS] ====\")\n",
" print(\"==== [GRADE GENERATION vs QUESTION] ====\")\n",
" score = answer_grader.invoke({\"question\": question, \"generation\": generation})\n",
" grade = score.binary_score\n",
" if grade == \"yes\":\n",
" print(\"==== [DECISION: GENERATION ADDRESSES QUESTION] ====\")\n",
" return \"relevant\"\n",
" else:\n",
" print(\"==== [DECISION: GENERATION DOES NOT ADDRESS QUESTION] ====\")\n",
" return \"not relevant\"\n",
" else:\n",
" print(\"==== [DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY] ====\")\n",
" return \"hallucination\""
]
},
{
"cell_type": "markdown",
"id": "ee9a8d0e",
"metadata": {},
"source": [
"### Construct the Graph"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "204852a3",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.graph import END, StateGraph, START\n",
"from langgraph.checkpoint.memory import MemorySaver\n",
"\n",
"workflow = StateGraph(GraphState)\n",
"\n",
"# Node definition\n",
"workflow.add_node(\"retrieve\", retrieve)\n",
"workflow.add_node(\"grade_documents\", grade_documents)\n",
"workflow.add_node(\"generate\", generate)\n",
"workflow.add_node(\"rewrite_query\", rewrite_query)\n",
"\n",
"# Edge connections\n",
"workflow.add_edge(START, \"retrieve\")\n",
"workflow.add_edge(\"retrieve\", \"grade_documents\")\n",
"workflow.add_conditional_edges(\n",
" \"grade_documents\",\n",
" decide_to_generate,\n",
" {\n",
" \"rewrite_query\": \"rewrite_query\",\n",
" \"generate\": \"generate\",\n",
" },\n",
")\n",
"workflow.add_edge(\"rewrite_query\", \"retrieve\")\n",
"workflow.add_conditional_edges(\n",
" \"generate\",\n",
" grade_generation_v_documents_and_question,\n",
" {\n",
" \"hallucination\": \"generate\",\n",
" \"relevant\": END,\n",
" \"not relevant\": \"rewrite_query\",\n",
" },\n",
")\n",
"\n",
"# Compile the workflow\n",
"app = workflow.compile(checkpointer=MemorySaver())"
]
},
{
"cell_type": "markdown",
"id": "8ffa3bb2",
"metadata": {},
"source": [
"### Visualize the graph"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4cff2c2a",
"metadata": {},
"outputs": [],
"source": [
"from azure_genai_utils.graphs import visualize_langgraph\n",
"\n",
"visualize_langgraph(app, xray=True)"
]
},
{
"cell_type": "markdown",
"id": "2d8c305a",
"metadata": {},
"source": [
"<br>\n",
"\n",
"## 🧪 Step 3. Execute the Graph\n",
"---\n",
"\n",
"### Execute the graph"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bc37cfd2",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableConfig\n",
"from langgraph.errors import GraphRecursionError\n",
"from azure_genai_utils.messages import stream_graph, invoke_graph, random_uuid\n",
"\n",
"config = RunnableConfig(recursion_limit=10, configurable={\"thread_id\": random_uuid()})\n",
"\n",
"inputs = {\n",
" \"question\": \"What is AutoGen's main features?\",\n",
"}\n",
"\n",
"try:\n",
" stream_graph(\n",
" app,\n",
" inputs,\n",
" config,\n",
" [\"retrieve\", \"rewrite_query\", \"grade_documents\", \"generate\"],\n",
" )\n",
"except GraphRecursionError as recursion_error:\n",
" print(f\"GraphRecursionError: {recursion_error}\")"
]
},
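{
"cell_type": "markdown",
"id": "b7d4e8f2",
"metadata": {},
"source": [
"Since the graph was compiled with a `MemorySaver` checkpointer, you can inspect the final state of this thread after the run. A minimal sketch using the `app` and `config` objects defined above:\n",
"\n",
"```python\n",
"# Inspect the final graph state stored by the checkpointer for this thread\n",
"final_state = app.get_state(config)\n",
"print(final_state.values[\"question\"])\n",
"print(final_state.values[\"generation\"])\n",
"```"
]
},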
{
"cell_type": "markdown",
"id": "218f12e3",
"metadata": {},
"source": [
"### Define the Failure Condition\n",
"\n",
"The below execution graph shows a recursive state where the graph keeps generating answers for non-related questions without providing a satisfactory response to the user.<br>\n",
"To prevent this, you can define a web search node that searches for related questions and provides a list of related questions to the user.\n",
"\n",
"Corrective-RAG (CRAG) is a similar approach that focuses on refining the entire retrieval process, including web search, to ensure that the generation is based on the most relevant and accurate information available."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "20b4db9c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableConfig\n",
"from langgraph.errors import GraphRecursionError\n",
"from azure_genai_utils.messages import stream_graph, invoke_graph, random_uuid\n",
"\n",
"config = RunnableConfig(recursion_limit=10, configurable={\"thread_id\": random_uuid()})\n",
"\n",
"inputs = {\n",
" \"question\": \"Who is Daekeun?\",\n",
"}\n",
"\n",
"try:\n",
" stream_graph(\n",
" app,\n",
" inputs,\n",
" config,\n",
" [\"retrieve\", \"rewrite_query\", \"grade_documents\", \"generate\"],\n",
" )\n",
"except GraphRecursionError as recursion_error:\n",
" print(f\"GraphRecursionError: {recursion_error}\")"
]
},
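{
"cell_type": "markdown",
"id": "c9a1f3e7",
"metadata": {},
"source": [
"One possible fallback, in the spirit of Corrective-RAG, is a web search node that supplies additional documents when PDF retrieval keeps failing. The sketch below is illustrative only and is not wired into the graph above: it assumes the `langchain_community` package is installed and a Tavily API key is available in `TAVILY_API_KEY`, and the node and edge names are suggestions.\n",
"\n",
"```python\n",
"# Illustrative sketch of a web-search fallback node (not part of the graph above).\n",
"# Assumes `langchain_community` is installed and TAVILY_API_KEY is set.\n",
"from langchain_community.tools.tavily_search import TavilySearchResults\n",
"from langchain_core.documents import Document\n",
"\n",
"web_search_tool = TavilySearchResults(max_results=3)\n",
"\n",
"\n",
"def web_search(state: GraphState):\n",
"    \"\"\"Fetch additional context from the web when document retrieval is insufficient.\"\"\"\n",
"    print(\"==== [WEB SEARCH] ====\")\n",
"    question = state[\"question\"]\n",
"    results = web_search_tool.invoke({\"query\": question})\n",
"    web_docs = [Document(page_content=r[\"content\"]) for r in results]\n",
"    return {\"documents\": web_docs}\n",
"\n",
"\n",
"# Possible wiring (before compiling): route the fallback into answer generation\n",
"# workflow.add_node(\"web_search\", web_search)\n",
"# workflow.add_edge(\"web_search\", \"generate\")\n",
"```"
]
},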
{
"cell_type": "markdown",
"id": "2cda1a04",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "azureml_py310_sdkv2",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 5
}