sdk/python/foundation-models/nvidia-nim-llama3-8b/openaisdk.ipynb

{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Use OpenAI SDK with Meta Llama 3 NIM - 8B Instruct in Azure AI Foundry and Azure ML\n", "\n", "Use `openai` SDK to consume Meta-llama-3.1-8B NIM deployments in Azure AI Foundry and Azure ML. The Nvidia Meta Llama 3 family of models in Azure AI and Azure ML offers an API compatible with the OpenAI Chat Completion API. It allows customers and users to transition seamlessly from OpenAI models to Meta LLama LLMs. \n", "\n", "The API can be directly used with OpenAI's client libraries or third-party tools, like LangChain or LlamaIndex.\n", "\n", "The example below shows how to make this transition using the OpenAI Python Library. Notice that Llama3 supports both text completions and chat completions API." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prerequisites\n", "\n", "Before we start, there are certain steps we need to take to deploy the models:\n", "\n", "* Register for a valid Azure account with subscription \n", "* Make sure you have access to [Azure AI Studio](https://learn.microsoft.com/en-us/azure/ai-studio/what-is-ai-studio?tabs=home)\n", "* Create a project and resource group\n", "* Select Nvidia NIM: Meta Llama 3.1 -8b Instruct NIM models from Model catalog\n", "\n", "![nim-models.png](nim-models.png)\n", "\n", "Once deployed successfully, you should be assigned for an API endpoint and a security key for inference. \n", "\n", "\n", "\n", "To complete this tutorial, you will need to:\n", "\n", "* Install `openai`:\n", "\n", " ```bash\n", " pip install openai\n", " ```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example\n", "\n", "The following is an example about how to use `openai` with a Meta Llama 3 chat model deployed in Azure AI and Azure ML:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "name": "imports" }, "outputs": [], "source": [ "from openai import OpenAI" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You will need to have a Endpoint url and Authentication Key associated with that endpoint. This can be acquired from previous steps. \n", "To work with `openai`, configure the client as follows:\n", "\n", "- `base_url`: Use the endpoint URL from your deployment. Include `/v1` as part of the URL.\n", "- `api_key`: Use your API key." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "name": "chat_client" }, "outputs": [], "source": [ "client = OpenAI(\n", " base_url=\"https://<endpoint>.<region>.inference.ml.azure.com/v1\", api_key=\"<key>\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use the client to create chat completions requests:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "name": "chat_invoke" }, "outputs": [], "source": [ "response = client.chat.completions.create(\n", " messages=[\n", " {\n", " \"role\": \"user\",\n", " \"content\": \"Who is the most renowned French painter? 
Use the client to create chat completion requests:

```python
response = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Who is the most renowned French painter? Provide a short answer.",
        }
    ],
    model="meta/llama-3.1-8b-instruct",
)
```

The generated text can be accessed as follows:

```python
print(response.choices[0].message.content)
```
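As noted in the introduction, Llama 3 also supports the text completions API. The following is a minimal sketch of that route using the same client, assuming your deployment exposes the legacy `/v1/completions` endpoint under the same model name; verify this against your deployment before relying on it:

```python
# Text completions (legacy route): pass a raw prompt instead of chat messages.
# Assumes the NIM deployment serves /v1/completions for the same model name.
response = client.completions.create(
    model="meta/llama-3.1-8b-instruct",
    prompt="Who is the most renowned French painter? Provide a short answer.",
    max_tokens=64,
)

# Text completion responses carry the output in .text rather than .message.content.
print(response.choices[0].text)
```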