{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Use Mistral in Azure AI and Azure ML\n",
"\n",
"Use the `mistralai` client to consume Mistral deployments in Azure AI and Azure ML. Note that Mistral supports only the chat completions API.\n",
"\n",
"> Review the [documentation](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral) for the Mistral family of models in AI Studio and in ML Studio for details on how to provision inference endpoints, regional availability, pricing, and the inference schema reference."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prerequisites\n",
"\n",
"Before you start, complete the following steps to deploy the models:\n",
"\n",
"* Follow the steps listed in [this](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large#prerequisites) article to set up resources.\n",
"* Go to Azure AI Studio and select the model in the Model Catalog.\n",
"\n",
" > Notice that some models may not be available in all regions in Azure AI and Azure Machine Learning. In such cases, you can create a workspace or project in a region where the model is available and then consume it with a connection from a different one. To learn more about using connections, see [Consume models with connections](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deployments-connections).\n",
"\n",
"* Create a serverless deployment using the steps listed [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large#create-a-new-deployment).\n",
"\n",
"Once the deployment succeeds, you are assigned an API endpoint and a security key for inference.\n",
"\n",
"For more information, consult the official Azure documentation [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large) on model deployment and inference.\n",
"\n",
"To complete this tutorial, you will need to:\n",
"\n",
"* Install `mistralai-azure` (the package that provides the `mistralai_azure` module imported below):\n",
"\n",
" ```bash\n",
" pip install mistralai-azure\n",
" ```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example\n",
"\n",
"The following example shows how to use `mistralai` with a Mistral model deployed in Azure AI and Azure ML:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"name": "imports"
},
"outputs": [],
"source": [
"from mistralai_azure import MistralAzure\n",
"from mistralai_azure.models import ChatCompletionRequest"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use `mistralai`, create a client and configure it as follows:\n",
"\n",
"- `endpoint`: Use the endpoint URL from your deployment. Do not include `/chat/completions`, as it is appended automatically by the client.\n",
"- `api_key`: Use your API key."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"endpoint = os.getenv(\"AZURE_MISTRAL_ENDPOINT\", None)\n",
"api_key = os.getenv(\"AZURE_MISTRAL_API_KEY\", None)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"name": "chat_client"
},
"outputs": [],
"source": [
"client = MistralAzure(azure_endpoint=endpoint, azure_api_key=api_key)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> Tip: Alternatively, you can configure your API key in the environment variable `MISTRAL_API_KEY`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use the client to create chat completion requests:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"name": "chat_invoke"
},
"outputs": [],
"source": [
"chat_response = client.chat.complete(\n",
" messages=[\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": \"Who is the best French painter? Answer in one short sentence.\",\n",
" }\n",
" ],\n",
" max_tokens=50,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The generated text can be accessed as follows:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"name": "chat_response"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Claude Monet.\n"
]
}
],
"source": [
"if chat_response:\n",
" print(chat_response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Mistral models also support the parameter `safe_prompt`. Enabling the safe prompt prepends your messages with the following system prompt:\n",
"\n",
"> Always assist with care, respect, and truth. Respond with utmost utility yet securely. Avoid harmful, unethical, prejudiced, or negative content. Ensure replies promote fairness and positivity."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"name": "chat_invoke_safe"
},
"outputs": [],
"source": [
"chat_response = client.chat.complete(\n",
" messages=[\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": \"Who is the best French painter? Answer in one short sentence.\",\n",
" }\n",
" ],\n",
" max_tokens=50,\n",
" safe_prompt=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The generated text can be accessed as follows:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Claude Monet is considered the best French painter.\n"
]
}
],
"source": [
"if chat_response:\n",
" print(chat_response.choices[0].message.content)"
]
},
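{
"cell_type": "markdown",
"metadata": {},
"source": [
"Responses can also be streamed token by token instead of returned in one piece. The following is a sketch that assumes your SDK version exposes a `chat.stream` method whose events carry a `delta` with the newly generated text:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: stream the completion and print each text delta as it arrives.\n",
"# Assumes `client.chat.stream` is available in your SDK version.\n",
"stream = client.chat.stream(\n",
"    messages=[\n",
"        {\n",
"            \"role\": \"user\",\n",
"            \"content\": \"Who is the best French painter? Answer in one short sentence.\",\n",
"        }\n",
"    ],\n",
"    max_tokens=50,\n",
")\n",
"if stream:\n",
"    for event in stream:\n",
"        delta = event.data.choices[0].delta.content\n",
"        if delta:\n",
"            print(delta, end=\"\")"
]
},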
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example (with image inputs)\n",
"\n",
"The following example shows how to handle multimodal use cases by adding images to the model's text input. This feature is only available with multimodal models such as `mistral-small-2503`. To run the code sample, you will need the following package:\n",
"\n",
"- `httpx`"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"import httpx\n",
"\n",
"url = \"\"  # Add the URL to your mistral-small-2503 deployment here.\n",
"\n",
"headers = {\"Content-Type\": \"application/json\", \"Authorization\": f\"Bearer {api_key}\"}\n",
"payload = {\n",
"    \"model\": \"mistral-small-2503\",\n",
"    \"messages\": [\n",
"        {\n",
"            \"role\": \"user\",\n",
"            \"content\": [\n",
"                {\"type\": \"text\", \"text\": \"Describe this image in a short sentence.\"},\n",
"                {\n",
"                    \"type\": \"image_url\",\n",
"                    \"image_url\": {\"url\": \"https://picsum.photos/id/237/200/300\"},\n",
"                },\n",
"            ],\n",
"        }\n",
"    ],\n",
"}\n",
"\n",
"resp = httpx.post(url=url, json=payload, headers=headers)\n",
"resp.raise_for_status()\n",
"print(resp.json()[\"choices\"][0][\"message\"][\"content\"])"
]
},
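{
"cell_type": "markdown",
"metadata": {},
"source": [
"Local images can also be sent by encoding them as a base64 data URL instead of a public link. The helper below is a sketch (the function name `to_data_url` is illustrative); its result can be used as the value of `\"url\"` in the `\"image_url\"` content part above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"\n",
"def to_data_url(image_bytes: bytes, mime_type: str = \"image/jpeg\") -> str:\n",
"    # Encode raw image bytes as a base64 data URL.\n",
"    encoded = base64.b64encode(image_bytes).decode(\"utf-8\")\n",
"    return f\"data:{mime_type};base64,{encoded}\"\n",
"\n",
"\n",
"# Example: to_data_url(open(\"photo.jpg\", \"rb\").read())"
]
},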
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Additional resources\n",
"\n",
"Here are some additional references:\n",
"\n",
"* [Plan and manage costs (marketplace)](https://learn.microsoft.com/azure/ai-studio/how-to/costs-plan-manage#monitor-costs-for-models-offered-through-the-azure-marketplace)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}