2-notebooks/4-frameworks/3-autogen-personalized-heart-rate.ipynb

{ "cells": [ { "cell_type": "markdown", "id": "title-cell", "metadata": {}, "source": [ "# šŸ‹ļø‍ā™‚ļø Personalized Multi-Agent RAG with Heart Rate Analysis using Azure AI Foundry šŸ„‘ā¤ļø\n", "\n", "Welcome to this advanced workshop where we'll build a personalized multi-agent Retrieval-Augmented Generation (RAG) pipeline using AutoGen 0.4.7 with Azure AI Foundry. Our team of agents will collaborate to provide fitness and health recommendations based on heart rate data analysis!" ] }, { "cell_type": "markdown", "id": "setup-packages-markdown", "metadata": {}, "source": [ "## 0. Install Required Packages\n", "\n", "Let's first install all the required packages for this notebook. This may take a few minutes." ] }, { "cell_type": "code", "execution_count": null, "id": "install-packages-code", "metadata": {}, "outputs": [], "source": [ "# Install required packages\n", "import sys\n", "import subprocess\n", "import importlib.util\n", "\n", "# Define required packages\n", "packages = [\n", " \"autogen==0.4.7\", # This should install autogen_* packages\n", " \"autogen-agentchat==0.4.7\",\n", " \"autogen-core==0.4.7\",\n", " \"autogen-ext==0.4.7\",\n", " \"pandas\",\n", " \"numpy\",\n", " \"matplotlib\",\n", " \"azure-core\",\n", " \"python-dotenv\" # For managing environment variables\n", "]\n", "\n", "def check_package(package_name):\n", " \"\"\"Check if package is installed\"\"\"\n", " package_name = package_name.split('==')[0] # Remove version specifier if present\n", " return importlib.util.find_spec(package_name) is not None\n", "\n", "# Install any missing packages\n", "missing_packages = [pkg for pkg in packages if not check_package(pkg.split('==')[0])]\n", "if missing_packages:\n", " print(f\"Installing missing packages: {', '.join(missing_packages)}\")\n", " for package in missing_packages:\n", " try:\n", " subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", package])\n", " print(f\"āœ… Successfully installed {package}\")\n", " except subprocess.CalledProcessError as e:\n", " print(f\"āŒ Failed to install {package}: {e}\")\n", "else:\n", " print(\"All required packages are already installed.\")\n", "\n", "# Alternative method using pip directly (uncomment if needed)\n", "# !pip install autogen==0.4.7 autogen-agentchat==0.4.7 autogen-core==0.4.7 autogen-ext==0.4.7 pandas numpy matplotlib azure-core python-dotenv\n", "\n", "print(\"\\nāœ… Package installation complete!\")" ] }, { "cell_type": "markdown", "id": "setup-markdown", "metadata": {}, "source": [ "## 1. Setup\n", "\n", "Let's import the necessary libraries and set up our model client using Azure AI Foundry. Make sure your environment variable `GITHUB_TOKEN` is set with your personal access token." 
] }, { "cell_type": "code", "execution_count": null, "id": "setup-code", "metadata": {}, "outputs": [], "source": [ "import os\n", "import asyncio\n", "import pandas as pd\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "from datetime import datetime, timedelta\n", "\n", "# Import AutoGen agents and required modules\n", "from autogen_agentchat.agents import AssistantAgent\n", "from autogen_agentchat.teams import RoundRobinGroupChat\n", "from autogen_agentchat.messages import TextMessage\n", "from autogen_core import CancellationToken\n", "\n", "# Import the Azure AI Foundry model client from AutoGen extensions\n", "from autogen_ext.models.azure import AzureAIChatCompletionClient\n", "from autogen_core.models import UserMessage\n", "from azure.core.credentials import AzureKeyCredential\n", "\n", "# Create the model client using Azure AI Foundry\n", "try:\n", " model_client = AzureAIChatCompletionClient(\n", " model=os.environ[\"MODEL_DEPLOYMENT_NAME\"],\n", " endpoint=\"https://models.inference.ai.azure.com\",\n", " credential=AzureKeyCredential(os.environ[\"GITHUB_TOKEN\"]),\n", " model_info={\n", " \"json_output\": False,\n", " \"function_calling\": False,\n", " \"vision\": False,\n", " \"family\": \"unknown\"\n", " }\n", " )\n", " print(\"āœ… Azure AI Foundry model client created successfully!\")\n", "except Exception as e:\n", " print(f\"āš ļø Error creating Azure AI Foundry model client: {e}\")\n", " print(\"āš ļø Using a placeholder model client for demonstration purposes.\")\n", " # Create a simple placeholder for the model client\n", " class PlaceholderModelClient:\n", " def generate(self, messages, **kwargs):\n", " return [\"This is a placeholder response. Azure AI Foundry client not configured.\"]\n", " model_client = PlaceholderModelClient()" ] }, { "cell_type": "markdown", "id": "health-data-markdown", "metadata": {}, "source": [ "## 2. Create Sample Health Data and Retrieval Tool\n", "\n", "We'll define a small list of health tips and a simple retrieval function. This function simulates retrieving relevant health tips based on keywords in the user's query." 
] }, { "cell_type": "code", "execution_count": null, "id": "health-data-code", "metadata": {}, "outputs": [], "source": [ "# Define sample health tips\n", "health_tips = [\n", " {\"id\": \"tip1\", \"content\": \"Do a 10-minute HIIT workout to boost your metabolism.\", \"source\": \"Fitness Guru\"},\n", " {\"id\": \"tip2\", \"content\": \"Take a brisk 15-minute walk to clear your mind and improve circulation.\", \"source\": \"Health Coach\"},\n", " {\"id\": \"tip3\", \"content\": \"Stretch for 5 minutes every hour if you're sitting at a desk.\", \"source\": \"Wellness Expert\"},\n", " {\"id\": \"tip4\", \"content\": \"Incorporate strength training twice a week for overall fitness.\", \"source\": \"Personal Trainer\"},\n", " {\"id\": \"tip5\", \"content\": \"Drink water regularly to stay hydrated during workouts.\", \"source\": \"Nutritionist\"},\n", " {\"id\": \"tip6\", \"content\": \"For low-intensity recovery days, try yoga or gentle swimming.\", \"source\": \"Recovery Specialist\"},\n", " {\"id\": \"tip7\", \"content\": \"If your heart rate was elevated yesterday, focus on slow, deep breathing exercises today.\", \"source\": \"Breathing Coach\"},\n", " {\"id\": \"tip8\", \"content\": \"Morning walks are ideal when your body is naturally in a recovery state.\", \"source\": \"Circadian Expert\"},\n", " {\"id\": \"tip9\", \"content\": \"Adjust your hydration based on your previous day's heart rate - higher rates mean more water today.\", \"source\": \"Hydration Specialist\"},\n", " {\"id\": \"tip10\", \"content\": \"Schedule high-intensity workouts during your body's natural energy peaks based on heart rate patterns.\", \"source\": \"Performance Coach\"}\n", "]\n", "\n", "def retrieve_tips(query: str) -> str:\n", " \"\"\"Return health tips whose content contains keywords from the query.\"\"\"\n", " query_lower = query.lower()\n", " relevant = []\n", " for tip in health_tips:\n", " # Check if any word in the query is in the tip content\n", " if any(word in tip[\"content\"].lower() for word in query_lower.split()):\n", " relevant.append(f\"Source: {tip['source']} => {tip['content']}\")\n", " if not relevant:\n", " # If no tips match, return all tips (for demo purposes)\n", " relevant = [f\"Source: {tip['source']} => {tip['content']}\" for tip in health_tips[:5]]\n", " return \"\\n\".join(relevant)\n", "\n", "print(\"āœ… Sample health tips and retrieval tool created!\")" ] }, { "cell_type": "markdown", "id": "synthetic-data-markdown", "metadata": {}, "source": [ "## 3. Generate or Import Heart Rate Data\n", "\n", "You can choose to generate synthetic heart rate data for testing purposes or import real data from a CSV file." 
] }, { "cell_type": "code", "execution_count": null, "id": "synthetic-data-code", "metadata": {}, "outputs": [], "source": [ "# Set a seed for reproducibility\n", "np.random.seed(42)\n", "\n", "# Option to use synthetic data or load from CSV\n", "use_synthetic_data = False # Set to False to load from CSV file\n", "\n", "if use_synthetic_data:\n", " # Create date range for yesterday (24 hours with hourly data)\n", " end_time = datetime.now().replace(minute=0, second=0, microsecond=0)\n", " start_time = end_time - timedelta(days=1)\n", " date_range = pd.date_range(start=start_time, end=end_time, freq='H')\n", "\n", " # Create baseline heart rate pattern (higher during day, lower at night)\n", " hours = np.array([(t.hour + 1) for t in date_range])\n", " # Heart rate is lower at night (hours 0-6), rises during day, peaks in afternoon\n", " baseline_hr = 50 + 30 * np.sin(np.pi * (hours - 6) / 12) ** 2\n", "\n", " # Add some random variation\n", " heart_rates = baseline_hr + np.random.normal(0, 5, size=len(date_range))\n", " heart_rates = np.clip(heart_rates, 50, 120).astype(int) # Clip to reasonable heart rate range\n", "\n", " # Create a DataFrame with the synthetic data\n", " df_synthetic = pd.DataFrame({\n", " 'hour': date_range,\n", " 'hr_mean': heart_rates,\n", " 'hr_std': np.random.uniform(2, 8, size=len(date_range)),\n", " 'hr_min': [max(hr - np.random.randint(5, 15), 45) for hr in heart_rates],\n", " 'hr_max': [min(hr + np.random.randint(5, 20), 130) for hr in heart_rates],\n", " 'flow_intensity': np.random.uniform(30, 70, size=len(date_range)),\n", " 'likelihood_calm': np.random.beta(2, 2, size=len(date_range)),\n", " 'likelihood_excited': np.random.beta(1.5, 3, size=len(date_range)),\n", " 'likelihood_frustrated': np.random.beta(1, 4, size=len(date_range))\n", " })\n", "\n", " # Adjust the calm likelihood to be higher when heart rate is lower\n", " df_synthetic['likelihood_calm'] = 1 - (df_synthetic['hr_mean'] - df_synthetic['hr_mean'].min()) / (df_synthetic['hr_mean'].max() - df_synthetic['hr_mean'].min())\n", " df_synthetic['likelihood_calm'] = 0.3 + 0.6 * df_synthetic['likelihood_calm'] # Scale to 0.3-0.9\n", "\n", " # Adjust excited likelihood to be higher when heart rate is higher\n", " df_synthetic['likelihood_excited'] = (df_synthetic['hr_mean'] - df_synthetic['hr_mean'].min()) / (df_synthetic['hr_mean'].max() - df_synthetic['hr_mean'].min())\n", " df_synthetic['likelihood_excited'] = 0.2 + 0.7 * df_synthetic['likelihood_excited'] # Scale to 0.2-0.9\n", "\n", " # Display the synthetic data\n", " print(\"Generated synthetic heart rate data:\")\n", " display(df_synthetic.head())\n", "\n", " # Plot the synthetic heart rate data\n", " plt.figure(figsize=(12, 6))\n", " plt.plot(df_synthetic['hour'], df_synthetic['hr_mean'], label='Mean HR')\n", " plt.fill_between(df_synthetic['hour'], \n", " df_synthetic['hr_mean'] - df_synthetic['hr_std'],\n", " df_synthetic['hr_mean'] + df_synthetic['hr_std'],\n", " alpha=0.2)\n", " plt.title('Synthetic Heart Rate Over Time')\n", " plt.xlabel('Time')\n", " plt.ylabel('Heart Rate (BPM)')\n", " plt.legend()\n", " plt.grid(True, linestyle='--', alpha=0.7)\n", " plt.show()\n", "\n", " # Use this synthetic data as our hourly_data for analysis\n", " hourly_data = df_synthetic.copy()\n", " \n", " print(\"āœ… Synthetic heart rate data generated and visualized!\")\n", "\n", "else:\n", " # Load data from CSV file\n", " try:\n", " csv_file = \"example-data/20250417_health_data.csv\"\n", " print(f\"Loading heart rate data from {csv_file}...\")\n", " \n", " # 
Read the CSV file\n", " df_real = pd.read_csv(csv_file)\n", " \n", " # Convert timestamp column to datetime if it exists\n", " if 'timestamp' in df_real.columns:\n", " timezone_offset = 60*60*5\n", " df_real['hour'] = pd.to_datetime(df_real['timestamp']-timezone_offset, unit='s')\n", " elif 'date' in df_real.columns:\n", " df_real['hour'] = pd.to_datetime(df_real['date'])\n", " else:\n", " # If no timestamp column, create one from current time\n", " end_time = datetime.now().replace(minute=0, second=0, microsecond=0)\n", " start_time = end_time - timedelta(days=1)\n", " df_real['hour'] = pd.date_range(start=start_time, end=end_time, freq='H')[:len(df_real)]\n", " \n", " # Ensure all required columns exist\n", " if 'heart_rate' in df_real.columns:\n", " df_real['hr_mean'] = df_real['heart_rate']\n", " else:\n", " print(\"āš ļø No heart rate column found in CSV. Using random values.\")\n", " df_real['hr_mean'] = np.random.randint(60, 100, size=len(df_real))\n", " \n", " # Calculate or generate other required columns if they don't exist\n", " if 'hr_std' not in df_real.columns:\n", " df_real['hr_std'] = df_real['hr_mean'] * 0.1 # Estimate std as 10% of mean\n", " \n", " if 'hr_min' not in df_real.columns:\n", " df_real['hr_min'] = df_real['hr_mean'] - df_real['hr_std']\n", " \n", " if 'hr_max' not in df_real.columns:\n", " df_real['hr_max'] = df_real['hr_mean'] + df_real['hr_std']\n", " \n", " # Generate mood likelihood columns if they don't exist\n", " for col in ['likelihood_calm', 'likelihood_excited', 'likelihood_frustrated']:\n", " if col not in df_real.columns:\n", " if col == 'likelihood_calm':\n", " # Calm is higher when heart rate is lower\n", " normalized_hr = (df_real['hr_mean'] - df_real['hr_mean'].min()) / (df_real['hr_mean'].max() - df_real['hr_mean'].min())\n", " df_real[col] = 0.3 + 0.6 * (1 - normalized_hr)\n", " elif col == 'likelihood_excited':\n", " # Excited is higher when heart rate is higher\n", " normalized_hr = (df_real['hr_mean'] - df_real['hr_mean'].min()) / (df_real['hr_mean'].max() - df_real['hr_mean'].min())\n", " df_real[col] = 0.2 + 0.7 * normalized_hr\n", " else:\n", " # Frustrated is random for this example\n", " df_real[col] = np.random.beta(1, 4, size=len(df_real))\n", " \n", " if 'flow_intensity' not in df_real.columns:\n", " df_real['flow_intensity'] = np.random.uniform(30, 70, size=len(df_real))\n", " \n", " # Display the loaded data\n", " print(\"Loaded heart rate data from CSV:\")\n", " display(df_real.head())\n", " \n", " # Plot the heart rate data\n", " plt.figure(figsize=(12, 6))\n", " plt.plot(df_real['hour'], df_real['hr_mean'], label='Mean HR')\n", " if 'hr_std' in df_real.columns:\n", " plt.fill_between(df_real['hour'], \n", " df_real['hr_mean'] - df_real['hr_std'],\n", " df_real['hr_mean'] + df_real['hr_std'],\n", " alpha=0.2)\n", " plt.title('Heart Rate Over Time from CSV Data')\n", " plt.xlabel('Time')\n", " plt.ylabel('Heart Rate (BPM)')\n", " plt.legend()\n", " plt.grid(True, linestyle='--', alpha=0.7)\n", " plt.show()\n", " \n", " # Use this loaded data as our hourly_data for analysis\n", " hourly_data = df_real.copy()\n", " \n", " print(\"āœ… Heart rate data loaded from CSV and visualized!\")\n", " \n", " except Exception as e:\n", " print(f\"āŒ Error loading CSV data: {e}\")\n", " print(\"Falling back to synthetic data...\")\n", " \n", " # Fall back to synthetic data generation\n", " end_time = datetime.now().replace(minute=0, second=0, microsecond=0)\n", " start_time = end_time - timedelta(days=1)\n", " date_range = 
pd.date_range(start=start_time, end=end_time, freq='H')\n", " \n", " hours = np.array([(t.hour + 1) for t in date_range])\n", " baseline_hr = 50 + 30 * np.sin(np.pi * (hours - 6) / 12) ** 2\n", " heart_rates = baseline_hr + np.random.normal(0, 5, size=len(date_range))\n", " heart_rates = np.clip(heart_rates, 50, 120).astype(int)\n", " \n", " # Create a DataFrame with synthetic data as fallback\n", " df_synthetic = pd.DataFrame({\n", " 'hour': date_range,\n", " 'hr_mean': heart_rates,\n", " 'hr_std': np.random.uniform(2, 8, size=len(date_range)),\n", " 'hr_min': [max(hr - np.random.randint(5, 15), 45) for hr in heart_rates],\n", " 'hr_max': [min(hr + np.random.randint(5, 20), 130) for hr in heart_rates],\n", " 'flow_intensity': np.random.uniform(30, 70, size=len(date_range)),\n", " 'likelihood_calm': np.random.beta(2, 2, size=len(date_range)),\n", " 'likelihood_excited': np.random.beta(1.5, 3, size=len(date_range)),\n", " 'likelihood_frustrated': np.random.beta(1, 4, size=len(date_range))\n", " })\n", " \n", " # Adjust the calm and excited likelihoods\n", " df_synthetic['likelihood_calm'] = 1 - (df_synthetic['hr_mean'] - df_synthetic['hr_mean'].min()) / (df_synthetic['hr_mean'].max() - df_synthetic['hr_mean'].min())\n", " df_synthetic['likelihood_calm'] = 0.3 + 0.6 * df_synthetic['likelihood_calm']\n", " \n", " df_synthetic['likelihood_excited'] = (df_synthetic['hr_mean'] - df_synthetic['hr_mean'].min()) / (df_synthetic['hr_mean'].max() - df_synthetic['hr_mean'].min())\n", " df_synthetic['likelihood_excited'] = 0.2 + 0.7 * df_synthetic['likelihood_excited']\n", " \n", " hourly_data = df_synthetic.copy()\n", " \n", " print(\"āœ… Fallback to synthetic heart rate data complete!\")" ] }, { "cell_type": "markdown", "id": "heart-strain-analysis-markdown", "metadata": {}, "source": [ "## 4. Analyzing Heart Rate Volatility and Cyclical Patterns\n", "\n", "Now we'll analyze the heart rate data in three ways to understand heart strain and identify patterns:\n", "\n", "1. Volatility analysis to understand heart strain\n", "2. Cyclical pattern analysis to understand daily rhythms\n", "3. Recovery period identification to find optimal rest times" ] }, { "cell_type": "code", "execution_count": null, "id": "heart-strain-analysis", "metadata": {}, "outputs": [], "source": [ "# 1. HEART RATE VOLATILITY ANALYSIS\n", "# Calculate rolling volatility metrics with a 3-hour window (smaller for our synthetic data)\n", "window_size = 3 # 3-hour window\n", "hourly_data['hr_volatility'] = hourly_data['hr_std'].rolling(window=window_size, min_periods=1).mean()\n", "hourly_data['hr_range'] = (hourly_data['hr_max'] - hourly_data['hr_min']).rolling(window=window_size, min_periods=1).mean()\n", "\n", "# Create heart strain index based on volatility and range\n", "hourly_data['strain_index'] = (hourly_data['hr_volatility'] * 0.5 + \n", " hourly_data['hr_range'] * 0.3 + \n", " hourly_data['hr_mean'] * 0.2) / 10\n", "\n", "# Plot heart strain index\n", "plt.figure(figsize=(12, 6))\n", "plt.plot(hourly_data['hour'], hourly_data['strain_index'], 'r-', label='Heart Strain Index')\n", "plt.title('Heart Strain Index Over Time (Higher = More Strain)')\n", "plt.xlabel('Time')\n", "plt.ylabel('Strain Index')\n", "plt.legend()\n", "plt.grid(True, linestyle='--', alpha=0.7)\n", "plt.show()\n", "\n", "# 2. 
CYCLICAL PATTERN ANALYSIS\n", "# Add time features for cyclical analysis\n", "hourly_data['hour_of_day'] = hourly_data['hour'].dt.hour\n", "hourly_data['day_of_week'] = hourly_data['hour'].dt.dayofweek\n", "\n", "# Calculate average heart rate by hour of day\n", "hr_by_hour = hourly_data.groupby('hour_of_day')['hr_mean'].mean().reset_index()\n", "\n", "plt.figure(figsize=(12, 6))\n", "plt.bar(hr_by_hour['hour_of_day'], hr_by_hour['hr_mean'], color='skyblue')\n", "plt.title('Average Heart Rate by Hour of Day')\n", "plt.xlabel('Hour of Day')\n", "plt.ylabel('Average Heart Rate (BPM)')\n", "plt.xticks(range(0, 24, 2))\n", "plt.grid(True, axis='y', linestyle='--', alpha=0.7)\n", "plt.show()\n", "\n", "# 3. RECOVERY PERIODS IDENTIFICATION\n", "# Identify periods of low heart rate (recovery periods)\n", "recovery_threshold = hourly_data['hr_mean'].quantile(0.25) # Bottom 25% as recovery periods\n", "hourly_data['is_recovery'] = hourly_data['hr_mean'] <= recovery_threshold\n", "\n", "# Identify optimal recovery periods (low HR, low volatility, high calmness)\n", "hourly_data['optimal_recovery'] = ((hourly_data['hr_mean'] <= recovery_threshold) & \n", " (hourly_data['hr_volatility'] <= hourly_data['hr_volatility'].quantile(0.3)) &\n", " (hourly_data['likelihood_calm'] >= hourly_data['likelihood_calm'].quantile(0.7)))\n", "\n", "# Show recovery periods\n", "recovery_hours = hourly_data[hourly_data['optimal_recovery'] == True][['hour', 'hr_mean', 'hr_volatility', 'likelihood_calm']]\n", "print(\"\\nOptimal Recovery Periods Identified:\")\n", "display(recovery_hours)\n", "\n", "# Visualize recovery periods\n", "plt.figure(figsize=(12, 6))\n", "plt.scatter(hourly_data['hour'], hourly_data['hr_mean'], \n", " c=hourly_data['optimal_recovery'].map({True: 'green', False: 'gray'}),\n", " s=100, alpha=0.7)\n", "plt.title('Heart Rate with Optimal Recovery Periods Highlighted')\n", "plt.xlabel('Time')\n", "plt.ylabel('Heart Rate (BPM)')\n", "plt.grid(True, linestyle='--', alpha=0.5)\n", "plt.show()\n", "\n", "# Generate heart rate analysis summary\n", "print(\"\\nHeart Rate Analysis Summary:\")\n", "print(f\"Average Heart Rate: {hourly_data['hr_mean'].mean():.1f} BPM\")\n", "print(f\"Average Heart Rate Volatility: {hourly_data['hr_volatility'].mean():.2f}\")\n", "print(f\"Peak Heart Strain Period: {hourly_data.loc[hourly_data['strain_index'].idxmax(), 'hour']}\")\n", "print(f\"Optimal Recovery Hours: {', '.join([str(h.hour) + ':00' for h in recovery_hours['hour']])}\")\n", "\n", "print(\"āœ… Heart rate analysis complete!\")" ] }, { "cell_type": "markdown", "id": "phase-analysis-markdown", "metadata": {}, "source": [ "## 4.1. Analyzing Heart Rate Phase Patterns\n", "\n", "Now we'll identify in-phase and out-of-phase periods based on heart rate cyclical patterns. This can help identify when a person's heart rate is aligned with their expected daily rhythm versus when it deviates from the pattern." 
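, "\n", "\n", "A reading counts as in-phase when it falls within 10% of the expected value for that hour of day. For example, if the expected heart rate at 3 PM is 80 BPM, the in-phase band is 72-88 BPM, and a reading of 95 BPM would be flagged as out-of-phase."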
] }, { "cell_type": "code", "execution_count": null, "id": "phase-analysis-code", "metadata": {}, "outputs": [], "source": [ "# HEART RATE PHASE PATTERN ANALYSIS\n", "# Calculate the expected heart rate pattern based on time of day\n", "# First, create a model of expected heart rate by hour\n", "expected_hr_by_hour = hr_by_hour.copy() # Using the hourly averages we calculated earlier\n", "\n", "# Create a lookup dictionary for expected HR by hour\n", "expected_hr_dict = dict(zip(expected_hr_by_hour['hour_of_day'], expected_hr_by_hour['hr_mean']))\n", "\n", "# Calculate the expected heart rate for each time point\n", "hourly_data['expected_hr'] = hourly_data['hour_of_day'].map(expected_hr_dict)\n", "\n", "# Calculate the deviation from expected pattern\n", "hourly_data['hr_deviation'] = hourly_data['hr_mean'] - hourly_data['expected_hr']\n", "\n", "# Define phase status\n", "# In-phase: deviation is within 10% of expected HR\n", "# Out-of-phase: deviation is greater than 10% of expected HR\n", "threshold_percent = 0.10\n", "hourly_data['threshold'] = hourly_data['expected_hr'] * threshold_percent\n", "hourly_data['phase_status'] = 'in-phase'\n", "hourly_data.loc[hourly_data['hr_deviation'].abs() > hourly_data['threshold'], 'phase_status'] = 'out-of-phase'\n", "\n", "# Calculate percentage of time in each phase\n", "phase_counts = hourly_data['phase_status'].value_counts(normalize=True) * 100\n", "print(\"Percentage of time in each phase:\")\n", "for phase, percentage in phase_counts.items():\n", " print(f\"{phase}: {percentage:.1f}%\")\n", "\n", "# Identify longest continuous out-of-phase period\n", "hourly_data['phase_change'] = hourly_data['phase_status'].ne(hourly_data['phase_status'].shift()).cumsum()\n", "phase_groups = hourly_data.groupby(['phase_status', 'phase_change'])\n", "\n", "out_of_phase_periods = []\n", "for (status, _), group in phase_groups:\n", " if status == 'out-of-phase' and len(group) >= 2: # At least 2 hours\n", " start_time = group['hour'].min()\n", " end_time = group['hour'].max()\n", " duration = len(group)\n", " out_of_phase_periods.append((start_time, end_time, duration))\n", "\n", "if out_of_phase_periods:\n", " # Sort by duration (longest first)\n", " out_of_phase_periods.sort(key=lambda x: x[2], reverse=True)\n", " longest_period = out_of_phase_periods[0]\n", " print(f\"\\nLongest out-of-phase period: {longest_period[2]}\")\n", " print(f\"From {longest_period[0]} to {longest_period[1]}\")\n", "else:\n", " print(\"\\nNo continuous out-of-phase periods found.\")\n", "\n", "# Visualize phases and deviations\n", "fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 10), sharex=True)\n", "\n", "# Plot 1: Actual vs Expected HR with phase coloring\n", "colors = {'in-phase': 'green', 'out-of-phase': 'red'}\n", "for phase in ['in-phase', 'out-of-phase']:\n", " phase_data = hourly_data[hourly_data['phase_status'] == phase]\n", " ax1.scatter(phase_data['hour'], phase_data['hr_mean'], \n", " color=colors[phase], label=phase, alpha=0.7, s=50)\n", "\n", "ax1.plot(hourly_data['hour'], hourly_data['expected_hr'], 'b--', label='Expected HR Pattern', linewidth=2)\n", "ax1.set_title('Heart Rate: Actual vs Expected Pattern with Phase Status')\n", "ax1.set_ylabel('Heart Rate (BPM)')\n", "ax1.legend()\n", "ax1.grid(True, linestyle='--', alpha=0.5)\n", "\n", "# Plot 2: HR Deviation with threshold bands\n", "ax2.axhline(y=0, color='k', linestyle='-', alpha=0.3)\n", "ax2.fill_between(hourly_data['hour'], \n", " -hourly_data['threshold'], \n", " hourly_data['threshold'], \n", " 
color='green', alpha=0.2, label='In-phase threshold')\n", "\n", "# Plot deviation line colored by phase status\n", "for phase in ['in-phase', 'out-of-phase']:\n", " phase_data = hourly_data[hourly_data['phase_status'] == phase]\n", " ax2.plot(phase_data['hour'], phase_data['hr_deviation'], \n", " color=colors[phase], marker='o', linestyle='-', alpha=0.7)\n", "\n", "ax2.set_title('Heart Rate Deviation from Expected Pattern')\n", "ax2.set_xlabel('Time')\n", "ax2.set_ylabel('Deviation (BPM)')\n", "ax2.legend()\n", "ax2.grid(True, linestyle='--', alpha=0.5)\n", "\n", "plt.tight_layout()\n", "plt.show()\n", "\n", "# Update our personalization function to include phase information\n", "hourly_data['phase_status'] = hourly_data['phase_status']\n", "\n", "print(\"\\nāœ… Heart rate phase analysis complete!\")" ] }, { "cell_type": "markdown", "id": "heart-strain-indicator-markdown", "metadata": {}, "source": [ "## 4.2. Real-time Heart Strain Likelihood Indicator\n", "\n", "Now let's create a function that estimates heart strain likelihood based on a user's message sentiment and their most recent heart rate data. This can provide real-time guidance on potential heart strain risks." ] }, { "cell_type": "code", "execution_count": null, "id": "heart-strain-indicator-code", "metadata": {}, "outputs": [], "source": [ "def analyze_message_sentiment(message):\n", " \"\"\"Simple sentiment analysis for user messages to detect potential stress indicators.\"\"\"\n", " # List of words that might indicate stress or anxiety\n", " stress_words = ['stress', 'stressed', 'anxious', 'worried', 'tired', 'exhausted', \n", " 'overwhelmed', 'panic', 'afraid', 'fear', 'tension', 'pressure',\n", " 'difficult', 'hard', 'problem', 'trouble', 'issue', 'concerned']\n", " \n", " # Count stress indicators in message\n", " message_lower = message.lower()\n", " stress_indicators = sum(word in message_lower for word in stress_words)\n", " \n", " # Normalize to a 0-1 scale, with some dampening to avoid extremes\n", " sentiment_score = min(0.8, stress_indicators * 0.2) if stress_indicators > 0 else 0\n", " \n", " # Look for explicit mentions of feeling good/relaxed\n", " positive_words = ['relaxed', 'calm', 'happy', 'great', 'good', 'fine', 'well', 'rested']\n", " positive_indicators = sum(word in message_lower for word in positive_words)\n", " \n", " # Reduce score if positive indicators are present\n", " if positive_indicators > 0:\n", " sentiment_score = max(0, sentiment_score - 0.3 * positive_indicators)\n", " \n", " # Return sentiment score where higher is more stressed\n", " return sentiment_score\n", "\n", "def get_latest_hr_data(minutes=5):\n", " \"\"\"Get the latest heart rate data for the specified number of minutes.\n", " \n", " For the purpose of this demo, we'll randomly select data points from our existing dataset\n", " to simulate real-time data collection from a wearable device.\n", " \"\"\"\n", " # In a real implementation, this would fetch actual real-time data from a wearable device API\n", " # For demonstration, we'll sample from our existing data\n", " sample_size = min(minutes, len(hourly_data))\n", " latest_data = hourly_data.sample(sample_size).copy()\n", " \n", " # Add some randomness to simulate recent fluctuations\n", " latest_data['hr_mean'] = latest_data['hr_mean'] + np.random.normal(0, 3, size=len(latest_data))\n", " latest_data['hr_volatility'] = latest_data['hr_volatility'] * (1 + np.random.uniform(-0.1, 0.1, size=len(latest_data)))\n", " \n", " return latest_data\n", "\n", "def 
estimate_heart_strain_likelihood(message=\"\", recent_minutes=5):\n", " \"\"\"Estimate heart strain likelihood based on recent HR data and user message.\"\"\"\n", " # Get the latest heart rate data\n", " latest_data = get_latest_hr_data(minutes=recent_minutes)\n", " \n", " # Extract heart rate metrics\n", " current_hr = latest_data['hr_mean'].mean()\n", " current_volatility = latest_data['hr_volatility'].mean()\n", " is_out_of_phase = (latest_data['phase_status'] == 'out-of-phase').mean() > 0.5\n", " \n", " # Analyze the message sentiment (0-1 scale, higher means more stressed)\n", " sentiment_score = analyze_message_sentiment(message) if message else 0.0\n", " \n", " # Calculate heart strain factors\n", " hr_factor = (current_hr - hourly_data['hr_mean'].min()) / (hourly_data['hr_mean'].max() - hourly_data['hr_mean'].min())\n", " volatility_factor = (current_volatility - hourly_data['hr_volatility'].min()) / (hourly_data['hr_volatility'].max() - hourly_data['hr_volatility'].min())\n", " phase_factor = 0.2 if is_out_of_phase else 0.0\n", " \n", " # Combine factors (weighted average)\n", " strain_likelihood = (0.4 * hr_factor + \n", " 0.3 * volatility_factor + \n", " 0.2 * sentiment_score + \n", " 0.1 * phase_factor)\n", " \n", " # Scale to 0-100%\n", " strain_percent = min(100, max(0, round(strain_likelihood * 100)))\n", " \n", " # Determine strain level category\n", " if strain_percent < 30:\n", " level = \"Low\"\n", " color = \"green\"\n", " elif strain_percent < 60:\n", " level = \"Moderate\"\n", " color = \"orange\"\n", " else:\n", " level = \"High\"\n", " color = \"red\"\n", " \n", " return {\n", " \"strain_likelihood\": strain_percent,\n", " \"strain_level\": level,\n", " \"color\": color,\n", " \"current_hr\": current_hr,\n", " \"current_volatility\": current_volatility,\n", " \"is_out_of_phase\": is_out_of_phase,\n", " \"message_sentiment\": sentiment_score\n", " }\n", "\n", "def display_strain_indicator(result):\n", " \"\"\"Display a visual indicator of heart strain likelihood.\"\"\"\n", " # Create a visualization of the strain likelihood\n", " fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4), gridspec_kw={'width_ratios': [1, 2]})\n", " \n", " # Gauge-style visualization\n", " strain = result[\"strain_likelihood\"]\n", " color = result[\"color\"]\n", " \n", " # Create a half-circle gauge\n", " theta = np.linspace(0, np.pi, 100)\n", " r = 1.0\n", " x = r * np.cos(theta)\n", " y = r * np.sin(theta)\n", " \n", " # Draw the gauge background\n", " ax1.plot(x, y, 'k-', linewidth=2)\n", " ax1.fill_between(x, 0, y, color='lightgray', alpha=0.3)\n", " \n", " # Calculate the angle for the needle based on strain\n", " needle_angle = np.pi * strain / 100\n", " needle_x = [0, r * np.cos(needle_angle)]\n", " needle_y = [0, r * np.sin(needle_angle)]\n", " \n", " # Draw the needle\n", " ax1.plot(needle_x, needle_y, color=color, linewidth=3)\n", " \n", " # Add labels\n", " ax1.text(-1.1, -0.15, \"Low\", fontsize=12, ha='left')\n", " ax1.text(0, -0.15, \"Moderate\", fontsize=12, ha='center')\n", " ax1.text(1.1, -0.15, \"High\", fontsize=12, ha='right')\n", " ax1.text(0, 0.5, f\"{strain}%\", fontsize=18, ha='center', weight='bold', color=color)\n", " ax1.text(0, 0.3, f\"{result['strain_level']} Strain Likelihood\", fontsize=14, ha='center')\n", " \n", " # Adjust the aspect ratio and remove axes\n", " ax1.set_aspect('equal')\n", " ax1.axis('off')\n", " \n", " # Plot the contributing factors as a horizontal bar chart\n", " factors = {\n", " 'Heart Rate': result['current_hr'],\n", " 'HR Volatility': 
result['current_volatility'],\n", " 'Message Sentiment': result['message_sentiment'] * 10, # Scale up for visibility\n", " 'Phase Status': 8 if result['is_out_of_phase'] else 2 # Binary factor\n", " }\n", " \n", " y_pos = np.arange(len(factors))\n", " factor_values = list(factors.values())\n", " \n", " ax2.barh(y_pos, factor_values, color=['skyblue', 'lightgreen', 'plum', 'khaki'])\n", " ax2.set_yticks(y_pos)\n", " ax2.set_yticklabels(factors.keys())\n", " ax2.set_title('Contributing Factors')\n", " ax2.grid(True, linestyle='--', alpha=0.5, axis='x')\n", " \n", " plt.tight_layout()\n", " plt.show()\n", " \n", " # Print recommendations based on strain level\n", " print(f\"\\nHeart Strain Likelihood: {strain}% ({result['strain_level']})\")\n", " print(f\"Current Heart Rate: {result['current_hr']:.1f} BPM\")\n", " \n", " if result['strain_level'] == \"High\":\n", " print(\"\\nRecommendation: Consider taking a break. Practice deep breathing and hydrate.\")\n", " elif result['strain_level'] == \"Moderate\":\n", " print(\"\\nRecommendation: Monitor your heart rate. Take a short break if you can.\")\n", " else:\n", " print(\"\\nRecommendation: Your heart strain is low. Continue your activities as normal.\")\n", "\n", "# Interactive input for testing the heart strain indicator\n", "def test_heart_strain_indicator():\n", " user_message = input(\"How are you feeling right now? \")\n", " result = estimate_heart_strain_likelihood(message=user_message, recent_minutes=5)\n", " display_strain_indicator(result)\n", " return result\n", "\n", "# Test with a sample message\n", "sample_result = estimate_heart_strain_likelihood(message=\"I'm feeling a bit stressed about this project deadline\", recent_minutes=5)\n", "display_strain_indicator(sample_result)\n", "\n", "# Uncomment the line below to test with your own input\n", "# test_heart_strain_indicator()\n", "\n", "print(\"\\nāœ… Heart strain indicator created!\")" ] }, { "cell_type": "markdown", "id": "nutrition-recommendations-markdown", "metadata": {}, "source": [ "## 4.3. Personalized Nutrition Recommendations Based on Heart Rate Data\n", "\n", "Let's create a system that provides personalized food recommendations based on the user's heart rate data, recent meals, and desired mood states. This integrates with nutrition data that could be pulled from apps like Alma." 
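, "\n", "\n", "For example, the query \"What should I eat to stay focused tonight?\" maps to the *focused* mood and the dinner slot, the meal's calorie target is capped by the calories remaining for the day, and the food list is blended with calming or recovery options whenever the recent heart rate looks elevated or the body appears to be in a recovery state."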
] }, { "cell_type": "code", "execution_count": null, "id": "nutrition-recommendations-code", "metadata": {}, "outputs": [], "source": [ "# Simulated nutrition database for demonstration purposes\n", "# In a real implementation, this would connect to an API like Alma's\n", "# reference a real nutrition database or use a tool such as https://www.alma.food/\n", "nutrition_database = {\n", " \"excited\": {\n", " \"foods\": [\"dark chocolate\", \"banana\", \"coffee\", \"green tea\", \"berries\", \"nuts\", \"oatmeal with fruit\", \n", " \"greek yogurt with honey\", \"smoothie with spinach and berries\", \"eggs with avocado\", \n", " \"quinoa bowl\", \"chia pudding\", \"kombucha\"],\n", " \"nutrients\": [\"vitamin B complex\", \"magnesium\", \"caffeine (moderate)\", \"complex carbohydrates\", \"protein\"],\n", " \"avoid\": [\"heavy, fatty meals\", \"simple sugars\", \"excessive caffeine\", \"processed foods\"]\n", " },\n", " \"calm\": {\n", " \"foods\": [\"chamomile tea\", \"warm milk\", \"turkey\", \"bananas\", \"oatmeal\", \"almonds\", \"fatty fish\", \n", " \"dark leafy greens\", \"lentil soup\", \"sweet potatoes\", \"whole grain pasta\", \"kiwi\", \"cherries\"],\n", " \"nutrients\": [\"tryptophan\", \"magnesium\", \"vitamin B6\", \"omega-3 fatty acids\", \"melatonin\"],\n", " \"avoid\": [\"caffeine\", \"alcohol\", \"spicy foods\", \"high sugar foods\", \"processed snacks\"]\n", " },\n", " \"focused\": {\n", " \"foods\": [\"blueberries\", \"fatty fish\", \"dark chocolate\", \"nuts\", \"avocados\", \"leafy greens\", \n", " \"eggs\", \"broccoli\", \"pumpkin seeds\", \"green tea\", \"coffee\", \"water\", \"beet juice\"],\n", " \"nutrients\": [\"omega-3\", \"antioxidants\", \"choline\", \"vitamin E\", \"polyphenols\", \"moderate caffeine\"],\n", " \"avoid\": [\"high glycemic carbs\", \"excessive sugar\", \"dehydration\", \"very heavy meals\"]\n", " },\n", " \"energized\": {\n", " \"foods\": [\"oatmeal\", \"quinoa\", \"sweet potatoes\", \"bananas\", \"apples\", \"oranges\", \"greek yogurt\", \n", " \"lentils\", \"chickpeas\", \"lean protein\", \"beets\", \"pomegranate\", \"nut butter\", \"honey\"],\n", " \"nutrients\": [\"complex carbohydrates\", \"B vitamins\", \"iron\", \"potassium\", \"protein\", \"fiber\"],\n", " \"avoid\": [\"simple sugars\", \"processed foods\", \"excessive fat\", \"dehydration\"]\n", " },\n", " \"recovery\": {\n", " \"foods\": [\"salmon\", \"tart cherry juice\", \"watermelon\", \"sweet potatoes\", \"eggs\", \"leafy greens\", \n", " \"turmeric (with black pepper)\", \"ginger\", \"berries\", \"pomegranate\", \"greek yogurt\"],\n", " \"nutrients\": [\"omega-3 fatty acids\", \"antioxidants\", \"protein\", \"vitamin C\", \"potassium\", \"magnesium\"],\n", " \"avoid\": [\"alcohol\", \"excessive caffeine\", \"processed foods\", \"fried foods\"]\n", " }\n", "}\n", "\n", "# Sample meal database with calorie and macronutrient information\n", "meal_database = {\n", " \"oatmeal with berries\": {\"calories\": 300, \"protein\": 10, \"carbs\": 50, \"fat\": 6, \"mood\": [\"excited\", \"energized\"]},\n", " \"greek yogurt with honey\": {\"calories\": 250, \"protein\": 20, \"carbs\": 30, \"fat\": 5, \"mood\": [\"excited\", \"focused\"]},\n", " \"salmon with sweet potatoes\": {\"calories\": 450, \"protein\": 35, \"carbs\": 40, \"fat\": 15, \"mood\": [\"recovery\", \"focused\"]},\n", " \"chicken salad\": {\"calories\": 350, \"protein\": 30, \"carbs\": 15, \"fat\": 20, \"mood\": [\"energized\", \"focused\"]},\n", " \"lentil soup\": {\"calories\": 300, \"protein\": 15, \"carbs\": 45, \"fat\": 5, 
\"mood\": [\"calm\", \"recovery\"]},\n", " \"avocado toast\": {\"calories\": 350, \"protein\": 12, \"carbs\": 30, \"fat\": 20, \"mood\": [\"focused\", \"energized\"]},\n", " \"smoothie with spinach and berries\": {\"calories\": 200, \"protein\": 5, \"carbs\": 35, \"fat\": 2, \"mood\": [\"excited\", \"energized\"]},\n", " \"eggs with avocado\": {\"calories\": 400, \"protein\": 20, \"carbs\": 10, \"fat\": 30, \"mood\": [\"focused\", \"energized\"]},\n", " \"dark chocolate\": {\"calories\": 150, \"protein\": 2, \"carbs\": 15, \"fat\": 10, \"mood\": [\"excited\", \"focused\"]},\n", " \"chamomile tea\": {\"calories\": 0, \"protein\": 0, \"carbs\": 0, \"fat\": 0, \"mood\": [\"calm\"]},\n", " \"grilled chicken sandwich\": {\"calories\": 450, \"protein\": 35, \"carbs\": 40, \"fat\": 15, \"mood\": [\"energized\", \"recovery\"]},\n", " \"pasta with tomato sauce\": {\"calories\": 400, \"protein\": 12, \"carbs\": 70, \"fat\": 8, \"mood\": [\"calm\", \"energized\"]},\n", " \"steak with vegetables\": {\"calories\": 550, \"protein\": 40, \"carbs\": 20, \"fat\": 35, \"mood\": [\"recovery\", \"energized\"]},\n", " \"fruit salad\": {\"calories\": 150, \"protein\": 2, \"carbs\": 35, \"fat\": 0, \"mood\": [\"excited\", \"energized\"]},\n", " \"quinoa bowl\": {\"calories\": 450, \"protein\": 15, \"carbs\": 65, \"fat\": 12, \"mood\": [\"excited\", \"focused\", \"energized\"]}\n", "}\n", "\n", "# User's last meal tracking\n", "user_last_meal = {\n", " \"meal\": \"chicken salad\",\n", " \"time\": datetime.now() - timedelta(hours=4), # Assume eaten 4 hours ago\n", " \"calories\": 350,\n", " \"protein\": 30,\n", " \"carbs\": 15,\n", " \"fat\": 20\n", "}\n", "\n", "# Daily calorie targets (would be personalized in a real app)\n", "user_calorie_targets = {\n", " \"daily_total\": 2000,\n", " \"breakfast\": 500,\n", " \"lunch\": 600,\n", " \"dinner\": 600,\n", " \"snacks\": 300\n", "}\n", "\n", "def get_mood_from_phrase(phrase):\n", " \"\"\"Extract the desired mood from a user's query phrase.\"\"\"\n", " mood_keywords = {\n", " \"excited\": [\"excited\", \"energetic\", \"upbeat\", \"lively\", \"enthusiastic\", \"pumped\", \"motivated\"],\n", " \"calm\": [\"calm\", \"relaxed\", \"peaceful\", \"tranquil\", \"serene\", \"chill\", \"soothing\", \"stress-free\"],\n", " \"focused\": [\"focused\", \"concentrate\", \"attentive\", \"alert\", \"productive\", \"sharp\", \"clarity\"],\n", " \"energized\": [\"energized\", \"active\", \"vibrant\", \"vigor\", \"stamina\", \"strength\", \"power\"],\n", " \"recovery\": [\"recovery\", \"rest\", \"restore\", \"replenish\", \"rejuvenate\", \"healing\", \"recuperate\"]\n", " }\n", " \n", " phrase_lower = phrase.lower()\n", " \n", " # Check for mood keywords in the phrase\n", " for mood, keywords in mood_keywords.items():\n", " if any(keyword in phrase_lower for keyword in keywords):\n", " return mood\n", " \n", " # Default to energized if no mood is detected\n", " return \"energized\"\n", "\n", "def get_meal_time_from_phrase(phrase):\n", " \"\"\"Extract the meal time from a user's query phrase.\"\"\"\n", " phrase_lower = phrase.lower()\n", " \n", " if \"breakfast\" in phrase_lower or \"morning\" in phrase_lower:\n", " return \"breakfast\"\n", " elif \"lunch\" in phrase_lower or \"noon\" in phrase_lower or \"midday\" in phrase_lower:\n", " return \"lunch\"\n", " elif \"dinner\" in phrase_lower or \"evening\" in phrase_lower or \"night\" in phrase_lower:\n", " return \"dinner\"\n", " elif \"snack\" in phrase_lower:\n", " return \"snack\"\n", " \n", " # Check for time of day references\n", " 
if \"afternoon\" in phrase_lower:\n", " current_hour = datetime.now().hour\n", " return \"lunch\" if current_hour < 14 else \"snack\"\n", " \n", " # Default to the next upcoming meal based on current time\n", " current_hour = datetime.now().hour\n", " if current_hour < 10:\n", " return \"breakfast\"\n", " elif current_hour < 14:\n", " return \"lunch\"\n", " elif current_hour < 18:\n", " return \"snack\"\n", " else:\n", " return \"dinner\"\n", "\n", "def get_remaining_calories():\n", " \"\"\"Calculate remaining calories for the day based on last meal.\"\"\"\n", " # This would be a more sophisticated calculation in a real app with full meal history\n", " # For demo purposes, we'll assume the user has consumed the last meal's calories\n", " # plus 50% of their daily target (simulating previous meals)\n", " consumed_so_far = user_last_meal[\"calories\"] + (user_calorie_targets[\"daily_total\"] * 0.5)\n", " remaining = max(0, user_calorie_targets[\"daily_total\"] - consumed_so_far)\n", " return round(remaining)\n", "\n", "def analyze_heart_rate_for_nutrition():\n", " \"\"\"Analyze heart rate patterns to provide nutritional guidance.\"\"\"\n", " # Get insights from yesterday's data and recent hours\n", " latest_hr_data = get_latest_hr_data(minutes=5)\n", " \n", " # Analyze current heart rate state\n", " current_hr = latest_hr_data['hr_mean'].mean()\n", " avg_hr = hourly_data['hr_mean'].mean()\n", " hr_state = \"elevated\" if current_hr > avg_hr * 1.1 else \"normal\" if current_hr > avg_hr * 0.9 else \"low\"\n", " \n", " # Check if user is in recovery period\n", " is_recovery = hourly_data['is_recovery'].mean() > 0.5\n", " \n", " # Check phase status\n", " is_out_of_phase = (latest_hr_data['phase_status'] == 'out-of-phase').mean() > 0.5\n", " \n", " return {\n", " \"current_hr\": current_hr,\n", " \"hr_state\": hr_state,\n", " \"is_recovery\": is_recovery,\n", " \"is_out_of_phase\": is_out_of_phase,\n", " \"strain_index\": hourly_data['strain_index'].mean()\n", " }\n", "\n", "def recommend_food(query):\n", " \"\"\"Generate food recommendations based on user query and heart rate data.\"\"\"\n", " # Extract desired mood and meal time from query\n", " desired_mood = get_mood_from_phrase(query)\n", " meal_time = get_meal_time_from_phrase(query)\n", " \n", " # Analyze heart rate data for nutritional guidance\n", " hr_analysis = analyze_heart_rate_for_nutrition()\n", " \n", " # Get remaining calorie budget\n", " remaining_calories = get_remaining_calories()\n", " target_meal_calories = user_calorie_targets[meal_time] if meal_time != \"snack\" else user_calorie_targets[\"snacks\"]\n", " adjusted_meal_calories = min(target_meal_calories, remaining_calories)\n", " \n", " # Consider last meal timing for recommendations\n", " hours_since_last_meal = (datetime.now() - user_last_meal[\"time\"]).total_seconds() / 3600\n", " \n", " # Adjust recommendations based on heart rate analysis\n", " recommended_foods = []\n", " nutrition_advice = []\n", " \n", " # Modify recommendations based on heart rate state\n", " if hr_analysis[\"hr_state\"] == \"elevated\":\n", " nutrition_advice.append(\"Your heart rate is elevated, suggesting foods that promote calmness and recovery.\")\n", " # Blend in some calming foods with the desired mood foods\n", " recommended_foods = nutrition_database[\"calm\"][\"foods\"][:2] + nutrition_database[desired_mood][\"foods\"][:3]\n", " nutrition_advice.append(f\"Focus on these nutrients: {', '.join(nutrition_database['calm']['nutrients'][:3])}.\")\n", " elif hr_analysis[\"is_recovery\"]:\n", 
" nutrition_advice.append(\"Your body is in a recovery state. Focusing on restorative nutrition.\")\n", " recommended_foods = nutrition_database[\"recovery\"][\"foods\"][:2] + nutrition_database[desired_mood][\"foods\"][:3]\n", " nutrition_advice.append(f\"Emphasize these nutrients: {', '.join(nutrition_database['recovery']['nutrients'][:3])}.\")\n", " else:\n", " # Standard recommendation based on desired mood\n", " nutrition_advice.append(f\"Based on your heart rate patterns and desire to feel {desired_mood}, here are some food suggestions.\")\n", " recommended_foods = nutrition_database[desired_mood][\"foods\"][:5]\n", " nutrition_advice.append(f\"These foods are rich in: {', '.join(nutrition_database[desired_mood]['nutrients'][:3])}.\")\n", " \n", " # Adjust calorie recommendations based on heart rate strain\n", " if hr_analysis[\"strain_index\"] > 7:\n", " nutrition_advice.append(\"Your heart has been under strain. Consider smaller, more frequent meals to maintain energy levels.\")\n", " adjusted_meal_calories = min(adjusted_meal_calories, target_meal_calories * 0.8) # Reduce meal size\n", " \n", " # Additional advice based on timing\n", " if hours_since_last_meal > 5:\n", " nutrition_advice.append(\"It's been over 5 hours since your last recorded meal. Consider having a balanced meal now to stabilize blood sugar.\")\n", " elif hours_since_last_meal < 2 and meal_time != \"snack\":\n", " nutrition_advice.append(\"You had a meal recently. Consider a lighter option or waiting a bit longer before your next substantial meal.\")\n", " \n", " # Find specific meal recommendations from our database that match the mood\n", " meal_recommendations = []\n", " for meal_name, meal_info in meal_database.items():\n", " if desired_mood in meal_info[\"mood\"] and meal_info[\"calories\"] <= adjusted_meal_calories * 1.2:\n", " meal_recommendations.append((meal_name, meal_info))\n", " \n", " # Sort by how close they are to the target calories\n", " meal_recommendations.sort(key=lambda x: abs(x[1][\"calories\"] - adjusted_meal_calories))\n", " \n", " # Prepare the response\n", " response = {\n", " \"query\": query,\n", " \"desired_mood\": desired_mood,\n", " \"meal_time\": meal_time,\n", " \"heart_rate_analysis\": hr_analysis,\n", " \"remaining_calories\": remaining_calories,\n", " \"target_meal_calories\": target_meal_calories,\n", " \"adjusted_meal_calories\": round(adjusted_meal_calories),\n", " \"food_suggestions\": recommended_foods,\n", " \"meal_recommendations\": meal_recommendations[:3], # Top 3 meal recommendations\n", " \"nutrition_advice\": nutrition_advice,\n", " \"foods_to_avoid\": nutrition_database[desired_mood][\"avoid\"][:3]\n", " }\n", " \n", " return response\n", "\n", "def display_food_recommendations(response):\n", " \"\"\"Display food recommendations in a user-friendly format.\"\"\"\n", " print(f\"šŸ“‹ PERSONALIZED NUTRITION RECOMMENDATIONS\")\n", " print(f\"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\")\n", " print(f\"šŸŽÆ Goal: Feel {response['desired_mood'].upper()} for {response['meal_time']}\")\n", " print(f\"šŸ’“ Current Heart Rate: {response['heart_rate_analysis']['current_hr']:.1f} BPM ({response['heart_rate_analysis']['hr_state']})\")\n", " print(f\"šŸ”‹ Calorie Budget: {response['remaining_calories']} calories remaining today\")\n", " print(f\"šŸ½ļø Target for this {response['meal_time']}: {response['adjusted_meal_calories']} calories\")\n", " print(f\"\\nšŸ’” NUTRITION INSIGHTS:\")\n", " for advice in response[\"nutrition_advice\"]:\n", " print(f\" • {advice}\")\n", " \n", 
" print(f\"\\nšŸ„— RECOMMENDED FOODS:\")\n", " for food in response[\"food_suggestions\"]:\n", " print(f\" • {food}\")\n", " \n", " print(f\"\\nāš ļø FOODS TO LIMIT:\")\n", " for food in response[\"foods_to_avoid\"]:\n", " print(f\" • {food}\")\n", " \n", " print(f\"\\nšŸ³ MEAL IDEAS:\")\n", " for meal_name, meal_info in response[\"meal_recommendations\"]:\n", " print(f\" • {meal_name.title()} ({meal_info['calories']} cal, {meal_info['protein']}g protein, {meal_info['carbs']}g carbs, {meal_info['fat']}g fat)\")\n", "\n", "# Test the recommendation system with a sample query\n", "sample_query = \"What type of food should I have to feel excited this afternoon?\"\n", "food_recommendations = recommend_food(sample_query)\n", "display_food_recommendations(food_recommendations)\n", "\n", "# Interactive query function\n", "def ask_food_question():\n", " user_query = input(\"\\nAsk a food recommendation question (e.g., 'What should I eat to stay focused tonight?'): \")\n", " if user_query.strip():\n", " recommendations = recommend_food(user_query)\n", " display_food_recommendations(recommendations)\n", " else:\n", " print(\"Please enter a valid question.\")\n", "\n", "# Uncomment to test with your own question\n", "# ask_food_question()\n", "\n", "print(\"\\nāœ… Nutrition recommendation system created!\")" ] }, { "cell_type": "markdown", "id": "personalization-markdown", "metadata": {}, "source": [ "## 5. Creating a Previous Day's Data Based Personalization Function\n", "\n", "Now we'll create a function that extracts insights from the previous day's heart rate data and generates personalized recommendations." ] }, { "cell_type": "code", "execution_count": null, "id": "personalization-function", "metadata": {}, "outputs": [], "source": [ "def get_yesterday_insights():\n", " \"\"\"Extract insights from yesterday's heart rate data to personalize recommendations.\"\"\"\n", " # We're already using synthetic data for yesterday\n", " yesterday_data = hourly_data.copy()\n", " \n", " # Calculate key insights\n", " try:\n", " max_strain_index = yesterday_data['strain_index'].idxmax()\n", " max_strain_hour = yesterday_data.loc[max_strain_index, 'hour'].hour\n", " max_strain_value = yesterday_data.loc[max_strain_index, 'strain_index']\n", " except:\n", " # Fallback if there's an issue with the strain index\n", " max_strain_hour = 14 # Default to 2pm\n", " max_strain_value = 7.5\n", " \n", " # Get recovery hours (optimal times for low-intensity activities)\n", " recovery_hours = [h.hour for h in yesterday_data[yesterday_data['optimal_recovery'] == True]['hour']]\n", " if not recovery_hours:\n", " recovery_hours = [22, 23, 0, 1, 2, 3] # Default to nighttime if none found\n", " \n", " # Get high heart rate periods\n", " high_hr_threshold = yesterday_data['hr_mean'].quantile(0.75)\n", " high_hr_periods = [h.hour for h in yesterday_data[yesterday_data['hr_mean'] > high_hr_threshold]['hour']]\n", " if not high_hr_periods:\n", " high_hr_periods = [12, 13, 14, 15] # Default to afternoon if none found\n", " \n", " # Calculate current hour to make recommendations based on time of day\n", " current_hour = 8 #datetime.now().hour, in 24 hour format\n", " \n", " # Store insights in a dictionary\n", " insights = {\n", " \"avg_heart_rate\": yesterday_data['hr_mean'].mean(),\n", " \"max_strain_hour\": max_strain_hour,\n", " \"max_strain_value\": max_strain_value,\n", " \"recovery_hours\": recovery_hours,\n", " \"high_hr_periods\": high_hr_periods,\n", " \"current_hour\": current_hour\n", " }\n", " \n", " return insights\n", 
"\n", "def generate_personalized_advice():\n", " \"\"\"Generate personalized health advice based on heart rate patterns.\"\"\"\n", " insights = get_yesterday_insights()\n", " \n", " # Generate time-appropriate advice\n", " advice = []\n", " current_hour = insights[\"current_hour\"]\n", " \n", " # Morning advice (6am - 11am)\n", " if 6 <= current_hour <= 11:\n", " if insights[\"max_strain_value\"] > 7:\n", " advice.append(f\"Your heart experienced high strain yesterday around {insights['max_strain_hour']}:00. Consider a gentler morning routine today with light stretching or yoga.\")\n", " \n", " if any(h in insights[\"high_hr_periods\"] for h in range(current_hour-3, current_hour+1)):\n", " advice.append(f\"Your heart rate tends to be higher around this time. Stay well hydrated this morning with at least 16oz of water.\")\n", " \n", " # Afternoon advice (12pm - 5pm)\n", " elif 12 <= current_hour <= 17:\n", " if insights[\"avg_heart_rate\"] > 80:\n", " advice.append(\"Your average heart rate was elevated yesterday. Consider a moderate-intensity workout today rather than high-intensity.\")\n", " \n", " if current_hour in insights[\"recovery_hours\"]:\n", " advice.append(f\"Your body seems to recover well around this time ({current_hour}:00). This would be a good time for a short mindfulness break or deep breathing exercises.\")\n", " \n", " # Evening advice (6pm - 10pm)\n", " elif 18 <= current_hour <= 22:\n", " if insights[\"max_strain_hour\"] >= 18:\n", " advice.append(\"Your heart experienced peak strain in the evening yesterday. Consider shifting intense activities to earlier in the day.\")\n", " \n", " recovery_evening_hours = [h for h in insights[\"recovery_hours\"] if h >= 18]\n", " if recovery_evening_hours:\n", " recovery_times = \", \".join([f\"{h}:00\" for h in recovery_evening_hours])\n", " advice.append(f\"Your optimal recovery periods in the evening are around {recovery_times}. These are great times for gentle stretching or relaxation.\")\n", " \n", " # Night advice (11pm - 5am)\n", " else:\n", " advice.append(\"Your heart rate data suggests you should be resting now. Focus on quality sleep to optimize recovery.\")\n", " \n", " # General advice based on yesterday's patterns\n", " if insights[\"max_strain_value\"] > 8:\n", " advice.append(f\"Yesterday had periods of high heart strain. Prioritize recovery today with extra hydration and lower-intensity activities.\")\n", " \n", " # Add hydration advice based on high HR periods\n", " high_hr_str = \", \".join([f\"{h}:00\" for h in insights[\"high_hr_periods\"]])\n", " advice.append(f\"Your heart rate was elevated at {high_hr_str} yesterday. Ensure you drink more water during these times today.\")\n", " \n", " # If no specific advice was generated, add a default message\n", " if not advice:\n", " advice.append(\"Based on your heart rate patterns, maintain your regular routine with adequate hydration and rest periods.\")\n", " \n", " return \"\\n\\n\".join(advice)\n", "\n", "# Test the personalization function\n", "personalized_advice = generate_personalized_advice()\n", "print(\"Personalized Advice Based on Heart Rate:\\n\")\n", "print(personalized_advice)\n", "\n", "print(\"\\nāœ… Personalization function created!\")" ] }, { "cell_type": "markdown", "id": "personalized-agent-markdown", "metadata": {}, "source": [ "## 6. Building a Personalized Multi-Agent RAG System\n", "\n", "Now let's create a personalized RAG system that leverages heart rate insights in its recommendations." 
] }, { "cell_type": "code", "execution_count": null, "id": "personalized-agent-code", "metadata": {}, "outputs": [], "source": [ "# Create a personalized retrieval function that combines heart rate insights and health tips\n", "def retrieve_personalized_tips(query: str) -> str:\n", " \"\"\"Retrieve health tips and personalize them based on heart rate data.\"\"\"\n", " # Get standard tips\n", " standard_tips = retrieve_tips(query)\n", " \n", " # Get personalized advice based on yesterday's data\n", " personalized_advice = generate_personalized_advice()\n", " \n", " # Combine both\n", " return f\"PERSONALIZED ADVICE BASED ON YOUR HEART RATE DATA:\\n{personalized_advice}\\n\\nGENERAL HEALTH TIPS:\\n{standard_tips}\"\n", "\n", "# For testing without Azure AI Foundry, create a mock response for the agents\n", "class MockResponseGenerator:\n", " def __init__(self, name):\n", " self.name = name\n", " \n", " def generate_response(self, message):\n", " if self.name == \"PersonalizedRetrieverAgent\":\n", " # This agent would fetch the data\n", " return f\"I've retrieved the personalized health information based on your heart rate data.\"\n", " else:\n", " # This agent would craft a response\n", " insights = get_yesterday_insights()\n", " recovery_times = \", \".join([f\"{h}:00\" for h in insights[\"recovery_hours\"][:3]])\n", " return (f\"Based on your heart rate data, I recommend scheduling activities around your natural recovery periods at {recovery_times}. \"\n", " f\"Your heart experienced the most strain yesterday around {insights['max_strain_hour']}:00, so today would be good for gentler activities.\"\n", " f\"Remember to hydrate well, especially around {insights['high_hr_periods'][0]}:00 when your heart rate tends to be elevated.\")\n", "\n", "# Create the personalized agents\n", "try:\n", " # Create a personalized version of our agent team\n", " personalized_retriever = AssistantAgent(\n", " name=\"PersonalizedRetrieverAgent\",\n", " system_message=(\n", " \"You are a smart health data analyzer and retrieval agent. \"\n", " \"Your task is to fetch relevant fitness and health tips based on the user's query, \"\n", " \"and personalize them based on their heart rate data from yesterday. \"\n", " \"Use the provided tool 'retrieve_personalized_tips' to get the information.\"\n", " ),\n", " model_client=model_client,\n", " tools=[retrieve_personalized_tips],\n", " reflect_on_tool_use=True\n", " )\n", "\n", " personalized_responder = AssistantAgent(\n", " name=\"PersonalizedResponderAgent\",\n", " system_message=(\n", " \"You are a friendly and personalized fitness coach. \"\n", " \"Use both the general health tips and personalized heart rate insights to craft \"\n", " \"a response that is specifically tailored to the user's current physiological state. 
\"\n", " \"Reference their heart rate patterns and recovery needs when providing advice.\"\n", " ),\n", " model_client=model_client\n", " )\n", "\n", " personalized_group_chat = RoundRobinGroupChat(\n", " agents=[personalized_retriever, personalized_responder],\n", " termination_condition=None,\n", " max_turns=4\n", " )\n", " \n", " print(\"āœ… Personalized RAG system created with Azure AI Foundry!\")\n", " using_mock = False\n", " \n", "except Exception as e:\n", " print(f\"āš ļø Could not create agents with Azure AI Foundry: {e}\")\n", " print(\"āš ļø Using mock agents for demonstration purposes...\")\n", " \n", " # Create mock agents\n", " retriever_mock = MockResponseGenerator(\"PersonalizedRetrieverAgent\")\n", " responder_mock = MockResponseGenerator(\"PersonalizedResponderAgent\")\n", " using_mock = True\n", " \n", " print(\"āœ… Mock personalized RAG system created for demonstration!\")" ] } ], "metadata": { "kernelspec": { "display_name": "base", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12" } }, "nbformat": 4, "nbformat_minor": 5 }