
{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "aAUkwshINwV7" }, "source": [ "# Fetch surrounding chunks (N-1, N+1)\n", "\n", "<a target=\"_blank\" href=\"https://colab.research.google.com/github/elastic/elasticsearch-labs/blob/main/supporting-blog-content/fetch-surrounding-chunks/fetch-surrounding-chunks.ipynb\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n", "\n", "\n", "This notebook is designed to handle the ingestion of book text (Harry Potter and the Sorcerer's Stone) into an Elasticsearch Cloud instance. It includes partitioning the book text into chapters and chunking the chapter text, which are then ingested into Elasticsearch. The setup utilizes a nested structure, and for each chunk, it stores dense and sparse (ELSER) vector representations along with the text representation.\n", "\n", "Searches are performed using dense vector comparisons, sparse vector comparisons, and text search in parallel to demonstrate the power of hybrid search strategies. Additionally, the notebook is configured to retrieve adjacent chunks (n-1 and n+1), allowing for a more contextual understanding of the search results.\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "id": "MUEpppV7SeLu" }, "source": [ "## Install required python libraries\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "nXuL8wsQNq8G" }, "outputs": [], "source": [ "!pip install elasticsearch==8.13.2\n", "!pip install pandas\n", "!python -m pip install eland\n", "\n", "import json\n", "import time\n", "import urllib.request\n", "import re\n", "import pandas as pd\n", "from transformers import AutoTokenizer, BertTokenizer\n", "from elasticsearch import Elasticsearch, helpers, exceptions\n", "import textwrap" ] }, { "cell_type": "markdown", "metadata": { "id": "2w7uTCYdQ0m6" }, "source": [ "## Elasticsearch and Tokenizer Configuration\n", "\n", "This section sets up the necessary configurations for connecting to Elasticsearch and initializing the tokenizers used for text processing.\n", "\n", "### Configuration Details:\n", "1. **Elasticsearch Credentials**:\n", " - `ELASTIC_CLOUD_ID`: The Cloud ID for the Elasticsearch cluster, securely fetched using the `getpass` function.\n", " - `ELASTIC_API_KEY`: The API key for Elasticsearch authentication, securely fetched using the `getpass` function.\n", "\n", "2. **Index Settings**:\n", " - `raw_source_index`: The name of the index for the raw dataset (`harry_potter_dataset-raw`).\n", " - `index_name`: The name of the enriched dataset index (`harry_potter_dataset_enriched`).\n", "\n", "3. **Embedding Models**:\n", " - `dense_embedding_model_id`: Specifies the model used for generating dense embeddings (`sentence-transformers__all-minilm-l6-v2`).\n", " - `dense_huggingface_model_id`: The Hugging Face model ID for the dense embeddings (`sentence-transformers/all-MiniLM-L6-v2`).\n", " - `dense_model_number_of_allocators`: The number of allocators for the dense embedding model (2).\n", " \n", "\n", " - `elser_model_id`: Specifies the ELSER model ID (`.elser_model_2_linux-x86_64`).\n", " - `elser_model_number_of_allocators`: The number of allocators for the ELSER model (2).\n", "\n", "4. **Tokenizer Initialization**:\n", " - `bert_tokenizer`: Initializes the BERT tokenizer (`bert-base-uncased`) for English text processing.\n", "\n", "5. 
**Chunking Parameters**:\n", " - `SEMANTIC_SEARCH_TOKEN_LIMIT`: Sets the token limit for each chunk (500 tokens per chunk, considering space for special tokens).\n", " - `ELSER_TOKEN_OVERLAP`: Defines the overlap ratio between chunks (default is 0%, customizable for context continuity).\n", "\n", "These configurations ensure that the necessary components are properly set up for efficient text processing, indexing, and search operations in Elasticsearch.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "LGQAjG6PERfx" }, "outputs": [], "source": [ "from elasticsearch import Elasticsearch\n", "from getpass import getpass\n", "\n", "# https://www.elastic.co/search-labs/tutorials/install-elasticsearch/elastic-cloud#finding-your-cloud-id\n", "ELASTIC_CLOUD_ID = getpass(\"Elastic Cloud ID: \")\n", "\n", "# https://www.elastic.co/search-labs/tutorials/install-elasticsearch/elastic-cloud#creating-an-api-key\n", "ELASTIC_API_KEY = getpass(\"Elastic Api Key: \")\n", "\n", "raw_source_index = \"harry_potter_dataset-raw\"\n", "index_name = \"harry_potter_dataset_enriched\"\n", "\n", "dense_embedding_model_id = \"sentence-transformers__all-minilm-l6-v2\"\n", "dense_huggingface_model_id = \"sentence-transformers/all-MiniLM-L6-v2\"\n", "dense_model_number_of_allocators = 2\n", "\n", "elser_model_id = \".elser_model_2_linux-x86_64\"\n", "elser_model_number_of_allocators = 2\n", "\n", "bert_tokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\n", "\n", "\n", "SEMANTIC_SEARCH_TOKEN_LIMIT = 500\n", "ELSER_TOKEN_OVERLAP = 0.0\n", "\n", "\n", "# Create the client instance\n", "esclient = Elasticsearch(\n", " cloud_id=ELASTIC_CLOUD_ID,\n", " api_key=ELASTIC_API_KEY,\n", ")\n", "print(esclient.info())" ] }, { "cell_type": "markdown", "metadata": { "id": "rOWheQ-uJE2C" }, "source": [ "\n", "## Import model\n", "Using the eland_import_hub_model script, download and install all-MiniLM-L6-v2 transformer model. Setting the NLP --task-type as text_embedding.\n", "\n", "To get the cloud id, go to Elastic cloud and On the deployment overview page, copy down the Cloud ID.\n", "\n", "To authenticate your request, You could use API key. Alternatively, you can use your cloud deployment username and password." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "4NH8JJkQJDit" }, "outputs": [], "source": [ "!eland_import_hub_model --cloud-id $ELASTIC_CLOUD_ID --es-model-id {dense_embedding_model_id} --hub-model-id {dense_huggingface_model_id} --task-type text_embedding --es-api-key $ELASTIC_API_KEY --start --clear-previous\n", "resp = esclient.ml.update_trained_model_deployment(\n", " model_id=dense_embedding_model_id,\n", " body={\"number_of_allocations\": dense_model_number_of_allocators},\n", ")\n", "print(resp)" ] }, { "cell_type": "markdown", "metadata": { "id": "f1SXd1uhhhhe" }, "source": [ "# Download and Deploy ELSER Model\n", "\n", "In this example, we are going to download and deploy the ELSER model in our ML node. Make sure you have an ML node in order to run the ELSER model." 
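, "\n", "\nIf you want to verify this from the notebook first, here is a minimal sketch (it assumes only the `esclient` created above and checks whether any node in the cluster advertises the `ml` role):\n", "\n", "```python\n", "node_info = esclient.nodes.info()\n", "has_ml_node = any(\"ml\" in node[\"roles\"] for node in node_info[\"nodes\"].values())\n", "print(\"ML node available:\", has_ml_node)\n", "```\n"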
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "vL68fse9hhAN" }, "outputs": [], "source": [ "# Delete the model if it is already downloaded and deployed\n", "try:\n", " esclient.ml.delete_trained_model(model_id=elser_model_id, force=True)\n", " print(\"Model deleted successfully; we will proceed with creating a new one\")\n", "except exceptions.NotFoundError:\n", " print(\"Model doesn't exist, but we will proceed with creating one\")\n", "\n", "# Creates the ELSER model configuration. Automatically downloads the model if it doesn't exist.\n", "esclient.ml.put_trained_model(\n", " model_id=elser_model_id, input={\"field_names\": [\"text_field\"]}\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "2R54LYIqwC-f" }, "source": [ "The above command will download the ELSER model. This will take a few minutes to complete. Use the following command to check the status of the model download." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "wE3KHB3BwCVk" }, "outputs": [], "source": [ "while True:\n", " status = esclient.ml.get_trained_models(\n", " model_id=elser_model_id, include=\"definition_status\"\n", " )\n", "\n", " if status[\"trained_model_configs\"][0][\"fully_defined\"]:\n", " print(\"ELSER Model is downloaded and ready to be deployed.\")\n", " break\n", " else:\n", " print(\"ELSER Model is downloaded but not ready to be deployed.\")\n", " time.sleep(5)" ] }, { "cell_type": "markdown", "metadata": { "id": "_8-mvOj5wanm" }, "source": [ "Once the model is downloaded, we can deploy it on our ML node. Use the following command to deploy the model. This will also take a few minutes to complete.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "xzdANHzxwaSf" }, "outputs": [], "source": [ "# Start ELSER model deployment if not already deployed\n", "esclient.ml.start_trained_model_deployment(\n", " model_id=elser_model_id,\n", " number_of_allocations=elser_model_number_of_allocators,\n", " wait_for=\"starting\",\n", ")\n", "\n", "while True:\n", " status = esclient.ml.get_trained_models_stats(\n", " model_id=elser_model_id,\n", " )\n", " if status[\"trained_model_stats\"][0][\"deployment_stats\"][\"state\"] == \"started\":\n", " print(\"ELSER Model has been successfully deployed.\")\n", " break\n", " else:\n", " print(\"ELSER Model is currently being deployed.\")\n", " time.sleep(5)" ] }, { "cell_type": "markdown", "metadata": { "id": "3LlGP3aJP1ce" }, "source": [ "## Helper Methods/Functions\n", "\n", "The next cell defines the helpers used throughout the notebook. `chunk` splits a chapter into numbered, token-limited passages; for example, with the default settings (`SEMANTIC_SEARCH_TOKEN_LIMIT = 500`, no overlap), a chapter that encodes to roughly 1,200 BERT tokens yields three passages, numbered 1 through 3. `get_adjacent_chunks_query` later uses those chunk numbers to fetch a passage's neighbors.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "xB2a9-qtONbQ" }, "outputs": [], "source": [ "def whitespace_tokenize(text):\n", " return text.split()\n", "\n", "\n", "def manage_index(es, index_name, settings, mappings, delete_index=False):\n", " if es.indices.exists(index=index_name):\n", " if delete_index:\n", " print(f\"Index {index_name} exists. Deleting it...\")\n", " es.indices.delete(index=index_name)\n", " print(f\"Index {index_name} deleted!\")\n", " else:\n", " print(f\"Index {index_name} already exists. 
Skipping creation.\")\n", " return\n", " es.indices.create(index=index_name, settings=settings, mappings=mappings)\n", " print(f\"Index {index_name} created successfully!\")\n", "\n", "\n", "def generate_actions(df, index_name):\n", " for _, row in df.iterrows():\n", " chunks = chunk(row[\"chapter_full_text\"])\n", " passages = [\n", " {\"text\": ch[\"text\"], \"chunk_number\": ch[\"chunk_number\"]} for ch in chunks\n", " ]\n", " doc = {\n", " \"_index\": index_name,\n", " \"_source\": {\n", " \"book_title\": row[\"book_title\"],\n", " \"chapter\": row[\"chapter\"],\n", " \"chapter_full_text\": row[\"chapter_full_text\"],\n", " \"passages\": passages,\n", " },\n", " }\n", " yield doc\n", "\n", "\n", "def index_dataframe(es, index_name, df, thread_count=1, chunk_size=200):\n", " print(f\"Indexing documents to {index_name}...\")\n", " success_count = 0\n", " failed_count = 0\n", " try:\n", " for success, _ in helpers.parallel_bulk(\n", " es,\n", " generate_actions(df, index_name),\n", " thread_count=thread_count,\n", " chunk_size=chunk_size,\n", " ):\n", " if success:\n", " success_count += 1\n", " else:\n", " failed_count += 1\n", " except helpers.BulkIndexError as e:\n", " print(\"Bulk indexing error:\", e)\n", " for error_detail in e.errors:\n", " print(error_detail)\n", " print(f\"Successfully indexed {success_count} documents.\")\n", " print(f\"Failed to index {failed_count} documents.\")\n", "\n", "\n", "def build_vector(text):\n", " docs = [{\"text_field\": text}]\n", " response = esclient.ml.infer_trained_model(\n", " model_id=dense_embedding_model_id, docs=docs\n", " )\n", " return response.get(\"inference_results\", [{}])[0].get(\"predicted_value\", [])\n", "\n", "\n", "def build_rrf_query(\n", " embeddings, user_query, rrf_rank_constant, rrf_window_size, debug=False\n", "):\n", " query = {\n", " \"_source\": False,\n", " \"sub_searches\": [\n", " {\n", " \"query\": {\n", " \"nested\": {\n", " \"path\": \"passages\",\n", " \"query\": {\"match\": {\"passages.text\": user_query}},\n", " \"inner_hits\": {\n", " \"name\": \"text_hits\",\n", " \"size\": 1,\n", " \"_source\": [\"passages.text\", \"passages.chunk_number\"],\n", " },\n", " }\n", " }\n", " },\n", " {\n", " \"query\": {\n", " \"nested\": {\n", " \"path\": \"passages\",\n", " \"query\": {\n", " \"knn\": {\n", " \"query_vector\": embeddings,\n", " \"field\": \"passages.vector.predicted_value\",\n", " \"num_candidates\": 50,\n", " }\n", " },\n", " \"inner_hits\": {\n", " \"name\": \"dense_hit\",\n", " \"size\": 1,\n", " \"_source\": [\"passages.text\", \"passages.chunk_number\"],\n", " },\n", " }\n", " }\n", " },\n", " {\n", " \"query\": {\n", " \"nested\": {\n", " \"path\": \"passages\",\n", " \"query\": {\n", " \"bool\": {\n", " \"should\": [\n", " {\n", " \"text_expansion\": {\n", " \"passages.content_embedding.predicted_value\": {\n", " \"model_id\": elser_model_id,\n", " \"model_text\": user_query,\n", " }\n", " }\n", " }\n", " ]\n", " }\n", " },\n", " \"inner_hits\": {\n", " \"name\": \"sparse_hits\",\n", " \"size\": 1,\n", " \"_source\": [\"passages.text\", \"passages.chunk_number\"],\n", " },\n", " }\n", " }\n", " },\n", " ],\n", " \"rank\": {\n", " \"rrf\": {\"window_size\": rrf_window_size, \"rank_constant\": rrf_rank_constant}\n", " },\n", " }\n", " if debug:\n", " print(json.dumps(query, indent=4))\n", " return query\n", "\n", "\n", "def build_custom_query(\n", " query_vector, user_query, knn_boost_factor, text_expansion_boost, debug=False\n", "):\n", " query = {\n", " \"_source\": False,\n", " \"fields\": 
[\"chapter\"],\n", " \"query\": {\n", " \"function_score\": {\n", " \"query\": {\n", " \"bool\": {\n", " \"should\": [\n", " {\n", " \"nested\": {\n", " \"path\": \"passages\",\n", " \"query\": {\"match\": {\"passages.text\": user_query}},\n", " \"inner_hits\": {\n", " \"name\": \"text_hits\",\n", " \"size\": 1,\n", " \"_source\": [\n", " \"passages.text\",\n", " \"passages.chunk_number\",\n", " ],\n", " },\n", " }\n", " },\n", " {\n", " \"nested\": {\n", " \"path\": \"passages\",\n", " \"query\": {\n", " \"script_score\": {\n", " \"query\": {\n", " \"knn\": {\n", " \"field\": \"passages.vector.predicted_value\",\n", " \"query_vector\": query_vector,\n", " \"num_candidates\": 50,\n", " }\n", " },\n", " \"script\": {\n", " \"source\": \"Math.log(1 + _score * params.boost_factor)\",\n", " \"params\": {\n", " \"boost_factor\": knn_boost_factor\n", " },\n", " },\n", " }\n", " },\n", " \"inner_hits\": {\n", " \"name\": \"dense_hit\",\n", " \"size\": 1,\n", " \"_source\": [\n", " \"passages.text\",\n", " \"passages.chunk_number\",\n", " ],\n", " },\n", " }\n", " },\n", " {\n", " \"nested\": {\n", " \"path\": \"passages\",\n", " \"query\": {\n", " \"script_score\": {\n", " \"query\": {\n", " \"bool\": {\n", " \"should\": [\n", " {\n", " \"text_expansion\": {\n", " \"passages.content_embedding.predicted_value\": {\n", " \"model_id\": \".elser_model_2_linux-x86_64\",\n", " \"model_text\": user_query,\n", " }\n", " }\n", " }\n", " ]\n", " }\n", " },\n", " \"script\": {\n", " \"source\": \"_score * params.boost_factor\",\n", " \"params\": {\n", " \"boost_factor\": text_expansion_boost\n", " },\n", " },\n", " }\n", " },\n", " \"inner_hits\": {\n", " \"name\": \"sparse_hits\",\n", " \"size\": 1,\n", " \"_source\": [\n", " \"passages.text\",\n", " \"passages.chunk_number\",\n", " ],\n", " },\n", " }\n", " },\n", " ]\n", " }\n", " },\n", " \"score_mode\": \"sum\",\n", " \"boost_mode\": \"sum\",\n", " }\n", " },\n", " }\n", " if debug:\n", " print(json.dumps(query, indent=4))\n", " return query\n", "\n", "\n", "def get_adjacent_chunks_query(doc_id, base_chunk_number, max_chunk_number, debug=False):\n", " # Determine the chunk numbers to query based on the base_chunk_number\n", " if base_chunk_number == 1:\n", " chunk_numbers = [\n", " base_chunk_number,\n", " base_chunk_number + 1,\n", " base_chunk_number + 2,\n", " ]\n", " elif base_chunk_number == max_chunk_number:\n", " chunk_numbers = [\n", " base_chunk_number,\n", " base_chunk_number - 1,\n", " base_chunk_number - 2,\n", " ]\n", " else:\n", " chunk_numbers = [\n", " base_chunk_number - 1,\n", " base_chunk_number,\n", " base_chunk_number + 1,\n", " ]\n", "\n", " # Construct the query\n", " query = {\n", " \"_source\": False,\n", " \"query\": {\n", " \"bool\": {\n", " \"must\": [\n", " {\"term\": {\"_id\": doc_id}},\n", " {\n", " \"nested\": {\n", " \"path\": \"passages\",\n", " \"query\": {\n", " \"bool\": {\n", " \"should\": [\n", " {\"term\": {\"passages.chunk_number\": num}}\n", " for num in chunk_numbers\n", " ]\n", " }\n", " },\n", " \"inner_hits\": {\n", " \"_source\": [\"passages.text\", \"passages.chunk_number\"]\n", " },\n", " }\n", " },\n", " ]\n", " }\n", " },\n", " }\n", "\n", " if debug:\n", " print(json.dumps(query, indent=4))\n", "\n", " return query\n", "\n", "\n", "def get_max_chunk_number_query(chapter_number, debug=False):\n", " # Construct the query\n", " query = {\n", " \"size\": 0,\n", " \"query\": {\"term\": {\"chapter\": chapter_number}},\n", " \"aggs\": {\n", " \"max_chunk_number\": {\n", " \"nested\": {\"path\": 
\"passages\"},\n", " \"aggs\": {\"max_chunk\": {\"max\": {\"field\": \"passages.chunk_number\"}}},\n", " }\n", " },\n", " }\n", "\n", " if debug:\n", " print(json.dumps(query, indent=4))\n", "\n", " return query\n", "\n", "\n", "def print_text_from_results(results):\n", " if results[\"hits\"][\"hits\"]:\n", " for hit in results[\"hits\"][\"hits\"]:\n", " if \"inner_hits\" in hit and \"passages\" in hit[\"inner_hits\"]:\n", " nested_hits = hit[\"inner_hits\"][\"passages\"][\"hits\"][\"hits\"]\n", " for nested_hit in nested_hits:\n", " chunk_number = nested_hit[\"_source\"][\"chunk_number\"]\n", " text = nested_hit[\"_source\"][\"text\"]\n", " # print(f\"Text from Chunk {chunk_number}: {text}\")\n", " print(\n", " f\"\\n\\nText from Chunk {chunk_number}: {textwrap.fill(text, width=200)}\"\n", " )\n", " else:\n", " print(\"No hits found.\")\n", "\n", "\n", "def chunk(\n", " text, chunk_size=SEMANTIC_SEARCH_TOKEN_LIMIT, overlap_ratio=ELSER_TOKEN_OVERLAP\n", "):\n", " step_size = round(chunk_size * (1 - overlap_ratio))\n", " tokens = bert_tokenizer.encode(text)\n", " tokens = tokens[1:-1] # remove special beginning and end tokens\n", " result = []\n", " chunk_number = 1\n", " for i in range(0, len(tokens), step_size):\n", " end = i + chunk_size\n", " chunk_text = bert_tokenizer.decode(tokens[i:end])\n", " result.append({\"text\": chunk_text, \"chunk_number\": chunk_number})\n", " chunk_number += 1\n", " if end >= len(tokens):\n", " break\n", " return result\n", "\n", "\n", "def check_task_status(es, task_id):\n", " while True:\n", " task_response = es.tasks.get(task_id=task_id)\n", " if task_response[\"completed\"]:\n", " print(\"Reindexing complete.\")\n", " break\n", " else:\n", " print(\"Indexing...\")\n", " time.sleep(10)" ] }, { "cell_type": "markdown", "metadata": { "id": "izMU8HqqP7ld" }, "source": [ "##Ingest Pipelines" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "iUOFJK48OamP", "outputId": "b7feb26f-a084-4d48-dbba-4a53cc0b0255" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Ingest pipeline 'books_dataset_chunker' created/updated successfully.\n" ] } ], "source": [ "# Define the ingest pipeline configuration\n", "pipeline_body = {\n", " \"description\": \"Pipeline for processing book passages\",\n", " \"processors\": [\n", " {\n", " \"foreach\": {\n", " \"field\": \"passages\",\n", " \"processor\": {\n", " \"inference\": {\n", " \"field_map\": {\"_ingest._value.text\": \"text_field\"},\n", " \"model_id\": dense_embedding_model_id,\n", " \"target_field\": \"_ingest._value.vector\",\n", " \"on_failure\": [\n", " {\n", " \"append\": {\n", " \"field\": \"_source._ingest.inference_errors\",\n", " \"value\": [\n", " {\n", " \"message\": \"Processor 'inference' in pipeline 'ml-inference-title-vector' failed with message '{{ _ingest.on_failure_message }}'\",\n", " \"pipeline\": \"ml-inference-title-vector\",\n", " \"timestamp\": \"{{{ _ingest.timestamp }}}\",\n", " }\n", " ],\n", " }\n", " }\n", " ],\n", " }\n", " },\n", " }\n", " },\n", " {\n", " \"foreach\": {\n", " \"field\": \"passages\",\n", " \"processor\": {\n", " \"inference\": {\n", " \"field_map\": {\"_ingest._value.text\": \"text_field\"},\n", " \"model_id\": elser_model_id,\n", " \"target_field\": \"_ingest._value.content_embedding\",\n", " \"on_failure\": [\n", " {\n", " \"append\": {\n", " \"field\": \"_source._ingest.inference_errors\",\n", " \"value\": [\n", " {\n", " \"message\": \"Processor 'inference' in pipeline 
{ "cell_type": "markdown", "metadata": { "id": "6ZkRwEGdQBRP" }, "source": [ "## Index Settings" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "vZ3Z5gZbOgjF", "outputId": "5a1ed103-d9be-42ae-daac-bab2daca51be" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Index harry_potter_dataset_enriched exists. Deleting it...\n", "Index harry_potter_dataset_enriched deleted!\n", "Index harry_potter_dataset_enriched created successfully!\n", "Index harry_potter_dataset-raw exists. Deleting it...\n", "Index harry_potter_dataset-raw deleted!\n", "Index harry_potter_dataset-raw created successfully!\n" ] } ], "source": [ "index_settings = {\n", " \"settings\": {\n", " \"number_of_shards\": 2,\n", " \"number_of_replicas\": 0,\n", " \"default_pipeline\": \"books_dataset_chunker\",\n", " },\n", " \"mappings\": {\n", " \"dynamic\": \"false\",\n", " \"properties\": {\n", " \"book_title\": {\"type\": \"keyword\"},\n", " \"chapter\": {\"type\": \"keyword\"},\n", " \"chapter_full_text\": {\"type\": \"text\", \"index\": False},\n", " \"passages\": {\n", " \"type\": \"nested\",\n", " \"properties\": {\n", " \"content_embedding\": {\n", " \"properties\": {\n", " \"is_truncated\": {\"type\": \"boolean\"},\n", " \"model_id\": {\n", " \"type\": \"text\",\n", " \"fields\": {\n", " \"keyword\": {\"type\": \"keyword\", \"ignore_above\": 256}\n", " },\n", " },\n", " \"predicted_value\": {\"type\": \"sparse_vector\"},\n", " }\n", " },\n", " \"text\": {\n", " \"type\": \"text\",\n", " \"fields\": {\"keyword\": {\"type\": \"keyword\", \"ignore_above\": 256}},\n", " },\n", " \"vector\": {\n", " \"properties\": {\n", " \"is_truncated\": {\"type\": \"boolean\"},\n", " \"model_id\": {\n", " \"type\": \"text\",\n", " \"fields\": {\n", " \"keyword\": {\"type\": \"keyword\", \"ignore_above\": 256}\n", " },\n", " },\n", " \"predicted_value\": {\n", " \"type\": \"dense_vector\",\n", " \"dims\": 384,\n", " \"index\": True,\n", " \"similarity\": \"dot_product\",\n", " },\n", " }\n", " },\n", " \"chunk_number\": {\"type\": \"integer\"},\n", " },\n", " },\n", " },\n", " },\n", "}\n", "\n", "raw_source_index_settings = {\n", " \"settings\": {\"number_of_shards\": 2, \"number_of_replicas\": 0},\n", " \"mappings\": {\n", " \"dynamic\": \"false\",\n", " \"properties\": {\n", " \"book_title\": {\"type\": \"keyword\"},\n", " \"chapter\": {\"type\": \"keyword\"},\n", " \"chapter_full_text\": {\"type\": \"text\", \"index\": False},\n", " \"passages\": {\n", " \"type\": \"nested\",\n", " \"properties\": {\n", " \"text\": {\n", " \"type\": \"text\",\n", " \"fields\": {\"keyword\": {\"type\": \"keyword\", \"ignore_above\": 256}},\n", " },\n", " \"chunk_number\": {\"type\": \"integer\"},\n", " },\n", " },\n", " },\n", " },\n", "}\n", "\n", "# Manage indices\n", "manage_index(\n", " esclient,\n", " index_name,\n", " index_settings[\"settings\"],\n", " index_settings[\"mappings\"],\n", " delete_index=True,\n", ")\n", "manage_index(\n", " 
esclient,\n", " raw_source_index,\n", " raw_source_index_settings[\"settings\"],\n", " raw_source_index_settings[\"mappings\"],\n", " delete_index=True,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "NPtbLhVOQUF3" }, "source": [ "## Fetch and Process the Book Text\n", "\n", "This section downloads the full text of \"Harry Potter and the Sorcerer's Stone\" from a specified URL and processes it to extract chapters and their titles. The text is then structured into a pandas DataFrame for further analysis and indexing.\n", "\n", "### Key Steps:\n", "1. **Download Text**: The book is fetched using `urllib.request` from the provided URL.\n", "2. **Extract Chapters**: The text is split into chapters based on predefined patterns, omitting the text before the first chapter.\n", "3. **Capture Chapter Titles**: Chapter titles are extracted and paired with their respective texts.\n", "4. **Data Structuring**:\n", " - Convert the list of chapter titles and texts into a DataFrame.\n", " - Assign sequential numbers to chapters.\n", " - Add the book title as metadata.\n", " - Apply a text chunking function to split each chapter into manageable passages.\n", "\n", "This prepares the text data for efficient indexing and advanced search operations in Elasticsearch.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "0L4YI96xOuKn", "outputId": "68318a23-b10f-49ab-a329-ec32b1d49993" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Token indices sequence length is longer than the specified maximum sequence length for this model (6535 > 512). Running this sequence through the model will result in indexing errors\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Total chapters found: 17\n", "First chapter title: CHAPTER ONE\n", "Text sample from first chapter: \n", "\n", "THE BOY WHO LIVED\n", "\n", "Mr. and Mrs. Dursley, of number four, Privet Drive, were proud to say\n", "that they were perfectly normal, thank you very much. They were the last\n", "people you'd expect to be involved in anything strange or mysterious,\n", "because they just didn't hold with such nonsense.\n", "\n", "Mr. Dursley was the director of a firm called Grunnings, which made\n", "drills. He was a big, beefy man with hardly any neck, although he did\n", "have a very large mustache. Mrs. 
Dursley was thin and blonde and had\n", "nearly t\n" ] } ], "source": [ "# Fetch and process the book text\n", "potter_book_url = \"https://raw.githubusercontent.com/amephraim/nlp/master/texts/J.%20K.%20Rowling%20-%20Harry%20Potter%201%20-%20Sorcerer's%20Stone.txt\"\n", "response = urllib.request.urlopen(potter_book_url)\n", "harry_potter_book_text = response.read().decode(\"utf-8\")\n", "chapter_pattern = re.compile(r\"CHAPTER [A-Z]+\", re.IGNORECASE)\n", "chapters = chapter_pattern.split(harry_potter_book_text)[1:]\n", "chapter_titles = re.findall(chapter_pattern, harry_potter_book_text)\n", "chapters_with_titles = list(zip(chapter_titles, chapters))\n", "\n", "print(\"Total chapters found:\", len(chapters))\n", "if chapters_with_titles:\n", " print(\"First chapter title:\", chapters_with_titles[0][0])\n", " print(\"Text sample from first chapter:\", chapters_with_titles[0][1][:500])\n", "\n", "\n", "# Structuring chapters into a DataFrame\n", "df = pd.DataFrame(chapters_with_titles, columns=[\"chapter_title\", \"chapter_full_text\"])\n", "df[\"chapter\"] = df.index + 1\n", "df[\"book_title\"] = \"Harry Potter and the Sorcerer’s Stone\"\n", "df[\"passages\"] = df[\"chapter_full_text\"].apply(lambda text: chunk(text))" ] }, { "cell_type": "markdown", "metadata": { "id": "DKK4574EQaTl" }, "source": [ "## Indexing DataFrame into Elasticsearch\n", "\n", "This section uploads the structured data from a pandas DataFrame into a specified Elasticsearch index. The DataFrame contains chapter information from \"Harry Potter and the Sorcerer's Stone\", including chapter titles, full texts, and additional metadata.\n", "\n", "### Key Operation:\n", "- **Index Data**: The `index_dataframe` function is called with the Elasticsearch client, the raw source index name, and the DataFrame as arguments. This operation effectively uploads the data into Elasticsearch, making it searchable and ready for further processing.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "7ReLAtz1O1HF", "outputId": "e07cace3-8c74-4a72-b2a9-10a7f22d99fd" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Indexing documents to harry_potter_dataset-raw...\n", "Successfully indexed 17 documents.\n", "Failed to index 0 documents.\n" ] } ], "source": [ "index_dataframe(esclient, raw_source_index, df)" ] }, { "cell_type": "markdown", "metadata": { "id": "pA5QroYdQgcM" }, "source": [ "## Asynchronous Reindexing in Elasticsearch\n", "\n", "This section initiates an asynchronous reindex operation to transfer data from the raw source index to the enriched index in Elasticsearch. This process runs in the background, allowing other operations to continue without waiting for completion.\n", "\n", "### Key Steps:\n", "1. **Start Reindex**: The reindex operation is triggered from the `raw_source_index` to the `index_name`, with `wait_for_completion` set to `False` to allow asynchronous execution.\n", "2. **Retrieve Task ID**: The task ID of the reindex operation is captured and printed for monitoring purposes.\n", "3. 
**Monitor Progress**: The `check_task_status` function continuously checks the status of the reindex task, providing updates every 10 seconds until the operation is complete.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "HOCX_lbmO3zl", "outputId": "4e8a2859-6c28-42ff-b956-7183c80ede9e" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Task ID: _m32HYljRgqsVl7G-4wPtw:23883\n", "Indexing...\n", "Indexing...\n", "Indexing...\n", "Reindexing complete.\n" ] } ], "source": [ "# Start the reindex operation asynchronously\n", "response = esclient.reindex(\n", " body={\"source\": {\"index\": raw_source_index}, \"dest\": {\"index\": index_name}},\n", " wait_for_completion=False,\n", ")\n", "task_id = response[\"task\"]\n", "print(\"Task ID:\", task_id)\n", "check_task_status(esclient, task_id)" ] }, { "cell_type": "markdown", "metadata": { "id": "xJBDwRmDQq4n" }, "source": [ "## Custom Search Query Construction and Execution\n", "\n", "This section constructs and executes a custom search query in Elasticsearch, utilizing a hybrid approach combining vector and text-based search methods to enhance search accuracy and relevance. The specific example used is a user query about the \"Nimbus 2000\".\n", "\n", "### Key Steps:\n", "1. **Define User Query**: The user query is specified as \"what is a nimbus 2000\".\n", "2. **Set Boost Factors**:\n", " - `knn_boost_factor`: A value to amplify the importance of the vector-based search component.\n", " - `text_expansion_boost`: A value to modify the weight of the text-based search component.\n", "3. **Build Query**: The `build_custom_query` function constructs the search query, incorporating both dense vector and text expansion components.\n", "4. **Execute Search**: The query is executed against the specified Elasticsearch index.\n", "5. **Identify Relevant Passages**:\n", " - The search results are analyzed to find the passage with the highest relevance score.\n", " - The ID and chunk number of the best matching passage are captured and printed.\n", "6. **Fetch Surrounding Chunks**: Constructs and executes a query to retrieve chunks adjacent to the identified passage for broader context. If the matched chunk is the first chunk, fetches n, n+1, and n+2. If the chunk is the last chunk in the chapter, fetches n, n-1, and n-2. For other chunks, fetches n-1, n, and n+1.\n", "7. **Display Results**: Outputs text from the relevant and adjacent passages." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "u7NFZBRJO3t7", "outputId": "01d444cf-17f6-40c1-f5af-e24db219e581" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Matched Chunk ID: rV8Y5Y8BQsZxvNJ9cO4t, Chunk Number: 3, Text:\n", "t speaking to us? \" said harry. \" yes, don't stop now, \" said ron, \" it's doing us so much good. \" hermione marched away with her nose in the air. harry had a lot of trouble keeping his mind on his\n", "lessons that day. it kept wandering up to the dormitory where his new broomstick was lying under his bed, or straying off to the quidditch field where he'd be learning to play that night. he bolted\n", "his dinner that evening without noticing what he was eating, and then rushed upstairs with ron to unwrap the nimbus two thousand at last. \" wow, \" ron sighed, as the broomstick rolled onto harry's\n", "bedspread. 
even harry, who knew nothing about the different brooms, thought it looked wonderful. sleek and shiny, with a mahogany handle, it had a long tail of neat, straight twigs and nimbus two\n", "thousand written in gold near the top. as seven o'clock drew nearer, harry left the castle and set off in the dusk toward the quidditch field. held never been inside the stadium before. hundreds of\n", "seats were raised in stands around the field so that the spectators were high enough to see what was going on. at either end of the field were three golden poles with hoops on the end. they reminded\n", "harry of the little plastic sticks muggle children blew bubbles through, except that they were fifty feet high. too eager to fly again to wait for wood, harry mounted his broomstick and kicked off\n", "from the ground. what a feeling - - he swooped in and out of the goal posts and then sped up and down the field. the nimbus two thousand turned wherever he wanted at his lightest touch. \" hey, potter,\n", "come down!'oliver wood had arrived. fie was carrying a large wooden crate under his arm. harry landed next to him. \" very nice, \" said wood, his eyes glinting. \" i see what mcgonagall meant... you\n", "really are a natural. i'm just going to teach you the rules this evening, then you'll be joining team practice three times a week. \" he opened the crate. inside were four different - sized balls. \"\n", "right, \" said wood. \" now, quidditch is easy enough to understand, even if it's not too easy to play. there are seven players on each side.\n", "\n", "\n", "Fetch Surrounding Chunks\n", "------------------------\n", "\n", "\n", "Text from Chunk 2: ##wrap the broomstick in private before their first class, but halfway across the entrance hall they found the way upstairs barred by crabbe and goyle. malfoy seized the package from harry and felt\n", "it. \" that's a broomstick, \" he said, throwing it back to harry with a mixture of jealousy and spite on his face. \" you'll be in for it this time, potter, first years aren't allowed them. \" ron\n", "couldn't resist it. \" it's not any old broomstick, \" he said, \" it's a nimbus two thousand. what did you say you've got at home, malfoy, a comet two sixty? \" ron grinned at harry. \" comets look\n", "flashy, but they're not in the same league as the nimbus. \" \" what would you know about it, weasley, you couldn't afford half the handle, \" malfoy snapped back. \" i suppose you and your brothers have\n", "to save up twig by twig. \" before ron could answer, professor flitwick appeared at malfoy's elbow. \" not arguing, i hope, boys? \" he squeaked. \" potter's been sent a broomstick, professor, \" said\n", "malfoy quickly. \" yes, yes, that's right, \" said professor flitwick, beaming at harry. \" professor mcgonagall told me all about the special circumstances, potter. and what model is it? \" \" a nimbus\n", "two thousand, sit, \" said harry, fighting not to laugh at the look of horror on malfoy's face. \" and it's really thanks to malfoy here that i've got it, \" he added. harry and ron headed upstairs,\n", "smothering their laughter at malfoy's obvious rage and confusion. \" well, it's true, \" harry chortled as they reached the top of the marble staircase, \" if he hadn't stolen neville's remembrall i\n", "wouln't be on the team.... \" \" so i suppose you think that's a reward for breaking rules? \" came an angry voice from just behind them. hermione was stomping up the stairs, looking disapprovingly at\n", "the package in harry's hand. 
\" i thought you weren '\n", "\n", "\n", "Text from Chunk 3: t speaking to us? \" said harry. \" yes, don't stop now, \" said ron, \" it's doing us so much good. \" hermione marched away with her nose in the air. harry had a lot of trouble keeping his mind on his\n", "lessons that day. it kept wandering up to the dormitory where his new broomstick was lying under his bed, or straying off to the quidditch field where he'd be learning to play that night. he bolted\n", "his dinner that evening without noticing what he was eating, and then rushed upstairs with ron to unwrap the nimbus two thousand at last. \" wow, \" ron sighed, as the broomstick rolled onto harry's\n", "bedspread. even harry, who knew nothing about the different brooms, thought it looked wonderful. sleek and shiny, with a mahogany handle, it had a long tail of neat, straight twigs and nimbus two\n", "thousand written in gold near the top. as seven o'clock drew nearer, harry left the castle and set off in the dusk toward the quidditch field. held never been inside the stadium before. hundreds of\n", "seats were raised in stands around the field so that the spectators were high enough to see what was going on. at either end of the field were three golden poles with hoops on the end. they reminded\n", "harry of the little plastic sticks muggle children blew bubbles through, except that they were fifty feet high. too eager to fly again to wait for wood, harry mounted his broomstick and kicked off\n", "from the ground. what a feeling - - he swooped in and out of the goal posts and then sped up and down the field. the nimbus two thousand turned wherever he wanted at his lightest touch. \" hey, potter,\n", "come down!'oliver wood had arrived. fie was carrying a large wooden crate under his arm. harry landed next to him. \" very nice, \" said wood, his eyes glinting. \" i see what mcgonagall meant... you\n", "really are a natural. i'm just going to teach you the rules this evening, then you'll be joining team practice three times a week. \" he opened the crate. inside were four different - sized balls. \"\n", "right, \" said wood. \" now, quidditch is easy enough to understand, even if it's not too easy to play. there are seven players on each side.\n", "\n", "\n", "Text from Chunk 4: three of them are called chasers. \" \" three chasers, \" harry repeated, as wood took out a bright red ball about the size of a soccer ball. \" this ball's called the quaffle, \" said wood. \" the chasers\n", "throw the quaffle to each other and try and get it through one of the hoops to score a goal. ten points every time the quaffle goes through one of the hoops. follow me? \" \" the chasers throw the\n", "quaffle and put it through the hoops to score, \" harry recited. \" so - - that's sort of like basketball on broomsticks with six hoops, isn't it? \" \" what's basketball? \" said wood curiously. \" never\n", "mind, \" said harry quickly. \" now, there's another player on each side who's called the keeper - i'm keeper for gryffindor. i have to fly around our hoops and stop the other team from scoring. \" \"\n", "three chasers, one keeper, \" said harry, who was determined to remember it all. \" and they play with the quaffle. okay, got that. so what are they for? \" he pointed at the three balls left inside the\n", "box. \" i'll show you now, \" said wood. \" take this. \" he handed harry a small club, a bit like a short baseball bat. \" i'm going to show you what the bludgers do, \" wood said. \" these two are the\n", "bludgers. 
\" he showed harry two identical balls, jet black and slightly smaller than the red quaffle. harry noticed that they seemed to be straining to escape the straps holding them inside the box. \"\n", "stand back, \" wood warned harry. he bent down and freed one of the bludgers. at once, the black ball rose high in the air and then pelted straight at harry's face. harry swung at it with the bat to\n", "stop it from breaking his nose, and sent it zigzagging away into the air - - it zoomed around their heads and then shot at wood, who dived on top of it and managed to pin it to the ground. \" see? \"\n", "wood panted, forcing the struggling bludger back into the crate and strapping it down safely. \" the bludgers rocket around, trying to knock players off their\n" ] } ], "source": [ "# Custom Search Query Construction\n", "user_query = \"what is a nimbus 2000\"\n", "\n", "\n", "knn_boost_factor = 20\n", "text_expansion_boost = 1\n", "query = build_custom_query(\n", " build_vector(user_query),\n", " user_query,\n", " knn_boost_factor,\n", " text_expansion_boost,\n", " debug=False,\n", ")\n", "\n", "# Searching and identifying relevant passages\n", "results = esclient.search(index=index_name, body=query, _source=False)\n", "\n", "hit_id = None\n", "chunk_number = None\n", "chapter_number = None\n", "max_chunk_number = None\n", "max_chapter_chunk_result = None\n", "max_chunk_query = None\n", "\n", "\n", "if results and results.get(\"hits\") and results[\"hits\"].get(\"hits\"):\n", " highest_score = -1\n", " best_hit = None\n", " hit_id = results[\"hits\"][\"hits\"][0][\"_id\"]\n", " chapter_number = results[\"hits\"][\"hits\"][0][\"fields\"][\"chapter\"][0]\n", " if \"inner_hits\" in results[\"hits\"][\"hits\"][0]:\n", " for hit_type in [\"text_hits\", \"dense_hit\", \"sparse_hits\"]:\n", " if hit_type in results[\"hits\"][\"hits\"][0][\"inner_hits\"]:\n", " inner_hit = results[\"hits\"][\"hits\"][0][\"inner_hits\"][hit_type][\"hits\"]\n", " if inner_hit[\"hits\"]:\n", " max_score = inner_hit[\"max_score\"]\n", " if max_score and max_score > highest_score:\n", " highest_score = max_score\n", " best_hit = inner_hit[\"hits\"][0]\n", "\n", " if best_hit:\n", " first_passage_text = best_hit[\"_source\"][\"text\"]\n", " chunk_number = best_hit[\"_source\"][\"chunk_number\"]\n", " # print(f\"Matched Chunk ID: {hit_id}, Chunk Number: {chunk_number}, Text: {first_passage_text}\")\n", " print(\n", " f\"Matched Chunk ID: {hit_id}, Chunk Number: {chunk_number}, Text:\\n{textwrap.fill(first_passage_text, width=200)}\"\n", " )\n", " print(f\"\\n\")\n", " else:\n", " print(f\"ID: {hit_id}, No relevant passages found.\")\n", "else:\n", " print(\"No results found.\")\n", "\n", "# Fetch Surrounding Chunks if chapter_number is not None\n", "if chapter_number is not None:\n", " print(f\"Fetch Surrounding Chunks\")\n", " print(f\"------------------------\")\n", "\n", " # max_chunk_query = get_max_chunk_number_query(chapter_number, debug=False)\n", " # max_chapter_chunk_result = esclient.search(index=index_name, body=max_chunk_query, _source=False)\n", " max_chapter_chunk_result = esclient.search(\n", " index=index_name,\n", " body=get_max_chunk_number_query(chapter_number, debug=False),\n", " _source=False,\n", " )\n", " max_chunk_number = max_chapter_chunk_result[\"aggregations\"][\"max_chunk_number\"][\n", " \"max_chunk\"\n", " ][\"value\"]\n", "\n", " adjacent_chunks_query = get_adjacent_chunks_query(\n", " hit_id, chunk_number, max_chunk_number, debug=False\n", " )\n", " results = esclient.search(\n", " 
index=index_name, body=adjacent_chunks_query, _source=False\n", " )\n", " print_text_from_results(results)\n", "else:\n", " print(\"Skipping fetch of surrounding chunks due to no initial results.\")\n", "\n", "\n", "# max_chapter_chunk_result = esclient.search(index=index_name, body=get_max_chunk_number_query(chapter_number, debug=False), _source=False)" ] }
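, { "cell_type": "markdown", "metadata": {}, "source": [ "## Optional: the same hybrid search with RRF\n", "\n", "The helper `build_rrf_query` defined earlier combines the same three sub-searches (text match, dense kNN, and ELSER text expansion) with reciprocal rank fusion instead of manually tuned boosts. A minimal sketch of running it (the rank constant and window size are illustrative values, not tuned recommendations):\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Hybrid search via reciprocal rank fusion (RRF); the constants are illustrative.\n", "rrf_query = build_rrf_query(\n", " build_vector(user_query), user_query, rrf_rank_constant=60, rrf_window_size=50\n", ")\n", "rrf_results = esclient.search(index=index_name, body=rrf_query, _source=False)\n", "if rrf_results[\"hits\"][\"hits\"]:\n", " print(\"Top hit ID:\", rrf_results[\"hits\"][\"hits\"][0][\"_id\"])" ] } ], "metadata": { "colab": { "provenance": [] }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 0 }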