{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Text Classification for Sentiment Analysis on IPU using Multiple NLP Models - Inference\n",
"\n",
"Integration of the Graphcore Intelligence Processing Unit (IPU) and the [🤗 Transformers library](https://huggingface.co/docs/transformers/index) means that it only takes a few lines of code to perform complex tasks which require deep learning.\n",
"\n",
"In this notebook, we perform sentiment analysis by using natural language processing models to classify text prompts. \n",
"We follow [this blog post by Federico Pascual](https://huggingface.co/blog/sentiment-analysis-python) and test five different models available on 🤗 Hub to highlight different properties of the models that can be used for downstream tasks.\n",
"\n",
"The ease of use of the `pipeline` interface lets us quickly experiment with the pre-trained models and identify which will work best.\n",
"This simple interface means that it is extremely easy to access the fast inference performance of the IPU on your application.\n",
"\n",
"<img src=\"images/text_classification.png\" alt=\"Widget inference on a text classification task\" style=\"width:500px;\">\n",
"\n",
"While this notebook is focused on using the model (inference), our Text Classification `other-use-cases/text_classification.ipynb` notebook will show you how to fine-tune a model for a specific task using the [🤗 Datasets library](https://huggingface.co/docs/datasets/index)."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"| Domain | Tasks | Model | Datasets | Workflow | Number of IPUs | Execution time |\n",
"|---------|-------|-------|----------|----------|--------------|--------------|\n",
"| Natural language processing | Sentiment analysis |distilbert-base-uncased | - |Inference | 4 | ~4min |\n",
"\n",
"[](https://www.graphcore.ai/join-community)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Environment setup\n",
"\n",
"The best way to run this demo is on Paperspace Gradient's cloud IPUs because everything is already set up for you.\n",
"\n",
"[](https://ipu.dev/3X5wL0a)\n",
"\n",
"To run the demo using other IPU hardware, you need to have the Poplar SDK enabled and a Python virtual environment with the PopTorch wheel installed. Refer to the [Getting Started guide](https://docs.graphcore.ai/en/latest/getting-started.html#getting-started) for your system for details. Also refer to the [Jupyter Quick Start guide](https://docs.graphcore.ai/projects/jupyter-notebook-quick-start/en/latest/index.html) for how to set up Jupyter to be able to run this notebook on a remote IPU machine."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Dependencies and configuration\n",
"\n",
"In order to improve usability and support for future users, Graphcore would like to collect information about the\n",
"applications and code being run in this notebook. The following information will be anonymised before being sent to Graphcore:\n",
"\n",
"- User progression through the notebook\n",
"- Notebook details: number of cells, code being run and the output of the cells\n",
"- Environment details\n",
"\n",
"You can disable logging at any time by running `%unload_ext graphcore_cloud_tools.notebook_logging.gc_logger` from any cell."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Install the dependencies for this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install \"optimum-graphcore==0.7\"\n",
"%pip install emoji==0.6.0\n",
"%pip install graphcore-cloud-tools[logger]@git+https://github.com/graphcore/graphcore-cloud-tools\n",
"%load_ext graphcore_cloud_tools.notebook_logging.gc_logger"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The value for cache directories can be configured through environment variables or directly in the notebook:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"executable_cache_dir = os.getenv(\"POPLAR_EXECUTABLE_CACHE_DIR\", \"./exe_cache/\") + \"/sentiment_analysis\""
]
},
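{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We also create the cache directory up front if it does not already exist. This is a small convenience sketch for local setups, not a requirement of the SDK:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create the cache directory if it is missing (no-op if it already exists)\n",
"os.makedirs(executable_cache_dir, exist_ok=True)"
]
},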
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using the transformers pipelines on the IPU\n",
"\n",
"The simplest way to use a model on the IPU is to use the `pipeline` function. It provides a set of models which have been validated to work on a given task. To get started, choose the task and call the `pipeline` function:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from optimum.graphcore import pipelines\n",
"sentiment_pipeline = pipelines.pipeline(\"sentiment-analysis\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The pipeline defaults to `distilbert-base-uncased-finetuned-sst-2-english`, a checkpoint managed by 🤗 because we did not provide a specific model.\n",
"We are helpfully warned that we should explicitly specify a maximum sequence length through the `max_length` argument if we were to put this model in production, but since we are experimenting we will leave it as is.\n",
"\n",
"Now it's time to test our first prompts. Let's start with some very easy-to-classify text:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"simple_test_data = [\"I love you\", \"I hate you\"]\n",
"sentiment_pipeline(simple_test_data)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Reassuringly, the model got it right! And with a high degree of confidence, more than 99.9% in both cases.\n",
"\n",
"The first call to the pipeline was a bit slow, it took several seconds to provide the answer. This behaviour is due to compilation of the model which happens on the first call.\n",
"On subsequent prompts it is much faster:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sentiment_pipeline(simple_test_data)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that was much faster! We can use the `%%timeit` cell magic to check how fast:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%timeit\n",
"sentiment_pipeline(simple_test_data)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"It takes on the order of ~1ms per prompt, which is fast!"
]
},
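{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"To put our own number on the per-prompt latency, here is a minimal sketch using `time.perf_counter`; the repetition count is an arbitrary choice for illustration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"# Average the wall-clock time per prompt over many repeated calls\n",
"n_repeats = 100\n",
"start = time.perf_counter()\n",
"for _ in range(n_repeats):\n",
"    sentiment_pipeline(simple_test_data)\n",
"elapsed = time.perf_counter() - start\n",
"print(f\"~{elapsed / (n_repeats * len(simple_test_data)) * 1000:.2f} ms per prompt\")"
]
},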
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Other tasks supported by IPU pipelines\n",
"\n",
"This simple interface provides access to a number of other tasks which can be listed through the `list_tasks` function:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipelines.list_tasks()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Customising the IPU resources\n",
"\n",
"Depending on your system you may have 4, 16 or 64 IPUs available. IPUs are designed from the ground up to make it easy to scale applications to large numbers of processors working together. However, in this case that is not needed and 1 IPU is sufficient.\n",
"\n",
"We're going to make sure that we are using a single IPU so that other users or other applications that we are running in the background are not affected. To do that, we define an `inference_config` dictionary which contains arguments that will be passed to the `optimum.graphcore.IPUConfig` object which controls the accelerator. \n",
"\n",
"We recreate our pipeline with the new settings:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inference_config = dict(layers_per_ipu=[40], ipus_per_replica=1, enable_half_partials=True,\n",
" executable_cache_dir=executable_cache_dir)\n",
"sentiment_pipeline = pipelines.pipeline(\"sentiment-analysis\", ipu_config=inference_config)\n",
"data = [\"I love you\", \"I hate you\"]\n",
"sentiment_pipeline(simple_test_data)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"It still works as expected.\n"
]
},
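{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The dictionary maps directly onto `optimum.graphcore.IPUConfig`, so we can also build the configuration object ourselves to inspect the settings the pipeline will use. A minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from optimum.graphcore import IPUConfig\n",
"\n",
"# Build the IPUConfig corresponding to our dictionary of arguments\n",
"ipu_config = IPUConfig(**inference_config)\n",
"print(ipu_config.layers_per_ipu, ipu_config.ipus_per_replica)"
]
},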
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Asking more complex questions\n",
"\n",
"Our initial prompts were trivial to classify. What if we asked the pipeline to classify more ambiguous sentences?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ambiguous_prompts = [\n",
" \"How are you today?\", \n",
" \"I'm a little tired, I didn't sleep well, but I hope it gets better\"\n",
"]"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The first sentence is perfectly neutral, while the second is a mix of negative and positive sentiment.\n",
"A good answer from the model would reflect the ambiguous nature of the prompt."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sentiment_pipeline(ambiguous_prompts)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The model only supports two labels: \"POSITIVE\" and \"NEGATIVE\"; neither of which really captures the sentiment of those messages.\n",
"We see a slight drop in the confidence of the model, but it does not feel sufficient to reflect the messages.\n",
"\n",
"The imprecise classification of these prompts would affect any downstream task: if we were trying to derive some insights from the model on customer satisfaction we may have an overly optimistic view of performance and derive the wrong conclusions.\n",
"\n",
"To resolve this issue we need to try more models, which accommodate finer grained classification."
]
},
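{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Before moving on, one way to surface this uncertainty with the current model is to ask the pipeline for the score of every class instead of only the winning label. A minimal sketch, assuming the `top_k` argument of the underlying 🤗 text-classification pipeline is passed through by the IPU pipeline:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# top_k=None requests the scores for all labels, not just the best one\n",
"sentiment_pipeline(ambiguous_prompts, top_k=None)"
]
},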
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Trying more models\n",
"\n",
"[The blog post](https://huggingface.co/blog/sentiment-analysis-python) we are following suggests a number of other models. Let's try them all to see if they perform better on our ambiguous prompts!\n",
"\n",
"The first one is `finiteautomata/bertweet-base-sentiment-analysis` a [RoBERTa model trained on 40 thousand tweets](https://huggingface.co/finiteautomata/bertweet-base-sentiment-analysis) collected before 2018. Using it is as simple as giving the name of the 🤗 Hub repository as the model argument:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tweet18_pipeline = pipelines.pipeline(\n",
" \"sentiment-analysis\",\n",
" model=\"finiteautomata/bertweet-base-sentiment-analysis\", ipu_config=inference_config,\n",
")\n",
"print(simple_test_data)\n",
"tweet18_pipeline(simple_test_data)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Unsurprisingly, the model correctly classifies our simple prompt. Now let's try our more ambiguous prompt. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tweet18_pipeline([\"How are you today?\", \"I'm a little tired, I didn't sleep well, but I hope it gets better\"])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"That is much better: the model classifies the first prompt as neutral and the second as negative. The addition of a \"NEU\" (neutral) class gives the model the flexibility to correctly identify statements which do not fit as positive or negative.\n",
"\n",
"The challenge of the second prompt is that it has multiple clauses which capture different sentiments. To get a better result on it you might separate it out into multiple prompts that are better suited to the model. For example we can split on `,` to classify each clause of the sentence on its own:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"split_prompts_answer = tweet18_pipeline([\n",
" \"How are you today?\", \n",
" *\"I'm a little tired, I didn't sleep well, but I hope it gets better\".split(\",\"),\n",
"])\n",
"split_prompts_answer"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Here each part of the sentence is correctly classified. How you choose to process those sentence parts will depend on what you need to use the results of the sentiment analysis for.\n",
"\n",
"We can make small changes to the prompt to get a feeling for how the model responds to changes in grammar. Below, the last part of the ambiguous sentence is changed to be more optimistic:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(f\"Previous score: {split_prompts_answer[-1]}\")\n",
"\n",
"tweet18_pipeline([\"but it is getting better\"])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As a consequence of that change, the score associated with the positive label has gone up, matching the desired behaviour of the model."
]
},
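{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Returning to the split prompts for a moment: how you combine clause-level predictions depends on your downstream task. As one illustration, here is a minimal sketch that averages score-weighted labels; the weighting scheme is an arbitrary choice for this example, not part of the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical scheme: map each label to a signed weight\n",
"label_weights = {\"POS\": 1.0, \"NEU\": 0.0, \"NEG\": -1.0}\n",
"\n",
"def overall_sentiment(predictions):\n",
"    # Average of score-weighted labels: >0 leans positive, <0 negative\n",
"    return sum(label_weights[p[\"label\"]] * p[\"score\"] for p in predictions) / len(predictions)\n",
"\n",
"print(f\"Overall sentiment: {overall_sentiment(split_prompts_answer):.2f}\")"
]
},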
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### A model fine-tuned on tweets\n",
"\n",
"The next model discussed in the blog post is the [`cardiffnlp/twitter-roberta-base-sentiment-latest`](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) model, it is a RoBERTa-Base which was trained on 124M tweets collected between 2018 and 2021. This data makes the model much more recent than the previous pre-trained checkpoint.\n",
"\n",
"As before this model is trivial to load through the pipeline API:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pprint import pprint\n",
"tweet21_pipeline = pipelines.pipeline(\n",
" model=\"cardiffnlp/twitter-roberta-base-sentiment-latest\", ipu_config=inference_config\n",
")\n",
"out = tweet21_pipeline(simple_test_data + ambiguous_prompts)\n",
"# print prompts and predicted labels side by side\n",
"pprint([*zip(simple_test_data + ambiguous_prompts, out)])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"This model performs similarly, on these prompts, to the model explored in the previous section. Differences in the model may not be apparent until we start prompting about recent events.\n",
"\n",
"If we ask the previous model to classify a statement about the Coronavirus pandemic we get different results between the models:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"coronavirus_prompt = [\"Coronavirus is increasing\"]\n",
"old = tweet18_pipeline(coronavirus_prompt)\n",
"new = tweet21_pipeline(coronavirus_prompt)\n",
"print(f\"Older model score: {old}\")\n",
"print(f\"Newer model score: {new}\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The newer model has a strong negative connotation for Coronavirus while the older model sees it as a neutral statement. This simple experiment shows the importance of testing and fine-tuning models regularly to make sure that sentiment analysis continues to be accurate as connotations of certain words evolve."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We are done using these pipelines in the rest of the notebook so we detach from the IPU devices:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!gc-monitor --no-card-info | grep ${os.getpid()}\n",
"tweet18_pipeline.model.detachFromDevice()\n",
"tweet21_pipeline.model.detachFromDevice()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"This will allow us to test additional models. For more details on managing IPU resources from a notebook you can consult our [notebook on managing IPU resources](managing_ipu_resources.ipynb)."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Multi-lingual model\n",
"\n",
"The next model has an interesting feature: it is multi-lingual. It was trained on a dataset of English, Dutch, German, Spanish, Italian and French text, it can be prompted in any of these languages and should correctly classify the text inputs.\n",
"\n",
"The model is `nlptown/bert-base-multilingual-uncased-sentiment` a BERT checkpoint fine-tuned on ~700k reviews in six languages, which gives a rating between 1 and 5 stars for each prompt. 1 star indicates a very negative sentiment, while 5 stars correspond to a very positive text."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_name = \"nlptown/bert-base-multilingual-uncased-sentiment\"\n",
"multilingual_pipeline = pipelines.pipeline(\n",
" model=model_name, ipu_config=inference_config\n",
")\n",
"print(simple_test_data)\n",
"multilingual_pipeline(simple_test_data)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"It successfully classifies our simple input. Now let's see how it fares with our ambiguous input:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"multilingual_pipeline([\n",
" \"How are you today?\", # How are you today?\n",
" \"I'm a little tired, I didn't sleep well, but I hope it gets better\"\n",
" # \"I'm a little tired, I didn't sleep well, but I hope it gets better\n",
"])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"While it is a bit optimistic about our first prompt, its guess is given with a fairly low confidence, and it identifies the second prompt as neutral with a median score of 3.\n",
"\n",
"Now let's translate our prompts and ask it the same questions in French:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ambiguous_in_french = [\n",
" \"Comment vas-tu aujourd'hui?\",\n",
" \"Je suis un peu fatigue, je n'ai pas bien dormi mais j'espere que la journee s'ameliore\",\n",
"]\n",
"multilingual_pipeline(ambiguous_in_french)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The model gives very similar results in both languages!\n",
"\n",
"Now let us revisit our first model, could it also work with this multi-lingual input?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(sentiment_pipeline(ambiguous_prompts))\n",
"print(sentiment_pipeline(ambiguous_in_french))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Unsurprisingly it cannot, its prediction are not stable across the two languages. However, this model did not perform very well on the ambiguous prompts. If we re-use some of the more precise English-only models and run them on more obvious prompts: "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"simple_french_input = [\n",
" \"Ce film est excellent\", # This film is excellent\n",
" \"Le produit ne marche pas du tout\" # The tool does not work at all\n",
"]\n",
"print(tweet21_pipeline(simple_french_input))\n",
"print(multilingual_pipeline(simple_french_input))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In this case we see the multi-lingual model correctly predicts strongly positive and negative labels, while the other model predicts a positive message (correct) and a neutral (expected LABEL_0)."
]
},
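{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"If a downstream task needs a numeric rating rather than a label, the star labels are easy to parse. A minimal sketch, assuming the checkpoint's labels follow the `N star(s)` format shown on its model card:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def stars(prediction):\n",
"    # \"4 stars\" -> 4 (assumes the label starts with the digit)\n",
"    return int(prediction[\"label\"].split()[0])\n",
"\n",
"for text, pred in zip(ambiguous_in_french, multilingual_pipeline(ambiguous_in_french)):\n",
"    print(f\"{stars(pred)} stars: {text}\")"
]
},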
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We detach from the pipelines we are no longer using."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tweet21_pipeline.model.detachFromDevice()\n",
"multilingual_pipeline.model.detachFromDevice()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Other models\n",
"\n",
"Models can be fine-tuned to extract different classes from text. The `bhadresh-savani/distilbert-base-uncased-emotion` checkpoint is a DistilBERT checkpoint tuned to identify the emotion associated with a prompt:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_name = \"bhadresh-savani/distilbert-base-uncased-emotion\"\n",
"emotion_pipeline = pipelines.pipeline(model=model_name, ipu_config=inference_config)\n",
"emotion_pipeline(simple_test_data)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"This model can be prompted with sentences which include different emotions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"emotion_pipeline([\n",
" \"How are you today?\", \n",
" \"Don't make me go out, it's too cold!\",\n",
" \"What is happening, I don't understand\",\n",
" \"Where did you come from?\",\n",
"])\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We detach from the remaining pipelines."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sentiment_pipeline.model.detachFromDevice()\n",
"emotion_pipeline.model.detachFromDevice()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using a larger model\n",
"\n",
"The optimum library supports several sizes of models for many of the standard architectures, in this section we load a checkpoint which uses roBERTa large to perform the same task.\n",
"\n",
"Larger models will take longer to execute but may provide better predictions in a broader range of cases. As an example we load the [`j-hartmann/sentiment-roberta-large-english-3-classes`](https://huggingface.co/j-hartmann/sentiment-roberta-large-english-3-classes) checkpoint which uses the roBERTa-Large model trained on 5000 manually annotated social media posts. \n",
"\n",
"As this model is too large to fit on a single IPU, we shard it over 2 IPUs (model parallelism), placing 12 layers on each IPU."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inference_config.update(layers_per_ipu=[12,12], ipus_per_replica=2)\n",
"larger_pipeline = pipelines.pipeline(\n",
" \"sentiment-analysis\",\n",
" \"j-hartmann/sentiment-roberta-large-english-3-classes\",\n",
" ipu_config=inference_config\n",
")\n",
"larger_pipeline(simple_test_data)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"As before, the model succeeds on our simple example. We can check the latency of that model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%timeit\n",
"larger_pipeline(simple_test_data)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The model takes about 4.5 ms to execute per prompt. As expected this is slower than the earlier pipelines which used a smaller model."
]
},
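{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's also check how the larger model handles our more ambiguous prompts:"
]
},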
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"larger_pipeline([\"How are you today?\", \"I'm a little tired, I didn't sleep well, but I hope it gets better\"])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We detach from this pipeline as we will no longer be using it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"larger_pipeline.model.detachFromDevice()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"This notebook has followed the steps of the blog post [Getting Started with Sentiment Analysis using Python](https://huggingface.co/blog/sentiment-analysis-python), showing how easy it is to use IPUs for sentiment analysis.\n",
"The integration with the `pipeline` interface makes the thousands of models available on the 🤗 Hub easy to use on the IPU. While following the blog post, we experimented with six different checkpoints, testing the properties of the different models.\n",
"\n",
"There are [hundreds more models available on the Hub](https://huggingface.co/models?pipeline_tag=text-classification&sort=downloads&search=sentiment). Try them out. We think they'll work but, if you hit an error, [raise an issue or open a pull request](https://github.com/huggingface/optimum-graphcore/issues) and we'll do our best to fix it! \n",
"If you have your own dataset, the text classification notebook `other-use-cases/text_classification.ipynb` will show you how to fine-tune a classification model tailored to your use case."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You may also want to check out one of the other tasks available through the `pipeline` API:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipelines.list_tasks()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.6.9 ('3.0.0+1145_poptorch')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
},
"vscode": {
"interpreter": {
"hash": "9409b80169a82c0207afe9a460d7f88a38094a708839df55a31910312ecdb1ee"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}