notebooks/zh-CN/analyzing_art_with_hf_and_fiftyone.ipynb

{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 分析艺术风格与多模态嵌入" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*作者: [Jacob Marks](https://huggingface.co/jamarks)*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Art Analysis Cover Image](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/art_analysis_cover_image.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**图像等视觉数据蕴含着丰富的信息,但其非结构化的特性使得分析变得困难。**\n", "\n", "在这个 Notebook 中,我们将探索如何使用多模态嵌入和计算属性来分析图像中的艺术风格。我们将使用来自 🤗 Hub 的 [WikiArt 数据集](https://huggingface.co/datasets/huggan/wikiart),并将其加载到 **FiftyOne** 中进行数据分析和可视化。我们将以多种方式深入分析数据:\n", "\n", "- **图像相似性搜索与语义搜索**:我们将使用来自 🤗 Transformers 的预训练 [CLIP](https://huggingface.co/openai/clip-vit-base-patch32) 模型为数据集中的图像生成多模态嵌入,并为数据建立索引以进行非结构化搜索。\n", "\n", "- **聚类与可视化**:我们将使用嵌入按艺术风格对图像进行聚类,并使用 UMAP 降维算法可视化结果。\n", "\n", "- **独特性分析**:我们将根据每张图像与数据集中其他图像的相似程度,使用嵌入为其分配一个独特性评分。\n", "\n", "- **图像质量分析**:我们将计算每张图像的亮度、对比度和饱和度等质量指标,并查看这些指标与图像的艺术风格如何相关。\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 让我们开始吧! 🚀" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "为了运行这个 Notebook,你需要安装下面的库:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install -U transformers huggingface_hub fiftyone umap-learn" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "为了让下载更快、更轻量,可以安装 [HF Transfer](https://pypi.org/project/hf-transfer/):\n", "\n", "```bash\n", "pip install hf-transfer\n", "```\n", "\n", "并且通过设置环境变量 `HF_HUB_ENABLE_HF_TRANSFER` 来启用它:\n", "\n", "```python\n", "import os\n", "os.environ[\"HF_HUB_ENABLE_HF_TRANSFER\"] = \"1\"\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<div class=\"alert alert-block alert-info\">\n", "<b>注意:</b> 本 Notebook 在以下版本下进行了测试:<code>transformers==4.40.0</code>、<code>huggingface_hub==0.22.2</code> 和 <code>fiftyone==0.23.8</code>。\n", "</div>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "现在我们来导入本 Notebook 中需要的模块:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import fiftyone as fo # base library and app\n", "import fiftyone.zoo as foz # zoo datasets and models\n", "import fiftyone.brain as fob # ML routines\n", "from fiftyone import ViewField as F # for defining custom views\n", "import fiftyone.utils.huggingface as fouh # for loading datasets from Hugging Face" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "我们首先从 🤗 Hub 将 WikiArt 数据集加载到 **FiftyOne** 中。这个数据集也可以通过 Hugging Face 的 `datasets` 库加载,但我们将使用 [FiftyOne 的 🤗 Hub 集成](https://docs.voxel51.com/integrations/huggingface.html#huggingface-hub) 直接从 Datasets 服务器获取数据。为了加速计算,我们只下载前 1,000 个样本。" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "dataset = fouh.load_from_hub(\n", " \"huggan/wikiart\", ## repo_id\n", " format=\"parquet\", ## for Parquet format\n", " classification_fields=[\"artist\", \"style\", \"genre\"], # columns to store as classification fields\n", " max_samples=1000, # number of samples to load\n", " name=\"wikiart\", # name of the dataset in FiftyOne\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "打印数据集的摘要,以查看其包含的内容:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Name: wikiart\n", "Media type: image\n", "Num samples: 1000\n", "Persistent: False\n", "Tags: []\n", "Sample fields:\n",
 " id: fiftyone.core.fields.ObjectIdField\n", " filepath: fiftyone.core.fields.StringField\n", " tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)\n", " metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.ImageMetadata)\n", " artist: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)\n", " style: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)\n", " genre: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Classification)\n", " row_idx: fiftyone.core.fields.IntField\n" ] } ], "source": [ "print(dataset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "在 [FiftyOne 应用](https://docs.voxel51.com/user_guide/app.html) 中可视化数据集:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "session = fo.launch_app(dataset)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![WikiArt Dataset](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/art_analysis_wikiart_dataset.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "让我们列出将要分析其风格的艺术家名字:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['Unknown Artist', 'albrecht-durer', 'boris-kustodiev', 'camille-pissarro', 'childe-hassam', 'claude-monet', 'edgar-degas', 'eugene-boudin', 'gustave-dore', 'ilya-repin', 'ivan-aivazovsky', 'ivan-shishkin', 'john-singer-sargent', 'marc-chagall', 'martiros-saryan', 'nicholas-roerich', 'pablo-picasso', 'paul-cezanne', 'pierre-auguste-renoir', 'pyotr-konchalovsky', 'raphael-kirchner', 'rembrandt', 'salvador-dali', 'vincent-van-gogh']\n" ] } ], "source": [ "artists = dataset.distinct(\"artist.label\")\n", "print(artists)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 查找相似的艺术作品" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "当你发现一件喜欢的艺术作品时,想要找到相似的作品是很自然的。我们可以通过向量嵌入来实现这一点!更重要的是,借助多模态嵌入,我们还能根据给定的文本查询(例如对一幅画的描述,甚至是一首诗)找到与之相似的画作。\n", "\n", "让我们使用来自 🤗 Transformers 的预训练 **CLIP Vision Transformer (ViT)** 模型为图像生成多模态嵌入。运行 [FiftyOne Brain](https://docs.voxel51.com/user_guide/brain.html) 中的 `compute_similarity()` 函数会计算这些嵌入,并基于它们为数据集生成相似性索引。" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Computing embeddings...\n", " 100% |███████████████| 1000/1000 [5.0m elapsed, 0s remaining, 3.3 samples/s] \n" ] }, { "data": { "text/plain": [ "<fiftyone.brain.internal.core.sklearn.SklearnSimilarityIndex at 0x2ad67ecd0>" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fob.compute_similarity(\n", " dataset, \n", " model=\"zero-shot-classification-transformer-torch\", ## type of model to load from model zoo\n", " name_or_path=\"openai/clip-vit-base-patch32\", ## repo_id of checkpoint\n", " embeddings=\"clip_embeddings\", ## name of the field to store embeddings\n", " brain_key=\"clip_sim\", ## key to store similarity index info\n", " batch_size=32, ## batch size for inference\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<div style=\"padding: 10px; border-left: 5px solid #0078d4; font-family: Arial, sans-serif; margin: 10px 0;\">\n", "\n", "或者,你也可以直接从 🤗 Transformers 库加载模型并将其传入:\n", "\n", "```python\n", "from transformers import CLIPModel\n", "model = CLIPModel.from_pretrained(\"openai/clip-vit-base-patch32\")\n", "fob.compute_similarity(\n", " dataset, \n", " model=model,\n", " embeddings=\"clip_embeddings\", ## 存储嵌入的字段名称\n", " brain_key=\"clip_sim\" ## 存储相似性索引信息的键\n", ")\n", "```\n", "\n", "有关详细指南,请查看 <a href=\"https://docs.voxel51.com/integrations/huggingface.html#transformers-library\">FiftyOne 的 🤗 Transformers 集成</a>。\n", "</div>" ] },
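 { "cell_type": "markdown", "metadata": {}, "source": [ "除了接下来要展示的应用内交互操作,这个相似性索引也可以在代码中直接查询。下面是一个简单的示意(假设上面的 `clip_sim` 索引已经创建完成;`k=25` 等参数仅为示例,可按需调整):把样本 ID 或文本提示传给 `sort_by_similarity()`,即可得到一个按相似度排序的视图。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# 按样本查询:以数据集中的第一张图像为查询,取最相似的 25 张(k 仅为示例值)\n", "query_id = dataset.first().id\n", "image_view = dataset.sort_by_similarity(query_id, k=25, brain_key=\"clip_sim\")\n", "\n", "# 按文本查询:CLIP 支持文本提示,因此也可以直接用自然语言查询\n", "text_view = dataset.sort_by_similarity(\"pastel trees\", k=25, brain_key=\"clip_sim\")\n", "\n", "# 在应用中查看文本查询的结果\n", "session.view = text_view" ] },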
 { "cell_type": "markdown", "metadata": {}, "source": [ "刷新 **FiftyOne 应用**,在样本网格中选择一个图像的复选框,然后点击照片图标,即可查看数据集中与该图像最相似的图像。在后台,点击此按钮会触发对相似性索引的查询,基于预计算的嵌入,找到与所选图像最相似的图像,并在应用中显示它们。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Image Similarity Search](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/art_analysis_image_search.gif)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "我们可以用这个方法查找与给定作品最相似的艺术作品。无论是向用户推荐相似作品、补充收藏,还是为新作品寻找灵感,这都非常有用。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "不仅如此!由于 CLIP 是多模态的,我们还可以利用它进行语义搜索。这意味着我们可以根据文本查询来搜索图像。例如,我们可以搜索“粉彩树木”,并查看数据集中与该查询相似的所有图像。要做到这一点,请点击 **FiftyOne 应用** 中的搜索图标,并输入文本查询:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Semantic Search](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/art_analysis_semantic_search.gif)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "在后台,文本会被分词(tokenize),并通过 CLIP 的文本编码器转换为嵌入向量,然后用来查询相似性索引,从而找到数据集中与之最相似的图像。这是一种基于文本查询搜索图像的强大方法,非常适合用于寻找符合特定主题或风格的图像。而且,这并不局限于 CLIP;你可以使用 🤗 Transformers 中任何能够为图像和文本生成嵌入的 CLIP 类模型!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<div class=\"alert alert-block alert-info\">\n", "💡 为了在大型数据集上进行高效的向量搜索和索引,FiftyOne 提供了与开源向量数据库的原生 <a href=\"https://voxel51.com/vector-search\">集成</a>。\n", "</div>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 通过聚类和可视化揭示艺术主题" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "通过执行相似性和语义搜索,我们可以更有效地与数据进行交互。但我们还可以更进一步,加入一些无监督学习的元素。这将帮助我们在 WikiArt 数据集中识别艺术模式,包括风格、主题,甚至是那些难以用语言描述的艺术主题。\n", "\n", "我们将通过两种方式进行:\n", "\n", "1. **降维**:我们将使用 UMAP 将嵌入的维度减少到 2D,并在散点图中可视化数据。这将帮助我们观察图像如何基于风格、流派和艺术家进行聚类。\n", " \n", "2. **聚类**:我们将使用 K-Means 聚类算法,基于图像的嵌入进行聚类,并观察哪些组别会自然形成。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "对于降维,我们将运行 **FiftyOne Brain** 中的 `compute_visualization()`,并传入之前计算的嵌入。我们指定 `method=\"umap\"` 来使用 UMAP 进行降维,但也可以选择使用 PCA 或 t-SNE:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Generating visualization...\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/opt/homebrew/Caskroom/miniforge/base/envs/fdev/lib/python3.9/site-packages/numba/cpython/hashing.py:482: UserWarning: FNV hashing is not implemented in Numba. See PEP 456 https://www.python.org/dev/peps/pep-0456/ for rationale over not using FNV. Numba will continue to work, but hashes for built in types will be computed using siphash24. This will permit e.g. 
dictionaries to continue to behave as expected, however anything relying on the value of the hash opposed to hash as a derived property is likely to not work as expected.\n", " warnings.warn(msg)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "UMAP( verbose=True)\n", "Tue Apr 30 11:51:45 2024 Construct fuzzy simplicial set\n", "Tue Apr 30 11:51:46 2024 Finding Nearest Neighbors\n", "Tue Apr 30 11:51:47 2024 Finished Nearest Neighbor Search\n", "Tue Apr 30 11:51:48 2024 Construct embedding\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "98dde3df324249df91f3336c913b409a", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Epochs completed: 0%| 0/500 [00:00]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\tcompleted 0 / 500 epochs\n", "\tcompleted 50 / 500 epochs\n", "\tcompleted 100 / 500 epochs\n", "\tcompleted 150 / 500 epochs\n", "\tcompleted 200 / 500 epochs\n", "\tcompleted 250 / 500 epochs\n", "\tcompleted 300 / 500 epochs\n", "\tcompleted 350 / 500 epochs\n", "\tcompleted 400 / 500 epochs\n", "\tcompleted 450 / 500 epochs\n", "Tue Apr 30 11:51:49 2024 Finished embedding\n" ] }, { "data": { "text/plain": [ "<fiftyone.brain.visualization.VisualizationResults at 0x29f468760>" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fob.compute_visualization(dataset, embeddings=\"clip_embeddings\", method=\"umap\", brain_key=\"clip_vis\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "现在我们可以在 **FiftyOne 应用** 中打开一个面板,在那里我们将看到数据集中每张图像对应的一个 2D 点。我们可以通过数据集中的任何字段(例如艺术家或流派)为这些点着色,以查看这些属性在我们的图像特征中是如何被捕捉的:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![UMAP Visualization](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/art_analysis_visualize_embeddings.gif)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "我们还可以在嵌入上运行聚类,将相似的图像分为一组——也许这些艺术作品的主导特征并未通过现有标签捕捉到,或者我们可能想要识别出不同的子流派。为了对我们的数据进行聚类,我们需要下载 [FiftyOne Clustering 插件](https://github.com/jacobmarks/clustering-plugin):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!fiftyone plugins download https://github.com/jacobmarks/clustering-plugin" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "再次刷新应用后,我们可以通过应用中的操作符访问聚类功能。按下反引号键(`)打开操作符列表,输入“cluster”并从下拉菜单中选择该操作符。这将打开一个交互式面板,我们可以在其中指定聚类算法、超参数以及要聚类的字段。为了简化操作,我们将使用 K-Means 聚类,并设置为 10 个簇。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "然后,我们可以在应用中可视化这些聚类,查看图像如何根据它们的嵌入进行分组:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![K-means Clustering](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/art_analysis_clustering.gif)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "我们可以看到,部分聚类是按艺术家进行分组的;其他聚类则是按流派或风格进行分组的。还有一些聚类更为抽象,可能代表了子流派或其他不容易从数据中直接看出的分组。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 识别最独特的艺术作品" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "我们可以问一个关于数据集的有趣问题,那就是每张图像的**独特性**如何。这对于许多应用场景都非常重要,比如推荐相似图像、检测重复图像或识别异常值。在艺术的语境中,一幅画的独特性可能是决定其价值的重要因素。\n", "\n", "尽管有很多方法可以表征独特性,我们的图像嵌入允许我们基于每张图像与数据集中其他图像的相似度,定量地为每个样本分配一个独特性分数。具体而言,**FiftyOne Brain** 的 `compute_uniqueness()` 函数会查看每个样本的嵌入与其最近邻之间的距离,并基于这个距离计算一个介于 0 和 1 之间的分数。分数为 0 意味着样本没有特色或与其他样本非常相似,而分数为 1 意味着样本非常独特。" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Computing uniqueness...\n", 
"Uniqueness computation complete\n" ] } ], "source": [ "fob.compute_uniqueness(dataset, embeddings=\"clip_embeddings\") # compute uniqueness using CLIP embeddings" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "然后,我们可以在嵌入面板中根据独特性分数为图像着色,或者通过独特性分数进行筛选,甚至按分数排序,以查看数据集中最独特的图像。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "most_unique_view = dataset.sort_by(\"uniqueness\", reverse=True)\n", "session.view = most_unique_view.view() # Most unique images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Most Unique Images](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/art_analysis_most_unique.jpg)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "least_unique_view = dataset.sort_by(\"uniqueness\", reverse=False)\n", "session.view = least_unique_view.view() # Least unique images" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Least Unique Images](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/art_analysis_least_unique.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "更进一步,我们还可以回答哪个艺术家通常创作出最独特的作品这个问题。我们可以计算每个艺术家在其所有作品中的平均独特性分数:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Unknown Artist: 0.7932221632002723\n", "boris-kustodiev: 0.7480731948424676\n", "salvador-dali: 0.7368807620414014\n", "raphael-kirchner: 0.7315448102204755\n", "ilya-repin: 0.7204744626806383\n", "marc-chagall: 0.7169373812321908\n", "rembrandt: 0.715205220292227\n", "martiros-saryan: 0.708560775790436\n", "childe-hassam: 0.7018343391132756\n", "edgar-degas: 0.699912746806587\n", "albrecht-durer: 0.6969358680800216\n", "john-singer-sargent: 0.6839955708720844\n", "pablo-picasso: 0.6835137858302969\n", "pyotr-konchalovsky: 0.6780653000855895\n", "nicholas-roerich: 0.6676504687452387\n", "ivan-aivazovsky: 0.6484361530090199\n", "vincent-van-gogh: 0.6472004520699081\n", "gustave-dore: 0.6307283287457358\n", "pierre-auguste-renoir: 0.6271467146993583\n", "paul-cezanne: 0.6251076007168186\n", "eugene-boudin: 0.6103397516167454\n", "camille-pissarro: 0.6046182609119615\n", "claude-monet: 0.5998234558947573\n", "ivan-shishkin: 0.589796389836674\n" ] } ], "source": [ "artist_unique_scores = {\n", " artist: dataset.match(F(\"artist.label\") == artist).mean(\"uniqueness\")\n", " for artist in artists\n", "}\n", "\n", "sorted_artists = sorted(\n", " artist_unique_scores, key=artist_unique_scores.get, reverse=True\n", ")\n", "\n", "for artist in sorted_artists:\n", " print(f\"{artist}: {artist_unique_scores[artist]}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "看起来,在我们的数据集中,创作出最独特作品的艺术家是 **Boris Kustodiev**!让我们来看看他的部分作品:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "kustodiev_view = dataset.match(F(\"artist.label\") == \"boris-kustodiev\")\n", "session.view = kustodiev_view.view()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Boris Kustodiev Artwork](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/art_analysis_kustodiev_view.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 通过视觉特征描述艺术作品" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "为了全面分析,我们将回归基础,分析数据集中图像的一些核心特征。我们将计算每张图像的标准指标,如亮度、对比度和饱和度,并查看这些指标与艺术作品的艺术风格和流派之间的相关性。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "为了进行这些分析,我们需要下载 
[FiftyOne 图像质量插件](https://github.com/jacobmarks/image-quality-issues):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!fiftyone plugins download https://github.com/jacobmarks/image-quality-issues/" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "再次刷新应用并打开操作符列表。这次输入 `compute` 并选择其中一个图像质量操作符。我们将从 **亮度** 开始:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Compute Brightness](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/art_analysis_compute_brightness.gif)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "当操作符运行完成后,我们的数据集中将新增一个字段,其中包含每张图像的亮度分数。然后,我们可以在应用中可视化这些数据:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Brightness](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/art_analysis_brightness.gif)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "我们还可以根据亮度为图像着色,甚至查看亮度与数据集中的其他字段(如风格)之间的相关性:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Style by Brightness](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/art_analysis_style_by_brightness.gif)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "现在对对比度和饱和度也执行相同的操作。以下是按饱和度进行筛选的效果:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![Filter by Saturation](https://huggingface.co/datasets/huggingface/cookbook-images/resolve/main/art_analysis_filter_by_saturation.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "希望这能说明,并非所有的分析都依赖于将深度神经网络应用于数据。有时候,简单的指标同样可以提供有价值的信息,并能为你的数据提供不同的视角 🤓!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<div class=\"alert alert-block alert-info\">\n", "📚 对于更大的数据集,你可能希望 <a href=\"https://docs.voxel51.com/plugins/using_plugins.html#delegated-operations\">委托操作</a> 以便稍后执行。\n", "</div>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 下一步是什么?" 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "在本 Notebook 中,我们探讨了如何使用多模态嵌入、无监督学习和传统图像处理技术来分析图像中的艺术风格。我们展示了如何执行图像相似性和语义搜索,基于风格对图像进行聚类,分析图像的独特性,并计算图像质量指标。这些技术可以应用于广泛的视觉数据集,从艺术收藏到医学图像再到卫星图像。试着 [加载来自 Hugging Face Hub 的不同数据集](https://docs.voxel51.com/integrations/huggingface.html#loading-datasets-from-the-hub),看看你能发现哪些有趣的见解!\n", "\n", "如果你想进一步探索,以下是一些可以尝试的附加分析:\n", "\n", "- **零样本分类**:使用 🤗 Transformers 提供的预训练视觉语言模型,按主题或对象对数据集中的图像进行分类,无需任何训练数据。更多信息请参见这个 [零样本分类教程](https://docs.voxel51.com/tutorials/zero_shot_classification.html)。\n", "- **图像描述(captioning)**:使用 🤗 Transformers 提供的预训练视觉语言模型为数据集中的图像生成描述文字。然后,可以利用这些描述进行主题建模,或根据描述文字的嵌入对艺术作品进行聚类。更多信息请查看 FiftyOne 的 [图像描述插件](https://github.com/jacobmarks/fiftyone-image-captioning-plugin)。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 📚 资源" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- [FiftyOne 🤝 🤗 Hub 集成](https://docs.voxel51.com/integrations/huggingface.html#huggingface-hub)\n", "- [FiftyOne 🤝 🤗 Transformers 集成](https://docs.voxel51.com/integrations/huggingface.html#transformers-library)\n", "- [FiftyOne 向量搜索集成](https://voxel51.com/vector-search/)\n", "- [使用降维技术可视化数据](https://docs.voxel51.com/tutorials/dimension_reduction.html)\n", "- [使用嵌入对图像进行聚类](https://docs.voxel51.com/tutorials/clustering.html)\n", "- [使用 FiftyOne 探索图像独特性](https://docs.voxel51.com/tutorials/uniqueness.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## FiftyOne 开源项目" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[FiftyOne](https://github.com/voxel51/fiftyone/) 是构建高质量数据集和计算机视觉模型的领先开源工具包。FiftyOne 已经拥有超过 200 万次下载,全球的开发者和研究人员都信任并使用它。\n", "\n", "💪 FiftyOne 团队欢迎开源社区的贡献!如果你有兴趣为 FiftyOne 做贡献,可以查看 [贡献指南](https://github.com/voxel51/fiftyone/blob/develop/CONTRIBUTING.md)。" ] } ], "metadata": { "kernelspec": { "display_name": "fdev", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.0" } }, "nbformat": 4, "nbformat_minor": 2 }