course/en/chapter11/section3.ipynb (4 lines):
- line 120: "# TODO: define your dataset and config using the path and name parameters\n",
- line 130: "# TODO: 🦁 If your dataset is not in a format that TRL can convert to the chat template, you will need to process it. Refer to the [module](../chat_templates.md)"
- line 173: "# TODO: 🦁 🐕 align the SFTTrainer params with your chosen dataset. For example, if you are using the `bigcode/the-stack-smol` dataset, you will need to choose the `content` column`"
- line 235: "# TODO: use the fine-tuned to model generate a response, just like with the base example."

transformers_doc/de/pytorch/llm_tutorial.ipynb (3 lines):
- line 90: "\n",
- line 394: ""
- line 410: ""

transformers_doc/de/llm_tutorial.ipynb (3 lines):
- line 90: "\n",
- line 394: ""
- line 410: ""

transformers_doc/de/tensorflow/llm_tutorial.ipynb (3 lines):
- line 90: "\n",
- line 394: ""
- line 410: ""

transformers_doc/ko/pytorch/llm_tutorial.ipynb (2 lines):
- line 395: ""
- line 411: ""

course/en/chapter11/section2.ipynb (2 lines):
- line 492: " # TODO: 🐢 Convert the sample into a chat format\n",
- line 557: " # TODO: 🐕 Convert the sample into a chat format\n",

transformers_doc/ja/llm_tutorial.ipynb (2 lines):
- line 81: "\n",
- line 394: ""

transformers_doc/ko/llm_tutorial.ipynb (2 lines):
- line 395: ""
- line 411: ""

transformers_doc/ko/tensorflow/llm_tutorial.ipynb (2 lines):
- line 395: ""
- line 411: ""

transformers_doc/ja/tensorflow/llm_tutorial.ipynb (2 lines):
- line 81: "\n",
- line 394: ""

transformers_doc/ja/pytorch/llm_tutorial.ipynb (2 lines):
- line 81: "\n",
- line 394: ""

course/en/chapter11/section4.ipynb (2 lines):
- line 95: "# TODO: define your dataset and config using the path and name parameters\n",
- line 191: "# TODO: Configure LoRA parameters\n",

transformers_doc/ar/peft.ipynb (1 line):
- line 482: "TODO: (@younesbelkada @stevhliu)\n",

transformers_doc/ar/pytorch/peft.ipynb (1 line):
- line 482: "TODO: (@younesbelkada @stevhliu)\n",

transformers_doc/zh/pytorch/peft.ipynb (1 line):
- line 381: "TODO: (@younesbelkada @stevhliu)\n",

transformers_doc/de/pytorch/peft.ipynb (1 line):
- line 391: "TODO: (@younesbelkada @stevhliu)\n",

transformers_doc/ar/tensorflow/peft.ipynb (1 line):
- line 482: "TODO: (@younesbelkada @stevhliu)\n",

transformers_doc/de/peft.ipynb (1 line):
- line 391: "TODO: (@younesbelkada @stevhliu)\n",

transformers_doc/zh/tensorflow/peft.ipynb (1 line):
- line 381: "TODO: (@younesbelkada @stevhliu)\n",

transformers_doc/zh/peft.ipynb (1 line):
- line 381: "TODO: (@younesbelkada @stevhliu)\n",

transformers_doc/de/tensorflow/peft.ipynb (1 line):
- line 391: "TODO: (@younesbelkada @stevhliu)\n",

diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb (1 line):
- line 202: " custom_revision=\"main\", # TODO: remove if diffusers>=0.12.0\n",