src/peft/tuners/lora/layer.py (8 lines):
    - line 1019: # TODO: no dtype conversion here, unlike in Linear, is that correct?
    - line 1408: # TODO work with separate weights
    - line 1416: # TODO: probably not so hard to implement
    - line 1526: # TODO: work with separate weights
    - line 1532: # TODO: work with separate weights
    - line 1559: # TODO: work with separate weights
    - line 1584: # TODO work with separate weights
    - line 1729: # TODO work with separate weights
src/peft/tuners/lora/torchao.py (3 lines):
    - line 43: # TODO: Not required once int4_weight_only is properly supported by torchao
    - line 85: # TODO: once (if) torchao supports directly mutating the data, use that instead.
    - line 120: # TODO: once (if) torchao supports directly mutating the data, use that instead.
src/peft/tuners/xlora/layer.py (3 lines):
    - line 112: # TODO: implement X-LoRA with Lora+Dora layers
    - line 159: # TODO: implement X-LoRA with Lora+Dora layers
    - line 206: # TODO: implement X-LoRA with Lora+Dora layers
src/peft/peft_model.py (3 lines):
    - line 907: # TODO: consider replacing this patching of methods with a more robust mechanism: setting a flag and
    - line 1950: # TODO: starting with transformers 4.38, all architectures should support caching.
    - line 3190: # TODO: Remove after 2026-01
src/peft/tuners/trainable_tokens/layer.py (2 lines):
    - line 219: # TODO: the isinstance checks, especially the one for nn.Linear, may not hold for quantized layers;
    - line 220: # TODO: we may need to find a better way to detect quantized layers.
src/peft/tuners/mixed/model.py (2 lines):
    - line 111: # TODO maybe not necessary to have special treatment?
    - line 148: # TODO: check if this is needed for other supported types
src/peft/utils/hotswap.py (2 lines):
    - line 410: # TODO: there is probably a more precise way to identify the adapter keys
    - line 533: # TODO: This is a very rough check only for LoRA at the moment. Also, there might be some options that don't
src/peft/config.py (2 lines):
    - line 122: # Avoid circular dependency .. TODO: fix this with a larger refactor
    - line 125: # TODO: this hack is needed to fix the following issue (on commit 702f937):
src/peft/utils/save_and_load.py (2 lines):
    - line 159: # TODO: adding vera_A and vera_B to `self.get_base_layer` would
    - line 481: # TODO: remove this function, use vanilla torch.load as soon as torch < 2.6.0 is no longer supported
src/peft/tuners/adaption_prompt/layer.py (2 lines):
    - line 52: # TODO: remove this clause after 2026-01-01
    - line 93: # TODO: remove this clause after 2026-01-01
src/peft/utils/other.py (2 lines):
    - line 631: # TODO: not sure if this is still a sensible thing to do. We would basically have to
    - line 708: # TODO this does not support deepspeed/fsdp since it is missing a context manager
method_comparison/MetaMathQA/run.py (2 lines):
    - line 201: # TODO: Note that the longest sequence in the batch won't have any PAD/EOS token at the end, this is fine if
    - line 294: # # TODO is this needed?
src/peft/tuners/adaption_prompt/utils.py (2 lines):
    - line 71: # TODO: remove this clause after 2026-01-01
    - line 100: # TODO we assume that position_ids is not None here, not sure if that is safe but the old code also did that
src/peft/tuners/vblora/model.py (2 lines):
    - line 93: # TODO: there should be a check if any of the existing adapters actually has bias != "none", or else the check
    - line 123: # TODO: add quantization support
src/peft/tuners/ia3/layer.py (2 lines):
    - line 175: # TODO: weight.dtype can be != self.ia3_l[self.active_adapters].dtype
    - line 304: # TODO: weight.dtype can be != self.ia3_l[self.active_adapters].dtype
src/peft/tuners/hra/model.py (1 line):
    - line 94: # TODO: there should be a check if any of the existing adapters actually has bias != "none", or else the check
src/peft/tuners/lycoris_utils.py (1 line):
    - line 102: # TODO: refactor LoRA to use the same approach
src/peft/tuners/loha/layer.py (1 line):
    - line 183: # TODO: Investigate if there should be a scaler like in normal dropout during training
src/peft/tuners/vera/model.py (1 line):
    - line 170: # TODO: there should be a check if any of the existing adapters actually has bias != "none", or else the check
src/peft/tuners/randlora/model.py (1 line):
    - line 231: # TODO: there should be a check if any of the existing adapters actually has bias != "none", or else the check
src/peft/tuners/cpt/config.py (1 line):
    - line 84: # TODO: adjust this to raise an error with PEFT v0.18.0
src/peft/tuners/ln_tuning/model.py (1 line):
    - line 81: # TODO: here need to handle the modules_to_save rather than the target_modules
src/peft/tuners/oft/model.py (1 line):
    - line 112: # TODO: there should be a check if any of the existing adapters actually has bias != "none", or else the check
src/peft/tuners/boft/model.py (1 line):
    - line 87: # TODO: there should be a check if any of the existing adapters actually has bias != "none", or else the check
src/peft/tuners/_buffer_dict.py (1 line):
    - line 8: # TODO: To be removed once (if) https://github.com/pytorch/pytorch/pull/37385 lands
src/peft/mixed_model.py (1 line):
    - line 372: # TODO: not quite clear why this is necessary but tests fail without it
src/peft/tuners/lora/aqlm.py (1 line):
    - line 91: # TODO: Check if it is better as suggested by users https://github.com/PanQiWei/AutoGPTQ/pull/102
src/peft/tuners/lora/model.py (1 line):
    - line 152: # TODO: there should be a check if any of the existing adapters actually has bias != "none", or else the check
src/peft/tuners/fourierft/model.py (1 line):
    - line 71: # TODO: there should be a check if any of the existing adapters actually has bias != "none", or else the check
src/peft/tuners/bone/model.py (1 line):
    - line 94: # TODO: there should be a check if any of the existing adapters actually has bias != "none", or else the check
src/peft/tuners/lora/gptq.py (1 line):
    - line 120: # TODO: Check if it is better as suggested by users https://github.com/PanQiWei/AutoGPTQ/pull/102
src/peft/tuners/tuners_utils.py (1 line):
    - line 1068: # TODO: It's still unclear how empty layers_pattern (None, [], or "") should behave
src/peft/tuners/adalora/gptq.py (1 line):
    - line 61: # TODO: here, the dtype conversion is applied on the *whole expression*,
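
An inventory in this format (file path, count of TODO lines, then each hit as "- line N: <comment>") can be regenerated by scanning the tree for TODO markers and grouping the hits per file. The sketch below is only an assumption about how such a report is produced, not the actual tool behind this listing; the scanned roots and the regex are illustrative and the line numbers it reports will drift as the sources change.

```python
#!/usr/bin/env python
"""Minimal sketch: collect `TODO` comments from a source tree and group them by file.

Illustrative only; not the tool that produced the listing above.
"""
import re
from collections import defaultdict
from pathlib import Path

# Assumption: scan the Python sources under these directories, matching the listing above.
ROOTS = ["src/peft", "method_comparison"]
TODO_RE = re.compile(r"#.*\bTODO\b")


def collect_todos(roots):
    todos = defaultdict(list)  # file path -> list of (line number, comment text)
    for root in roots:
        for path in sorted(Path(root).rglob("*.py")):
            for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
                match = TODO_RE.search(line)
                if match:
                    # Keep the comment from the "#" onward, as in the listing above.
                    todos[str(path)].append((lineno, line[match.start():]))
    return todos


def print_report(todos):
    # Files with the most TODO hits first, mirroring the ordering of the listing above.
    for path, hits in sorted(todos.items(), key=lambda kv: len(kv[1]), reverse=True):
        plural = "line" if len(hits) == 1 else "lines"
        print(f"{path} ({len(hits)} {plural}):")
        for lineno, text in hits:
            print(f"    - line {lineno}: {text}")


if __name__ == "__main__":
    print_report(collect_todos(ROOTS))
```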