src/accelerate/accelerator.py (6 lines):
- line 489: # TODO: S1ro - this is probably gonna be a problem with other fp8 backends too
- line 1350: # TODO: Look at enabling native TP training directly with a proper config (see the native-TP sketch after this list)
- line 1557: # TODO: Look at enabling native TP training directly with a proper config
- line 1641: # TODO: Look at enabling native TP training directly with a proper config
- line 2646: # TODO: this unscales all optimizers where we should only unscale the one where the parameters are (see the unscaling sketch below)
- line 3817: # TODO: should the `yield` be in a try/finally block? (see the contextmanager sketch below)

src/accelerate/utils/imports.py (2 lines):
- line 492: # TODO: Remove this function once stateful_dataloader is a stable feature in torchdata. (see the version-probe sketch below)
- line 517: # TODO: Rework this into `utils.deepspeed` and migrate the "core" chunks into `accelerate.deepspeed`

src/accelerate/utils/modeling.py (1 line):
- line 1972: # TODO: group all errors and raise at the end. (see the error-grouping sketch below)

src/accelerate/utils/operations.py (1 line):
- line 319: # FIXME: the below 2 lines are added to work around a bug related to INT64 collectives in oneCCL. Remove them once pytorch-2.9 is released. (see the downcast sketch below)
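
The "native TP" TODO (lines 1350/1557/1641) refers to tensor-parallel training driven by PyTorch's own API rather than an external integration. Below is a minimal sketch of what that could look like using `torch.distributed.tensor.parallel` (torch >= 2.3); the toy model and its layer names (`net1`, `net2`) are illustrative assumptions, not Accelerate code.

```python
# Sketch: PyTorch-native tensor parallelism via parallelize_module.
# Assumes torch >= 2.3 and an initialized distributed environment.
import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)


class ToyMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net1 = nn.Linear(1024, 4096)
        self.net2 = nn.Linear(4096, 1024)

    def forward(self, x):
        return self.net2(torch.relu(self.net1(x)))


def apply_native_tp(model: nn.Module, world_size: int) -> nn.Module:
    # One-dimensional mesh over all ranks: pure tensor parallelism.
    mesh = init_device_mesh("cuda", (world_size,))
    # Shard net1 column-wise and net2 row-wise so the intermediate
    # activation stays sharded between the two matmuls.
    return parallelize_module(
        model,
        mesh,
        {"net1": ColwiseParallel(), "net2": RowwiseParallel()},
    )
```

A "proper config" would presumably expose the mesh shape and the per-layer plan instead of hard-coding them as done here.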
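
For the line-2646 TODO, the fix it suggests is to map the parameters being clipped back to the single optimizer that owns them and call `GradScaler.unscale_` only on that one. A hedged sketch of that idea follows; the helper name `unscale_owning_optimizer` is hypothetical, not Accelerate API.

```python
# Sketch: unscale only the optimizer whose param_groups contain the
# parameters being worked on, instead of unscaling every optimizer.
from typing import Iterable

import torch
from torch.amp import GradScaler  # torch.cuda.amp.GradScaler on older torch


def unscale_owning_optimizer(
    params: Iterable[torch.nn.Parameter],
    optimizers: list[torch.optim.Optimizer],
    scaler: GradScaler,
) -> None:
    param_ids = {id(p) for p in params}
    for opt in optimizers:
        owned = {id(p) for group in opt.param_groups for p in group["params"]}
        if param_ids & owned:
            # unscale_ raises if called twice for the same step, so
            # touching only the matching optimizer also avoids that trap.
            scaler.unscale_(opt)
```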
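
The line-3817 question is the standard `contextlib.contextmanager` caveat: if the `yield` is not wrapped in try/finally, an exception raised in the `with` body skips all cleanup code after the yield. A generic example of the safe pattern:

```python
# Sketch: cleanup after the yield only runs on exceptions if the
# yield sits inside try/finally.
from contextlib import contextmanager


@contextmanager
def patched(obj, attr, value):
    original = getattr(obj, attr)
    setattr(obj, attr, value)
    try:
        yield obj
    finally:
        # Runs even when the with-body raises, so the patch is
        # always reverted.
        setattr(obj, attr, original)
```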
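
The line-492 TODO in imports.py wants to retire a temporary availability probe once `StatefulDataLoader` is stable in torchdata. A generic sketch of such a probe is below; the 0.8.0 cutoff is an assumption about when the feature shipped, and the function name is illustrative.

```python
# Sketch: version-gated feature probe of the kind the TODO wants to
# delete once the feature is unconditionally available.
import importlib.metadata

from packaging.version import parse


def stateful_dataloader_available() -> bool:
    try:
        version = importlib.metadata.version("torchdata")
    except importlib.metadata.PackageNotFoundError:
        return False
    # Assumed cutoff: StatefulDataLoader available from torchdata 0.8.0.
    return parse(version) >= parse("0.8.0")
```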
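
The line-1972 TODO in modeling.py describes the accumulate-then-raise pattern: keep validating all items, collect failures, and raise once with everything that went wrong. A self-contained sketch (shape checking here is just a stand-in for whatever that code path validates):

```python
# Sketch: collect per-item errors and raise once at the end instead of
# failing on the first bad entry.
def validate_shapes(
    expected: dict[str, tuple[int, ...]],
    found: dict[str, tuple[int, ...]],
) -> None:
    errors: list[Exception] = []
    for name, shape in expected.items():
        if found.get(name) != shape:
            errors.append(
                ValueError(f"{name}: expected {shape}, found {found.get(name)}")
            )
    if errors:
        # ExceptionGroup needs Python 3.11+; older interpreters could
        # join the messages into a single ValueError instead.
        raise ExceptionGroup("some tensors failed validation", errors)
```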
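
The line-319 FIXME in operations.py does not show the two workaround lines themselves. A common shape for this kind of oneCCL INT64 workaround is to downcast to int32 before the collective and restore the dtype afterwards; the sketch below is an assumption about the approach, not the actual lines in operations.py.

```python
# Hedged sketch: sidestep a buggy INT64 collective path by casting to
# int32 around the call. Assumes the values fit in int32.
import torch
import torch.distributed as dist


def all_reduce_int64_safe(tensor: torch.Tensor) -> torch.Tensor:
    needs_cast = tensor.dtype == torch.int64
    if needs_cast:
        tensor = tensor.to(torch.int32)
    dist.all_reduce(tensor)
    if needs_cast:
        tensor = tensor.to(torch.int64)
    return tensor
```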