bitsandbytes/research/autograd/_functions.py (5 lines):
- line 73: # TODO: Fix blocksize to be output_dim
- line 85: # not supported by PyTorch. TODO: create work-around
- line 157: # TODO: Fix blocksize to be output_dim
- line 169: # not supported by PyTorch. TODO: create work-around
- line 187: # TODO: the B008 on the line below is a likely bug; the current implementation will

pyproject.toml (4 lines):
- line 25: "B007", # Loop control variable not used within the loop body (TODO: enable)
- line 26: "B028", # Warning without stacklevel (TODO: enable)
- line 28: "E701", # Multiple statements on one line (TODO: enable)
- line 31: "F841", # Local assigned but not used (TODO: enable, these are likely bugs)

include/BinAlgo.h (2 lines):
- line 49: // FIXME: use SSE2?
- line 61: // FIXME: merge these two loops

benchmarking/switchback/make_plot_with_jsonl.py (1 line):
- line 19: # TODO: change this to what you want.

bitsandbytes/diagnostics/cuda.py (1 line):
- line 147: # TODO:

bitsandbytes/cextension.py (1 line):
- line 4: [ ] TODO: Q - What if we have multiple GPUs of different makes?

_typos.toml (1 line):
- line 11: "transation" = "transation" # TODO: is this transition, transaction, translation..?

bitsandbytes/triton/quantize_columnwise_and_transpose.py (1 line):
- line 17: # TODO: autotune this better.

benchmarking/switchback/speed_benchmark.py (1 line):
- line 158: # TODO: change this to what you want.

bitsandbytes/triton/dequantize_rowwise.py (1 line):
- line 17: # TODO: autotune this better.

bitsandbytes/triton/quantize_rowwise.py (1 line):
- line 17: # TODO: autotune this better.

bitsandbytes/autograd/_functions.py (1 line):
- line 537: # not supported by PyTorch. TODO: create work-around

bitsandbytes/nn/modules.py (1 line):
- line 405: # self.persistent_buffers = [] # TODO consider as way to save quant state
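
Three of the entries above (the Triton files under bitsandbytes/triton/) carry the same note, "# TODO: autotune this better." As a rough illustration of what that could mean, the sketch below shows Triton's standard autotuning pattern: `triton.autotune` benchmarks a list of candidate launch configurations (here `num_warps`/`num_stages`, keyed on the row length) and caches the fastest one per key. This is a minimal, hypothetical example, not the library's actual kernel; the names `_rowwise_absmax_quant` and `rowwise_absmax_quant` are invented for illustration.

```python
import torch
import triton
import triton.language as tl


@triton.autotune(
    configs=[
        # Candidate launch configurations to benchmark; a "better" autotune
        # would widen this sweep (more warps/stages, varied block shapes).
        triton.Config({}, num_stages=1, num_warps=4),
        triton.Config({}, num_stages=1, num_warps=8),
        triton.Config({}, num_stages=2, num_warps=8),
        triton.Config({}, num_stages=2, num_warps=16),
    ],
    key=["n_cols"],  # re-tune whenever the row length changes
)
@triton.jit
def _rowwise_absmax_quant(x_ptr, out_ptr, absmax_ptr, n_cols, BLOCK_SIZE: tl.constexpr):
    # One program instance quantizes one row to int8 using the row's absmax.
    row = tl.program_id(0)
    offs = tl.arange(0, BLOCK_SIZE)
    mask = offs < n_cols
    x = tl.load(x_ptr + row * n_cols + offs, mask=mask, other=0.0).to(tl.float32)
    absmax = tl.max(tl.abs(x), axis=0)
    scale = 127.0 / tl.maximum(absmax, 1e-8)  # guard against all-zero rows
    y = x * scale
    # Round half away from zero, then truncate on the int8 cast.
    q = (y + tl.where(y >= 0, 0.5, -0.5)).to(tl.int8)
    tl.store(out_ptr + row * n_cols + offs, q, mask=mask)
    tl.store(absmax_ptr + row, absmax)


def rowwise_absmax_quant(x: torch.Tensor):
    """Hypothetical host-side wrapper: one kernel instance per row."""
    assert x.is_cuda and x.ndim == 2 and x.is_contiguous()
    rows, cols = x.shape
    out = torch.empty((rows, cols), device=x.device, dtype=torch.int8)
    absmax = torch.empty((rows,), device=x.device, dtype=torch.float32)
    block = triton.next_power_of_2(cols)
    _rowwise_absmax_quant[(rows,)](x, out, absmax, cols, BLOCK_SIZE=block)
    return out, absmax
```

Because the autotuner keys on `n_cols`, the benchmarking cost is paid once per distinct row length and cached thereafter, which is the usual trade-off when replacing a single hard-coded launch configuration with a tuned one.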