torch_xla/csrc/aten_xla_type.cpp (3 lines):
- line 1315: // TODO: for now route to native, which dispatches supported XLA operations.
- line 2852: // TODO: implement scatter_mul
- line 2872: // TODO: implement scatter_mul

torch_xla/csrc/cross_replica_reduces.cpp (2 lines):
- line 91: // TODO: We use pseudo-tokens ATM, which are real values. This need to be
- line 162: // TODO: This is missing layout pinning ATM. If XLA scheduling is not exactly

torch_xla/csrc/tensor.cpp (1 line):
- line 1072: // TODO: This can be optimized via proper XRT/XLA computation.
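
For context on the `scatter_mul` TODOs above, a minimal sketch of the expected semantics follows: scatter_mul is the multiplicative analogue of scatter_add, multiplying `src` values into `self` at positions chosen by `index` along `dim` (PyTorch exposes this as `Tensor.scatter_` with `reduce='multiply'`). The function name `scatter_mul_reference` and the explicit loop are illustrative only, not the torch_xla implementation.

```python
import itertools
import torch

def scatter_mul_reference(self, dim, index, src):
    # Reference (loop) semantics, e.g. for dim=0:
    #   out[index[i][j]][j] *= src[i][j]
    out = self.clone()
    for coords in itertools.product(*(range(s) for s in index.shape)):
        target = list(coords)
        target[dim] = index[coords].item()  # redirect along `dim`
        out[tuple(target)] *= src[coords]
    return out

base = torch.ones(3, 4)
index = torch.tensor([[0, 1, 2, 0]])
src = torch.tensor([[2.0, 3.0, 4.0, 5.0]])
# Multiplies src into rows 0, 1, 2, 0 of the respective columns.
print(scatter_mul_reference(base, 0, index, src))
```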