captum/influence/_core/tracincp.py (3 lines):
- line 48: TODO: Support for checkpoint type. Currently only supports model parameters as saved
- line 480: ): # TODO: allow loss_fn to be Callable
- line 519: TODO: Either restore model state after done (would have to place functionality

captum/influence/_core/tracincp_fast_rand_proj.py (3 lines):
- line 54: TODO: Support for checkpoint type. Currently only supports model parameters as saved
- line 158: # TODO: restore prior state
- line 172: ): # TODO: allow loss_fn to be Callable

captum/insights/attr_vis/attribution_calculation.py (2 lines):
- line 125: # TODO support multiple baselines
- line 163: # TODO support batch size > 1

captum/insights/attr_vis/app.py (1 line):
- line 261: # TODO: Output widget only captures beginning of server logs. It seems

captum/attr/_core/layer/layer_integrated_gradients.py (1 line):
- line 446: # TODO:

captum/attr/_utils/common.py (1 line):
- line 228: # FIXME: GradientAttribution is provided as a string due to a circular import.

captum/influence/_core/similarity_influence.py (1 line):
- line 265: TODO: For models that can have tuples as activations, we should

captum/attr/_models/pytext.py (1 line):
- line 48: TODO: we can potentally also output tuples of attributions. This might be

captum/_utils/gradient.py (1 line):
- line 827: # TODO: allow loss_fn to be Callable

captum/insights/attr_vis/frontend/src/components/Visualization.tsx (1 line):
- line 47: //TODO: Refactor the visualization table as a
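
The note "allow loss_fn to be Callable" recurs in three places above (tracincp.py line 480, tracincp_fast_rand_proj.py line 172, and _utils/gradient.py line 827), meaning these code paths currently expect the loss to be an nn.Module instance rather than an arbitrary function. Below is a minimal workaround sketch, assuming only standard PyTorch: wrap a plain callable in an nn.Module subclass so it satisfies a Module-only check. The names CallableLossWrapper and sum_squared_error are hypothetical illustrations, not part of the Captum API.

```python
# Hedged sketch: adapt a plain callable loss to the nn.Module interface that
# the "allow loss_fn to be Callable" TODO sites currently expect.
# CallableLossWrapper and sum_squared_error are hypothetical names.
import torch
import torch.nn as nn


class CallableLossWrapper(nn.Module):
    """Wraps a callable (outputs, targets) -> scalar loss as an nn.Module."""

    def __init__(self, fn, reduction: str = "sum"):
        super().__init__()
        self.fn = fn
        # Influence-style code may inspect loss_fn.reduction; expose it here
        # so the wrapper advertises the reduction its callable performs.
        self.reduction = reduction

    def forward(self, outputs: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        return self.fn(outputs, targets)


def sum_squared_error(outputs: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # Example callable loss with "sum" reduction semantics.
    return ((outputs - targets) ** 2).sum()


loss_fn = CallableLossWrapper(sum_squared_error)
```

This keeps the user-defined loss logic in a plain function while presenting the Module interface those TODO sites still require; once the TODOs are resolved, the wrapper can be dropped and the callable passed directly.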