captum/influence/_core/tracincp.py (3 lines):
- line 48: TODO: Support for checkpoint type. Currently only supports model parameters as saved
- line 480: ): # TODO: allow loss_fn to be Callable (sketch below)
- line 519: TODO: Either restore model state after done (would have to place functionality

captum/influence/_core/tracincp_fast_rand_proj.py (3 lines):
- line 54: TODO: Support for checkpoint type. Currently only supports model parameters as saved
- line 158: # TODO: restore prior state (sketch below)
- line 172: ): # TODO: allow loss_fn to be Callable

captum/insights/attr_vis/attribution_calculation.py (2 lines):
- line 125: # TODO: support multiple baselines
- line 163: # TODO: support batch size > 1

captum/insights/attr_vis/app.py (1 line):
- line 261: # TODO: Output widget only captures beginning of server logs. It seems

captum/attr/_core/layer/layer_integrated_gradients.py (1 line):
- line 446: # TODO:

captum/attr/_utils/common.py (1 line):
- line 228: # FIXME: GradientAttribution is provided as a string due to a circular import. (sketch below)

captum/influence/_core/similarity_influence.py (1 line):
- line 265: TODO: For models that can have tuples as activations, we should

captum/attr/_models/pytext.py (1 line):
- line 48: TODO: we can potentially also output tuples of attributions. This might be

captum/_utils/gradient.py (1 line):
- line 827: # TODO: allow loss_fn to be Callable

captum/insights/attr_vis/frontend/src/components/Visualization.tsx (1 line):
- line 47: // TODO: Refactor the visualization table as a <table> instead of columns, in order to have cleaner styling

captum/attr/_utils/batching.py (1 line):
- line 145: # TODO: Reconsider this check if _batched_generator is used for non gradient-based

captum/influence/_utils/common.py (1 line):
- line 109: # TODO: allow loss_fn to be Callable

captum/_utils/av.py (1 line):
- line 408: r"""TODO:

captum/insights/attr_vis/config.py (1 line):
- line 50: help_info: Optional[str] = None # TODO: fill out help for each method

captum/attr/_core/lrp.py (1 line):
- line 402: # TODO: Remove when bugs are fixed

captum/attr/_core/deep_lift.py (1 line):
- line 529: # TODO: find a better way of checking if a module is a container or not (sketch below)

captum/attr/_core/noise_tunnel.py (1 line):
- line 211: # FIXME: it looks like it is very difficult to make torch.normal
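
The "allow loss_fn to be Callable" item recurs in four places (tracincp.py line 480, tracincp_fast_rand_proj.py line 172, gradient.py line 827, influence/_utils/common.py line 109). A minimal sketch of one way to lift the nn.Module restriction; the helper name _loss_fn_reduction, the "sum" fallback for bare callables, and compute_loss itself are illustrative assumptions, not Captum's actual API:

    from typing import Callable, Union

    import torch
    import torch.nn as nn

    LossFn = Union[nn.Module, Callable[[torch.Tensor, torch.Tensor], torch.Tensor]]

    def _loss_fn_reduction(loss_fn: LossFn) -> str:
        # nn.Module losses (nn.MSELoss, nn.CrossEntropyLoss, ...) expose a
        # `reduction` attribute; a bare callable carries no such metadata.
        if isinstance(loss_fn, nn.Module):
            return getattr(loss_fn, "reduction", "mean")
        # Assumption: plain callables are treated as reducing losses; callers
        # needing per-example losses pass an nn.Module with reduction="none".
        return "sum"

    def compute_loss(loss_fn: LossFn, out: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        loss = loss_fn(out, target)
        if _loss_fn_reduction(loss_fn) == "none":
            # A single backward pass needs a scalar, so reduce explicitly.
            loss = loss.sum()
        return loss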
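
For the two state-restoration items (tracincp.py line 519, tracincp_fast_rand_proj.py line 158), one plausible shape is a context manager that snapshots the parameters before the checkpoints are cycled and reloads them afterwards. preserved_model_state is a hypothetical helper, not an existing Captum function:

    import copy
    from contextlib import contextmanager

    import torch.nn as nn

    @contextmanager
    def preserved_model_state(model: nn.Module):
        # Deep-copy the snapshot so in-place checkpoint loads cannot alias it.
        snapshot = copy.deepcopy(model.state_dict())
        try:
            yield model
        finally:
            # Restore the original weights regardless of which checkpoint was
            # loaded last, or whether an exception interrupted the loop.
            model.load_state_dict(snapshot)

Wrapping the influence loop in `with preserved_model_state(model):` would mean callers no longer see the model left at the final checkpoint after an influence computation.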
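
The FIXME at common.py line 228 is the textbook case for typing.TYPE_CHECKING: import GradientAttribution only for static analysis, so the runtime import cycle disappears while the string annotation still resolves for type checkers. The import path and the helper below are assumptions for illustration:

    from typing import TYPE_CHECKING

    if TYPE_CHECKING:
        # Imported only during type checking, so no module cycle at runtime.
        from captum.attr._utils.attribution import GradientAttribution

    def _helper_using_attr_method(attr_method: "GradientAttribution") -> None:
        # The quoted annotation stays a plain string at runtime and is only
        # resolved lazily by type checkers, which see the real class.
        ...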
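
For deep_lift.py line 529, a common idiom for telling containers apart from leaf modules is to ask whether the module has registered children; this is a sketch of that idiom, not what DeepLift currently does:

    import torch.nn as nn

    def _is_container(module: nn.Module) -> bool:
        # Leaf modules (nn.Linear, nn.ReLU, ...) register no children, while
        # containers (nn.Sequential, custom blocks) do, so finding a single
        # child is enough to classify the module as a container.
        return next(module.children(), None) is not None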