para_graph_sampler/graph_engine/frontend/graph.py (5 lines):
- line 20: TODO: support graph classification.
- line 26: node_set: Optional[Dict[Any, Union[np.ndarray, torch.tensor]]]  # TODO: merge node_set and edge_set into a single entity_set
- line 49: TODO: now only based on homogeneous graph, and thus we have a simple CSR.
- line 199: # TODO: merge the classes of Subgraph and OneBatchSubgraph
- line 212: # TODO: store subgraph profiling data

shaDow/minibatch.py (5 lines):
- line 259: # TODO: may move this to epoch_start reset.
- line 287: # TODO: use valedge as input, then concat undirected valedges to be excluded in neg sample
- line 303: # TODO: set label_epoch for node task as well.
- line 365: # TODO: support different sampler config in val and test
- line 484: if ret_raw_idx:  # TODO: this should support ens as well; multiple subg should have the same raw target idx

shaDow/layers.py (4 lines):
- line 24: # TODO: can make F_ACT a @dataclass
- line 358: breakpoint()  # TODO: bug here; should have 1/n = 1/|V_sub| factor
- line 535: breakpoint()  # TODO: compute complexity in MLP
- line 742: # TODO: there seems to be one more dropout here?

para_graph_sampler/graph_engine/frontend/samplers_ensemble.py (3 lines):
- line 64: # TODO: may not need the fix_target and sequential_traversal arguments
- line 131: # TODO: unify config and common_config
- line 159: # TODO: handle python sampler

para_graph_sampler/graph_engine/backend/ParallelSampler.h (2 lines):
- line 48: // NOTE TODO: right now we don't use any info from the data array (we simply fill in 1. in the subgraph)
- line 146: std::vector<…> top_ppr_scores;  // may not essentially need it  // TODO: may change it to be even more compact, since we only need to preserve relative values

shaDow/models.py (1 line):
- line 103: # TODO: re-structure yaml config so that pooling params become a dict

shaDow/main.py (1 line):
- line 319: while num_roots_eval < num_roots_budget:  # TODO: replace with budget check

shaDow/postproc_ens.py (1 line):
- line 105: # TODO: set tensor dtype by torch.get_default_dtype

para_graph_sampler/graph_engine/frontend/samplers_cpp.py (1 line):
- line 181: if is_found_m:  # TODO: symlink to calculated ppr files, check fname_*

para_graph_sampler/graph_engine/backend/setup.py (1 line):
- line 108: # TODO: the C++ interface should not be exposed, so install as _ParallelSampler?

para_graph_sampler/graph_engine/backend/ParallelSampler.cpp (1 line):
- line 23: // TODO: in ensemble, even if you cannot drop indices, you can still drop top_ppr_*
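The graph.py TODO at line 49 notes that the frontend currently assumes a homogeneous graph and therefore stores adjacency as "a simple CSR". As a rough illustration of what that layout implies (scipy-style `indptr`/`indices` names; this is a sketch, not the repo's actual data structure):

```python
# Minimal CSR (compressed sparse row) adjacency for a homogeneous graph.
# Names and layout follow the common scipy-style convention; this is an
# illustrative sketch, not the actual graph.py implementation.

# Directed graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
indptr = [0, 2, 3, 4]   # indptr[v]..indptr[v+1] delimits node v's neighbor slice
indices = [1, 2, 2, 0]  # all neighbor lists concatenated in node order

def neighbors(v):
    """Return the out-neighbors of node v from the CSR arrays."""
    return indices[indptr[v]:indptr[v + 1]]

print(neighbors(0))  # -> [1, 2]
```

A heterogeneous graph would need one such structure per edge type (or an extra type array), which is presumably why the TODO flags the homogeneous assumption.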
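The layers.py TODO at line 358 flags a missing 1/n = 1/|V_sub| factor, i.e. sum pooling where mean pooling over the subgraph's nodes is intended. A minimal sketch of the distinction (plain Python with hypothetical shapes, not the repo's layer code):

```python
# Sum vs. mean pooling over a subgraph's per-node feature vectors.
# The missing 1/|V_sub| factor from the TODO is exactly what turns
# sum pooling into mean pooling; names and shapes here are illustrative.

def sum_pool(feats):
    """Element-wise sum of per-node feature vectors."""
    dim = len(feats[0])
    return [sum(f[d] for f in feats) for d in range(dim)]

def mean_pool(feats):
    """Sum pooling scaled by 1/|V_sub| (the factor the TODO says is missing)."""
    n = len(feats)  # |V_sub|: number of nodes in the subgraph
    return [x / n for x in sum_pool(feats)]

feats = [[1.0, 2.0], [3.0, 4.0]]  # 2 subgraph nodes, 2 features each
print(sum_pool(feats))   # -> [4.0, 6.0]
print(mean_pool(feats))  # -> [2.0, 3.0]
```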
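The minibatch.py TODO at line 287 proposes excluding the undirected validation edges when drawing negative samples. The idea, excluding both orientations of each known edge, can be sketched as follows (hypothetical function and parameter names, not the repo's API):

```python
import random

def sample_negative_edges(num_nodes, known_edges, k, seed=0):
    """Sample k node pairs absent from known_edges (treated as undirected).

    Illustrative sketch of the exclusion the TODO describes: each known
    edge is excluded in both orientations before rejection sampling.
    """
    rng = random.Random(seed)
    excluded = set(known_edges) | {(v, u) for (u, v) in known_edges}
    negatives = []
    while len(negatives) < k:
        u, v = rng.randrange(num_nodes), rng.randrange(num_nodes)
        if u != v and (u, v) not in excluded:
            negatives.append((u, v))
    return negatives
```

Rejection sampling like this is only practical for sparse graphs, where a random pair is unlikely to hit an excluded edge.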