hugegraph-llm/src/hugegraph_llm/operators/hugegraph_op/graph_rag_query.py (9 lines):
- line 28: # TODO: remove 'as('subj)' step
- line 31: # TODO: we could use a simpler query (like the kneighbor API to get the edges)
- line 32: # TODO: test with profile()/explain() to speed up the query
- line 174: # TODO: enhance the limit logic later
- line 188: # TODO: use a generator or asyncio to speed up the query logic
- line 204: # TODO: we may need to optimize the logic here with global deduplication (may miss some single vertices)
- line 243: # TODO: move this method to a util file for reuse (remove the self param)
- line 340: # TODO: we may remove the label id or replace it with the label name
- line 388: # TODO: rename to vertex (also needs an update in the schema)

hugegraph-llm/src/hugegraph_llm/operators/hugegraph_op/commit_to_hugegraph.py (6 lines):
- line 50: # TODO: ensure the function works correctly (update the logic later)
- line 107: # TODO: transform to Enum first (better in an earlier step)
- line 125: # TODO: transform to Enum first (better in an earlier step)
- line 135: # TODO: we could try batch-adding vertices first and fall back to single mode if it fails (see the sketch below)
- line 149: # TODO: we could try batch-adding edges first and fall back to single mode if it fails
- line 269: # TODO: check ok below

hugegraph-llm/src/hugegraph_llm/api/rag_api.py (2 lines):
- line 69: # TODO: we need more info in the response for users to understand the query logic
- line 146: # TODO: restructure the implementation of llm into three types, like "/config/chat_llm"

hugegraph-llm/src/hugegraph_llm/operators/llm_op/info_extract.py (2 lines):
- line 105: # TODO: use a more efficient way to compare the extracted & input properties
- line 170: # TODO: make 'max_length' a configurable param in settings.py/settings.cfg

hugegraph-llm/src/hugegraph_llm/operators/index_op/build_semantic_index.py (2 lines):
- line 40: # TODO: use asyncio for IO tasks
- line 52: # TODO: we should build the vid vector index separately, especially when the number of vertices is very large

hugegraph-llm/src/hugegraph_llm/models/llms/openai.py (2 lines):
- line 195: # TODO: log.info("Token usage: %s", completions.usage.model_dump_json())
- line 216: # TODO: list all models and their max tokens from the API

hugegraph-llm/src/hugegraph_llm/operators/llm_op/answer_synthesize.py (2 lines):
- line 29: TODO: it is not clear whether there are any other dependencies on the SCHEMA_EXAMPLE_PROMPT variable.
- line 257: # FIXME: Expected type 'AsyncIterable', got 'Coroutine[Any, Any, AsyncGenerator[str, None]]' instead

hugegraph-llm/src/hugegraph_llm/operators/index_op/vector_index_query.py (2 lines):
- line 38: # TODO: why set dis_threshold=2?
- line 40: # TODO: check the format of the results
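The batch-then-fallback items in commit_to_hugegraph.py (lines 135 and 149) describe a common write pattern: send the whole batch in one request and only retry element by element when the batch is rejected. A minimal sketch of that pattern, assuming hypothetical `add_batch` / `add_one` callables rather than the project's real client methods:

```python
import logging
from typing import Any, Callable, Iterable, List

log = logging.getLogger(__name__)


def commit_with_fallback(
    items: Iterable[Any],
    add_batch: Callable[[List[Any]], None],  # hypothetical: one round trip for the whole batch
    add_one: Callable[[Any], None],          # hypothetical: insert a single element
) -> None:
    """Try a batch insert first; fall back to single-mode inserts if the batch fails."""
    batch = list(items)
    try:
        add_batch(batch)
    except Exception as batch_err:  # e.g. one malformed element rejects the whole batch
        log.warning("Batch insert failed (%s); falling back to single mode", batch_err)
        for item in batch:
            try:
                add_one(item)
            except Exception as item_err:
                log.error("Skipping element %r: %s", item, item_err)
```

The same wrapper shape works for both vertices and edges, which is why the two TODOs point at the same idea.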
hugegraph-llm/src/hugegraph_llm/config/prompt_config.py (2 lines):
- line 153: # TODO: we should provide a better example to reduce the useless information
- line 250: # TODO: we should switch the prompt automatically based on the language (like using context['language'])

hugegraph-llm/src/hugegraph_llm/middleware/middleware.py (2 lines):
- line 26: # TODO: we could use middleware (AOP) in the future (dig out the lifecycle of gradio & fastapi)
- line 33: # TODO: handle time recording for the async task pool in gradio

hugegraph-llm/src/hugegraph_llm/indices/graph_index.py (2 lines):
- line 40: # TODO: replace triples with a more specific graph element type & implement it
- line 44: # TODO: replace triples with a more specific graph element type & implement it

hugegraph-llm/setup.py (1 line):
- line 21: # TODO: remove this file later (replace with poetry/uv configs)

hugegraph-python-client/setup.py (1 line):
- line 21: # TODO: replace it with poetry/uv configs (e.g. pyproject.toml)

hugegraph-python-client/src/pyhugegraph/api/schema_manage/property_key.py (1 line):
- line 26: # TODO: support UpdateStrategy for PropertyKey (refer to java-client/rest-api)

hugegraph-llm/src/hugegraph_llm/api/models/rag_requests.py (1 line):
- line 67: # TODO: import the default value of prompt.* dynamically

hugegraph-python-client/src/pyhugegraph/client.py (1 line):
- line 52: port: str,  # TODO: should port be int?

hugegraph-llm/src/hugegraph_llm/operators/index_op/semantic_id_query.py (1 line):
- line 58: # TODO: we should add a global GraphSchemaCache to avoid calling the server every time (see the sketch at the end of this report)

docker/charts/hg-llm/values.yaml (1 line):
- line 22: # TODO: use a PVC to store vector & graph-backup data in "src/hugegraph_llm/resources/"

hugegraph-llm/src/hugegraph_llm/api/admin_api.py (1 line):
- line 29: # FIXME: line 31: E0702: Raising dict while only classes or instances are allowed (raising-bad-type)

hugegraph-llm/src/hugegraph_llm/models/llms/qianfan.py (1 line):
- line 114: # TODO: replace with a config-based approach

hugegraph-llm/src/hugegraph_llm/config/llm_config.py (1 line):
- line 67: # TODO: update to a single token-key mode

hugegraph-llm/src/hugegraph_llm/indices/keyword_index.py (1 line):
- line 19: """TODO: implement keyword index"""

hugegraph-llm/src/hugegraph_llm/operators/common_op/merge_dedup_rerank.py (1 line):
- line 52: priority: bool = False,  # TODO: implement priority

hugegraph-llm/src/hugegraph_llm/utils/vector_index_utils.py (1 line):
- line 49: # TODO: support PDF files

hugegraph-llm/src/hugegraph_llm/operators/index_op/gremlin_example_index_query.py (1 line):
- line 60: # TODO: use asyncio for IO tasks

hugegraph-llm/src/hugegraph_llm/operators/llm_op/gremlin_generate.py (1 line):
- line 128: # TODO: update to async_generate again

hugegraph-llm/src/hugegraph_llm/operators/llm_op/disambiguate_data.py (1 line):
- line 47: # TODO: verify the logic here

hugegraph-llm/src/hugegraph_llm/operators/hugegraph_op/fetch_graph_data.py (1 line):
- line 33: # TODO: v_limit will influence the vid embedding logic in build_semantic_index.py

hugegraph-llm/src/hugegraph_llm/operators/hugegraph_op/schema_manager.py (1 line):
- line 64: # TODO: enhance the logic here

hugegraph-llm/src/hugegraph_llm/operators/llm_op/property_graph_extract.py (1 line):
- line 30: TODO: it is not clear whether there are any other dependencies on the SCHEMA_EXAMPLE_PROMPT variable.

hugegraph-llm/src/hugegraph_llm/utils/hugegraph_utils.py (1 line):
- line 175: # TODO: In the path demo/rag_demo/configs_block.py,
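The semantic_id_query.py item (line 58) asks for a global GraphSchemaCache so each query does not refetch the schema from the server. A minimal TTL-cache sketch, assuming a hypothetical `fetch_schema(graph_name)` loader instead of the project's real client call:

```python
import threading
import time
from typing import Any, Callable, Dict, Tuple


class GraphSchemaCache:
    """Cache one schema per graph name and refetch it only after `ttl` seconds."""

    def __init__(self, fetch_schema: Callable[[str], Any], ttl: float = 300.0):
        self._fetch_schema = fetch_schema  # hypothetical loader that hits the server
        self._ttl = ttl
        self._lock = threading.Lock()
        self._entries: Dict[str, Tuple[float, Any]] = {}

    def get(self, graph_name: str) -> Any:
        now = time.monotonic()
        with self._lock:
            entry = self._entries.get(graph_name)
            if entry and now - entry[0] < self._ttl:
                return entry[1]  # still fresh: skip the server round trip
        schema = self._fetch_schema(graph_name)  # refresh outside the lock
        with self._lock:
            self._entries[graph_name] = (time.monotonic(), schema)
        return schema
```

A module-level instance shared by the operators would give the "global" behaviour the TODO asks for; explicit invalidation after schema changes is left out of this sketch.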