huggingface / huggingface-llama-recipes
File Size

The distribution of file sizes, measured in lines of code.

File Size Overall
1001+ lines:    81%
501-1000 lines:  5%
201-500 lines:   7%
101-200 lines:   3%
1-100 lines:     1%
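For reference, the bucketing behind these percentages can be sketched in Python. The bucket boundaries and extension filter come from this report; the traversal and counting logic are illustrative, not the analysis tool's actual implementation:

```python
from pathlib import Path
from collections import Counter

# Size buckets used by the report (lines of code per file).
BUCKETS = [(1, 100), (101, 200), (201, 500), (501, 1000), (1001, float("inf"))]

def bucket_label(n_lines: int) -> str:
    """Return the report's bucket label for a file with n_lines lines."""
    for lo, hi in BUCKETS:
        if lo <= n_lines <= hi:
            return f"{lo}+" if hi == float("inf") else f"{lo}-{hi}"
    return "0"  # empty files fall outside every bucket

def size_distribution(root: str, extensions=(".py", ".ipynb", ".yaml")) -> Counter:
    """Count files per size bucket under root (illustrative sketch)."""
    counts = Counter()
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in extensions:
            n_lines = sum(1 for _ in path.open(encoding="utf-8", errors="ignore"))
            counts[bucket_label(n_lines)] += 1
    return counts
```

The per-bucket percentages above would then be each bucket's count divided by the total file count.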


File Size per Extension
Extension | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
ipynb     |   83% |       5% |      7% |      3% |    0%
py        |    0% |       0% |      0% |      0% |  100%
yaml      |    0% |       0% |      0% |      0% |  100%
File Size per Logical Decomposition (primary)
Component                | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
llama_rag                |  100% |       0% |      0% |      0% |    0%
llama_guard              |   90% |       0% |      9% |      0% |    0%
gradio_demos             |   96% |       0% |      0% |      0% |    3%
local_inference          |    0% |      55% |      0% |     39% |    5%
ROOT                     |    0% |     100% |      0% |      0% |    0%
assisted_decoding        |    0% |       0% |     93% |      0% |    6%
fine_tune                |    0% |       0% |     75% |      0% |   24%
api_inference            |    0% |       0% |      0% |    100% |    0%
performance_optimization |    0% |       0% |      0% |     62% |   37%
Longest Files (Top 24)
File                             | Folder                   | # lines | # units
(unnamed)                        |                          |    7687 |       -
(unnamed)                        |                          |    7349 |       -
chatbot_demo.ipynb               | gradio_demos             |    1316 |       -
awq.ipynb                        | local_inference          |     561 |       -
(unnamed)                        |                          |     501 |       -
llama-guard-4.ipynb              | llama_guard              |     435 |       -
prompt_guard.ipynb               | llama_guard              |     371 |       -
assisted_decoding_70B_3B.ipynb   | assisted_decoding        |     353 |       -
(unnamed)                        |                          |     323 |       -
inference-api.ipynb              | api_inference            |     174 |       -
torch_compile_with_torchao.ipynb | performance_optimization |     152 |       -
fp8-405B.ipynb                   | local_inference          |     149 |       -
4bit_bnb.ipynb                   | local_inference          |     125 |       -
8bit_bnb.ipynb                   | local_inference          |     121 |       -
sft_vlm.py                       | fine_tune                |      66 |       1
chatbot_demo.py                  | gradio_demos             |      52 |       3
peft_finetuning.py               | fine_tune                |      40 |       -
awq_generation.py                | local_inference          |      29 |       -
torch_compile.py                 | performance_optimization |      24 |       -
quantized_cache.py               | performance_optimization |      24 |       -
assisted_decoding.py             | assisted_decoding        |      23 |       -
gptq_generation.py               | local_inference          |      23 |       -
prompt_reuse.py                  | performance_optimization |      23 |       -
deepspeed_zero3.yaml             | performance_optimization |      22 |       -
Files With Most Units (Top 2)
File            | Folder       | # lines | # units
chatbot_demo.py | gradio_demos |      52 |       3
sft_vlm.py      | fine_tune    |      66 |       1
Files With Long Lines (Top 14)

There are 14 files with lines longer than 120 characters. In total, there are 114 long lines.

File                             | Folder                   | # lines | # units | # long lines
prompt_guard.ipynb               | llama_guard              |     371 |       - |           18
inference-api.ipynb              | api_inference            |     174 |       - |           17
(unnamed)                        |                          |    7349 |       - |           17
(unnamed)                        |                          |     501 |       - |           12
llama-guard-4.ipynb              | llama_guard              |     435 |       - |           10
chatbot_demo.ipynb               | gradio_demos             |    1316 |       - |            9
(unnamed)                        |                          |     323 |       - |            8
(unnamed)                        |                          |    7687 |       - |            7
assisted_decoding_70B_3B.ipynb   | assisted_decoding        |     353 |       - |            5
awq.ipynb                        | local_inference          |     561 |       - |            4
fp8-405B.ipynb                   | local_inference          |     149 |       - |            3
torch_compile_with_torchao.ipynb | performance_optimization |     152 |       - |            2
quantized_cache.py               | performance_optimization |      24 |       - |            1
prompt_reuse.py                  | performance_optimization |      23 |       - |            1
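The long-line counts above (lines over 120 characters) can be reproduced with a short script over a local checkout. The 120-character threshold and extension set come from this report; everything else is an illustrative sketch, not the analysis tool's own code:

```python
from pathlib import Path

THRESHOLD = 120  # characters; the report flags lines longer than this

def long_lines(path: Path, threshold: int = THRESHOLD) -> int:
    """Count lines in path that are strictly longer than threshold characters."""
    with path.open(encoding="utf-8", errors="ignore") as fh:
        return sum(1 for line in fh if len(line.rstrip("\n")) > threshold)

def long_line_report(root: str) -> list[tuple[str, int]]:
    """Per-file long-line counts for .py/.ipynb/.yaml files, largest first."""
    rows = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".py", ".ipynb", ".yaml"}:
            n = long_lines(path)
            if n:
                rows.append((str(path), n))
    return sorted(rows, key=lambda row: row[1], reverse=True)
```

Summing the second column of the returned rows should match the report's total of 114 long lines across 14 files.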