aws / sagemaker-hyperpod-recipes
File Size

The distribution of file sizes, measured in lines of code.

File Size Overall

Size (lines)   | Share
1001+          | 0%
501-1000       | 5%
201-500        | 0%
101-200        | 81%
1-100          | 13%
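The report does not show how these buckets were computed. Below is a minimal sketch of how a similar distribution could be reproduced, assuming the percentages are shares of total lines of code, blank lines are ignored, and only the repository's .py/.yaml/.toml sources are scanned (all assumptions, not confirmed by the report):

# Hypothetical reconstruction of the size-bucket distribution above.
from pathlib import Path

BUCKETS = [("1001+", 1001, float("inf")),
           ("501-1000", 501, 1000),
           ("201-500", 201, 500),
           ("101-200", 101, 200),
           ("1-100", 1, 100)]

def size_distribution(root: str, exts=(".py", ".yaml", ".toml")) -> dict:
    # Accumulate lines of code per size bucket, then convert to percentages.
    per_bucket = {label: 0 for label, _, _ in BUCKETS}
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in exts:
            continue
        loc = sum(1 for line in path.read_text(errors="ignore").splitlines() if line.strip())
        for label, lo, hi in BUCKETS:
            if lo <= loc <= hi:
                per_bucket[label] += loc
                break
    total = sum(per_bucket.values()) or 1
    return {label: round(100 * n / total) for label, n in per_bucket.items()}

# e.g. size_distribution("sagemaker-hyperpod-recipes")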


File Size per Extension

Extension | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
py        | 0%    | 35%      | 0%      | 34%     | 29%
yaml      | 0%    | 0%       | 0%      | 89%     | 10%
toml      | 0%    | 0%       | 0%      | 0%      | 100%
File Size per Logical Decomposition (primary)

Component          | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
launcher           | 0%    | 34%      | 0%      | 26%     | 38%
recipes_collection | 0%    | 0%       | 0%      | 92%     | 7%
ROOT               | 0%    | 0%       | 0%      | 86%     | 13%
template           | 0%    | 0%       | 0%      | 100%    | 0%
launcher_scripts   | 0%    | 0%       | 0%      | 0%      | 100%
Longest Files (Top 50)

File | Folder | # lines | # units
stages.py | launcher/nemo | 618 | 49
main.py | root | 195 | 4
training.yaml | launcher/nemo/k8s_templates/training | 178 | -
megatron_llama3_1_8b_nemo.yaml | recipes_collection/recipes/training/llama | 162 | -
efa.py | launcher | 156 | -
value_validator.py | launcher/config_validator | 138 | 14
hf_llama3_8b_seq8k_gpu_dpo.yaml | recipes_collection/recipes/fine-tuning/llama | 110 | -
hf_llama3_8b_seq8k_trn1_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/llama | 109 | -
sm_jobs.py | template | 109 | 2
hf_llama3_2_3b_seq8k_gpu_p5x1_pretrain.yaml | recipes_collection/recipes/training/llama | 108 | -
hf_llama3_2_1b_seq8k_gpu_p5x1_pretrain.yaml | recipes_collection/recipes/training/llama | 108 | -
hf_deepseek_r1_distilled_llama_8b_seq16k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/deepseek | 108 | -
hf_deepseek_r1_distilled_llama_8b_seq8k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/deepseek | 108 | -
hf_deepseek_r1_distilled_llama_70b_seq8k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/deepseek | 108 | -
hf_deepseek_r1_distilled_llama_70b_seq16k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/deepseek | 108 | -
hf_llama3_70b_seq8k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/llama | 108 | -
hf_llama3_3_70b_seq8k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/llama | 108 | -
hf_llama3_8b_seq8k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/llama | 108 | -
hf_llama3_3_70b_seq16k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/llama | 108 | -
hf_llama3_70b_seq16k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/llama | 108 | -
hf_llama3_8b_seq16k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/llama | 108 | -
hf_llama3_8b_seq8k_gpu_p5x16_pretrain.yaml | recipes_collection/recipes/training/llama | 107 | -
hf_llama3_8b_seq16k_gpu_p5x16_pretrain.yaml | recipes_collection/recipes/training/llama | 107 | -
hf_llama3_70b_seq16k_gpu_p5x64_pretrain.yaml | recipes_collection/recipes/training/llama | 107 | -
hf_llama3_70b_seq16k_gpu_p5x128_pretrain.yaml | recipes_collection/recipes/training/llama | 107 | -
hf_llama3_8b_seq8k_gpu_p5x32_pretrain.yaml | recipes_collection/recipes/training/llama | 107 | -
hf_llama3_70b_seq8k_gpu_p5x32_pretrain.yaml | recipes_collection/recipes/training/llama | 107 | -
hf_llama3_70b_seq8k_gpu_p5x64_pretrain.yaml | recipes_collection/recipes/training/llama | 107 | -
hf_llama3_8b_seq16k_gpu_p5x32_pretrain.yaml | recipes_collection/recipes/training/llama | 107 | -
hf_llama3_70b_seq8k_gpu_p5x128_pretrain.yaml | recipes_collection/recipes/training/llama | 107 | -
hf_llama3_70b_seq16k_gpu_p5x32_pretrain.yaml | recipes_collection/recipes/training/llama | 107 | -
hf_deepseek_r1_distilled_qwen_32b_seq16k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/deepseek | 107 | -
hf_deepseek_r1_distilled_qwen_32b_seq16k_gpu_lora.yaml | recipes_collection/recipes/fine-tuning/deepseek | 107 | -
hf_deepseek_r1_distilled_qwen_32b_seq8k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/deepseek | 107 | -
hf_deepseek_r1_distilled_qwen_32b_seq8k_gpu_lora.yaml | recipes_collection/recipes/fine-tuning/deepseek | 107 | -
hf_llama4_17b_16e_seq4k_gpu_lora_text_to_text.yaml | recipes_collection/recipes/fine-tuning/llama | 107 | -
hf_llama4_17b_16e_seq8k_gpu_lora_multimodal_finetuning.yaml | recipes_collection/recipes/fine-tuning/llama | 107 | -
hf_llama4_17b_16e_seq4k_gpu_lora_multimodal_finetuning.yaml | recipes_collection/recipes/fine-tuning/llama | 107 | -
hf_llama4_17b_16e_seq8k_gpu_lora_text_to_text.yaml | recipes_collection/recipes/fine-tuning/llama | 107 | -
p4_hf_llama3_70b_seq8k_gpu.yaml | recipes_collection/recipes/training/llama | 106 | -
hf_deepseek_r1_distilled_qwen_1_dot_5b_seq16k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/deepseek | 106 | -
hf_deepseek_r1_distilled_qwen_1_dot_5b_seq16k_gpu_lora.yaml | recipes_collection/recipes/fine-tuning/deepseek | 106 | -
hf_deepseek_r1_distilled_qwen_14b_seq16k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/deepseek | 106 | -
hf_deepseek_r1_distilled_llama_8b_seq8k_gpu_lora.yaml | recipes_collection/recipes/fine-tuning/deepseek | 106 | -
hf_deepseek_r1_distilled_qwen_7b_seq16k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/deepseek | 106 | -
hf_deepseek_r1_distilled_qwen_14b_seq8k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/deepseek | 106 | -
hf_deepseek_r1_distilled_qwen_1_dot_5b_seq8k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/deepseek | 106 | -
hf_deepseek_r1_distilled_qwen_1_dot_5b_seq8k_gpu_lora.yaml | recipes_collection/recipes/fine-tuning/deepseek | 106 | -
hf_deepseek_r1_distilled_qwen_7b_seq8k_gpu_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/deepseek | 106 | -
hf_deepseek_r1_distilled_qwen_14b_seq8k_gpu_lora.yaml | recipes_collection/recipes/fine-tuning/deepseek | 106 | -
Files With Most Units (Top 11)

File | Folder | # lines | # units
stages.py | launcher/nemo | 618 | 49
recipe_stages.py | launcher/nemo | 91 | 14
value_validator.py | launcher/config_validator | 138 | 14
type_validator.py | launcher/config_validator | 99 | 10
slurm_launcher.py | launcher/nemo | 93 | 7
launchers.py | launcher/nemo | 33 | 4
telemetry.py | launcher | 78 | 4
main.py | root | 195 | 4
sm_jobs.py | template | 109 | 2
(unknown) | (unknown) | 82 | 2
(unknown) | (unknown) | 15 | 1
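The "# units" column counts decomposed units per file; for the Python sources this plausibly corresponds to function and method definitions. A minimal sketch of how such a count could be approximated with the standard library (the report's exact unit definition is an assumption):

import ast

def count_units(path: str) -> int:
    # Approximate "# units" as the number of function and method definitions.
    tree = ast.parse(open(path, encoding="utf-8").read())
    return sum(isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
               for node in ast.walk(tree))

# e.g. count_units("launcher/nemo/stages.py") should land near the reported 49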
Files With Long Lines (Top 8)

There are 8 files with lines longer than 120 characters. In total, there are 11 long lines.

File | Folder | # lines | # units | # long lines
stages.py | launcher/nemo | 618 | 49 | 4
hf_llama3_8b_seq8k_trn1x4_pretrain.yaml | recipes_collection/recipes/training/llama | 101 | - | 1
hf_llama3_70b_seq8k_trn1x16_pretrain.yaml | recipes_collection/recipes/training/llama | 104 | - | 1
megatron_llama3_1_8b_nemo.yaml | recipes_collection/recipes/training/llama | 162 | - | 1
hf_llama3_8b_seq8k_trn1_fine_tuning.yaml | recipes_collection/recipes/fine-tuning/llama | 109 | - | 1
(unknown) | (unknown) | 15 | - | 1
train-script-trn.yaml | launcher/nemo/k8s_templates/training | 88 | - | 1
train-script-gpu.yaml | launcher/nemo/k8s_templates/training | 49 | - | 1
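A short sketch of how the long-line count above could be reproduced; the 120-character threshold comes from the report, while the scanned file extensions are an assumption:

from pathlib import Path

def long_line_counts(root: str, limit: int = 120, exts=(".py", ".yaml", ".toml")) -> dict:
    # Map each file to its number of lines exceeding the length limit.
    counts = {}
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in exts:
            continue
        n = sum(1 for line in path.read_text(errors="ignore").splitlines() if len(line) > limit)
        if n:
            counts[str(path)] = n
    return counts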
Correlations

File Size vs. Commits (all time): 116 points

[Scatter plot: each of the 116 points is one file, x = commits (all time), y = lines of code. launcher/nemo/stages.py is the outlier at 17 commits and 618 lines.]

lines of code: min: 1.0 | average: 98.73 | 25th percentile: 100.0 | median: 106.0 | 75th percentile: 107.0 | max: 618.0
commits (all time): min: 1.0 | average: 3.79 | 25th percentile: 3.0 | median: 3.0 | 75th percentile: 5.0 | max: 17.0
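The x-axis here is the number of commits that ever touched each file. A hedged sketch of how one scatter point could be recomputed with plain git (whether the report counts commits the same way, for example with or without renames, is an assumption):

import subprocess

def commits_touching(path: str) -> int:
    # Count commits on the current branch that modified the given path.
    out = subprocess.run(["git", "rev-list", "--count", "HEAD", "--", path],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

def scatter_point(path: str) -> tuple:
    # Pair the commit count with a simple non-blank line count for the y-axis.
    loc = sum(1 for line in open(path, encoding="utf-8") if line.strip())
    return commits_touching(path), loc

# e.g. scatter_point("launcher/nemo/stages.py") -> roughly (17, 618) per the plot above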

File Size vs. Contributors (all time): 116 points

[Scatter plot: each of the 116 points is one file, x = contributors (all time), y = lines of code. launcher/nemo/stages.py is the outlier at 9 contributors and 618 lines.]

lines of code: min: 1.0 | average: 98.73 | 25th percentile: 100.0 | median: 106.0 | 75th percentile: 107.0 | max: 618.0
contributors (all time): min: 1.0 | average: 2.44 | 25th percentile: 2.0 | median: 2.0 | 75th percentile: 3.0 | max: 9.0
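The contributor count per file can be approximated in the same way by collecting distinct author emails from the file's git history (an approximation; the report's exact definition of a contributor is an assumption):

import subprocess

def contributors_touching(path: str) -> int:
    # Count distinct author emails among commits that modified the given path.
    out = subprocess.run(["git", "log", "--format=%ae", "--", path],
                         capture_output=True, text=True, check=True)
    return len({email for email in out.stdout.splitlines() if email})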

File Size vs. Commits (30 days): 6 points

[Scatter plot: each of the 6 points is one file changed in the last 30 days, x = commits (30d), y = lines of code. launcher/nemo/stages.py is the outlier at 4 commits and 618 lines.]

lines of code: min: 107.0 | average: 192.67 | 25th percentile: 107.0 | median: 107.0 | 75th percentile: 237.0 | max: 618.0
commits (30d): min: 2.0 | average: 3.0 | 25th percentile: 2.0 | median: 3.0 | 75th percentile: 4.0 | max: 4.0

File Size vs. Contributors (30 days): 6 points

[Scatter plot: each of the 6 points is one file changed in the last 30 days, x = contributors (30d), y = lines of code.]

lines of code: min: 107.0 | average: 192.67 | 25th percentile: 107.0 | median: 107.0 | 75th percentile: 237.0 | max: 618.0
contributors (30d): min: 2.0 | average: 2.0 | 25th percentile: 2.0 | median: 2.0 | 75th percentile: 2.0 | max: 2.0

File Size vs. Commits (90 days): 98 points

[Scatter plot: each of the 98 points is one file changed in the last 90 days, x = commits (90d), y = lines of code. launcher/nemo/stages.py is the outlier at 12 commits and 618 lines.]

lines of code: min: 12.0 | average: 108.97 | 25th percentile: 104.0 | median: 106.0 | 75th percentile: 107.0 | max: 618.0
commits (90d): min: 2.0 | average: 2.45 | 25th percentile: 2.0 | median: 2.0 | 75th percentile: 2.0 | max: 12.0

File Size vs. Contributors (90 days): 98 points

[Scatter plot: each of the 98 points is one file changed in the last 90 days, x = contributors (90d), y = lines of code. launcher/nemo/stages.py is the outlier at 5 contributors and 618 lines.]

lines of code: min: 12.0 | average: 108.97 | 25th percentile: 104.0 | median: 106.0 | 75th percentile: 107.0 | max: 618.0
contributors (90d): min: 1.0 | average: 2.02 | 25th percentile: 2.0 | median: 2.0 | 75th percentile: 2.0 | max: 5.0