azure / slm-innovator-lab
File Size

The distribution of file sizes, measured in lines of code.

File Size Overall

File size (lines) | Share
1001+ | 0%
501-1000 | 61%
201-500 | 27%
101-200 | 3%
1-100 | 8%


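To make the bucketing concrete, here is a minimal sketch of how such a distribution can be computed, assuming a local checkout of the repository, Python 3.9+, and treating non-empty lines as "lines of code" (the report generator's exact LOC rules may differ):

```python
from pathlib import Path

# Size buckets as used in this report: (lower bound, upper bound, label).
BUCKETS = [(1001, float("inf"), "1001+"), (501, 1000, "501-1000"),
           (201, 500, "201-500"), (101, 200, "101-200"), (1, 100, "1-100")]
# Extensions covered by the per-extension table below.
EXTENSIONS = {".py", ".ipynb", ".yaml", ".jinja2", ".jsonl", ".html"}

def loc(path: Path) -> int:
    """Rough 'lines of code': non-empty lines in the file."""
    return sum(1 for line in path.read_text(errors="ignore").splitlines()
               if line.strip())

def size_distribution(root: str) -> dict:
    """Percentage of total LOC held by files in each size bucket."""
    sizes = [loc(p) for p in Path(root).rglob("*")
             if p.is_file() and p.suffix in EXTENSIONS]
    total = sum(sizes) or 1
    return {label: round(100 * sum(s for s in sizes if lo <= s <= hi) / total, 1)
            for lo, hi, label in BUCKETS}

print(size_distribution("."))  # e.g. {'1001+': 0.0, '501-1000': 61.0, ...}
```

Note that the shares are weighted by lines of code rather than by file count, which is why the handful of 800-900 line notebooks dominates the 501-1000 bucket.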
File Size per Extension

Extension | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
ipynb | 0% | 75% | 20% | 4% | 0%
py | 0% | 20% | 58% | 0% | 20%
yaml | 0% | 0% | 0% | 0% | 100%
jinja2 | 0% | 0% | 0% | 0% | 100%
jsonl | 0% | 0% | 0% | 0% | 100%
html | 0% | 0% | 0% | 0% | 100%
File Size per Logical Decomposition

primary

Component | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
2_slm-fine-tuning-mlstudio | 0% | 79% | 14% | 2% | 3%
1_synthetic-qa-generation | 0% | 53% | 40% | 2% | 3%
3_llmops-aifoundry | 0% | 41% | 29% | 0% | 28%
0_lab_preparation | 0% | 0% | 56% | 39% | 3%
ROOT | 0% | 0% | 0% | 0% | 100%
Longest Files (Top 50)

File | # lines | # units
2_slm-fine-tuning-mlstudio/phi/2_serving_basic_phi.ipynb | 923 | -
1_synthetic-qa-generation/seed/make_qa_multimodal_pdf_docai.ipynb | 901 | -
2_slm-fine-tuning-mlstudio/phi/2_serving_llm_phi.ipynb | 871 | -
2_slm-fine-tuning-mlstudio/phi/1_training_mlflow_phi.ipynb | 866 | -
2_slm-fine-tuning-mlstudio/phi/1_training_custom_phi.ipynb | 863 | -
1_synthetic-qa-generation/glan-instruct/glan_tutorial.ipynb | 831 | -
2_slm-fine-tuning-mlstudio/florence2-VQA/2_serving_florence2.ipynb | 811 | -
1_synthetic-qa-generation/reasoningplaning/evolve.py | 697 | 48
1_synthetic-qa-generation/seed/make_qa_multimodal_pdf_oss.ipynb | 679 | -
2_slm-fine-tuning-mlstudio/florence2-VQA/1_training_mlflow_florence2.ipynb | 640 | -
2_slm-fine-tuning-mlstudio/phi/3_optimization_olive.ipynb | 599 | -
3_llmops-aifoundry/3_4_operationalizing/contentsafety_with_code_ja.ipynb | 579 | -
3_llmops-aifoundry/3_4_operationalizing/contentsafety_with_code_en.ipynb | 559 | -
3_llmops-aifoundry/3_2_prototyping/promptflow_with_code.ipynb | 486 | -
1_synthetic-qa-generation/seed/make_qa_only_image_pdf.ipynb | 416 | -
1_synthetic-qa-generation/auto-evolve-instruct/trial.ipynb | 393 | -
1_synthetic-qa-generation/glan-instruct/glan.py | 378 | 13
1_synthetic-qa-generation/evolve-instruct/evolve.py | 361 | 14
3_llmops-aifoundry/3_3_optimizing/promptflow_with_evaluation_code.ipynb | 345 | -
1_synthetic-qa-generation/seed/make_qa_only_image_multiple_pdf.ipynb | 288 | -
0_lab_preparation/1_get_started.ipynb | 276 | -
2_slm-fine-tuning-mlstudio/phi/src_train/train_mlflow.py | 264 | 5
2_slm-fine-tuning-mlstudio/phi/src_train/train.py | 263 | 4
2_slm-fine-tuning-mlstudio/phi/olive/phi3.py | 260 | 5
1_synthetic-qa-generation/seed/util/preprocess.py | 241 | 21
1_synthetic-qa-generation/seed/make_qa_csv.ipynb | 236 | -
2_slm-fine-tuning-mlstudio/florence2-VQA/src_train/train_mlflow.py | 206 | 10
0_lab_preparation/2_prompty.ipynb | 192 | -
2_slm-fine-tuning-mlstudio/phi/dataset-preparation/1_data-preparation-basic.ipynb | 189 | -
1_synthetic-qa-generation/seed/merge_training_dataset_json.ipynb | 121 | -
1_synthetic-qa-generation/seed/util/common_utils.py | 68 | 4
2_slm-fine-tuning-mlstudio/florence2-VQA/src_serve/score.py | 57 | 3
3_llmops-aifoundry/3_3_optimizing/flow-template/evaluation.flow.dag.yaml | 55 | -
3_llmops-aifoundry/3_3_optimizing/evaluation/flow.dag.yaml | 55 | -
2_slm-fine-tuning-mlstudio/phi/dataset-preparation/train_tokenizer.py | 51 | 2
3_llmops-aifoundry/3_3_optimizing/flow-template/chat-serverless.flow.dag.yaml | 47 | -
3_llmops-aifoundry/3_2_prototyping/flow-template/chat-serverless.flow.dag.yaml | 47 | -
3_llmops-aifoundry/3_2_prototyping/flow-template/chat-context.flow.dag.yaml | 46 | -
3_llmops-aifoundry/3_2_prototyping/chat-context/flow.dag.yaml | 46 | -
3_llmops-aifoundry/3_2_prototyping/chat-serverless/flow.dag.yaml | 46 | -
3_llmops-aifoundry/3_2_prototyping/chat/phi35_chatcompletion.py | 44 | 2
3_llmops-aifoundry/3_2_prototyping/chat/phi35_finetuned.py | 43 | 2
3_llmops-aifoundry/3_2_prototyping/chat-context/phi35_finetuned.py | 42 | 2
3_llmops-aifoundry/3_2_prototyping/flow-template/chat.flow.dag.yaml | 41 | -
3_llmops-aifoundry/3_2_prototyping/chat-serverless/phi35_chatcompletion.py | 41 | 2
3_llmops-aifoundry/3_2_prototyping/chat/flow.dag.yaml | 41 | -
2_slm-fine-tuning-mlstudio/phi/dataset-preparation/combine_tokenizer.py | 39 | 2
3_llmops-aifoundry/3_3_optimizing/flow.dag.yaml | 39 | -
1_synthetic-qa-generation/seed/util/qa.py | 39 | 4
1_synthetic-qa-generation/evolve-instruct/convert.py | 34 | 1
Files With Most Units (Top 24)

File | # lines | # units
1_synthetic-qa-generation/reasoningplaning/evolve.py | 697 | 48
1_synthetic-qa-generation/seed/util/preprocess.py | 241 | 21
1_synthetic-qa-generation/evolve-instruct/evolve.py | 361 | 14
1_synthetic-qa-generation/glan-instruct/glan.py | 378 | 13
2_slm-fine-tuning-mlstudio/florence2-VQA/src_train/train_mlflow.py | 206 | 10
2_slm-fine-tuning-mlstudio/phi/src_train/train_mlflow.py | 264 | 5
2_slm-fine-tuning-mlstudio/phi/olive/phi3.py | 260 | 5
2_slm-fine-tuning-mlstudio/phi/src_train/train.py | 263 | 4
1_synthetic-qa-generation/seed/util/common_utils.py | 68 | 4
1_synthetic-qa-generation/seed/util/qa.py | 39 | 4
2_slm-fine-tuning-mlstudio/florence2-VQA/src_serve/score.py | 57 | 3
0_lab_preparation/common.py | 17 | 2
2_slm-fine-tuning-mlstudio/phi/dataset-preparation/combine_tokenizer.py | 39 | 2
2_slm-fine-tuning-mlstudio/phi/dataset-preparation/train_tokenizer.py | 51 | 2
2_slm-fine-tuning-mlstudio/phi/src_serve/score.py | 28 | 2
3_llmops-aifoundry/3_2_prototyping/chat-context/phi35_finetuned.py | 42 | 2
3_llmops-aifoundry/3_2_prototyping/chat-serverless/phi35_chatcompletion.py | 41 | 2
3_llmops-aifoundry/3_2_prototyping/chat/phi35_chatcompletion.py | 44 | 2
3_llmops-aifoundry/3_2_prototyping/chat/phi35_finetuned.py | 43 | 2
2_slm-fine-tuning-mlstudio/phi/cloud/serve/sglang_api.py | 27 | 1
3_llmops-aifoundry/3_3_optimizing/evaluation/concat_scores.py | 25 | 1
3_llmops-aifoundry/3_3_optimizing/evaluation/aggregate_variants_results.py | 24 | 1
1_synthetic-qa-generation/evolve-instruct/convert.py | 34 | 1
1_synthetic-qa-generation/seed/util/qa_pair.py | 10 | 1
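A "unit" here is a callable unit of code; for the Python files above that is roughly a function or method. A minimal sketch of how these counts could be approximated, assuming units correspond to function and method definitions (the report's exact unit rules may differ):

```python
import ast
from pathlib import Path

def unit_count(path: Path) -> int:
    """Approximate '# units' as function/method definitions in a Python file."""
    tree = ast.parse(path.read_text(errors="ignore"))
    return sum(isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
               for node in ast.walk(tree))

# The table above reports 1 unit for this file.
print(unit_count(Path("1_synthetic-qa-generation/evolve-instruct/convert.py")))
```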
Files With Long Lines (Top 39)

There are 39 files with lines longer than 120 characters; in total, there are 389 long lines. (A sketch for reproducing this check follows the table.)

File | # lines | # units | # long lines
1_synthetic-qa-generation/auto-evolve-instruct/trial.ipynb | 393 | - | 43
1_synthetic-qa-generation/glan-instruct/glan_tutorial.ipynb | 831 | - | 31
2_slm-fine-tuning-mlstudio/phi/1_training_mlflow_phi.ipynb | 866 | - | 25
2_slm-fine-tuning-mlstudio/phi/1_training_custom_phi.ipynb | 863 | - | 25
2_slm-fine-tuning-mlstudio/florence2-VQA/1_training_mlflow_florence2.ipynb | 640 | - | 21
3_llmops-aifoundry/3_3_optimizing/data/simple_qna_data_en.jsonl | 19 | - | 19
1_synthetic-qa-generation/seed/make_qa_multimodal_pdf_docai.ipynb | 901 | - | 19
3_llmops-aifoundry/3_4_operationalizing/contentsafety_with_code_ja.ipynb | 579 | - | 16
3_llmops-aifoundry/3_2_prototyping/promptflow_with_code.ipynb | 486 | - | 16
1_synthetic-qa-generation/seed/make_qa_multimodal_pdf_oss.ipynb | 679 | - | 15
3_llmops-aifoundry/3_4_operationalizing/contentsafety_with_code_en.ipynb | 559 | - | 14
2_slm-fine-tuning-mlstudio/phi/3_optimization_olive.ipynb | 599 | - | 12
2_slm-fine-tuning-mlstudio/florence2-VQA/2_serving_florence2.ipynb | 811 | - | 12
1_synthetic-qa-generation/reasoningplaning/evolve.py | 697 | 48 | 12
1_synthetic-qa-generation/glan-instruct/glan.py | 378 | 13 | 11
2_slm-fine-tuning-mlstudio/florence2-VQA/src_train/train_mlflow.py | 206 | 10 | 10
3_llmops-aifoundry/3_3_optimizing/evaluation/groundedness_score.jinja2 | 30 | - | 10
1_synthetic-qa-generation/evolve-instruct/evolve.py | 361 | 14 | 10
2_slm-fine-tuning-mlstudio/phi/2_serving_basic_phi.ipynb | 923 | - | 9
2_slm-fine-tuning-mlstudio/phi/2_serving_llm_phi.ipynb | 871 | - | 8
3_llmops-aifoundry/3_3_optimizing/data/simple_math_data_en.jsonl | 20 | - | 8
3_llmops-aifoundry/3_3_optimizing/promptflow_with_evaluation_code.ipynb | 345 | - | 8
3_llmops-aifoundry/3_3_optimizing/data/qna_outdoor.jsonl | 5 | - | 5
0_lab_preparation/2_prompty.ipynb | 192 | - | 4
0_lab_preparation/1_get_started.ipynb | 276 | - | 3
2_slm-fine-tuning-mlstudio/phi/dataset-preparation/1_data-preparation-basic.ipynb | 189 | - | 3
1_synthetic-qa-generation/seed/make_qa_only_image_pdf.ipynb | 416 | - | 3
1_synthetic-qa-generation/seed/make_qa_only_image_multiple_pdf.ipynb | 288 | - | 3
2_slm-fine-tuning-mlstudio/phi/dataset-preparation/combine_tokenizer.py | 39 | 2 | 2
3_llmops-aifoundry/3_2_prototyping/data/questions_outdoor.jsonl | 4 | - | 2
1_synthetic-qa-generation/seed/make_qa_csv.ipynb | 236 | - | 2
2_slm-fine-tuning-mlstudio/phi/src_serve/score.py | 28 | 2 | 1
2_slm-fine-tuning-mlstudio/florence2-VQA/src_serve/score.py | 57 | 3 | 1
3_llmops-aifoundry/3_2_prototyping/chat-context/phi35_finetuned.py | 42 | 2 | 1
3_llmops-aifoundry/3_2_prototyping/chat/phi35_chatcompletion.py | 44 | 2 | 1
3_llmops-aifoundry/3_2_prototyping/chat/phi35_finetuned.py | 43 | 2 | 1
1_synthetic-qa-generation/evolve-instruct/convert.py | 34 | 1 | 1
1_synthetic-qa-generation/seed/util/common_utils.py | 68 | 4 | 1
1_synthetic-qa-generation/glan-instruct/generate_answer_only.py | 28 | - | 1
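The long-line counts can be reproduced with a short script; a minimal sketch, assuming the report's 120-character threshold and a local checkout:

```python
from pathlib import Path

MAX_LEN = 120  # threshold used by the report above

def long_lines(path: Path) -> int:
    """Number of lines in the file longer than MAX_LEN characters."""
    return sum(1 for line in path.read_text(errors="ignore").splitlines()
               if len(line) > MAX_LEN)

# Rank source files by long-line count, most offenders first.
files = [p for p in Path(".").rglob("*")
         if p.is_file() and p.suffix in {".py", ".ipynb", ".jinja2", ".jsonl", ".yaml"}]
for count, path in sorted(((long_lines(p), str(p)) for p in files), reverse=True)[:10]:
    print(f"{count:4d}  {path}")
```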
Correlations

File Size vs. Commits (all time): 73 points

[Scatter plot: one point per file; x = commits (all time), y = lines of code. The most frequently changed file is 1_synthetic-qa-generation/seed/make_qa_multimodal_pdf_docai.ipynb (20 commits, 901 lines of code).]

lines of code: min 1.0 | average 219.82 | 25th percentile 25.0 | median 47.0 | 75th percentile 353.0 | max 923.0
commits (all time): min 1.0 | average 2.82 | 25th percentile 1.0 | median 1.0 | 75th percentile 2.0 | max 20.0
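The commit counts behind this plot come from version-control history. A minimal sketch of how file size can be paired with all-time commit counts, assuming a local git clone and Python 3.10+ (for statistics.correlation):

```python
import subprocess
from collections import Counter
from pathlib import Path
from statistics import correlation  # Python 3.10+

# Count all-time commits per file from git history.
log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout
commits = Counter(line for line in log.splitlines() if line)

# Pair commit counts with file sizes (non-empty lines), skipping deleted files.
points = [(n, sum(1 for l in Path(f).read_text(errors="ignore").splitlines() if l.strip()))
          for f, n in commits.items() if Path(f).is_file()]

xs, ys = zip(*points)
print(f"{len(points)} points, Pearson r = {correlation(xs, ys):.2f}")
```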

File Size vs. Contributors (all time): 73 points

[Scatter plot: one point per file; x = contributors (all time), y = lines of code. The file with the most contributors is 1_synthetic-qa-generation/seed/make_qa_multimodal_pdf_docai.ipynb (5 contributors, 901 lines of code).]

lines of code: min 1.0 | average 219.82 | 25th percentile 25.0 | median 47.0 | 75th percentile 353.0 | max 923.0
contributors (all time): min 1.0 | average 1.44 | 25th percentile 1.0 | median 1.0 | 75th percentile 1.0 | max 5.0
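Contributor counts can be derived the same way; a minimal sketch, assuming git authorship (%an) is a fair proxy for "contributors":

```python
import subprocess

def contributor_count(path: str) -> int:
    """Distinct all-time commit authors for a file, per git history."""
    authors = subprocess.run(
        ["git", "log", "--follow", "--format=%an", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return len(set(authors))

# The report above shows 5 contributors for this file.
print(contributor_count("1_synthetic-qa-generation/seed/make_qa_multimodal_pdf_docai.ipynb"))
```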

File Size vs. Commits (30 days): 0 points

No data for "commits (30d)" vs. "lines of code".

File Size vs. Contributors (30 days): 0 points

No data for "contributors (30d)" vs. "lines of code".


File Size vs. Commits (90 days): 17 points

[Scatter plot: one point per file; x = commits (90d), y = lines of code. The most active file in the window is 2_slm-fine-tuning-mlstudio/phi/2_serving_llm_phi.ipynb (3 commits, 871 lines of code).]

lines of code: min 12.0 | average 395.94 | 25th percentile 33.5 | median 263.0 | 75th percentile 837.0 | max 923.0
commits (90d): min 1.0 | average 1.24 | 25th percentile 1.0 | median 1.0 | 75th percentile 1.0 | max 3.0

File Size vs. Contributors (90 days): 17 points

[Scatter plot: one point per file; x = contributors (90d), y = lines of code. Every file touched in the window has exactly one contributor.]

lines of code: min 12.0 | average 395.94 | 25th percentile 33.5 | median 263.0 | 75th percentile 837.0 | max 923.0
contributors (90d): min 1.0 | average 1.0 | 25th percentile 1.0 | median 1.0 | 75th percentile 1.0 | max 1.0