huggingface / optimum-tpu
File Size

The distribution of file sizes, measured in lines of code.

File Size Overall

1001+ | 501-1000 | 201-500 | 101-200 | 1-100
35% | 33% | 6% | 8% | 15%
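A sketch of how such a size distribution can be computed. This is not the tool's actual implementation; the bucket boundaries come from the legend above, and weighting each bucket by total lines of code (rather than by file count) is an assumption.

```python
from pathlib import Path

# Size buckets from the legend above, as (low, high) line-count bounds.
# None as the upper bound means "no limit" (the 1001+ bucket).
BUCKETS = {
    "1-100": (1, 100),
    "101-200": (101, 200),
    "201-500": (201, 500),
    "501-1000": (501, 1000),
    "1001+": (1001, None),
}

def bucket_label(line_count):
    """Map a file's line count to its size bucket, e.g. 150 -> '101-200'."""
    for label, (low, high) in BUCKETS.items():
        if line_count >= low and (high is None or line_count <= high):
            return label
    return None  # empty files fall outside every bucket

def size_distribution(root):
    """Share of total lines of code per bucket, over all .py files under root.

    Weighting by total lines per bucket is an assumption about how the
    report's percentages are derived.
    """
    totals = dict.fromkeys(BUCKETS, 0)
    for path in Path(root).rglob("*.py"):
        n = sum(1 for _ in path.open(encoding="utf-8", errors="ignore"))
        label = bucket_label(n)
        if label:
            totals[label] += n
    grand = sum(totals.values()) or 1
    return {label: round(100 * count / grand, 1) for label, count in totals.items()}
```

Running `size_distribution(".")` over a checkout would yield one percentage per bucket, comparable to the row above.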


File Size per Extension

Extension | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
py | 36% | 34% | 7% | 8% | 12%
toml | 0% | 0% | 0% | 0% | 100%
in | 0% | 0% | 0% | 0% | 100%
cfg | 0% | 0% | 0% | 0% | 100%
File Size per Logical Decomposition (primary)

Component | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
optimum | 53% | 34% | 0% | 2% | 9%
text-generation-inference | 0% | 34% | 22% | 21% | 22%
ROOT | 0% | 0% | 0% | 0% | 100%
Longest Files (Top 40)

File | Location | # lines | # units
modeling_llama.py | optimum/tpu | 1124 | 48
modeling_mistral.py | optimum/tpu | 1106 | 45
modeling_gemma.py | optimum/tpu | 877 | 38
generator.py | text-generation-inference/server/text_generation_server | 681 | 54
xla_model_parallel.py | optimum/tpu | 548 | 41
generator.py | text-generation-inference/server/text_generation_server/jetstream_pt_support | 439 | 40
engine_loader.py | text-generation-inference/server/text_generation_server/jetstream_pt_support | 182 | 10
token_selector.py | text-generation-inference/server/text_generation_server/jetstream_pt_support | 135 | 4
token_selector.py | optimum/tpu/generation | 116 | 4
llama_model_exportable_hf.py | text-generation-inference/server/text_generation_server/jetstream_pt_support/models | 102 | 5
 |  | 98 | -
server.py | text-generation-inference/server/text_generation_server | 89 | 2
distributed_model.py | optimum/tpu | 79 | 7
cli.py | text-generation-inference/server/text_generation_server | 78 | 2
cli.py | optimum/tpu | 76 | 5
modeling.py | optimum/tpu | 47 | 2
fsdp_v2.py | optimum/tpu | 43 | 4
model.py | optimum/tpu | 43 | 2
Cargo.toml | text-generation-inference | 42 | -
mixtral_model_hf.py | text-generation-inference/server/text_generation_server/jetstream_pt_support/models | 41 | 9
xla_mp_comm.py | optimum/tpu | 39 | 6
logits_process.py | optimum/tpu/generation | 34 | 2
logits_process.py | text-generation-inference/server/text_generation_server/jetstream_pt_support | 33 | 2
pyproject.toml | text-generation-inference/server | 32 | -
auto_generator.py | text-generation-inference/server/text_generation_server | 26 | 1
generator_base.py | text-generation-inference/server/text_generation_server | 25 | 7
compatibility.py | text-generation-inference/server/text_generation_server/jetstream_pt_support | 24 | 2
interceptor.py | text-generation-inference/server/text_generation_server | 23 | -
gemma_model_hf.py | text-generation-inference/server/text_generation_server/jetstream_pt_support/models | 23 | 3
static_cache_xla.py | optimum/tpu | 23 | 2
 |  | 15 | -
setup.cfg | root | 15 | -
xla_logger.py | optimum/tpu | 14 | 4
 |  | 11 | 1
__init__.py | text-generation-inference/server/text_generation_server/jetstream_pt_support/models | 3 | -
version.py | text-generation-inference/server/text_generation_server | 3 | -
__init__.py | optimum/tpu | 3 | -
version.py | optimum/tpu | 3 | -
__init__.py | optimum/tpu/generation | 2 | -
__init__.py | text-generation-inference/server/text_generation_server/jetstream_pt_support | 1 | -
Files With Most Units (Top 28)

File | Location | # lines | # units
generator.py | text-generation-inference/server/text_generation_server | 681 | 54
modeling_llama.py | optimum/tpu | 1124 | 48
modeling_mistral.py | optimum/tpu | 1106 | 45
xla_model_parallel.py | optimum/tpu | 548 | 41
generator.py | text-generation-inference/server/text_generation_server/jetstream_pt_support | 439 | 40
modeling_gemma.py | optimum/tpu | 877 | 38
engine_loader.py | text-generation-inference/server/text_generation_server/jetstream_pt_support | 182 | 10
mixtral_model_hf.py | text-generation-inference/server/text_generation_server/jetstream_pt_support/models | 41 | 9
generator_base.py | text-generation-inference/server/text_generation_server | 25 | 7
distributed_model.py | optimum/tpu | 79 | 7
xla_mp_comm.py | optimum/tpu | 39 | 6
llama_model_exportable_hf.py | text-generation-inference/server/text_generation_server/jetstream_pt_support/models | 102 | 5
cli.py | optimum/tpu | 76 | 5
token_selector.py | text-generation-inference/server/text_generation_server/jetstream_pt_support | 135 | 4
token_selector.py | optimum/tpu/generation | 116 | 4
xla_logger.py | optimum/tpu | 14 | 4
fsdp_v2.py | optimum/tpu | 43 | 4
gemma_model_hf.py | text-generation-inference/server/text_generation_server/jetstream_pt_support/models | 23 | 3
cli.py | text-generation-inference/server/text_generation_server | 78 | 2
compatibility.py | text-generation-inference/server/text_generation_server/jetstream_pt_support | 24 | 2
logits_process.py | text-generation-inference/server/text_generation_server/jetstream_pt_support | 33 | 2
server.py | text-generation-inference/server/text_generation_server | 89 | 2
logits_process.py | optimum/tpu/generation | 34 | 2
modeling.py | optimum/tpu | 47 | 2
static_cache_xla.py | optimum/tpu | 23 | 2
model.py | optimum/tpu | 43 | 2
auto_generator.py | text-generation-inference/server/text_generation_server | 26 | 1
 |  | 11 | 1
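A sketch of how a "unit" count might be derived for a Python file. The report does not define "unit"; assuming it means a function or method definition (a common convention in source-analysis tools), Python's standard `ast` module can count them:

```python
import ast

def count_units(source):
    """Count function and method definitions ('units' is assumed to mean
    these) in a Python source string."""
    tree = ast.parse(source)
    return sum(
        isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        for node in ast.walk(tree)
    )

sample = """
def top_level():
    pass

class Widget:
    def method(self):
        pass

    async def amethod(self):
        pass
"""
# Counts top_level, Widget.method, and Widget.amethod -> 3
print(count_units(sample))
```

Note that `ast.walk` visits nested definitions too, so closures and methods are all counted; a tool could equally count only top-level definitions.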
Files With Long Lines (Top 7)

There are 7 files with lines longer than 120 characters. In total, there are 20 long lines.

File | Location | # lines | # units | # long lines
modeling_mistral.py | optimum/tpu | 1106 | 45 | 6
modeling_gemma.py | optimum/tpu | 877 | 38 | 3
modeling_llama.py | optimum/tpu | 1124 | 48 | 3
generator.py | text-generation-inference/server/text_generation_server | 681 | 54 | 2
token_selector.py | text-generation-inference/server/text_generation_server/jetstream_pt_support | 135 | 4 | 2
generator.py | text-generation-inference/server/text_generation_server/jetstream_pt_support | 439 | 40 | 2
token_selector.py | optimum/tpu/generation | 116 | 4 | 2
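The long-line check above is easy to reproduce locally. A minimal sketch, assuming the report's 120-character threshold and that only `.py` files are scanned:

```python
from pathlib import Path

MAX_LEN = 120  # threshold used by the report above

def long_lines(root, suffixes=(".py",)):
    """Yield (path, line_number, length) for every line longer than MAX_LEN
    in files under root whose suffix is in `suffixes`."""
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or path.suffix not in suffixes:
            continue
        with path.open(encoding="utf-8", errors="ignore") as fh:
            for lineno, line in enumerate(fh, start=1):
                stripped = line.rstrip("\n")
                if len(stripped) > MAX_LEN:
                    yield path, lineno, len(stripped)

if __name__ == "__main__":
    # Summarize per-file counts, like the table above.
    counts = {}
    for path, _, _ in long_lines("."):
        counts[path] = counts.get(path, 0) + 1
    for path, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        print(f"{path} | {n} long lines")
```

Run from a repository checkout, this prints one row per offending file, sorted by the number of long lines.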