| Size | # | Location 1 | Lines 1 | Location 2 | Lines 2 |
|------|---|------------|---------|------------|---------|
| 30 | x 2 | projects/deep_video_compression | 135:174 (13%) | projects/deep_video_compression | 206:245 (13%) |
| 18 | x 2 | neuralcompression/layers/_synthesis_transformation_2d.py | 41:58 (38%) | neuralcompression/layers/_synthesis_transformation_2d.py | 50:67 (38%) |
| 16 | x 2 | neuralcompression/layers/_analysis_transformation_2d.py | 41:56 (37%) | neuralcompression/layers/_analysis_transformation_2d.py | 49:64 (37%) |
| 14 | x 2 | projects/variational_image_compression/lightning/_mean_scale_hyperprior_autoencoder.py | 68:83 (31%) | projects/variational_image_compression/lightning/_scale_hyperprior_autoencoder.py | 68:83 (31%) |
| 12 | x 2 | projects/variational_image_compression/lightning/_factorized_prior_autoencoder.py | 52:64 (26%) | projects/variational_image_compression/lightning/_scale_hyperprior_autoencoder.py | 54:66 (26%) |
| 12 | x 2 | projects/variational_image_compression/lightning/_factorized_prior_autoencoder.py | 52:64 (26%) | projects/variational_image_compression/lightning/_mean_scale_hyperprior_autoencoder.py | 54:66 (26%) |
| 12 | x 2 | projects/variational_image_compression/lightning/_mean_scale_hyperprior_autoencoder.py | 54:66 (26%) | projects/variational_image_compression/lightning/_scale_hyperprior_autoencoder.py | 54:66 (26%) |
| 12 | x 2 | neuralcompression/models/deep_video_compression.py | 336:348 (2%) | neuralcompression/models/deep_video_compression.py | 422:434 (2%) |
| 12 | x 2 | neuralcompression/layers | 34:75 (30%) | neuralcompression/layers | 81:122 (30%) |
| 12 | x 2 | projects/variational_image_compression/lightning/_factorized_prior_autoencoder.py | 66:77 (26%) | projects/variational_image_compression/lightning/_mean_scale_hyperprior_autoencoder.py | 68:79 (26%) |
| 12 | x 2 | projects/variational_image_compression/lightning/_factorized_prior_autoencoder.py | 66:77 (26%) | projects/variational_image_compression/lightning/_scale_hyperprior_autoencoder.py | 68:79 (26%) |
| 11 | x 2 | neuralcompression/models/_mean_scale_hyperprior_autoencoder.py | 101:114 (10%) | neuralcompression/models/_scale_hyperprior_autoencoder.py | 49:62 (18%) |
| 11 | x 2 | projects/deep_video_compression | 111:121 (4%) | projects/deep_video_compression | 179:189 (4%) |
| 11 | x 2 | neuralcompression/entropy_coders/jax_arithemetic_coder.py | 163:173 (2%) | neuralcompression/entropy_coders/jax_arithemetic_coder.py | 286:296 (2%) |
| 10 | x 2 | projects/variational_image_compression/lightning/_mean_scale_hyperprior_autoencoder.py | 37:46 (22%) | projects/variational_image_compression/lightning/_scale_hyperprior_autoencoder.py | 37:46 (22%) |
| 10 | x 2 | projects/deep_video_compression | 80:90 (16%) | projects/scale_hyperprior_lightning | 66:76 (20%) |
| 10 | x 2 | projects/deep_video_compression | 122:133 (4%) | projects/deep_video_compression | 191:202 (4%) |
| 10 | x 2 | neuralcompression/functional | 75:85 (25%) | neuralcompression/metrics | 135:145 (11%) |
| 10 | x 2 | projects/variational_image_compression/lightning/_factorized_prior_autoencoder.py | 35:44 (21%) | projects/variational_image_compression/lightning/_scale_hyperprior_autoencoder.py | 37:46 (22%) |
| 10 | x 2 | projects/variational_image_compression/lightning/_factorized_prior_autoencoder.py | 35:44 (21%) | projects/variational_image_compression/lightning/_mean_scale_hyperprior_autoencoder.py | 37:46 (22%) |
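Findings like these can be regenerated locally. Below is a minimal sketch, assuming pylint is installed and the command is run from the repository root; it enables only pylint's duplicate-code checker (R0801), which reports similar blocks of the kind tabulated above. The table itself may have been produced by a different clone detector, so treat this as one way to approximate it, not the original tooling.

```python
# Minimal sketch: reproduce duplicate-code findings with pylint's R0801 checker.
# Assumptions: pylint (>= 2.13, for --recursive) is installed and this runs from
# the repository root; the table above may come from a different clone detector.
import subprocess

result = subprocess.run(
    [
        "pylint",
        "--disable=all",              # switch every checker off ...
        "--enable=duplicate-code",    # ... except the similarity checker (R0801)
        "--min-similarity-lines=10",  # match the smallest clone size shown above
        "--recursive=y",              # also walk plain (non-package) directories
        "neuralcompression",
        "projects",
    ],
    capture_output=True,
    text=True,
    check=False,  # pylint exits non-zero whenever it reports findings
)
print(result.stdout)
```

Whichever tool produced the report, the usual remedy for the largest cluster, the three lightning autoencoder modules under projects/variational_image_compression that duplicate each other pairwise, is to hoist the repeated block into a shared helper or base class so each module keeps only what differs.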