tensorflow / serving
Unit Size

The distribution of unit sizes (measured in lines of code).

Intro
  • Unit size measurements show the distribution of the size of units of code (methods, functions, etc.).
  • Units are classified into five categories based on their size in lines of code: 1-10 (very small units), 11-20 (small units), 21-50 (medium size units), 51-100 (long units), and 101+ (very long units); a minimal bucketing sketch follows this list.
  • You should aim to keep units small (at most 20 lines). Long units can become "bloaters": code that has grown to such gargantuan proportions that it is hard to work with.
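The bucketing described above can be expressed as a small helper. The sketch below is illustrative only; classify_unit is a hypothetical name, and the thresholds simply follow the category boundaries used in this report.

    def classify_unit(loc: int) -> str:
        """Map a unit's lines of code to the size category used in this report."""
        if loc <= 10:
            return "very small (1-10)"
        if loc <= 20:
            return "small (11-20)"
        if loc <= 50:
            return "medium (21-50)"
        if loc <= 100:
            return "long (51-100)"
        return "very long (101+)"

    # Example: Server::BuildAndStart() is reported below with 225 lines of code.
    print(classify_unit(225))  # -> very long (101+)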
Unit Size Overall
  • There are 820 units containing 12,503 lines of code (56.2% of all code).
    • 5 very long units (811 lines of code)
    • 32 long units (2,151 lines of code)
    • 142 medium size units (4,299 lines of code)
    • 199 small units (2,956 lines of code)
    • 442 very small units (2,286 lines of code)
Distribution of unit lines of code by category: 101+: 6% | 51-100: 17% | 21-50: 34% | 11-20: 23% | 1-10: 18%
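The percentage bars above follow directly from the per-category line counts. The following sketch is a worked check, not output of the report tool; whether the tool truncates or rounds the shares is an assumption (truncating to whole percent reproduces the figures shown).

    # Lines of code per unit-size category, taken from the counts above.
    loc_per_category = {
        "101+": 811,
        "51-100": 2151,
        "21-50": 4299,
        "11-20": 2956,
        "1-10": 2286,
    }

    total = sum(loc_per_category.values())   # 12,503 lines of code in units
    for category, loc in loc_per_category.items():
        share = int(100 * loc / total)       # truncate to whole percent
        print(f"{category:>7}: {share}%")
    # 101+: 6%, 51-100: 17%, 21-50: 34%, 11-20: 23%, 1-10: 18%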
Unit Size per Extension
Extension | 101+ | 51-100 | 21-50 | 11-20 | 1-10
cc        |   7% |    18% |   35% |   22% |  15%
py        |   0% |    14% |   48% |   19% |  18%
h         |   0% |     4% |   18% |   34% |  41%
Unit Size per Logical Component
Primary logical decomposition:
Component                          | 101+ | 51-100 | 21-50 | 11-20 | 1-10
tensorflow_serving/model_servers   |  25% |     6% |   38% |   23% |   7%
tensorflow_serving/util            |   4% |    18% |   36% |   18% |  21%
tensorflow_serving/batching        |  10% |    33% |   20% |   26% |   8%
tensorflow_serving/servables       |   3% |    28% |   29% |   20% |  17%
tensorflow_serving/core            |   0% |     6% |   31% |   34% |  28%
tensorflow_serving/example         |   0% |    19% |   50% |   16% |  13%
tensorflow_serving/experimental    |   0% |    20% |   50% |   15% |  12%
tensorflow_serving/sources         |   0% |    16% |   35% |   29% |  18%
tensorflow_serving/resources       |   0% |     0% |   42% |   34% |  23%
tensorflow_serving/apis            |   0% |     0% |   65% |   11% |  22%
tensorflow_serving/session_bundle  |   0% |     0% |   35% |    0% |  64%
Longest Units
Top 20 longest units
Unit | # lines | McCabe index | # params | Location
int main() | 236 | 6 | 2 | tensorflow_serving/model_servers/main.cc
Status Server::BuildAndStart() | 225 | 35 | 1 | tensorflow_serving/model_servers/server.cc
GZipHeader::Status GZipHeader::ReadMore() | 131 | 32 | 3 | tensorflow_serving/util/net_http/compression/gzip_zlib.cc
Status SplitInputTask() | 118 | 17 | 4 | tensorflow_serving/batching/batching_session.cc
Status TfLiteSession::Create() | 101 | 15 | 6 | tensorflow_serving/servables/tensorflow/tflite_session.cc
void request_cb() | 100 | 17 | 2 | tensorflow_serving/util/net_http/socket/testing/ev_print_req_server.cc
def main() | 92 | 7 | 1 | tensorflow_serving/example/mnist_saved_model.py
Status PostProcessClassificationResult() | 90 | 24 | 5 | tensorflow_serving/servables/tensorflow/classifier.cc
int ZLib::UncompressAtMostOrAll() | 88 | 29 | 5 | tensorflow_serving/util/net_http/compression/gzip_zlib.cc
Status TensorFlowMultiInferenceRunner::Infer() | 87 | 16 | 3 | tensorflow_serving/servables/tensorflow/multi_inference.cc
Status BatchingSession::MergeInputTensors() | 85 | 15 | 3 | tensorflow_serving/batching/batching_session.cc
int main() | 83 | 17 | 2 | tensorflow_serving/util/net_http/socket/testing/ev_fetch_client.cc
void BatchingSession::ProcessBatch() | 78 | 13 | 2 | tensorflow_serving/batching/batching_session.cc
Status BatchingSession::SplitOutputTensors() | 75 | 14 | 3 | tensorflow_serving/batching/batching_session.cc
Status FillTensorMapFromInstancesList() | 73 | 15 | 3 | tensorflow_serving/util/json_tensor.cc
Status BatchingSession::InternalRun() | 73 | 6 | 7 | tensorflow_serving/batching/batching_session.cc
Status TfLiteSession::SplitTfLiteInputTask() | 70 | 11 | 4 | tensorflow_serving/servables/tensorflow/tflite_session.cc
int main() | 67 | 10 | 2 | tensorflow_serving/util/net_http/socket/testing/ev_print_req_server.cc
Status SetInputAndInvokeMiniBatch() | 65 | 14 | 5 | tensorflow_serving/servables/tensorflow/tflite_session.cc
void AspiredVersionsManager::ProcessAspiredVersionsRequest() | 65 | 12 | 2 | tensorflow_serving/core/aspired_versions_manager.cc
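A listing like the table above can be approximated with a short script. The sketch below is only an illustration: it uses Python's ast module (end_lineno requires Python 3.8+), covers only .py sources rather than the C++ files that dominate this table, and counts raw line spans, whereas the report's line counts may exclude comments and blank lines.

    import ast
    from pathlib import Path

    def longest_python_units(root: str, top_n: int = 20):
        """Return the longest function/method definitions under root, by line span."""
        units = []
        for path in Path(root).rglob("*.py"):
            try:
                tree = ast.parse(path.read_text(encoding="utf-8"))
            except (SyntaxError, UnicodeDecodeError):
                continue
            for node in ast.walk(tree):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    length = node.end_lineno - node.lineno + 1
                    units.append((length, node.name, str(path)))
        return sorted(units, reverse=True)[:top_n]

    for length, name, path in longest_python_units("tensorflow_serving"):
        print(f"{length:4d}  {name}  in {path}")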