src/parquet/column_writer.cc (3 lines):
- line 68: // TODO: Due to the way we currently check if the buffer is full enough,
- line 397: // TODO: This only works with due to some RLE specifics
- line 584: // TODO Get rid of this deep call

src/parquet/statistics.cc (2 lines):
- line 169: // TODO: support distinct count?
- line 208: // TODO: support distinct count?

src/parquet/encoding-internal.h (2 lines):
- line 853: // TODO: the key to this algorithm is to decode the entire miniblock at once.
- line 947: // TODO: this doesn't work and requires memory management. We need to allocate

build-support/cpplint.py (2 lines):
- line 2910: _RE_PATTERN_TODO = re.compile(r'^//(\s*)TODO(\(.+?\))?:?(\s|$)?')
- line 2938: # Checks for common mistakes in TODO comments.

cmake_modules/SetupCxxFlags.cmake (1 line):
- line 152: # TODO: Enable /Wall and disable individual warnings until build compiles without errors

benchmarks/decode_benchmark.cc (1 line):
- line 32: * TODO: this file needs some major cleanup.

src/parquet/metadata.h (1 line):
- line 65: // TODO (majetideepak): Implement support for pre_release

src/parquet/file_writer.cc (1 line):
- line 35: // FIXME: copied from reader-internal.cc

src/parquet/test-util.h (1 line):
- line 173: // TODO: compute a more precise maximum size for the encoded levels

src/parquet/arrow/schema.cc (1 line):
- line 594: // TODO: DENSE_UNION, SPARE_UNION, JSON_SCALAR, DECIMAL_TEXT, VARCHAR

src/parquet/column_page.h (1 line):
- line 35: // TODO: Parallel processing is not yet safe because of memory-ownership

src/parquet/parquet.thrift (1 line):
- line 508: /** TODO: **/

src/parquet/column_reader.cc (1 line):
- line 358: // TODO figure a way to set max_definition_level_ to 0

src/parquet/arrow/record_reader.cc (1 line):
- line 677: // TODO figure a way to set max_definition_level_ to 0

src/parquet/arrow/arrow-schema-test.cc (1 line):
- line 832: // TODO: Test Decimal Arrow -> Parquet conversion
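The entries above are found by pattern matching; the report itself quotes the regex that build-support/cpplint.py uses to recognize TODO comments (`_RE_PATTERN_TODO`). A minimal sketch of how such a pattern classifies comment lines is shown below. The regex is taken verbatim from the report; the surrounding loop and the sample comments are illustrative assumptions, not cpplint's actual check logic.

```python
import re

# Regex quoted from build-support/cpplint.py line 2910.
# Groups: (1) whitespace after "//", (2) optional "(username)" owner tag.
_RE_PATTERN_TODO = re.compile(r'^//(\s*)TODO(\(.+?\))?:?(\s|$)?')

# Hypothetical sample comments, modeled on entries in this report.
samples = [
    '// TODO (majetideepak): Implement support for pre_release',
    '//TODO figure a way to set max_definition_level_ to 0',
    '// FIXME: copied from reader-internal.cc',
]

for comment in samples:
    m = _RE_PATTERN_TODO.match(comment)
    # A match means the line is a TODO; the captured groups let a checker
    # flag style issues (e.g. missing space before TODO, missing owner).
    print(comment, '->', 'TODO' if m else 'not a TODO')
```

Note that the FIXME entry in src/parquet/file_writer.cc would not match this pattern, which is consistent with cpplint's rule targeting TODO comments specifically.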