Summary: 287 instances, 246 unique

Count  Text
    1  // TODO: rate limit reading headers from blob files.
    1  // TODO: consider rename to GetSliceHash32
    1  # TODO re-enable pipelined write. Not well tested atm
    1  && defined(__GNUC__) /* TODO: IBM XL */
    1  // TODO: support WBWI::SingleDelete() with timestamp.
    1  // TODO:(gzh) maybe use direct reads/writes here if possible
    1  // TODO: maybe handle the tracing status?
    1  // TODO: consider rename to Hash32
    1  # TODO: enable this once we figure out how to adjust kill odds for WAL-
    1  // TODO: introduce perf counters for block cache insertions
    1  // FIXME ^^^: there should be no reason for Get() to depend on current
    1  // TODO: Do some refactoring and use only one recovery_error_
    2  // TODO: rate limit reading footers from blob files.
    1  // TODO: Pick files with max_timestamp > trim_ts by each file's timestamp meta
    1  // TODO akanksha: Update the condition when asynchronous prefetching is
    1  // TODO: Should this error be ignored?
    1  // TODO: refactor this to have a better signature, consolidate
    1  // TODO (sagar0): Modify AlignedBuffer.Append to allow doing a memmove
    1  // TODO remember the iterator is invalidated because of prefix
    1  // TODO: This does not follow symbolic links at this point
    2  // TODO: What do we do if this returns an error?
    1  // FIXME ^^^: there should be no reason for MultiGet() to depend on current
    1  // TODO (yanqin) maybe support max_open_files != -1 by creating hard links
    1  // TODO (yanqin): make the following check optional, especially when data
    1  // TODO: rate limit old blob DB file reads.
    1  // the additional set. (TODO: how this can mitigate scalability and
    1  // TODO: think about interaction with Merge. If a user key cannot
    1  // TODO: maybe check the return value of Close.
    1  // TODO: pre-create kTsMax.
    1  // TODO: need to unset flush reason?
    1  // TODO: Open file or create hard link to prevent the file from being
    1  // TODO: Tune the buffer size.
    1  # TODO: Make this work with find_package and/or get rid of it
    1  // TODO: This is very similar to FindNextUserEntry() and MergeValuesNewToOld().
    1  // TODO: explore combining in a struct
    1  // TODO: consider changing to Slice
    1  prefix_size_ = 4; // TODO: support different prefix_size
    1  // TODO find a better way to pass compaction_options_fifo.
    1  // TODO: Refactor code so that BlockType can determine both the C++ type
    1  // TODO: a better and complete implementation is needed to ensure strict
    1  // TODO: Cleanup io_status in BuildTable and table builders
    3  // TODO: introduce dedicated tickers/statistics/counters
    1  // TODO cover transaction DB is not covered in this fault test too.
    1  // TODO (PR7798). We should only add the file to the FileManager if it
    1  // TODO: John
    1  // TODO (yanqin) try SetSnapshotOnNextOperation(). We currently need to take
    1  // TODO: refactor Interleaved*Query so that queries can be "prepared" by
    1  // TODO: Add verify_table()
    1  // TODO: Presently there is no way to differentiate between error/corruption
    1  // TODO: Hook the "name" up to the actual Name() of the MergeOperators?
    4  // TODO: distinguish between MANIFEST write and CURRENT renaming
    1  TODO:
    1  // TODO: Implement the trimming in flush code path.
    1  // TODO: support WBWI::Put() with timestamp.
    1  # TODO: there is such a thing as transactions with WAL disabled. We should
    1  // TODO: re-use one top-level index iterator
    1  // TODO: handle timestamp corruption like in general iterator semantics
    1  // TODO: support timestamp-based conflict checking.
    1  // TODO: support larger timestamp sizes
    1  //**TODO: What should we do if we failed to
    1  // TODO akanksha: Merge this function with TryReadFromCache once async
    1  /* TODO: update */
    1  // The following is grossly complicated. TODO: clean it up
    1  // handled. TODO: allow Bloom checks where max_covering_tombstone_seq==0
    1  // TODO: Should handle this error?
    1  // TODO: add support for decoding blob indexes in ldb as well
    1  // TODO: Break up into multiple records to reduce memory usage on recovery?
    2  s.PermitUncheckedError(); //**TODO: What to do on error?
    2  // thread_local is part of C++11 and later (TODO: clean up this define)
    1  // of a single linear system. (TODO: implement)
    1  .PermitUncheckedError(); //**TODO: What do to on error?
    1  // TODO: this is a temporarily solution as it is safe but not optimal for
    1  // TODO should have error handling though not much we can do...
    1  // TODO: we have to open default CF, because of an implementation limitation,
    1  // TODO: introduce aggregate (not per-level) block cache miss count
    3  // TODO: log error
    1  // TODO: Perform trimming before inserting into memtable during recovery.
    1  // TODO (haoyu): We only support Get for now. We need to extend the tracing
    1  // TODO read record's till the first no corrupt entry?
    1  // TODO: Once GetLiveFilesMetaData supports blob files, update the logic
    2  // TODO the while loop inherits from two-level-iterator. We don't know
    1  // TODO: will be used with InterleavedSolutionStorage?
    1  // TODO: Avoid the snapshot stripe map lookup in CompactionRangeDelAggregator
    1  // TODO: We can optimize this if we steal 3 bits. 1 bit: this node is
    5  // TODO: maybe handle the tracing status?
    1  //**TODO: If/When the DBOptions has a registry in it, the ConfigOptions
    1  // TODO: a better way to set or clean the retryable IO error which
    1  // TODO: Ideally we want to verify the hash entry
    1  // TODO: give log file and sst file different options (log
    1  s.PermitUncheckedError(); //**TODO: What to do on error?
    1  // TODO: break this include loop
    1  //**TODO: Make the simulate fs something that can be loaded
    1  // (TODO: APIs to help choose parameters) One option for fallback in
    1  // TODO: update per-level perfcontext user_key_return_count for kMerge
    1  // TODO: log error
    1  // TODO: Should we check for errors here?
    2  // TODO (yanqin) support snapshot.
    1  // TODO (yanqin) investigate whether we should abort ingestion or
    1  s.PermitUncheckedError(); //**TODO: What to do on error?
    1  // TODO akanksha: Add perf_times etc.
    1  // TODO: consider memory usage of the FilterBitsReader
    1  // TODO: Fix those issues so that the Status
    1  // TODO (PR7798). We should only add the file to the FileManager if it
    1  // TODO: consider moving ReadOptions from ArenaWrappedDBIter to DBIter to
    1  // XXX/FIXME: This is just basic, naive handling of range tombstones,
    1  // TODO (yanqin).
    1  // TODO (yanqin) find a suitable status code.
    1  // TODO: Should handle status here?
    1  // TODO: effectively use the existing checksum of the data being writing to
    1  // TODO (yanqin) support separating primary index and secondary index in
    1  // TODO fix return value/type
    4  // TODO (yanqin)
    1  // (TODO: consider using class toku::txnid_set. The reason for using STL
    1  // TODO (yanqin) add stats for other cases?
    1  // TODO maybe cache the computation result
    1  // TODO: we will support WAL tailing soon.
    1  # TODO: enable write-prepared
    1  // FIXME: code duplication with GetFastLocalBloomBuilderWithContext
    2  // TODO: Support counter batch update for partitioned index and
    1  // TODO: Should we return an error if we cannot delete the directory?
    1  // TODO: Check for unlock error
    1  // FIXME: is changed prefix_extractor handled anywhere for hash index?
    1  // block before setting up cache keys. TODO: consider setting up a bootstrap
    1  // TODO (yanqin): parallelize jobs with threads.
    1  // TODO: Should we check for an error here?
    1  // TODO: we need to check the cache dump format version and RocksDB version
    1  // TODO: min_log_number_to_keep_2pc check needed?
    2  // TODO: examine the behavior for corrupted key
    1  // TODO: verify checksum
    1  // TODO (yanqin) investigate whether we should sync the closed logs for
    1  // TODO: introduce perf counter for compression dictionary hit count
    1  // TODO: Should the insert error be ignored?
    1  // TODO (AP) support indirect buffers, though probably via a less efficient code path
    1  // TODO krad: Evaluate if we need to move to a more strict mode where we
    1  // TODO: what are the X-functions? Xcalloc, Xrealloc?
    2  // TODO: This particular case seems confusing and unnecessary to pin the
    1  // TODO akankshamahajan: Update Poll API to take into account min_completions
    1  // TODO: Check for error here?
    1  // TODO: update options_file_number_ needed?
    1  // TODO: Should we check for errors here?
    1  // TODO: pre-create kTsMax.
    1  // TODO: sub_compact.io_status is not checked like status. Not sure if thats
    2  // TODO: this use of operator bool on `tracer_` can avoid unnecessary lock
    1  // TODO: maybe handle the tracing status?
    1  // TODO: this use of operator bool on `tracer_` can avoid unnecessary lock
    1  // TODO: optimize this performance
    1  // TODO: rate limit file reads for checksum calculation during file ingestion.
    1  // TODO: What if this fails?
    1  // TODO: stopwatch DB_GET needed?, perf timer needed?
    1  // TODO: The following is duplicated with Cleanup().
    1  // TODO: kManifestWriteNoWAL and kFlushNoWAL are misleading. Refactor is
    1  // TODO (AP) add keysArray[i].arrayOffset() if the buffer is indirect
    1  // TODO: consider rename to SliceHasher32
    1  // TODO: make sure the buff is large enough
    1  // FIXME: is changed prefix_extractor handled anywhere for hash index?
    1  // Some legacy testing stuff TODO: carefully clean up obsolete parts
    1  // TODO (yanqin) maybe account for file metadata bytes for exact accuracy?
    2  // TODO: need to be improved since it sort of defeats the purpose of the rate
    1  // TODO: change builder to take the option struct
    1  // TODO remove this restriction
    1  // TODO: Should handle status here?
    1  // TODO: new_checksums: to update files to latest file checksum algorithm
    1  // TODO: remove this line when options are used in the loader
    1  // TODO: this use of operator bool on `tracer_` can avoid unnecessary lock
    1  // TODO: rate limit plain table reads.
    1  // TODO: temperature, file_checksum, file_checksum_func_name
    1  // TODO: update per-level perfcontext user_key_return_count for kMerge
    1  // TODO: some things moved toku_instrumentation.h, not necessarily the best
    1  // TODO: iterating over all column families under db mutex.
    1  // TODO: rate limit file reads for checksum calculation during file
    1  // TODO: Write buffer size passed in should be max of all CF's instead
    1  // TODO: What to do if we cannot delete the directory?
    1  // TODO (yanqin) with a probability, we can use either forward or backward
    1  // TODO future: checksum_func for populating checksums
    1  // TODO akanksha:: Dedup below code by calling
    1  // TODO akanksha: Update TEST_SYNC_POINT after new tests are added.
    1  s.PermitUncheckedError(); // TODO: Check the status
    1  // TODO (AP) because in that case we have to pass the array directly,
    1  // TODO (yanqin) maybe handle the case in which column_families have
    1  // TODO: implement batched interface to plain table bloom
    2  // TODO offset passed in is not accurate for parallel compression case
    1  // TODO: We should introduce a way to explicitly disable verification
    1  // NOTE/TODO: We hope to revise this requirement in the future.
    1  // TODO: consider optimizations such as
    1  // TODO: the name is currently not stored persistently and thus
    1  // TODO: verify/validate
    1  // // TODO: configuring Homogeneous Ribbon for arbitrarily large filters
    1  // TODO: Check for errors from OnAddFile?
    1  // TODO support multi paths?
    1  // TODO Also check the IO status when create the Iterator.
    1  // TODO: This may incorrectly select small readahead in case partitioned
    1  // TODO: consider using expected_values_dir instead, but this is more
    2  // TODO: support timestamp
    1  // TODO: more details on trade-offs and practical issues.
    1  // TODO: Should we check for an error here?
    2  // TODO (yanqin) parallelize if key space is large
    1  // TODO: implement batched interface to full block reader
    1  // TODO what if one drops a column family while transaction(s) still have
    1  s.PermitUncheckedError(); // TODO: What should we do with this error?
    1  //**TODO: Should this be error be returned or swallowed?
    1  options.sample_for_compression), // TODO: is 0 fine here?
    1  // TODO: source location info might have to be pulled up one caller
    1  // TODO: introduce perf counters for misses per block type
    1  // TODO: maybe handle the tracing status?
    8  // TODO: support timestamp
    1  // FIXME: should be a parameter for reading table properties to use persistent
    1  // TODO: Re-style this comment to be like the first one
    1  // TODO: Don't ignore errors from allocate
    1  // TODO: rate limit reads of whole cuckoo tables.
    1  // TODO: range unlock does nothing...
    1  // TODO (yanqin) allow user to configure probability of each operation.
    1  // TODO: easier config for bloom (maybe based on avg key/value size)
    1  // TODO: introduce dedicated perf counter for range tombstones
    3  s.PermitUncheckedError(); // TODO: What should we do on error?
    1  // TODO this experimental option isn't made configurable
    1  // TODO: add perf counter for compression dictionary read time
    1  // TODO: consider fixed-column specializations with stack-allocated state
    1  escalation). (TODO: it is not clear why these operations are tracked with
    1  // TODO: rate limit reading headers from blob files.
    1  MULTIPLIER); // TODO: add large key support
    1  // TODO: kManifestWriteNoWAL and kFlushNoWAL are misleading. Refactor
    1  // TODO with incremental compaction is supported, we might want to
    1  // TODO: simplify using GetRefedColumnFamilySet?
    1  # TODO: May need to adjust random odds if kill_random_test
    1  // TODO: rate limit `BlobLogSequentialReader` reads (it appears unused?)
    2  // TODO: move it to different files, as it's testing an internal API
    4  // TODO: This mutex should be removed later, to improve performance when
    1  // TODO: Add better error handling.
    1  // TODO: rate limit reading footers from blob files.
    1  // TODO: Need to pass appropriate deadline to TryReadFromCache(). Right now,
    1  // TODO: consider rename to LegacyBloomHash32
    1  // TODO (yanqin) enable compaction filter
    1  // TODO: much more to come
    1  // FIXME: can be inconsistent with DisableFileDeletions in cases like
    1  // TODO: Check for an error here
    1  // TODO: Set status for individual keys appropriately
    1  // TODO: consider not counting these as Bloom hits to more closely
    1  // TODO: Implement ReadOnly MultiGet?
    1  // TODO: a bug here. This function actually does not necessarily
    1  // TODO: In the future, BackgroundErrorReason will only be used to indicate
    1  // TODO: rate limit footer reads.
    1  // TODO (AP) support indirect buffers
    1  // TODO: support WBWI::Delete() with timestamp.
    1  # TODO detect a hanging condition. The job might run too long as RocksDB
    1  // TODO: process future tags such as checksum.
    1  * FIXME: Clang's output is still _much_ faster -- On an AMD Ryzen 3600,