src/Lucene.Net/Codecs/BlockTreeTermsReader.cs (20 lines):
- line 625: // TODO: maybe push this into Terms?
- line 692: // TODO: can we share this with the frame in STE?
- line 921: // TODO: maybe add scanToLabel; should give perf boost
- line 972: // TODO: better API would be "jump straight to term=N"???
- line 975: // TODO: we could make "tiers" of metadata, ie,
- line 981: // TODO: if docFreq were bulk decoded we could
- line 1008: // TODO: in some cases we can filter by length? eg
- line 1038: // TODO: if the automaton is "smallish" we really
- line 1134: // TODO: we could be more efficient for the next()
- line 1423: // TODO: maybe we should do the same linear test
- line 1861: // TODO: reverse vLong byte order for better FST
- line 2766: // TODO: if suffixes were stored in random-access
- line 2804: // TODO: we could skip this if !hasTerms; but
- line 2866: // TODO: skip this if !hasTerms? Then postings
- line 2936: // TODO: make this array'd so we can do bin search?
- line 3025: // TODO: better API would be "jump straight to term=N"???
- line 3028: // TODO: we could make "tiers" of metadata, ie,
- line 3034: // TODO: if docFreq were bulk decoded we could
- line 3265: // TODO: not consistent that in the
- line 3421: // TODO: not consistent that in the
src/Lucene.Net.Codecs/Memory/DirectPostingsFormat.cs (19 lines):
- line 49: // TODO:
- line 84: // TODO: allow passing/wrapping arbitrary postings format?
- line 249: // TODO: maybe specialize into prx/no-prx/no-frq cases?
- line 337: // TODO: maybe make a separate builder? These are only
- line 926: // TODO: we should use the skip pointers; should be
- line 954: // TODO: we should use the skip pointers; should be
- line 1002: // TODO: implement reuse, something like Pulsing:
- line 1102: // TODO: implement reuse, something like Pulsing:
- line 1579: // TODO: add assert that we don't inc too many times
- line 1678: // TODO: implement reuse, something like Pulsing:
- line 1729: // TODO: implement reuse, something like Pulsing:
- line 1786: // TODO: can do this w/o setting members?
- line 1871: // TODO: can do this w/o setting members?
- line 1961: // TODO: can do this w/o setting members?
- line 1997: // TODO: store docID member?
- line 2014: // TODO: can I do postings[upto+1]?
- line 2026: // TODO: could do a better estimate
- line 2201: // TODO: could do a better estimate
- line 2387: // TODO: specialize offsets and not
src/Lucene.Net.TestFramework/Util/LuceneTestCase.cs (19 lines):
- line 519: /// TODO: javadoc?
- line 607: /// TODO: javadoc?
- line 617: /// TODO: javadoc?
- line 621: /// TODO: javadoc?
- line 897: // LUCENENET TODO: Not sure how to convert these
- line 907: /* LUCENENET TODO: Not sure how to convert these
- line 1777: // TODO: once all core & test codecs can index
- line 1805: // TODO: we need to do this, but smarter, ie, most of
- line 1895: // TODO: remove this, and fix those tests to wrap before putting slow around:
- line 1968: /// TODO: javadoc
- line 1975: /// TODO: javadoc
- line 2061: // TODO: this whole check is a coverage hack, we should move it to tests for various filterreaders.
- line 2065: // TODO: not useful to check DirectoryReader (redundant with checkindex)
- line 2316: // TODO: test start term too
- line 2714: // TODO: I think this is bogus because we don't document what the order should be
- line 2745: // TODO: should we check the FT at all?
- line 2787: // TODO: clean this up... very messy
- line 2920: // TODO: this is kinda stupid, we don't delete documents in the test.
- line 2947: // TODO: would be great to verify more than just the names of the fields!
src/Lucene.Net/Util/Fst/FST.cs (18 lines):
- line 42: // TODO: break this into WritableFST and ReadOnlyFST.. then
- line 47: // TODO: if FST is pure prefix trie we can do a more compact
- line 85: // TODO: we can free up a bit if we can nuke this:
- line 192: // TODO: we could be smarter here, and prune periodically
- line 506: // TODO: really we should encode this as an arc, arriving
- line 694: // TODO: for better perf (but more RAM used) we
- line 766: // TODO: try to avoid wasteful cases: disable doFixedArray in that case
- line 795: // TODO: clean this up: or just rewind+reuse and deal with it
- line 1169: // TODO: can't assert this because we call from readFirstArc
- line 1222: // TODO: would be nice to make this lazy -- maybe
- line 1281: // TODO: could we somehow [partially] tableize arc lookups
- line 1392: // TODO: we should fix this code to not have to create
- line 1521: // TODO: must assert this FST was built with
- line 1526: // TODO: use bitset to not revisit nodes already
- line 1590: // TODO: instead of new Arc() we can re-use from
- line 1664: // TODO: other things to try
- line 1704: // TODO: we could use more RAM efficient selection algo here...
- line 2098: // TODO: we can free up a bit if we can nuke this:
src/Lucene.Net.Facet/DrillSidewaysScorer.cs (14 lines):
- line 90: // TODO: if we ever allow null baseScorer ... it will
- line 204: // TODO: should we sort this 2nd dimension of
- line 228: // TODO: for the "non-costly Bits" we really should
- line 257: // TODO: we could score on demand instead since we are
- line 291: // TODO: maybe a class like BS, instead of parallel arrays
- line 362: // TODO: single-valued dims will always be true
- line 422: // TODO: we could jump slot0 forward to the
- line 440: // TODO: factor this out & share w/ union scorer,
- line 456: // TODO: single-valued dims will always be true
- line 475: // TODO: sometimes use advance?
- line 521: // TODO: maybe a class like BS, instead of parallel arrays
- line 620: // TODO: single-valued dims will always be true
- line 690: // TODO: we could "fix" faceting of the sideways counts
- line 713: // TODO: we could "fix" faceting of the sideways counts
src/Lucene.Net/Util/Fst/Util.cs (10 lines):
- line 38: public static class Util // LUCENENET specific - made static // LUCENENET TODO: Fix naming conflict with containing namespace
- line 46: // TODO: would be nice not to alloc this on every lookup
- line 72: // TODO: maybe a CharsRef version for BYTE2
- line 84: // TODO: would be nice not to alloc this on every lookup
- line 126: // TODO: would be nice not to alloc this on every lookup
- line 484: // TODO: we could enable FST to sorting arcs by weight
- line 488: // TODO: maybe we should make an FST.INPUT_TYPE.BYTE0.5!?
- line 588: // TODO: maybe we can save this copyFrom if we
- line 1088: // TODO maybe this is a useful in the FST class - we could simplify some other code like FSTEnum?
- line 1168: // TODO: we should fix this code to not have to create
src/Lucene.Net.Codecs/BlockTerms/BlockTermsReader.cs (10 lines):
- line 373: /// TODO: we may want an alternate mode here which is
- line 502: // TODO: maybe we should store common prefix
- line 698: // TODO: cutover to something better for these ints! simple64?
- line 742: // TODO: cutover to something better for these ints! simple64?
- line 838: // TODO: if ord is in same terms block and
- line 891: // TODO: we still lazy-decode the byte[] for each
- line 955: // TODO: cutover to random-access API
- line 963: // TODO: better API would be "jump straight to term=N"???
- line 967: // TODO: we could make "tiers" of metadata, ie,
- line 973: // TODO: if docFreq were bulk decoded we could
src/Lucene.Net/Codecs/BlockTreeTermsWriter.cs (10 lines):
- line 108: TODO:
- line 216: ///
- line 535: // TODO: try writing the leading vLong in MSB order
- line 598: // TODO: maybe we could add bulk-add method to
- line 738: // TODO: we could store min & max suffix start byte
- line 757: // TODO: this is wasteful since the builder had
- line 865: // TODO: make a better segmenter? It'd have to
- line 1008: // TODO: cutover to bulk int codec... simple64?
- line 1115: // TODO: now that terms dict "sees" these longs,
- line 1164: // TODO: we could block-write the term suffix pointers;
src/Lucene.Net/Util/UnicodeUtil.cs (9 lines):
- line 110: public static readonly BytesRef BIG_TERM = new BytesRef(new byte[] { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }); // TODO this is unrelated here find a better place for it
- line 130: // TODO: broken if incoming result.offset != 0
- line 214: // TODO: broken if incoming result.offset != 0
- line 302: // TODO: broken if incoming result.offset != 0
- line 391: // TODO: broken if incoming result.offset != 0
- line 753: // TODO: broken if incoming result.offset != 0
- line 755: // TODO: ints cannot be null, should be an assert
- line 793: // TODO: this may read past utf8's limit.
- line 961: // TODO: broken if chars.offset != 0
src/Lucene.Net/Search/MinShouldMatchSumScorer.cs (9 lines):
- line 193: // TODO instead advance() might be possible, but complicates things
- line 207: return; // continue looping TODO consider advance() here
- line 225: // find next most costly subScorer within heap TODO can this be done better?
- line 235: return; // continue looping TODO consider advance() here
- line 241: // TODO: this currently scores, but so did the previous impl
- line 242: // TODO: remove recursion.
- line 243: // TODO: consider separating scoring out of here, then modify this
- line 324: // TODO is cost for advance() different to cost for iteration + heap merge
- line 347: // TODO could this implementation also move rather than swapping neighbours?
src/Lucene.Net.Suggest/Suggest/Analyzing/AnalyzingInfixSuggester.cs (8 lines):
- line 36: // TODO:
- line 141: // LUCENENET UPGRADE TODO: Remove method at version 4.11.0. Was retained for perfect 4.8 compatibility
- line 251: // TODO: use threads?
- line 410: // TODO: if we had a BinaryTermField we could fix
- line 572: // TODO: if we had a BinaryTermField we could fix
- line 583: // TODO: we could allow blended sort here, combining
- line 631: // TODO: maybe just stored fields? they compress...
- line 794: // TODO: apps can try to invert their analysis logic
src/Lucene.Net.TestFramework/Index/BasePostingsFormatTestCase.cs (8 lines):
- line 49: // TODO can we make it easy for testing to pair up a "random terms dict impl" with your postings base format...
- line 51: // TODO test when you reuse after skipping a term or two, eg the block reuse case
- line 131: // TODO: more realistic to inversely tie this to numDocs:
- line 186: // TODO: sometimes have a biggish gap here!
- line 483: // TODO maybe instead of @BeforeClass just make a single test run: build postings & index & test it?
- line 505: // TODO use allowPayloads
- line 1252: // TODO test thread safety of buildIndex too
- line 1329: // TODO test thread safety of buildIndex too
src/Lucene.Net/Codecs/Lucene3x/Lucene3xFields.cs (8 lines):
- line 430: // TODO: more efficient seek?
- line 493: // TODO: more efficient seek? can we simply swap
- line 589: // TODO: can we avoid this copy?
- line 685: // TODO: more efficient seek?
- line 746: // TODO: more efficient seek?
- line 859: // TODO: maybe we can handle this like the next()
- line 892: // TODO: faster seek?
- line 1001: // TODO: can we use STE's prevBuffer here?
src/Lucene.Net.TestFramework/Search/ShardSearchingTestBase.cs (7 lines):
- line 39: // TODO: maybe SLM should throw this instead of returning null...
- line 189: // TODO: broadcastNodeExpire? then we can purge the
- line 266: // TODO: nothing evicts from here!!! Somehow, on searcher
- line 386: // TODO: we could compute this on init and cache,
- line 579: // TODO: set warmer
- line 686: // TODO: make this more realistic, ie, each node should
- line 722: // TODO: doc blocks too
src/Lucene.Net.Analysis.Common/Analysis/Hunspell/Dictionary.cs (7 lines):
- line 68: // TODO: really for suffixes we should reverse the automaton and run them backwards
- line 112: private readonly string tempDir = OfflineSorter.DefaultTempDir; // TODO: make this configurable?
- line 520: throw new ParseException("The affix file contains a rule with less than four elements: " + line, 0 /*, reader.LineNumber */);// LUCENENET TODO: LineNumberReader
- line 569: regex = ".*"; // TODO: optimize this better:
- line 657: throw new ParseException("invalid syntax: " + line, lineNumber /*, reader.LineNumber */); // LUCENENET TODO: LineNumberReader
- line 1025: // TODO: the flags themselves can be double-chars (long) or also numeric
- line 1414: // TODO: this could be more efficient!
src/Lucene.Net/Util/Fst/FSTEnum.cs (7 lines):
- line 133: // TODO: should we return a status here (SEEK_FOUND / SEEK_NOT_FOUND /
- line 142: // TODO: possibly caller could/should provide common
- line 308: // TODO: should we return a status here (SEEK_FOUND / SEEK_NOT_FOUND /
- line 314: // TODO: possibly caller could/should provide common
- line 398: // TODO: if each arc could somehow read the arc just
- line 463: // TODO: if each arc could somehow read the arc just
- line 519: // TODO: possibly caller could/should provide common
src/Lucene.Net/Codecs/Lucene40/Lucene40PostingsReader.cs (7 lines):
- line 83: // TODO: hasProx should (somehow!) become codec private,
- line 263: // TODO: can we optimize if FLAG_PAYLOADS / FLAG_OFFSETS
- line 266: // TODO: refactor
- line 339: // TODO: for full enum case (eg segment merging) this
- line 730: // TODO specialize DocsAndPosEnum too
- line 777: // TODO: for full enum case (eg segment merging) this
- line 1009: // TODO: for full enum case (eg segment merging) this
src/Lucene.Net.Codecs/Sep/SepPostingsReader.cs (7 lines):
- line 25: // TODO: -- should we switch "hasProx" higher up? and
- line 280: // TODO: -- should we do omitTF with 2 different enum classes?
- line 294: // TODO: -- should we do hasProx with 2 different enum classes?
- line 333: // TODO: can't we only do this if consumer
- line 513: // TODO: can't we only do this if consumer
- line 554: // TODO: maybe we should do the 1-bit trick for encoding
- line 761: // TODO: remove sep layout, its fallen behind on features...
src/Lucene.Net.TestFramework/Util/TestUtil.cs (7 lines):
- line 495: // TODO: we really need for postings impls etc to announce themselves
- line 527: // TODO: we really need for docvalues impls etc to announce themselves
- line 552: // TODO: generalize all 'test-checks-for-crazy-codecs' to
- line 590: // TODO: remove this, push this test to Lucene40/Lucene42 codec tests
- line 701: // TODO: is there a pre-existing way to do this!!!
- line 814: // TODO: cast to DocsAndPositionsEnum?
- line 1032: // :TODO: is this list exhaustive?
src/Lucene.Net/Codecs/Lucene41/Lucene41PostingsReader.cs (6 lines):
- line 251: // TODO: specialize to liveDocs vs not
- line 474: // TODO: make frq block load lazy/skippable
- line 841: // TODO: make frq block load lazy/skippable
- line 942: // TODO: in theory we could avoid loading frq block
- line 1420: // TODO: make frq block load lazy/skippable
- line 1524: // TODO: in theory we could avoid loading frq block
src/Lucene.Net.Analysis.Kuromoji/Dict/UserDictionary.cs (6 lines):
- line 79: // TODO: should we allow multiple segmentations per input 'phrase'?
- line 138: // TODO: can we avoid this treemap/toIndexArray?
- line 231: return null; // TODO: add support?
- line 236: return null; // TODO: add support?
- line 241: return null; // TODO: add support?
- line 246: return null; // TODO: add support?
src/Lucene.Net.Grouping/BlockGroupingCollector.cs (6 lines):
- line 27: // TODO: this sentence is too long for the class summary.
- line 68: // TODO: specialize into 2 classes, static "create" method:
- line 103: public override int Freq => throw UnsupportedOperationException.Create(); // TODO: wtf does this class do?
- line 294: // TODO: allow null groupSort to mean "by relevance",
- line 312: // TODO: maybe allow no sort on retrieving groups? app
- line 453: // TODO: we could aggregate scores across children
src/Lucene.Net/Search/FieldCacheImpl.cs (6 lines):
- line 1241: // TODO: use bulk API
- line 1871: // TODO: use Uninvert?
- line 1905: // TODO: use Uninvert?
- line 1968: // TODO: this if DocTermsIndex was already created, we
- line 2017: // TODO: would be nice to first check if DocTermsIndex
- line 2124: // TODO: this if DocTermsIndex was already created, we
src/Lucene.Net.Facet/DrillSidewaysQuery.cs (6 lines):
- line 101: // TODO: would be nice if we could say "we will do no
- line 142: // TODO: would be nice if AssertingIndexSearcher
- line 154: // TODO: it could be better if we take acceptDocs
- line 180: // TODO: this logic is too naive: the
- line 191: // TODO: Filter needs to express its expected
- line 242: // TODO: these should do "deeper" equals/hash on the 2-D drillDownTerms array
src/Lucene.Net.Suggest/Suggest/Analyzing/FreeTextSuggester.cs (6 lines):
- line 216: // TODO: use ShingleAnalyzerWrapper?
- line 298: // TODO: if only we had IndexOptions.TERMS_ONLY...
- line 512: // TODO: this is somewhat iffy; today, ShingleFilter
- line 606: // TODO: we could add fuzziness here
- line 628: // TODO: we could do this division at build time, and
- line 836: return (int)(long.MaxValue - output); // LUCENENET TODO: Perhaps a Java Lucene bug? Why cast to int when returning long?
src/Lucene.Net.Suggest/Suggest/Analyzing/AnalyzingSuggester.cs (6 lines):
- line 584: // TODO: I think we can avoid the extra 2 bytes when
- line 863: // TODO: for fuzzy case would be nice to return
- line 984: // TODO: we could walk & add simultaneously, so we
- line 992: // TODO: is there a Reader from a CharSequence?
- line 1005: // TODO: we could use the end offset to "guess"
- line 1013: // TODO: we can optimize this somewhat by determinizing
src/Lucene.Net/Search/FieldCache.cs (6 lines):
- line 719: // TODO: would be far better to directly parse from
- line 754: // TODO: would be far better to directly parse from
- line 789: // TODO: would be far better to directly parse from
- line 824: // TODO: would be far better to directly parse from
- line 861: // TODO: would be far better to directly parse from
- line 891: // TODO: would be far better to directly parse from
src/Lucene.Net.Codecs/Pulsing/PulsingPostingsReader.cs (5 lines):
- line 35: // TODO: -- should we switch "hasProx" higher up? and
- line 177: // TODO Double check this is right..
- line 193: // TODO: sort of silly to copy from one big byte[]
- line 412: // TODO: skipVInt
- line 686: // TODO: we should consider nuking this map and just making it so if you do this,
src/Lucene.Net.Analysis.Common/Analysis/Wikipedia/WikipediaTokenizer.cs (5 lines):
- line 167: // TODO: cutover to enum
- line 231: //TODO: how to know how much whitespace to add
- line 255: // TODO: this is inefficient
- line 280: //TODO: how to know how much whitespace to add
- line 299: // TODO: this is inefficient
src/Lucene.Net.Queries/Function/FunctionValues.cs (5 lines):
- line 84: // TODO: should we make a termVal, returns BytesRef?
- line 96: /// returns the bytes representation of the str val - TODO: should this return the indexed raw bytes not?
- line 126: /// TODO: Maybe we can just use intVal for this...
- line 232: // TODO: should we make a termVal, fills BytesRef[]?
- line 251: // TODO: change "reader" to AtomicReaderContext
src/Lucene.Net/Codecs/Lucene41/Lucene41PostingsWriter.cs (5 lines):
- line 120: // TODO: does this ctor even make sense?
- line 189: // TODO: should we try skipping every 2/4 blocks...?
- line 455: // TODO: wasteful we are counting this (counting # docs
- line 524: // TODO: should we send offsets/payloads to
- line 672: // TODO: add a finish() at least to PushBase? DV too...?
src/Lucene.Net.Misc/Store/NativeUnixDirectory.cs (5 lines):
- line 75: // TODO: this is OS dependent, but likely 512 is the LCD
- line 224: // TODO -- how to impl this? neither FOS nor
- line 230: // TODO -- I don't think this method is necessary?
- line 261: // TODO: the case where we'd seek'd back, wrote an
- line 272: // TODO: seek is fragile at best; it can only properly
src/Lucene.Net/Search/BooleanScorer.cs (5 lines):
- line 117: // TODO: break out bool anyProhibited, int
- line 164: // TODO: re-enable this if BQ ever sends us required clauses
- line 180: // TODO: re-enable this if BQ ever sends us required clauses
- line 192: // TODO: re-enable this if BQ ever sends us required clauses
- line 243: // TODO: re-enable this if BQ ever sends us required
src/Lucene.Net.Misc/Store/WindowsDirectory.cs (5 lines):
- line 50: //JAVA TO C# CONVERTER TODO TASK: The library is specified in the 'DllImport' attribute for .NET:
- line 158: //JAVA TO C# CONVERTER TODO TASK: Replace 'unknown' with the appropriate dll name:
- line 164: //JAVA TO C# CONVERTER TODO TASK: Replace 'unknown' with the appropriate dll name:
- line 170: //JAVA TO C# CONVERTER TODO TASK: Replace 'unknown' with the appropriate dll name:
- line 176: //JAVA TO C# CONVERTER TODO TASK: Replace 'unknown' with the appropriate dll name:
src/Lucene.Net.TestFramework/Analysis/ValidatingTokenFilter.cs (5 lines):
- line 24: // TODO: rename to OffsetsXXXTF? ie we only validate
- line 27: // TODO: also make a DebuggingTokenFilter, that just prints
- line 30: // TODO: BTSTC should just append this to the chain
- line 166: // TODO: what else to validate
- line 168: // TODO: check that endOffset is >= max(endOffset)
src/Lucene.Net.Codecs/IntBlock/VariableIntBlockIndexInput.cs (5 lines):
- line 26: // TODO: much of this can be shared code w/ the fixed case
- line 59: // TODO: can this be simplified?
- line 117: // TODO: should we do this in real-time, not lazy?
- line 138: // TODO: if we were more clever when writing the
- line 204: // TODO: we can't do this assert because non-causal
src/Lucene.Net.TestFramework/Codecs/RAMOnly/RAMOnlyPostingsFormat.cs (5 lines):
- line 265: // TODO: finalize stuff
- line 427: // TODO: reuse BytesRef
- line 467: // TODO: override bulk read, for better perf
- line 518: // TODO: override bulk read, for better perf
- line 584: // TODO -- ok to do this up front instead of
src/Lucene.Net.Codecs/Sep/SepPostingsWriter.cs (5 lines):
- line 137: // TODO: -- only if at least one field stores payloads?
- line 166: // TODO: -- just ask skipper to "start" here
- line 248: // TODO: -- awkward we have to make these two
- line 280: // TODO: explore whether we get better compression
- line 322: // TODO: -- wasteful we are counting this in two places?
src/Lucene.Net.Benchmark/ByTask/Benchmark.cs (5 lines):
- line 38: /// - TODO - report into Excel and/or graphed view.
- line 39: /// - TODO - perf comparison between Lucene releases over the years.
- line 40: /// - TODO - perf report adequate to include in Lucene nightly build site? (so we can easily track performance changes.)
- line 41: /// - TODO - add overall time control for repeated execution (vs. current by-count only).
- line 42: /// - TODO - query maker that is based on index statistics.
src/Lucene.Net/Util/PagedBytes.cs (5 lines):
- line 39: // TODO: refactor this, byteblockpool, fst.bytestore, and any
- line 45: // TODO: these are unused?
- line 136: // TODO: this really needs to be refactored into fieldcacheimpl
- line 231: // TODO: we could also support variable block sizes
- line 295: // TODO: this really needs to be refactored into fieldcacheimpl
src/Lucene.Net/Search/FieldComparator.cs (5 lines):
- line 613: // TODO: are there sneaky non-branch ways to compute sign of float?
- line 879: // TODO: there are sneaky non-branch ways to compute
- line 1080: // TODO: can we "map" our docIDs to the current
- line 1455: // TODO: should we remove this? who really uses it?
- line 1475: // TODO: add missing first/last support here?
src/Lucene.Net/Search/Spans/NearSpansUnordered.cs (4 lines):
- line 129: // TODO: Remove warning after API has been finalized
- line 137: // TODO: Remove warning after API has been finalized
- line 277: // TODO: Remove warning after API has been finalized
- line 295: // TODO: Remove warning after API has been finalized
src/Lucene.Net.Analysis.Kuromoji/JapaneseTokenizer.cs (4 lines):
- line 279: // TODO: maybe we do something else here, instead of just
- line 607: // TODO: we can be more aggressive about user
- line 917: // TODO: sort of silly to make Token instances here; the
- line 1378: // TODO: make generic'd version of this "circular array"?
src/Lucene.Net/Index/FreqProxTermsWriterPerField.cs (4 lines):
- line 36: // TODO: break into separate freq and prox writers as
- line 475: // TODO: really TermsHashPerField should take over most
- line 576: // Mark it deleted. TODO: we could also skip
- line 580: // TODO: can we do this reach-around in a cleaner way????
src/Lucene.Net/Index/BufferedUpdatesStream.cs (4 lines):
- line 56: // TODO: maybe linked list?
- line 589: // TODO: we can process the updates per DV field, from last to first so that
- line 609: // TODO: we traverse the terms in update order (not term order) so that we
- line 736: // TODO: we re-use term now in our merged iterable, but we shouldn't clone, instead copy for this assert
src/Lucene.Net.TestFramework/Analysis/LookaheadTokenFilter.cs (4 lines):
- line 31: // TODO: cut SynFilter over to this
- line 32: // TODO: somehow add "nuke this input token" capability...
- line 366: // TODO: end()?
- line 367: // TODO: close()?
src/Lucene.Net/Codecs/Lucene40/Lucene40TermVectorsReader.cs (4 lines):
- line 277: // TODO: we can improve writer here, eg write 0 into
- line 373: // TODO: really indexer hardwires
- line 517: // TODO: we could maybe reuse last array, if we can
- line 791: // TODO: we can improve writer here, eg write 0 into
src/Lucene.Net/Index/IndexWriter.cs (4 lines):
- line 1810: // TODO: this is a slow linear search, but, number of
- line 3543: // TODO: somehow we should fix this merge so it's
- line 5225: // TODO: in the non-pool'd case this is somewhat
- line 5708: // TODO: ideally we would freeze merge.info here!!
src/Lucene.Net.Spatial/Prefix/Tree/SpatialPrefixTree.cs (4 lines):
- line 110: //TODO cache for each level
- line 129: /// TODO rename to GetTopCell or is this fine?
- line 202: //TODO consider an on-demand iterator -- it won't build up all cells in memory.
- line 308: [Obsolete("TODO remove; not used and not interesting, don't need collection in & out")]
src/Lucene.Net/Util/OpenBitSet.cs (4 lines):
- line 315: public virtual void Set(long startIndex, long endIndex) // LUCENENET TODO: API: Change this to use startIndex and length to match .NET
- line 404: public virtual void Clear(int startIndex, int endIndex) // LUCENENET TODO: API: Change this to use startIndex and length to match .NET
- line 451: public virtual void Clear(long startIndex, long endIndex) // LUCENENET TODO: API: Change this to use startIndex and length to match .NET
- line 590: public virtual void Flip(long startIndex, long endIndex) // LUCENENET TODO: API: Change this to use startIndex and length to match .NET
src/Lucene.Net/Index/DocumentsWriter.cs (4 lines):
- line 114: // TODO: cut over to BytesRefHash in BufferedDeletes
- line 150: // TODO why is this synchronized?
- line 162: // TODO: we could check w/ FreqProxTermsWriter: if the
- line 170: // TODO why is this synchronized?
src/Lucene.Net.TestFramework/Index/AssertingAtomicReader.cs (4 lines):
- line 76: // TODO: should we give this thing a random to be super-evil,
- line 421: // TODO: should we give this thing a random to be super-evil,
- line 435: // TODO: should we give this thing a random to be super-evil,
- line 461: // TODO: we should separately track if we are 'at the end' ?
src/Lucene.Net.Suggest/Spell/SpellChecker.cs (4 lines):
- line 67: // TODO: why is this package private?
- line 145: // TODO: we should make this final as it is called in the constructor
- line 486: // TODO: we should use ReaderUtil+seekExact, we dont care about the docFreq
- line 571: // TODO: this isn't that great, maybe in the future SpellChecker should take
src/Lucene.Net.Spatial/Serialized/SerializedDVStrategy.cs (4 lines):
- line 49: //TODO do we make this non-volatile since it's merely a heuristic?
- line 105: //TODO if makeShapeValueSource gets lifted to the top; this could become a generic impl.
- line 136: //TODO raise to SpatialStrategy
- line 337: return outerInstance.GetDescription() + "=" + ObjectVal(doc);//TODO truncate?
src/Lucene.Net.QueryParser/Classic/QueryParserBase.cs (4 lines):
- line 123: // TODO: Work out what the default date resolution SHOULD be (was null in Java, which isn't valid for an enum type)
- line 298: public virtual CultureInfo Locale // LUCENENET TODO: API - Rename Culture
- line 509: // LUCENENET TODO: Try to make setting custom formats easier by adding
- line 593: // LUCENETODO: Should this be protected instead?
src/Lucene.Net/Util/Automaton/CompiledAutomaton.cs (4 lines):
- line 84: // TODO: would be nice if these sortedTransitions had "int
- line 224: // TODO: use binary search here
- line 285: // TODO: should this take startTerm too? this way
- line 295: AUTOMATON_TYPE.PREFIX => new PrefixTermsEnum(terms.GetEnumerator(), Term),// TODO: this is very likely faster than .intersect,
src/Lucene.Net.Codecs/Sep/SepSkipListWriter.cs (4 lines):
- line 27: // TODO: -- skip data should somehow be more local to the particular stream
- line 47: // TODO: -- private again
- line 49: // TODO: -- private again
- line 72: // TODO: -- also cutover normal IndexOutput to use getIndex()?
src/Lucene.Net.Codecs/Pulsing/PulsingPostingsWriter.cs (4 lines):
- line 27: // TODO: we now inline based on total TF of the term,
- line 111: // TODO: -- lazy init this? ie, if every single term
- line 156: // TODO: -- should we NOT reuse across fields? would
- line 285: // TODO: it'd be better to share this encoding logic
src/Lucene.Net.Codecs/Memory/MemoryPostingsFormat.cs (4 lines):
- line 55: // TODO: would be nice to somehow allow this to act like
- line 73: // TODO: Maybe name this 'Cached' or something to reflect
- line 508: // TODO: we could make more efficient version, but, it
- line 713: // TODO: we could make more efficient version, but, it
src/Lucene.Net.Suggest/Suggest/Analyzing/BlendedInfixSuggester.cs (4 lines):
- line 32: // TODO:
- line 85: // TODO:
- line 108: // LUCENENET UPGRADE TODO: Remove method at version 4.11.0. Was retained for perfect 4.8 compatibility
- line 165: // TODO: maybe just stored fields? they compress...
src/Lucene.Net/Index/LiveIndexWriterConfig.cs (4 lines):
- line 49: private volatile int termIndexInterval; // TODO: this should be private to the codec, not settable here
- line 150: termIndexInterval = IndexWriterConfig.DEFAULT_TERM_INDEX_INTERVAL; // TODO: this should be private to the codec, not settable here
- line 264: set => this.termIndexInterval = value; // TODO: this should be private to the codec, not settable here
- line 573: sb.Append("termIndexInterval=").Append(TermIndexInterval).Append("\n"); // TODO: this should be private to the codec, not settable here
src/Lucene.Net.Analysis.Common/Analysis/Synonym/SynonymFilter.cs (4 lines):
- line 79: // TODO: maybe we should resolve token -> wordID then run
- line 82: // TODO: a more efficient approach would be Aho/Corasick's
- line 110: // TODO: we should set PositionLengthAttr too...
- line 588: // TODO: maybe just a PendingState class, holding
src/Lucene.Net.Spatial/Prefix/AbstractVisitingPrefixTreeFilter.cs (3 lines):
- line 268: if (Debugging.AssertsEnabled) Debugging.Assert(StringHelper.StartsWith(thisTerm, curVNodeTerm));//TODO refactor to use method on curVNode.cell
- line 286: //TODO use termsEnum.docFreq() as heuristic
- line 334: if (thisTerm is not null && StringHelper.StartsWith(thisTerm, curVNodeTerm)) //TODO refactor to use method on curVNode.cell
src/Lucene.Net/Index/SegmentReader.cs (3 lines):
- line 79: // TODO: why is this public?
- line 83: // TODO if the segment uses CFS, we may open the CFS file twice: once for
- line 195: // TODO: can we avoid iterating over fieldinfos several times and creating maps of all this stuff if dv updates do not exist?
src/Lucene.Net/Search/FilteredQuery.cs (3 lines):
- line 351: // TODO once we have way to figure out if we use RA or LeapFrog we can remove this scorer
- line 598: // TODO once we have way to figure out if we use RA or LeapFrog we can remove this scorer
- line 617: //TODO once we have a cost API on filters and scorers we should rethink this heuristic
src/Lucene.Net.TestFramework/Store/BaseDirectoryTestCase.cs (3 lines):
- line 422: // TODO: fold in some of the testing of o.a.l.index.TestIndexInput in here!
- line 884: // TODO: somehow change this test to
- line 891: // TODO: figure a way to test this better/clean it up. E.g. we should be testing for FileSwitchDir,
src/Lucene.Net.Facet/DrillDownQuery.cs (3 lines):
- line 220: // TODO: we should use FilteredQuery?
- line 243: // TODO: we should use FilteredQuery?
- line 347: // TODO: this is hackish; we need to do it because
src/Lucene.Net/Index/DocumentsWriterPerThread.cs (3 lines):
- line 628: // TODO: ideally we would freeze newSegment here!!
- line 643: // TODO: we should prune the segment if it's 100%
- line 646: // TODO: in the NRT case it'd be better to hand
src/Lucene.Net/Search/Spans/NearSpansOrdered.cs (3 lines):
- line 153: // TODO: Remove warning after API has been finalized
- line 154: // TODO: Would be nice to be able to lazy load payloads
- line 160: // TODO: Remove warning after API has been finalized
src/Lucene.Net/Search/BooleanQuery.cs (3 lines):
- line 153: public virtual bool CoordDisabled => disableCoord; // LUCENENET TODO: API Change to CoordEnabled? Per MSDN, properties should be in the affirmative.
- line 400: // TODO: (LUCENE-4872) in some cases BooleanScorer may be faster for minNrShouldMatch
- line 423: // TODO: there are some cases where BooleanScorer
src/Lucene.Net.TestFramework/Analysis/TokenStreamToDot.cs (3 lines):
- line 64: // TODO: is there some way to tell dot that it should
- line 76: // TODO: hmm are TS's still allowed to do this...?
- line 124: // TODO: should we output any final text (from end
src/Lucene.Net/Index/FieldInfos.cs (3 lines):
- line 148: // TODO: what happens if in fact a different order is used?
- line 195: // TODO: we should similarly catch an attempt to turn
- line 374: // TODO: really, indexer shouldn't even call this
src/Lucene.Net.Queries/TermsFilter.cs (3 lines):
- line 160: // TODO: maybe use oal.index.PrefixCodedTerms instead?
- line 164: // TODO: we also pack terms in FieldCache/DocValues
- line 167: // TODO: yet another option is to build the union of the terms in
src/Lucene.Net.Replicator/IndexInputInputStream.cs (3 lines):
- line 41: throw IllegalStateException.Create("Cannot flush a readonly stream."); // LUCENENET TODO: Change to NotSupportedException ?
- line 63: throw IllegalStateException.Create("Cannot change length of a readonly stream."); // LUCENENET TODO: Change to NotSupportedException ?
- line 76: throw new InvalidCastException("Cannot write to a readonly stream."); // LUCENENET TODO: Change to NotSupportedException ?
src/Lucene.Net/Search/MultiPhraseQuery.cs (3 lines):
- line 146: public virtual IList GetTermArrays() // LUCENENET TODO: API - make into a property
- line 528: // TODO: if ever we allow subclassing of the *PhraseScorer
- line 643: // TODO: move this init into positions(): if the search
src/Lucene.Net.Facet/Taxonomy/Directory/DirectoryTaxonomyReader.cs (3 lines):
- line 74: // TODO: test DoubleBarrelLRUCache and consider using it instead
- line 401: // TODO: can we use an int-based hash impl, such as IntToObjectMap,
- line 502: // LUCENENET TODO: Should we use a 3rd party logging library?
src/Lucene.Net.Grouping/AbstractFirstPassGroupingCollector.cs (3 lines):
- line 78: // TODO: allow null groupSort to mean "by relevance",
- line 201: // TODO: should we add option to mean "ignore docs that
- line 313: // TODO: optimize this
src/Lucene.Net/Util/Automaton/SpecialOperations.cs (3 lines):
- line 93: // TODO: not great that this is recursive... in theory a
- line 144: // TODO: this currently requires a determinized machine,
- line 275: // TODO: this is a dangerous method ... Automaton could be
src/Lucene.Net.TestFramework/Analysis/MockTokenizer.cs (3 lines):
- line 60: /// TODO: Keyword returns an "empty" token for an empty reader...
- line 85: // TODO: "register" with LuceneTestCase to ensure all streams are closed() ?
- line 313: // TODO: investigate the CachingTokenFilter "double-close"... for now we ignore this
src/Lucene.Net.Replicator/Http/HttpClientBase.cs (3 lines):
- line 52: // TODO compression?
- line 230: // TODO: can we simplify this Consuming !?!?!?
- line 251: // TODO: can we simplify this Consuming !?!?!?
src/Lucene.Net.TestFramework/Codecs/MockRandom/MockRandomPostingsFormat.cs (3 lines):
- line 86: // TODO: others
- line 193: // TODO: randomize variables like acceptibleOverHead?!
- line 251: // TODO: would be nice to allow 1 but this is very
src/Lucene.Net.Codecs/BlockTerms/BlockTermsWriter.cs (3 lines):
- line 30: // TODO: Currently we encode all terms between two indexed terms as a block
- line 354: // TODO: cutover to better intblock codec, instead
- line 364: // TODO: cutover to better intblock codec. simple64?
src/Lucene.Net.QueryParser/Surround/Parser/QueryParser.cs (3 lines):
- line 104: /* FIXME: check acceptable subquery: at least one subquery should not be
- line 130: : int.Parse(distanceOp.Substring(0, distanceOp.Length - 1)); // LUCENENET TODO: Culture from current thread?
- line 564: // LUCENENET TODO: Test parsing float in various cultures (.NET)
src/Lucene.Net/Support/ExceptionHandling/ExceptionExtensions.cs (3 lines):
- line 396: or NullReferenceException; // LUCENENET TODO: These could be real problems where exceptions can be prevented that our catch blocks are hiding
- line 479: return e is InvalidDataException; // LUCENENET TODO: Not sure if this is the right exception
- line 695: // LUCENENET TODO: Using the internal type here because it is only thrown in AnalysisSPILoader and is not ever likely
src/Lucene.Net.Codecs/Memory/MemoryDocValuesProducer.cs (3 lines):
- line 804: // TODO: add SeekStatus to FSTEnum like in https://issues.apache.org/jira/browse/LUCENE-3729
- line 821: // TODO: would be better to make this simpler and faster.
- line 830: // TODO: we could do this lazily, better to try to push into FSTEnum though?
src/Lucene.Net.Codecs/SimpleText/SimpleTextTermVectorsReader.cs (3 lines):
- line 319: // TODO: reuse
- line 421: // TODO: reuse
- line 433: // TODO: reuse
src/Lucene.Net/Index/SortedDocValuesTermsEnum.cs (3 lines):
- line 52: // TODO: is there a cleaner way?
- line 68: // TODO: hmm can we avoid this "extra" lookup?:
- line 81: // TODO: is there a cleaner way?
src/Lucene.Net.Spatial/Prefix/Tree/Cell.cs (3 lines):
- line 231: //TODO add getParent() and update some algorithms to use this?
- line 260: //TODO change API to return a filtering iterator
- line 292: //TODO Cell getSubCell(byte b)
src/Lucene.Net/Search/DisjunctionSumScorer.cs (3 lines):
- line 63: // TODO: this currently scores, but so did the previous impl
- line 64: // TODO: remove recursion.
- line 65: // TODO: if we separate scoring, out of here,
src/Lucene.Net/Index/DocInverterPerField.cs (3 lines):
- line 74: // TODO FI: this should be "genericized" to querying
- line 88: // TODO: after we fix analyzers, also check if termVectorOffsets will be indexed.
- line 201: // TODO: maybe add some safety? then again, its already checked
src/Lucene.Net/Index/ReadersAndUpdates.cs (3 lines):
- line 65: // TODO: it's sometimes wasteful that we hold open two
- line 284: // TODO: can we somehow use IOUtils here...? problem is
- line 454: // TODO (DVU_RENAME) to writeDeletesAndUpdates
src/Lucene.Net/Codecs/Lucene42/Lucene42DocValuesProducer.cs (3 lines):
- line 784: // TODO: add SeekStatus to FSTEnum like in https://issues.apache.org/jira/browse/LUCENE-3729
- line 809: // TODO: would be better to make this simpler and faster.
- line 818: // TODO: we could do this lazily, better to try to push into FSTEnum though?
src/Lucene.Net.Analysis.Common/Analysis/CommonGrams/CommonGramsFilter.cs (3 lines):
- line 28: * TODO: Consider implementing https://issues.apache.org/jira/browse/LUCENE-1688 changes to stop list and associated constructors
- line 92: /// TODO:Consider adding an option to not emit unigram stopwords
- line 96: /// TODO: Consider optimizing for the case of three
src/Lucene.Net/Index/SortedSetDocValuesTermsEnum.cs (3 lines):
- line 52: // TODO: is there a cleaner way?
- line 68: // TODO: hmm can we avoid this "extra" lookup?:
- line 81: // TODO: is there a cleaner way?
src/Lucene.Net/Codecs/Lucene3x/Lucene3xTermVectorsReader.cs (3 lines):
- line 98: // TODO: if we are worried, maybe we could eliminate the
- line 270: // TODO: we can improve writer here, eg write 0 into
- line 832: // TODO: we can improve writer here, eg write 0 into
src/Lucene.Net/Search/Spans/SpanPositionCheckQuery.cs (3 lines):
- line 175: // TODO: Remove warning after API has been finalized
- line 185: return result; //TODO: any way to avoid the new construction?
- line 188: // TODO: Remove warning after API has been finalized
src/Lucene.Net.Suggest/Suggest/Analyzing/FuzzySuggester.cs (3 lines):
- line 182: // TODO: right now there's no penalty for fuzzy/edits,
- line 240: // TODO: maybe add alphaMin to LevenshteinAutomata,
- line 269: // TODO: we could call toLevenshteinAutomata() before det?
src/Lucene.Net.Demo/Facet/DistanceFacetsExample.cs (2 lines):
- line 88: // TODO: we could index in radians instead ... saves all the conversions in GetBoundingBoxFilter
- line 140: // TODO: maybe switch to recursive prefix tree instead
src/Lucene.Net/Search/Spans/SpanPayloadCheckQuery.cs (2 lines):
- line 75: //TODO: check the byte arrays are the same
- line 146: //TODO: is this right?
src/Lucene.Net.Facet/SortedSet/DefaultSortedSetDocValuesReaderState.cs (2 lines):
- line 65: // TODO: we can make this more efficient if eg we can be
- line 72: // TODO: this approach can work for full hierarchy?;
src/Lucene.Net/Search/Spans/SpanNotQuery.cs (2 lines):
- line 206: public override int End => includeSpans.End; // TODO: Remove warning after API has been finalized
- line 218: // TODO: Remove warning after API has been finalized
src/Lucene.Net.Benchmark/ByTask/Stats/TaskStats.cs (2 lines):
- line 82: maxUsedMem = maxTotMem; // - Runtime.getRuntime().freeMemory(); // LUCENENET TODO: available RAM
- line 97: long usedMem = totMem; //- Runtime.getRuntime().freeMemory(); // LUCENENET TODO: available RAM
src/Lucene.Net/Codecs/TermVectorsWriter.cs (2 lines):
- line 146: // TODO: we should probably nuke this and make a more efficient 4.x format
- line 263: // count manually! TODO: Maybe enforce that Fields.size() returns something valid?
src/Lucene.Net.TestFramework/Index/ThreadedIndexingAndSearchingTestCase.cs (2 lines):
- line 163: // TODO: would be better if this were cross thread, so that we make sure one thread deleting anothers added docs works:
- line 403: // TODO: we should enrich this to do more interesting searches
src/Lucene.Net.TestFramework/Index/RandomCodec.cs (2 lines):
- line 132: // TODO: make it possible to specify min/max iterms per
- line 148: //TODO as a PostingsFormat which wraps others, we should allow TestBloomFilteredLucene41Postings to be constructed
src/Lucene.Net.Benchmark/ByTask/Feeds/SpatialDocMaker.cs (2 lines):
- line 174: //TODO remove previous round config?
- line 276: //TODO consider abusing the 'size' notion to number of shapes per document
src/Lucene.Net/Util/Automaton/UTF32ToUTF8.cs (2 lines):
- line 63: internal int Value { get; set; } // TODO: change to byte
- line 68: // TODO: maybe move to UnicodeUtil?
src/Lucene.Net.Grouping/AbstractSecondPassGroupingCollector.cs (2 lines):
- line 57: if (!groups.Any()) // LUCENENET TODO: Change back to .Count if/when IEnumerable is changed to ICollection or IReadOnlyCollection
- line 160: // TODO: merge with SearchGroup or not?
src/Lucene.Net.Join/ToParentBlockJoinCollector.cs (2 lines):
- line 110: // TODO: allow null sort to be specialized to relevance
- line 167: // TODO: we could sweep all joinScorers here and
src/Lucene.Net.Codecs/Sep/SepSkipListReader.cs (2 lines):
- line 33: // TODO: rewrite this as recursive classes?
- line 45: // TODO: -- make private again
src/Lucene.Net/Search/Spans/SpanNearPayloadCheckQuery.cs (2 lines):
- line 65: //TODO: check the byte arrays are the same
- line 143: //TODO: is this right?
src/Lucene.Net/Analysis/TokenAttributes/OffsetAttributeImpl.cs (2 lines):
- line 42: // TODO: we could assert that this is set-once, ie,
- line 61: // TODO: we could use -1 as default here? Then we can
src/Lucene.Net/Util/BytesRef.cs (2 lines):
- line 148: if (Debugging.AssertsEnabled) Debugging.Assert(Offset == 0); // TODO broken if offset != 0
- line 160: if (Debugging.AssertsEnabled) Debugging.Assert(Offset == 0); // TODO broken if offset != 0
src/Lucene.Net.Analysis.ICU/Analysis/Icu/Segmentation/ICUTokenizer.cs (2 lines):
- line 61: private static readonly object syncLock = new object(); // LUCENENET specific - workaround until BreakIterator is made thread safe (LUCENENET TODO: TO REVERT)
- line 220: // TODO: refactor to a shared readFully somewhere
src/Lucene.Net.Benchmark/ByTask/Tasks/NewCollationAnalyzerTask.cs (2 lines):
- line 88: // LUCENENET TODO: The .NET equivalent to create a collator like the one in the JDK is:
- line 130: // TODO: add strength, decomposition, etc
src/Lucene.Net/Codecs/Lucene3x/Lucene3xNormsProducer.cs (2 lines):
- line 69: // TODO: just a list, and double-close() separate norms files?
- line 139: // TODO: change to a real check? see LUCENE-3619
src/Lucene.Net.Benchmark/ByTask/Feeds/LongToEnglishContentSource.cs (2 lines):
- line 37: // TODO: we could take param to specify locale...
- line 48: UninterruptableMonitor.Enter(this); // LUCENENET TODO: Since the whole method is synchronized, do we need this?
src/Lucene.Net.Analysis.Kuromoji/Util/ToStringUtil.cs (2 lines):
- line 271: // TODO: now that this is used by readingsfilter and not just for
- line 1155: // TODO: investigate all this
src/Lucene.Net.Misc/Index/Sorter/BlockJoinComparatorSource.cs (2 lines):
- line 38: // TODO: can/should we clean this thing up (e.g. return a proper sort value)
- line 250: // TODO: would be better if copy() didnt cause a term lookup in TermOrdVal & co,
src/Lucene.Net/Index/IndexableField.cs (2 lines):
- line 28: // TODO: how to handle versioning here...?
- line 30: // TODO: we need to break out separate StoredField...
src/Lucene.Net/Util/FieldCacheSanityChecker.cs (2 lines):
- line 103: /// (:TODO: is this a bad idea? are we masking a real problem?)
- line 303: // TODO: We don't check closed readers here (as getTopReaderContext
src/Lucene.Net.Facet/Range/DoubleRange.cs (2 lines):
- line 71: // TODO: if DoubleDocValuesField used
- line 151: // TODO: this is just like ValueSourceScorer,
src/Lucene.Net.Analysis.ICU/Analysis/Icu/Segmentation/DefaultICUTokenizerConfig.cs (2 lines):
- line 67: // TODO: if the wrong version of the ICU jar is used, loading these data files may give a strange error.
- line 76: // TODO: deprecate this boolean? you only care if you are doing super-expert stuff...
src/Lucene.Net.Benchmark/ByTask/Tasks/NearRealtimeReaderTask.cs (2 lines):
- line 72: // TODO: gather basic metrics for reporting -- eg mean,
- line 96: // TODO: somehow we need to enable warming, here
src/Lucene.Net.Sandbox/Queries/FuzzyLikeThisQuery.cs (2 lines):
- line 48: // TODO: generalize this query (at least it should not reuse this static sim!
- line 336: //TODO possible alternative step 3 - organize above booleans into a new layer of field-based
src/Lucene.Net.TestFramework/Codecs/Lucene40/Lucene40PostingsWriter.cs (2 lines):
- line 107: // TODO: this is a best effort, if one of these fields has no postings
- line 315: // TODO: wasteful we are counting this (counting # docs
src/Lucene.Net.Analysis.Common/Analysis/Pt/RSLPStemmerBase.cs (2 lines):
- line 159: // TODO: use a more efficient datastructure: automaton?
- line 316: throw RuntimeException.Create("Illegal Step header specified at line " /*+ r.LineNumber*/); // TODO Line number
src/Lucene.Net.Join/Support/ToParentBlockJoinCollector.cs (2 lines):
- line 112: // TODO: allow null sort to be specialized to relevance
- line 169: // TODO: we could sweep all joinScorers here and
src/Lucene.Net.Analysis.Common/Collation/CollationKeyFilterFactory.cs (2 lines):
- line 149: // LUCENENET TODO: Verify Decomposition > NormalizationMode mapping between the JDK and icu-dotnet
- line 200: // LUCENENET TODO: This method won't work on .NET core - confirm the above solution works as expected.
src/Lucene.Net/Search/IndexSearcher.cs (2 lines):
- line 425: // TODO: if we fix type safety of TopFieldDocs we can
- line 718: // TODO: should we make this
src/Lucene.Net.TestFramework/Codecs/CheapBastard/CheapBastardCodec.cs (2 lines):
- line 28: // TODO: better name :)
- line 32: // TODO: would be better to have no terms index at all and bsearch a terms dict
src/Lucene.Net/Util/Fst/Outputs.cs (2 lines):
- line 41: // TODO: maybe change this API to allow for re-use of the
- line 99: // TODO: maybe make valid(T output) public...? for asserts
src/Lucene.Net/Codecs/Lucene41/Lucene41Codec.cs (2 lines):
- line 50: // TODO: slightly evil
- line 97: // TODO: slightly evil
src/Lucene.Net/Index/SlowCompositeReaderWrapper.cs (2 lines):
- line 200: // TODO: this could really be a weak map somewhere else on the coreCacheKey,
- line 254: // TODO: as this is a wrapper, should we really close the delegate?
src/Lucene.Net/Search/Spans/TermSpans.cs (2 lines):
- line 106: // TODO: Remove warning after API has been finalized
- line 124: // TODO: Remove warning after API has been finalized
src/Lucene.Net.QueryParser/Flexible/Core/Nodes/QueryNodeImpl.cs (2 lines):
- line 37: // TODO remove PLAINTEXT_FIELD_NAME replacing it with configuration APIs
- line 215: // TODO: remove this method, it's commonly used by
src/Lucene.Net.TestFramework/Search/AssertingIndexSearcher.cs (2 lines):
- line 87: // TODO: use the more sophisticated QueryUtils.check sometimes!
- line 105: // TODO: shouldn't we AssertingCollector.wrap(collector) here?
src/Lucene.Net.Suggest/Suggest/Fst/FSTCompletion.cs (2 lines):
- line 34: // TODO: we could store exact weights as outputs from the FST (int4 encoded
- line 38: // TODO: support for Analyzers (infix suggestions, synonyms?)
src/Lucene.Net/Index/SegmentInfo.cs (2 lines):
- line 40: // TODO: remove these from this class, for now this is the representation
- line 214: // TODO: we could append toString of attributes() here?
src/Lucene.Net.Highlighter/PostingsHighlight/PostingsHighlighter.cs (2 lines):
- line 90: // TODO: maybe allow re-analysis for tiny fields? currently we require offsets,
- line 442: // TODO: should we have some reasonable defaults for term pruning? (e.g. stopwords)
src/Lucene.Net.Analysis.Kuromoji/Tools/CharacterDefinitionWriter.cs (2 lines):
- line 35: /// Constructor for building. TODO: remove write access
- line 68: // TODO: length def ignored
src/Lucene.Net.QueryParser/Xml/QueryTemplateManager.cs (2 lines):
- line 92: // TODO: Suppress XML header with encoding (as Strings have no encoding)
- line 104: // TODO: Suppress XML header with encoding (as Strings have no encoding)
src/Lucene.Net/Codecs/Lucene45/Lucene45DocValuesConsumer.cs (2 lines):
- line 127: // TODO: more efficient?
- line 266: // TODO: in some cases representing missing with minValue-1 wouldn't take up additional space and so on,
src/Lucene.Net/Search/SearcherLifetimeManager.cs (2 lines):
- line 146: // TODO: we could get by w/ just a "set"; need to have
- line 177: // TODO: we don't have to use IR.getVersion to track;
src/Lucene.Net.Expressions/ExpressionComparator.cs (2 lines):
- line 45: // TODO: change FieldComparer.setScorer to throw IOException and remove this try-catch
- line 49: // TODO: might be cleaner to lazy-init 'source' and set scorer after?
src/Lucene.Net/Store/RateLimiter.cs (2 lines):
- line 66: // TODO: we could also allow eg a sub class to dynamically
- line 110: // TODO: this is purely instantaneous rate; maybe we
src/Lucene.Net.Spatial/Prefix/RecursivePrefixTreeStrategy.cs (2 lines):
- line 52: prefixGridScanLevel = grid.MaxLevels - 4;//TODO this default constant is dependent on the prefix grid size
- line 64: //TODO if negative then subtract from maxlevels
src/Lucene.Net.TestFramework/Analysis/BaseTokenStreamTestCase.cs (2 lines):
- line 862: // TODO: really we should pass a random seed to
- line 908: // TODO: we can make ascii easier to read if we
src/Lucene.Net/Index/DocFieldProcessor.cs (2 lines):
- line 226: // TODO FI: we need to genericize the "flags" that a
- line 274: // sorted order. TODO: we actually only need to
src/Lucene.Net.Suggest/Suggest/Analyzing/FSTUtil.cs (2 lines):
- line 27: // TODO: move to core? nobody else uses it yet though...
- line 112: // TODO: if this transition's TO state is accepting, and
src/Lucene.Net/Index/DocumentsWriterPerThreadPool.cs (2 lines):
- line 60: // TODO this should really be part of DocumentsWriterFlushControl
- line 64: // TODO this should really be part of DocumentsWriterFlushControl
src/Lucene.Net/Util/Fst/Builder.cs (2 lines):
- line 29: // TODO: could we somehow stream an FST to disk while we
- line 626: // TODO: instead of recording isFinal/output on the
src/Lucene.Net.Facet/FacetsCollector.cs (2 lines):
- line 330: // TODO: if we fix type safety of TopFieldDocs we can
- line 341: // TODO: can we pass the right boolean for
src/Lucene.Net/Codecs/Lucene45/Lucene45DocValuesProducer.cs (2 lines):
- line 1106: // TODO: maxLength is negative when all terms are merged away...
- line 1224: // TODO: is there a cleaner way
src/Lucene.Net.Facet/Range/LongRange.cs (2 lines):
- line 60: // TODO: can we require fewer args? (same for
- line 146: // TODO: this is just like ValueSourceScorer,
src/Lucene.Net.TestFramework/Search/CheckHits.cs (2 lines):
- line 312: // TODO: can we improve this entire method? its really geared to work only with TF/IDF
- line 357: // TODO: this is a TERRIBLE assertion!!!!
src/Lucene.Net/Document/Field.cs (2 lines):
- line 273: // TODO: allow direct construction of int, long, float, double value too..?
- line 533: // LUCENENET TODO: Add SetValue() overloads for each type?
src/Lucene.Net.Analysis.Kuromoji/Dict/Dictionary.cs (2 lines):
- line 97: // TODO: maybe we should have a optimal method, a non-typesafe
- line 101: // LUCENENET TODO: Make this whole thing into an abstract class??
src/Lucene.Net.QueryParser/ComplexPhrase/ComplexPhraseQueryParser.cs (2 lines):
- line 237: // TODO ensure that field-sensitivity is preserved ie the query
- line 391: // LUCENETODO alternatively could call extract terms here?
src/Lucene.Net.TestFramework/Index/RandomIndexWriter.cs (2 lines):
- line 108: // TODO: this should be solved in a different way; Random should not be shared (!).
- line 136: // TODO: maybe, we should simply buffer up added docs
src/Lucene.Net/Codecs/PerField/PerFieldPostingsFormat.cs (2 lines):
- line 167: // TODO: we should only provide the "slice" of FIS
- line 201: // TODO: support embedding; I think it should work but
src/Lucene.Net/Util/NumericUtils.cs (2 lines):
- line 163: ulong sortableBits = BitConverter.ToUInt64(BitConverter.GetBytes(val), 0) ^ 0x8000000000000000L; // LUCENENET TODO: Performance - Benchmark this
- line 268: return (long)((ulong)(sortableBits << GetPrefixCodedInt64Shift(val)) ^ 0x8000000000000000L); // LUCENENET TODO: Is the casting here necessary?
src/Lucene.Net.Replicator/IndexRevision.cs (2 lines):
- line 119: //TODO: long.CompareTo(); but which goes where.
- line 125: //TODO: This breaks the contract and will fail if called with a different implementation
src/Lucene.Net/Index/MultiDocValues.cs (2 lines):
- line 432: // TODO: use more efficient packed ints structures?
- line 433: // TODO: pull this out? its pretty generic (maps between N ord()-enabled TermsEnums)
src/Lucene.Net.Codecs/IntBlock/VariableIntBlockIndexOutput.cs (2 lines):
- line 25: // TODO: much of this can be shared code w/ the fixed case
- line 50: // TODO what Var-Var codecs exist in practice... and what are there blocksizes like?
src/Lucene.Net.Codecs/Memory/MemoryDocValuesConsumer.cs (2 lines):
- line 99: // TODO: more efficient?
- line 366: // TODO: in some cases representing missing with minValue-1 wouldn't take up additional space and so on,
src/Lucene.Net/Index/FrozenBufferedUpdates.cs (2 lines):
- line 88: // TODO if a Term affects multiple fields, we could keep the updates key'd by Term
- line 104: // TODO if a Term affects multiple fields, we could keep the updates key'd by Term
src/Lucene.Net/Codecs/Lucene40/Lucene40StoredFieldsWriter.cs (2 lines):
- line 170: // TODO: maybe a field should serialize itself?
- line 380: // TODO: this could be more efficient using
src/Lucene.Net.TestFramework/Codecs/Lucene3x/TermInfosWriter.cs (2 lines):
- line 52: // TODO: the default values for these two parameters should be settable from
- line 284: // TODO: UTF16toUTF8 could tell us this prefix
src/Lucene.Net.TestFramework/Analysis/MockHoleInjectingTokenFilter.cs (2 lines):
- line 25: // TODO: maybe, instead to be more "natural", we should make
- line 83: // TODO: end?
src/Lucene.Net.TestFramework/Support/Util/NUnitTestFixtureBuilder.cs (2 lines):
- line 82: // TODO: This should really return a TestFixture, but that requires changes to the Test hierarchy.
- line 245: // TODO: Check this logic added from Neil's build.
src/Lucene.Net.Analysis.Common/Analysis/Util/AbstractAnalysisFactory.cs (2 lines):
- line 67: // LUCENENET TODO: What should we do if the version is null?
- line 85: protected void AssureMatchVersion() // LUCENENET TODO: Remove this method (not used anyway in .NET)
src/Lucene.Net.Replicator/ReplicationClient.cs (2 lines):
- line 211: // TODO add some validation, on size / checksum
- line 256: //TODO: Resharper Message, Expression is always true -> Verify and if so then we can remove the null check.
src/Lucene.Net/Util/BytesRefHash.cs (2 lines):
- line 525: // TODO: maybe use long? But our keys are typically short...
- line 640: // TODO: can't we just merge this w/
src/Lucene.Net/Index/SegmentCoreReaders.cs (2 lines):
- line 65: // TODO: make a single thread local w/ a
- line 113: // TODO: since we don't write any norms file if there are no norms,
src/Lucene.Net.Facet/Taxonomy/WriterCache/CharBlockArray.cs (2 lines):
- line 94: // LUCENENET TODO: When object fields change, increment serialVersionUID and move the above block here for legacy support...
- line 564: // LUCENENET TODO: When object fields change, increment serialVersionUID and move the above block here for legacy support...
src/Lucene.Net/Codecs/PostingsWriterBase.cs (2 lines):
- line 38: // TODO: find a better name; this defines the API that the
- line 96: // TODO: better name?
src/Lucene.Net/Index/DocValuesProcessor.cs (2 lines):
- line 34: // TODO: somewhat wasteful we also keep a map here; would
- line 101: // TODO: catch missing DV fields here? else we have
src/Lucene.Net.Queries/Function/ValueSources/IfFunction.cs (2 lines):
- line 129: return true; // TODO: flow through to any sub-sources?
- line 134: // TODO: we need types of trueSource / falseSource to handle this
src/Lucene.Net.Analysis.Common/Analysis/Hunspell/Stemmer.cs (2 lines):
- line 351: // TODO: allow this stuff to be reused by tokenfilter
- line 614: // TODO: just pass this in from before, no need to decode it twice
src/Lucene.Net.Benchmark/Support/TagSoup/PYXWriter.cs (2 lines):
- line 16: // FIXME: does not do escapes in attribute values
- line 17: // FIXME: outputs entities as bare '&' character
src/Lucene.Net/Util/RamUsageEstimator.cs (2 lines):
- line 678: // // TODO: No alignments based on field type/ subclass fields alignments?
- line 739: /// TODO: If this is useful outside this class, make it public - needs some work
src/Lucene.Net.Benchmark/ByTask/Feeds/TrecFR94Parser.cs (2 lines):
- line 36: "date:", //TODO improve date extraction for this format
- line 41: //TODO can we also extract title for this format?
src/Lucene.Net/Index/TermVectorsConsumerPerField.cs (2 lines):
- line 84: // TODO: move this check somewhere else, and impl the other missing ones
- line 137: // TODO: only if needed for performance
src/Lucene.Net.TestFramework/Analysis/MockGraphTokenFilter.cs (2 lines):
- line 25: // TODO: sometimes remove tokens too...?
- line 92: // TODO: set TypeAtt too?
src/Lucene.Net/Analysis/TokenAttributes/OffsetAttribute.cs (2 lines):
- line 37: int StartOffset { get; } // LUCENENET TODO: API - add a setter ? It seems the SetOffset only sets two properties at once...
- line 55: int EndOffset { get; } // LUCENENET TODO: API - add a setter ? It seems the SetOffset only sets two properties at once...
src/Lucene.Net.Facet/Taxonomy/WriterCache/LruTaxonomyWriterCache.cs (2 lines):
- line 61: // TODO (Facet): choose between NameHashIntCacheLRU and NameIntCacheLRU.
- line 74: // TODO (Facet): choose between NameHashIntCacheLRU and NameIntCacheLRU.
src/Lucene.Net/Search/Similarities/TFIDFSimilarity.cs (2 lines):
- line 730: // TODO: Validate?
- line 739: // TODO: (sorta LUCENE-1907) make non-static class and expose this squaring via a nice method to subclasses?
src/Lucene.Net/Search/ControlledRealTimeReopenThread.cs (2 lines):
- line 270: // TODO: maybe use private thread ticktock timer, in
- line 279: // TODO: try to guestimate how long reopen might
src/Lucene.Net.TestFramework/Codecs/Lucene42/Lucene42DocValuesConsumer.cs (2 lines):
- line 89: // TODO: more efficient?
- line 98: // TODO: support this as MemoryDVFormat (and be smart about missing maybe)
src/Lucene.Net/Index/DocTermOrds.cs (2 lines):
- line 437: // TODO: really should 1) strip off useless suffix,
- line 482: // TODO: figure out what array lengths we can round up to w/o actually using more memory
src/Lucene.Net.Facet/SortedSet/SortedSetDocValuesFacetCounts.cs (2 lines):
- line 156: // TODO: is this right? really, we need a way to
- line 191: // TODO: yet another option is to count all segs
src/Lucene.Net.Spatial/Prefix/WithinPrefixTreeFilter.cs (2 lines):
- line 50: /// TODO LUCENE-4869: implement faster algorithm based on filtering out false-positives of a
- line 97: //TODO move this generic code elsewhere? Spatial4j?
src/Lucene.Net.TestFramework/Util/FailOnNonBulkMergesInfoStream.cs (1 line):
- line 24: // TODO: we should probably be a wrapper so verbose still works...
src/Lucene.Net.TestFramework/Util/ThrottledIndexOutput.cs (1 line):
- line 113: // TODO: sometimes, write only half the bytes, then
src/Lucene.Net.Queries/Function/ValueSources/QueryValueSource.cs (1 line):
- line 231: // TODO: if we want to support more than one value-filler or a value-filler in conjunction with
src/Lucene.Net/Search/Weight.cs (1 line):
- line 158: // TODO: this may be sort of weird, when we are
src/Lucene.Net/Search/Similarities/BM25Similarity.cs (1 line):
- line 40: // TODO: should we add a delta like sifaka.cs.uiuc.edu/~ylv2/pub/sigir11-bm25l.pdf ?
src/Lucene.Net.Sandbox/Queries/DuplicateFilter.cs (1 line):
- line 35: // TODO: make duplicate filter aware of ReaderContext such that we can
src/Lucene.Net/Search/Similarities/SimilarityBase.cs (1 line):
- line 136: // TODO: add sumDocFreq for field (numberOfFieldPostings)
src/Lucene.Net.Benchmark/ByTask/Tasks/TaskSequence.cs (1 line):
- line 290: // TODO: better to use condition to notify
src/Lucene.Net/Search/FieldCacheRangeFilter.cs (1 line):
- line 562: // TODO: bogus that newStringRange doesnt share this code... generics hell
src/Lucene.Net.TestFramework/Index/MockRandomMergePolicy.cs (1 line):
- line 67: // TODO: sometimes make more than 1 merge?
src/Lucene.Net/Codecs/Lucene41/Lucene41PostingsBaseFormat.cs (1 line):
- line 32: // TODO: should these also be named / looked up via SPI?
src/Lucene.Net.Suggest/Suggest/UnsortedInputIterator.cs (1 line):
- line 32: // TODO keep this for now
src/Lucene.Net.Highlighter/VectorHighlight/FastVectorHighlighter.cs (1 line):
- line 80: // TODO: should we deprecate this?
src/Lucene.Net/Index/NumericDocValuesWriter.cs (1 line):
- line 129: // TODO: make reusable Number
src/Lucene.Net.Misc/Index/MultiPassIndexSplitter.cs (1 line):
- line 116: w.AddIndexes(sr.ToArray()); // TODO: maybe take List here?
src/Lucene.Net/Index/FreqProxTermsWriter.cs (1 line):
- line 35: // TODO: would be nice to factor out more of this, eg the
src/Lucene.Net/Search/QueryRescorer.cs (1 line):
- line 113: // TODO: we should do a partial sort (of only topN)
src/Lucene.Net.TestFramework/Util/RunListenerPrintReproduceInfo.cs (1 line):
- line 12: //JAVA TO C# CONVERTER TODO TASK: this Java 'import static' statement cannot be converted to .NET:
websites/apidocs/Templates/LuceneTemplate/common.js (1 line):
- line 60: // LUCENENET TODO: Set up Improve This Doc to edit the .cs file (the xml doc comments) rather than creating a new .md file
src/Lucene.Net.Queries/Function/DocValues/BoolDocValues.cs (1 line):
- line 88: return Exists(doc) ? J2N.Numerics.Int32.GetInstance(Int32Val(doc)) : null; // LUCENENET TODO: Create Boolean reference type in J2N to return here (and format, etc)
src/Lucene.Net.Benchmark/ByTask/Feeds/EnwikiContentSource.cs (1 line):
- line 1: // LUCENENET TODO: Use HTML Agility pack instead of SAX ?
src/Lucene.Net.Analysis.ICU/Analysis/Icu/ICUTransformFilterFactory.cs (1 line):
- line 43: // TODO: add support for custom rules
src/Lucene.Net.QueryParser/Xml/Builders/LikeThisQueryBuilder.cs (1 line):
- line 69: //TODO MoreLikeThis needs to ideally have per-field stopWords lists - until then
src/Lucene.Net.Highlighter/Highlight/QueryTermScorer.cs (1 line):
- line 32: // TODO: provide option to boost score of fragments near beginning of document
src/Lucene.Net.Spatial/Prefix/PrefixTreeStrategy.cs (1 line):
- line 141: //TODO is CellTokenStream supposed to be re-used somehow? see Uwe's comments:
src/Lucene.Net/Util/ByteBlockPool.cs (1 line):
- line 65: public abstract void RecycleByteBlocks(byte[][] blocks, int start, int end); // LUCENENET TODO: API - Change to use IList
src/Lucene.Net.QueryParser/Flexible/Core/Nodes/QueryNode.cs (1 line):
- line 32: // TODO: this interface might be changed in the future
src/Lucene.Net.Analysis.Kuromoji/Dict/TokenInfoFST.cs (1 line):
- line 64: // TODO: jump to 3040, readNextRealArc to ceiling? (just be careful we don't add bugs)
src/Lucene.Net/Index/IndexFileNames.cs (1 line):
- line 26: // TODO: put all files under codec and remove all the static extensions here
src/Lucene.Net.TestFramework/Search/SearchEquivalenceTestBase.cs (1 line):
- line 116: // TODO: zipf-like distribution
src/Lucene.Net.Analysis.Common/Analysis/Nl/DutchStemmer.cs (1 line):
- line 50: //TODO convert to internal
src/Lucene.Net/Support/IO/FileSupport.cs (1 line):
- line 547: // LUCENENET TODO: On Unix, this resolves symbolic links. Not sure
src/Lucene.Net/Index/SortedDocValuesWriter.cs (1 line):
- line 95: // TODO: can this same OOM happen in THPF?
src/Lucene.Net.Analysis.Common/Analysis/Standard/StandardFilter.cs (1 line):
- line 52: return m_input.IncrementToken(); // TODO: add some niceties for the new grammar
src/Lucene.Net.Join/ToChildBlockJoinQuery.cs (1 line):
- line 202: // TODO: would be nice to pull initial parent
src/Lucene.Net/Search/TopFieldCollector.cs (1 line):
- line 39: // TODO: one optimization we could do is to pre-fill
src/Lucene.Net/Search/Spans/SpanMultiTermQueryWrapper.cs (1 line):
- line 187: // TODO: would be nice to not lose term-state here.
src/Lucene.Net.Queries/CommonTermsQuery.cs (1 line):
- line 72: * TODO maybe it would make sense to abstract this even further and allow to
src/Lucene.Net.QueryParser/Flexible/Standard/Processors/GroupQueryNodeProcessor.cs (1 line):
- line 40: /// Example: TODO: describe a good example to show how this processor works
src/Lucene.Net/Util/Fst/BytesStore.cs (1 line):
- line 29: // TODO: merge with PagedBytes, except PagedBytes doesn't
src/Lucene.Net/Index/NumericDocValuesFieldUpdates.cs (1 line):
- line 117: // TODO: if the Sorter interface changes to take long indexes, we can remove that limitation
src/Lucene.Net.Benchmark/ByTask/Feeds/LongToEnglishQueryMaker.cs (1 line):
- line 39: //// TODO: we could take param to specify locale...
src/Lucene.Net/Util/PriorityQueue.cs (1 line):
- line 230: public T Add(T element) // LUCENENET TODO: Factor out the IndexOutOfRangeException here and make a TryAdd() method.
src/Lucene.Net.Queries/Function/ValueSource.cs (1 line):
- line 39: public abstract FunctionValues GetValues(IDictionary context, AtomicReaderContext readerContext); // LUCENENET TODO: API - See if we can use generic IDictionary here instead
src/Lucene.Net/Search/DisjunctionScorer.cs (1 line):
- line 206: // TODO: make this less horrible
src/Lucene.Net.Analysis.Common/Analysis/Synonym/FSTSynonymFilterFactory.cs (1 line):
- line 107: // TODO: expose dedup as a parameter?
src/Lucene.Net.Analysis.Common/Analysis/Payloads/FloatEncoder.cs (1 line):
- line 34: float payload = float.Parse(new string(buffer, offset, length), CultureInfo.InvariantCulture); //TODO: improve this so that we don't have to new Strings
src/Lucene.Net/Search/TopDocsCollector.cs (1 line):
- line 162: // TODO: shouldn't we throw IAE if apps give bad params here so they don't
src/Lucene.Net.Analysis.Common/Analysis/Miscellaneous/StemmerOverrideFilter.cs (1 line):
- line 99: // TODO maybe we can generalize this and reuse this map somehow?
src/Lucene.Net.TestFramework/Index/MockIndexInput.cs (1 line):
- line 24: // TODO: what is this used for? just testing BufferedIndexInput?
src/Lucene.Net/Util/BroadWord.cs (1 line):
- line 81: // FIXME: replace by biggerequal8_one formula from article page 6, line 9. four operators instead of five here.
src/Lucene.Net.Analysis.Common/Analysis/Pattern/PatternTokenizer.cs (1 line):
- line 202: // TODO: we should see if we can make this tokenizer work without reading
src/Lucene.Net/Search/ConstantScoreQuery.cs (1 line):
- line 107: // TODO: OK to not add any terms when wrapped a filter
src/Lucene.Net/Util/FixedBitSet.cs (1 line):
- line 662: public void Clear(int startIndex, int endIndex) // LUCENENET TODO: API: Change this to use startIndex and length to match .NET
src/Lucene.Net/Document/StoredField.cs (1 line):
- line 94: // TODO: not great but maybe not a big problem?
src/Lucene.Net.TestFramework/Analysis/MockAnalyzer.cs (1 line):
- line 74: // TODO: this should be solved in a different way; Random should not be shared (!).
src/Lucene.Net.TestFramework/Codecs/Lucene3x/PreFlexRWStoredFieldsWriter.cs (1 line):
- line 110: // TODO: maybe a field should serialize itself?
src/Lucene.Net.Misc/Store/NativePosixUtil.cs (1 line):
- line 41: //JAVA TO C# CONVERTER TODO TASK: The library is specified in the 'DllImport' attribute for .NET:
src/Lucene.Net/Store/RAMFile.cs (1 line):
- line 97: protected internal byte[] GetBuffer(int index) // LUCENENET TODO: API - change to indexer property
src/Lucene.Net.TestFramework/Analysis/MockTokenFilter.cs (1 line):
- line 73: // TODO: fix me when posInc=false, to work like FilteringTokenFilter in that case and not return
src/Lucene.Net/Util/Automaton/Automaton.cs (1 line):
- line 307: // TODO: maybe we can eventually allow for oversizing here...
src/Lucene.Net.TestFramework/Codecs/Lucene3x/PreFlexRWFieldInfosWriter.cs (1 line):
- line 31: // TODO move to test-framework preflex RW?
src/Lucene.Net.Codecs/Sep/IntIndexOutput.cs (1 line):
- line 23: // TODO: We may want tighter integration w/IndexOutput
src/Lucene.Net.Highlighter/Highlight/SimpleHTMLEncoder.cs (1 line):
- line 79: // LUCENENET TODO: This logic appears to be correct, but need to
src/Lucene.Net.Queries/Function/ValueSources/OrdFieldSource.cs (1 line):
- line 63: // TODO: this is trappy? perhaps this query instead should make you pass a slow reader yourself?
src/Lucene.Net/Search/ExactPhraseScorer.cs (1 line):
- line 242: // TODO: we could fold in chunkStart into offset and
src/Lucene.Net.Analysis.Common/Analysis/Wikipedia/WikipediaTokenizerFactory.cs (1 line):
- line 50: // TODO: add support for WikipediaTokenizer's advanced options.
src/Lucene.Net.Facet/DrillSideways.cs (1 line):
- line 172: // TODO: we could optimize this pure-browse case by
src/Lucene.Net.Analysis.Common/Analysis/Util/WordlistLoader.cs (1 line):
- line 43: // LUCENENET TODO: Add .NET overloads that accept a file name? Or at least a FileInfo object as was done in 3.0.3?
src/Lucene.Net.TestFramework/Search/RandomSimilarityProvider.cs (1 line):
- line 140: /* TODO: enable Dirichlet
src/Lucene.Net.TestFramework/Util/Automaton/AutomatonTestUtil.cs (1 line):
- line 377: // TODO: not great that this is recursive... in theory a
src/Lucene.Net.QueryParser/Flexible/Standard/Nodes/RegexpQueryNode.cs (1 line):
- line 70: int end) // LUCENENET TODO: API - Change to use length rather than end index to match .NET
src/Lucene.Net.Spatial/Util/ValueSourceFilter.cs (1 line):
- line 34: //TODO see https://issues.apache.org/jira/browse/LUCENE-4251 (move out of spatial & improve)
src/Lucene.Net.Analysis.Common/Analysis/NGram/Lucene43EdgeNGramTokenizer.cs (1 line):
- line 213: // TODO: refactor to a shared readFully somewhere:
src/Lucene.Net/Search/SortRescorer.cs (1 line):
- line 97: // TODO: if we could ask the Sort to explain itself then
src/dotnet/Lucene.Net.CodeAnalysis.CSharp/Lucene1000_SealIncrementTokenMethodCSCodeFixProvider.cs (1 line):
- line 49: // TODO: Replace the following code with your own analysis, generating a CodeAction for each fix to suggest
src/Lucene.Net.Join/Support/ToParentBlockJoinQuery.cs (1 line):
- line 333: // TODO: specialize this into dedicated classes per-scoreMode
src/Lucene.Net/Index/IndexReader.cs (1 line):
- line 552: // TODO: we need a separate StoredField, so that the
src/Lucene.Net/Index/DocInverter.cs (1 line):
- line 65: // TODO: allow endConsumer.finishDocument to also return
src/Lucene.Net.QueryParser/Flexible/Core/Util/UnescapedCharSequence.cs (1 line):
- line 159: // TODO: non efficient implementation, refactor this code
src/Lucene.Net.Benchmark/Support/TagSoup/XMLWriter.cs (1 line):
- line 1157: /// TODO: this method probably needs some cleanup.
src/Lucene.Net.Highlighter/PostingsHighlight/MultiTermHighlighting.cs (1 line):
- line 229: // TODO: we could use CachingWrapperFilter, (or consume twice) to allow us to have a true freq()
src/Lucene.Net/Util/Packed/MonotonicBlockPackedWriter.cs (1 line):
- line 78: // TODO: perform a true linear regression?
src/Lucene.Net.Queries/Function/DocValues/DocTermsIndexDocValues.cs (1 line):
- line 96: // TODO: are lowerVal and upperVal in indexed form or not?
src/Lucene.Net.Analysis.Kuromoji/Tools/BinaryDictionaryWriter.cs (1 line):
- line 295: // TODO: maybe this int[] should instead be the output to the FST...
src/Lucene.Net/Codecs/Lucene40/Lucene40TermVectorsWriter.cs (1 line):
- line 43: // TODO: make a new 4.0 TV format that encodes better
src/Lucene.Net/Search/SortField.cs (1 line):
- line 482: // TODO: should we remove this? who really uses it?
src/Lucene.Net.TestFramework/Analysis/MockCharFilter.cs (1 line):
- line 38: // TODO: instead of fixed remainder... maybe a fixed
src/Lucene.Net.Analysis.ICU/Analysis/Icu/ICUNormalizer2FilterFactory.cs (1 line):
- line 91: // TODO: support custom normalization
src/Lucene.Net.Analysis.Common/Analysis/NGram/Lucene43NGramTokenizer.cs (1 line):
- line 105: // TODO: refactor to a shared readFully somewhere:
src/Lucene.Net/Search/Scorer.cs (1 line):
- line 48: // TODO can we clean this up?
src/Lucene.Net/Index/MergeState.cs (1 line):
- line 185: // TODO: get rid of this? it tells you which segments are 'aligned' (e.g. for bulk merging)
src/Lucene.Net.Queries/Function/ValueSources/ByteFieldSource.cs (1 line):
- line 126: return J2N.Numerics.SByte.GetInstance((sbyte)arr.Get(doc)); // TODO: valid?
src/Lucene.Net.Queries/Function/ValueSources/BoolFunction.cs (1 line):
- line 28: // TODO: placeholder to return type, among other common future functionality
src/Lucene.Net/Search/Spans/Spans.cs (1 line):
- line 96: // TODO: Remove warning after API has been finalized
src/Lucene.Net.Spatial/Vector/PointVectorStrategy.cs (1 line):
- line 172: //TODO this is basically old code that hasn't been verified well and should probably be removed
src/Lucene.Net/Util/TimSorter.cs (1 line):
- line 99: internal virtual int RunEnd(int i) // LUCENENET TODO: API - change to indexer
src/Lucene.Net.Analysis.ICU/Analysis/Icu/ICUFoldingFilter.cs (1 line):
- line 68: // TODO: if the wrong version of the ICU jar is used, loading these data files may give a strange error.
src/dotnet/Lucene.Net.CodeAnalysis.CSharp/Lucene1000_SealTokenStreamClassCSCodeFixProvider.cs (1 line):
- line 49: // TODO: Replace the following code with your own analysis, generating a CodeAction for each fix to suggest
src/Lucene.Net.Benchmark/ByTask/Feeds/DirContentSource.cs (1 line):
- line 12: // LUCENENET TODO: This had to be refactored significantly. We need tests to confirm it works.
src/Lucene.Net.Benchmark/ByTask/Tasks/ReadTask.cs (1 line):
- line 128: // TODO: instead of always passing false we
TestTargetFramework.props (1 line):
- line 37: LUCENENET TODO: Due to a parsing bug, we cannot pass a string with a ; to dotnet msbuild, so passing true as a workaround -->
src/Lucene.Net.Facet/Taxonomy/Directory/DirectoryTaxonomyIndexWriterFactory.cs (1 line):
- line 75: // TODO: should we use a more optimized Codec, e.g. Pulsing (or write custom)?
src/Lucene.Net/Index/DocumentsWriterDeleteQueue.cs (1 line):
- line 145: TryApplyGlobalSlice(); // TODO doing this each time is not necessary maybe
src/Lucene.Net/Util/QueryBuilder.cs (1 line):
- line 141: // TODO: weird that BQ equals/rewrite/scorer doesn't handle this?
src/Lucene.Net.Analysis.Kuromoji/GraphvizFormatter.cs (1 line):
- line 26: // TODO: would be nice to show 2nd best path in a diff't
src/Lucene.Net/Search/Spans/SpanOrQuery.cs (1 line):
- line 46: // LUCENENET TODO: API - This constructor was added to eliminate casting with PayloadSpanUtil. Make public?
src/Lucene.Net.Analysis.Common/Analysis/Synonym/SolrSynonymParser.cs (1 line):
- line 84: // TODO: we could process this more efficiently.
src/Lucene.Net/Index/AutomatonTermsEnum.cs (1 line):
- line 258: // TODO: paranoia? if we backtrack thru an infinite DFA, the loop detection is important!
src/Lucene.Net/Store/Directory.cs (1 line):
- line 45: public abstract class Directory : IDisposable // LUCENENET TODO: Subclass System.IO.FileSystemInfo ?
src/Lucene.Net.Facet/Range/RangeFacetCounts.cs (1 line):
- line 88: // TODO: should we impl this?
src/Lucene.Net.Facet/Taxonomy/Directory/DirectoryTaxonomyWriter.cs (1 line):
- line 920: UninterruptableMonitor.Enter(syncLock); // LUCENENET TODO: Do we need to synchronize again since the whole method is synchronized?
src/Lucene.Net/Index/MultiDocsEnum.cs (1 line):
- line 172: // TODO: implement bulk read more efficiently than super
src/Lucene.Net/Index/MultiFields.cs (1 line):
- line 238: // TODO: why is this public?
src/dotnet/Lucene.Net.CodeAnalysis.VisualBasic/Lucene1000_SealIncrementTokenMethodVBCodeFixProvider.cs (1 line):
- line 49: // TODO: Replace the following code with your own analysis, generating a CodeAction for each fix to suggest
src/Lucene.Net/Document/TextField.cs (1 line):
- line 50: // TODO: add sugar for term vectors...?
src/Lucene.Net/Index/SortedSetDocValuesWriter.cs (1 line):
- line 138: // TODO: can this same OOM happen in THPF?
src/Lucene.Net.QueryParser/Flexible/Core/Nodes/AnyQueryNode.cs (1 line):
- line 74: // LUCENENET TODO: No need for GetFieldAsString method because
src/Lucene.Net.Queries/Function/ValueSources/BytesRefFieldSource.cs (1 line):
- line 41: // TODO: do it cleaner?
src/Lucene.Net.Analysis.Phonetic/Language/Bm/Rule.cs (1 line):
- line 162: // LUCENENET TODO: change this implementation to use MemoryExtensions.Contains with a polyfill for net462.
src/Lucene.Net.Facet/RandomSamplingFacetsCollector.cs (1 line):
- line 198: // TODO: we could try the WAH8DocIdSet here as well, as the results will be sparse
src/Lucene.Net/Codecs/PerField/PerFieldDocValuesFormat.cs (1 line):
- line 222: // TODO: we should only provide the "slice" of FIS
src/Lucene.Net/Codecs/PostingsBaseFormat.cs (1 line):
- line 30: // TODO: find a better name; this defines the API that the
src/Lucene.Net.QueryParser/Flexible/Core/Nodes/ProximityQueryNode.cs (1 line):
- line 31: /// TODO: Add this to the future standard Lucene parser/processor/builder
src/Lucene.Net.Analysis.ICU/Collation/ICUCollationDocValuesField.cs (1 line):
- line 55: // TODO: can we make this trap-free? maybe just synchronize on the collator
src/Lucene.Net/Search/TermRangeQuery.cs (1 line):
- line 135: // TODO: all these toStrings for queries should just output the bytes, it might not be UTF-8!
src/Lucene.Net/Util/Packed/AbstractAppendingLongBuffer.cs (1 line):
- line 235: // TODO: this is called per-doc-per-norms/dv-field, can we optimize this?
src/Lucene.Net/Codecs/PostingsReaderBase.cs (1 line):
- line 43: // TODO: find a better name; this defines the API that the
src/Lucene.Net/Codecs/DocValuesConsumer.cs (1 line):
- line 510: // TODO: seek-by-ord to nextSetBit
src/Lucene.Net.Benchmark/ByTask/Feeds/ReutersContentSource.cs (1 line):
- line 80: // TODO implement?
src/Lucene.Net.QueryParser/Flexible/Core/Nodes/FieldQueryNode.cs (1 line):
- line 143: // LUCENENET TODO: this method is not required because Field is already type string in .NET
src/Lucene.Net/Search/DocIdSet.cs (1 line):
- line 38: // TODO: somehow this class should express the cost of
src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumFloatAssociations.cs (1 line):
- line 76: // TODO: use OrdinalsReader? we'd need to add a
src/Lucene.Net/Codecs/Codec.cs (1 line):
- line 167: // TODO: should we use this, or maybe a system property is better?
src/Lucene.Net.Analysis.OpenNLP/OpenNLPSentenceBreakIterator.cs (1 line):
- line 265: // TODO: is there a better way to extract full text from arbitrary CharacterIterators?
src/Lucene.Net/Index/TermsHashPerField.cs (1 line):
- line 302: // TODO: optimize
src/Lucene.Net.Analysis.Kuromoji/Tools/ConnectionCostsWriter.cs (1 line):
- line 34: /// Constructor for building. TODO: remove write access
src/Lucene.Net.Misc/Document/LazyDocument.cs (1 line):
- line 124: // :TODO: synchronize to prevent redundant copying? (sync per field name?)
src/Lucene.Net.Analysis.Common/Analysis/CharFilter/NormalizeCharMap.cs (1 line):
- line 29: // TODO: save/load?
src/Lucene.Net.Analysis.Common/Analysis/Util/ClasspathResourceLoader.cs (1 line):
- line 64: // LUCENENET TODO: Apparently the second parameter of FindClass was used
src/Lucene.Net.Suggest/Suggest/BufferingTermFreqIteratorWrapper.cs (1 line):
- line 32: // TODO keep this for now
src/Lucene.Net/Codecs/Lucene40/BitVector.cs (1 line):
- line 211: public int Count() // LUCENENET TODO: API - make into a property
src/Lucene.Net/Codecs/Lucene3x/Lucene3xCodec.cs (1 line):
- line 57: // TODO: this should really be a different impl
src/Lucene.Net.QueryParser/Flexible/Standard/Parser/EscapeQuerySyntaxImpl.cs (1 line):
- line 41: // TODO: check what to do with these "*", "?", "\\"
src/Lucene.Net.Queries/Function/ValueSources/DefFunction.cs (1 line):
- line 144: // TODO: need ValueSource.type() to determine correct type
src/Lucene.Net.Analysis.Common/Analysis/Miscellaneous/WordDelimiterIterator.cs (1 line):
- line 81: // TODO: should there be a WORD_DELIM category for chars that only separate words (no catenation of subwords will be
src/Lucene.Net/Codecs/BlockTermState.cs (1 line):
- line 47: // TODO: update BTR to nuke this
src/Lucene.Net.Suggest/Suggest/Lookup.cs (1 line):
- line 198: // TODO: should we move this out of the interface into a utility class?
src/Lucene.Net/Search/SloppyPhraseScorer.cs (1 line):
- line 194: // TODO would be good if we can avoid calling cardinality() in each iteration!
src/Lucene.Net.Codecs/BlockTerms/VariableGapTermsIndexWriter.cs (1 line):
- line 152: // TODO: it'd be nice to let the FST builder prune based
src/Lucene.Net/Index/DocValuesFieldUpdates.cs (1 line):
- line 152: /// TODO
src/Lucene.Net/Search/DocTermOrdsRewriteMethod.cs (1 line):
- line 121: // TODO: we could track max bit set and early terminate (since they come in sorted order)
src/Lucene.Net.Expressions/JS/JavascriptCompiler.cs (1 line):
- line 109: // LUCENENET TODO: ParseException not being thrown here - need to check
src/Lucene.Net.Codecs/Memory/DirectDocValuesConsumer.cs (1 line):
- line 250: // TODO: in some cases representing missing with minValue-1 wouldn't take up additional space and so on,
src/Lucene.Net.Analysis.Common/Analysis/Pattern/PatternReplaceCharFilter.cs (1 line):
- line 113: // LUCENENET TODO: Replacing characters in a StringBuilder via regex is not natively
src/Lucene.Net.Codecs/Memory/FSTOrdTermsWriter.cs (1 line):
- line 266: // TODO: block encode each part
src/Lucene.Net/Util/IntBlockPool.cs (1 line):
- line 269: // TODO make the levels and the sizes configurable
src/Lucene.Net/Util/Mutable/MutableValueInt.cs (1 line):
- line 86: // TODO: if used in HashMap, it already mixes the value... maybe use a straight value?
src/Lucene.Net.TestFramework/Analysis/CannedTokenStream.cs (1 line):
- line 72: // TODO: can we just capture/restoreState so
src/Lucene.Net.Grouping/SearchGroup.cs (1 line):
- line 405: if (shard.Any()) // LUCENENET TODO: Change back to .Count if/when IEnumerable is changed to ICollection or IReadOnlyCollection
src/Lucene.Net.TestFramework/Analysis/CollationTestBase.cs (1 line):
- line 155: // TODO: this test is really fragile. there are already 3 different cases,
src/Lucene.Net/Store/CompoundFileDirectory.cs (1 line):
- line 199: // TODO remove once 3.x is not supported anymore
src/Lucene.Net.Facet/Taxonomy/TaxonomyFacetSumIntAssociations.cs (1 line):
- line 76: // TODO: use OrdinalsReader? we'd need to add a
src/Lucene.Net.Spatial/DisjointSpatialFilter.cs (1 line):
- line 60: // TODO consider making SpatialArgs cloneable
src/Lucene.Net.Analysis.Kuromoji/Dict/TokenInfoDictionary.cs (1 line):
- line 43: // TODO: some way to configure?
src/Lucene.Net.Analysis.Common/Analysis/Miscellaneous/TrimFilter.cs (1 line):
- line 75: //TODO: Is this the right behavior or should we return false? Currently, " ", returns true, so I think this should
src/Lucene.Net/Search/ReqOptSumScorer.cs (1 line):
- line 75: // TODO: sum into a double and cast to float if we ever send required clauses to BS1
src/Lucene.Net/Util/IntsRef.cs (1 line):
- line 58: public int[] Int32s // LUCENENET TODO: API - change to indexer
src/Lucene.Net.TestFramework/Support/JavaCompatibility/SystemTypesHelpers.cs (1 line):
- line 44: public static string toString(this object obj) // LUCENENET TODO: wrap Collections.ToString()
src/Lucene.Net.Join/ToParentBlockJoinQuery.cs (1 line):
- line 331: // TODO: specialize this into dedicated classes per-scoreMode
src/Lucene.Net/Index/FieldInfo.cs (1 line):
- line 334: // TODO: maybe rename to just DOCS?
src/Lucene.Net.Analysis.Common/Tartarus/Snowball/SnowballProgram.cs (1 line):
- line 453: // FIXME: report error somehow.
src/Lucene.Net.TestFramework/Analysis/CannedBinaryTokenStream.cs (1 line):
- line 143: // TODO: can we just capture/restoreState so
src/Lucene.Net/Search/ConjunctionScorer.cs (1 line):
- line 122: // TODO: sum into a double and cast to float if we ever send required clauses to BS1
src/Lucene.Net.Suggest/Suggest/BufferedInputIterator.cs (1 line):
- line 32: // TODO keep this for now
src/Lucene.Net/Codecs/Lucene40/Lucene40PostingsBaseFormat.cs (1 line):
- line 30: // TODO: should these also be named / looked up via SPI?
src/Lucene.Net/Index/SegmentCommitInfo.cs (1 line):
- line 178: // TODO we could rely on TrackingDir.getCreatedFiles() (like we do for
src/Lucene.Net.TestFramework/Util/CloseableDirectory.cs (1 line):
- line 55: // TODO: perform real close of the delegate: LUCENE-4058
src/Lucene.Net.TestFramework/Index/BaseDocValuesFormatTestCase.cs (1 line):
- line 3092: // TODO: get this out of here and into the deprecated codecs (4.0, 4.2)
src/Lucene.Net.Codecs/BlockTerms/FixedGapTermsIndexWriter.cs (1 line):
- line 128: // TODO: we could conceivably make a PackedInts wrapper
src/Lucene.Net/Store/CompoundFileWriter.cs (1 line):
- line 154: // TODO this code should clean up after itself
src/Lucene.Net.Codecs/Memory/FSTOrdTermsReader.cs (1 line):
- line 440: // TODO: this can be achieved by making use of Util.getByOutput()
src/Lucene.Net.TestFramework/Store/MockDirectoryWrapper.cs (1 line):
- line 974: // TODO: factor this out / share w/ TestIW.assertNoUnreferencedFiles
src/Lucene.Net/Codecs/StoredFieldsWriter.cs (1 line):
- line 114: // TODO: this could be more efficient using
src/Lucene.Net.TestFramework/Util/QuickPatchThreadsFilter.cs (1 line):
- line 30: /// TODO: remove when integrated in system filters in rr.
src/Lucene.Net.Benchmark/Support/TagSoup/Parser.cs (1 line):
- line 539: //TODO: Safe?
src/Lucene.Net/Search/FuzzyTermsEnum.cs (1 line):
- line 72: // TODO: chicken-and-egg
src/Lucene.Net.Analysis.Common/Analysis/Synonym/WordnetSynonymParser.cs (1 line):
- line 34: // TODO: allow you to specify syntactic categories (e.g. just nouns, etc)
src/Lucene.Net.TestFramework/Codecs/Lucene3x/PreFlexRWTermVectorsFormat.cs (1 line):
- line 57: // LUCENENET TODO: This does not seem to be hit, unused?
src/Lucene.Net.Analysis.Common/Analysis/Synonym/SynonymMap.cs (1 line):
- line 246: // TODO: are we using the best sharing options?
src/Lucene.Net.Misc/Index/Sorter/Sorter.cs (1 line):
- line 291: // TODO: would be better if copy() didnt cause a term lookup in TermOrdVal & co,
src/Lucene.Net.Analysis.Common/Analysis/Miscellaneous/WordDelimiterFilter.cs (1 line):
- line 284: // TODO: proper hole adjustment (FilteringTokenFilter-like) instead of this previous logic!
src/Lucene.Net/Search/TimeLimitingCollector.cs (1 line):
- line 319: // TODO: Use System.nanoTime() when Lucene moves to Java SE 5.
src/Lucene.Net.QueryParser/Flexible/Core/Nodes/ModifierQueryNode.cs (1 line):
- line 129: // LUCENENET TODO: Work out how to override ToString() (or test this) so this string can be made
src/Lucene.Net.Queries/Function/ValueSources/ReverseOrdFieldSource.cs (1 line):
- line 63: // TODO: this is trappy? perhaps this query instead should make you pass a slow reader yourself?
src/Lucene.Net/Index/SegmentWriteState.cs (1 line):
- line 91: public int TermIndexInterval { get; set; } // TODO: this should be private to the codec, not settable here or in IWC
src/Lucene.Net/Index/SegmentMerger.cs (1 line):
- line 329: // TODO: we may be able to broaden this to
src/Lucene.Net.Analysis.Common/Collation/TokenAttributes/CollatedTermAttributeImpl.cs (1 line):
- line 48: //LUCENENET TODO: Verify that this is correct. Java's byte[] is signed and Big Endian, .NET's is unsigned and Little Endian.
src/Lucene.Net.Codecs/IntBlock/FixedIntBlockIndexInput.cs (1 line):
- line 57: // TODO: can this be simplified?
src/Lucene.Net/Store/RateLimitedIndexOutput.cs (1 line):
- line 37: // TODO should we make buffer size configurable
src/Lucene.Net/Index/ParallelAtomicReader.cs (1 line):
- line 121: // TODO: make this read-only in a cleaner way?
src/Lucene.Net.TestFramework/Codecs/NestedPulsing/NestedPulsingPostingsFormat.cs (1 line):
- line 30: // TODO: if we create PulsingPostingsBaseFormat then we
src/Lucene.Net.Memory/MemoryIndex.cs (1 line):
- line 275: // TODO: deprecate & move this method into AnalyzerUtil?
src/Lucene.Net.Benchmark/ByTask/Feeds/DemoHTMLParser.cs (1 line):
- line 1: // LUCENENET TODO: Use HTML Agility pack instead of SAX ?
src/Lucene.Net/Util/Fst/ForwardBytesReader.cs (1 line):
- line 24: // TODO: can we use just ByteArrayDataInput...? need to
src/Lucene.Net.Classification/KNearestNeighborClassifier.cs (1 line):
- line 95: // TODO : improve the nearest neighbor selection
src/Lucene.Net.QueryParser/Flexible/Core/Nodes/BoostQueryNode.cs (1 line):
- line 86: return "" + f.ToString("0.0#######"); // LUCENENET TODO: Culture
src/Lucene.Net.Highlighter/PostingsHighlight/PassageScorer.cs (1 line):
- line 34: // TODO: this formula is completely made up. It might not provide relevant snippets!
src/Lucene.Net.Misc/Misc/SweetSpotSimilarity.cs (1 line):
- line 136: /// :TODO: potential optimization is to just flat out return 1.0f if numTerms
src/Lucene.Net.Codecs/BlockTerms/FixedGapTermsIndexReader.cs (1 line):
- line 375: // TODO: often we can get by w/ fewer bits per
src/Lucene.Net.QueryParser/Flexible/Standard/Nodes/PrefixWildcardQueryNode.cs (1 line):
- line 27: /// lucene parser. TODO: refactor the code to remove this special case from the
src/Lucene.Net.Analysis.Common/Analysis/Compound/HyphenationCompoundWordTokenFilterFactory.cs (1 line):
- line 96: // TODO: Broken, because we cannot resolve real system id
src/Lucene.Net.Analysis.Common/Analysis/Util/ResourceLoader.cs (1 line):
- line 44: // TODO: fix exception handling
src/Lucene.Net/Index/NormsConsumer.cs (1 line):
- line 27: // TODO FI: norms could actually be stored as doc store
src/Lucene.Net/Util/Fst/NodeHash.cs (1 line):
- line 101: // TODO: maybe if number of arcs is high we can safely subsample?
src/Lucene.Net.TestFramework/Index/BaseTermVectorsFormatTestCase.cs (1 line):
- line 232: // TODO: use CannedTokenStream?
src/Lucene.Net/Codecs/Lucene42/Lucene42NormsConsumer.cs (1 line):
- line 88: // TODO: more efficient?
src/Lucene.Net.TestFramework/Codecs/Lucene41Ords/Lucene41WithOrds.cs (1 line):
- line 41: // TODO: should we make the terms index more easily
src/Lucene.Net.Analysis.Common/Analysis/Pattern/PatternReplaceCharFilterFactory.cs (1 line):
- line 53: // TODO: warn if you set maxBlockChars or blockDelimiters ?
src/Lucene.Net.Highlighter/PostingsHighlight/PassageFormatter.cs (1 line):
- line 43: public abstract object Format(Passage[] passages, string content); // LUCENENET TODO: API Make return type generic?
src/Lucene.Net/Index/MultiDocsAndPositionsEnum.cs (1 line):
- line 183: // TODO: implement bulk read more efficiently than super
src/Lucene.Net.QueryParser/Surround/Query/NotQuery.cs (1 line):
- line 43: // FIXME: do not allow weights on prohibited subqueries.
src/Lucene.Net.Codecs/Memory/FSTTermOutputs.cs (1 line):
- line 213: // TODO: if we refactor a 'addSelf(TermData other)',
src/Lucene.Net.Spatial/Util/DistanceToShapeValueSource.cs (1 line):
- line 44: //TODO if FunctionValues returns NaN; will things be ok?
src/Lucene.Net.Benchmark/Support/TagSoup/PYXScanner.cs (1 line):
- line 111: // FIXME:
src/Lucene.Net.TestFramework/MockFile/ExtraFS.cs (1 line):
- line 28: // TODO: would be great if we overrode attributes, so file size was always zero for
src/Lucene.Net/Util/Fst/CharSequenceOutputs.cs (1 line):
- line 149: // TODO: maybe UTF8?
src/Lucene.Net.Spatial/Prefix/TermQueryPrefixTreeStrategy.cs (1 line):
- line 66: terms[i++] = new BytesRef(cell.TokenString);//TODO use cell.getTokenBytes()
src/Lucene.Net/Search/CachingCollector.cs (1 line):
- line 352: // TODO: would be nice if a collector defined a
src/Lucene.Net.Analysis.Common/Analysis/CharFilter/MappingCharFilter.cs (1 line):
- line 115: // TODO: a more efficient approach would be Aho/Corasick's
src/Lucene.Net.Analysis.Common/Analysis/Util/AnalysisSPILoader.cs (1 line):
- line 91: // TODO: Should we disallow duplicate names here?
src/Lucene.Net.Analysis.Common/Analysis/Payloads/IntegerEncoder.cs (1 line):
- line 32: int payload = ArrayUtil.ParseInt32(buffer, offset, length); //TODO: improve this so that we don't have to new Strings
src/Lucene.Net.Join/Support/ToChildBlockJoinQuery.cs (1 line):
- line 204: // TODO: would be nice to pull initial parent
src/Lucene.Net/Index/TieredMergePolicy.cs (1 line):
- line 120: // TODO: should addIndexes do explicit merging, too? And,
src/Lucene.Net/Analysis/TokenAttributes/CharTermAttribute.cs (1 line):
- line 232: new ICharTermAttribute Append(string value, int startIndex, int count); // LUCENENET TODO: API - change to startIndex/length to match .NET
src/Lucene.Net.TestFramework/Codecs/Lucene40/Lucene40RWPostingsFormat.cs (1 line):
- line 39: // TODO: should we make the terms index more easily
src/Lucene.Net.Analysis.Common/Analysis/Th/ThaiTokenizer.cs (1 line):
- line 49: private static readonly object syncLock = new object(); // LUCENENET specific - workaround until BreakIterator is made thread safe (LUCENENET TODO: TO REVERT)
src/Lucene.Net.Analysis.Common/Analysis/Ar/ArabicAnalyzer.cs (1 line):
- line 142: // TODO maybe we should make ArabicNormalization filter also KeywordAttribute aware?!
src/Lucene.Net/Support/Util/NumberFormat.cs (1 line):
- line 116: // LUCENENET TODO: Add additional functionality to edit the NumberFormatInfo
src/Lucene.Net.Facet/Taxonomy/TaxonomyWriter.cs (1 line):
- line 93: /// TODO (Facet): instead of a GetParent(ordinal) method, consider having a
src/Lucene.Net/Index/BinaryDocValuesFieldUpdates.cs (1 line):
- line 149: // TODO: if the Sorter interface changes to take long indexes, we can remove that limitation
src/Lucene.Net/Search/Payloads/PayloadNearQuery.cs (1 line):
- line 253: // TODO change the whole spans api to use bytesRef, or nuke spans
src/Lucene.Net/Codecs/Compressing/CompressingTermVectorsReader.cs (1 line):
- line 939: // TODO: slightly sheisty
src/Lucene.Net/Codecs/Lucene40/Lucene40PostingsFormat.cs (1 line):
- line 204: // TODO: this class could be created by wrapping
src/Lucene.Net/Index/Terms.cs (1 line):
- line 84: // TODO: eventually we could support seekCeil/Exact on
src/Lucene.Net.Analysis.Common/Analysis/Util/SegmentingTokenizerBase.cs (1 line):
- line 183: // TODO: refactor to a shared readFully somewhere
src/Lucene.Net.Analysis.Common/Analysis/CommonGrams/CommonGramsFilterFactory.cs (1 line):
- line 39: // TODO: shared base class for Stop/Keep/CommonGrams?
src/Lucene.Net.QueryParser/Flexible/Standard/Parser/StandardSyntaxParser.cs (1 line):
- line 321: // //TODO: figure out what to do with AND and ORs
src/Lucene.Net/Index/IndexWriterConfig.cs (1 line):
- line 72: public static readonly int DEFAULT_TERM_INDEX_INTERVAL = 32; // TODO: this should be private to the codec, not settable here
src/Lucene.Net.QueryParser/Simple/SimpleQueryParser.cs (1 line):
- line 575: int.TryParse(new string(slopText, 0, slopLength), out int fuzziness); // LUCENENET TODO: Find a way to pass culture
src/Lucene.Net/Analysis/TokenStreamToAutomaton.cs (1 line):
- line 30: // TODO: maybe also toFST? then we can translate atts into FST outputs/weights
src/Lucene.Net.Analysis.Common/Analysis/CharFilter/MappingCharFilterFactory.cs (1 line):
- line 59: // TODO: this should use inputstreams from the loader, not File!
src/Lucene.Net.Queries/Function/BoostedQuery.cs (1 line):
- line 32: // TODO: BoostedQuery and BoostingQuery in the same module?