Summary: 74 instances, 70 unique

Count  Text
    1  // TODO: Validate field types for histograms
    1  // TODO: revisit - not entirely sure if this is the best thing to do, especially when there is a range query
    1  // TODO: Let User specify their own filter query that is applied to the composite agg search
    1  // TODO: Refactor so we can get isLastStep from somewhere besides an instantiated Action class so we can simplify this to a when block
    1  // TODO: Validate field type for metrics,
    1  // TODO: error handling
    1  * TODO: When/if FieldCapabilitiesResponse and other subclasses package private constructors are elevated to public we can remove this logic.
    1  // TODO: Is seqno/prim and id returned for all?
    1  // TODO: Clean up runner
    1  // TODO: This is a hacky solution to get the current start time off the job interval as job-scheduler currently does not
    1  // TODO: Move this enum to Job Scheduler plugin
    1  // TODO refactor these lists usage to map
    1  * TODO situations:
    1  // TODO: Should create a transport action to update metadata
    1  // TODO: Read in the actual mappings from the source index and use that
    1  // TODO: This is incomplete as more than keywords can be grouped as terms, need to figure out the correct way to do this check for now just
    1  // TODO: Add indicesOptions?
    1  return Instant.now().isAfter(metadata.continuous!!.nextWindowEndTime.plusMillis(rollup.delay ?: 0)) // TODO: !!
    1  // TODO: How does this job matching work with roles/security?
    1  // TODO: Verify if offset, missing value, min_doc_count, extended_bounds are usable in Composite histogram source
    1  // TODO: Add some entry in metadata that will store index -> indexUUID for validated indices
    1  // TODO what if update setting failed, cannot reset to -1/-2
    1  // TODO: Remove this once the step interface is updated to pass in thread context information.
    1  // TODO: Remove this once the step interface is updated to pass in user information.
    1  // TODO: error handling - can RemoteTransportException happen here?
    1  // TODO: Validate if field is date type: date, date_nanos?
    1  // TODO: This can be moved to a common place, since this is shared between Rollup and ISMRollup
    1  // TODO: Validation of fields across source and target indices overwriting existing rollup data
    1  // TODO: If we have to set this manually for each aggregation builder then it means we could miss new ones settings in the future
    1  return hasNextFullWindow(rollup, metadata) // TODO: Behavior when next full window but 0 docs/afterkey is null
    1  // TODO clean up for actionIndex
    1  // TODO: Scenario: The rollup job is finished, but I (the user) want to redo it all again
    1  // TODO: when this happens is it failure or invalid?
    1  // TODO: Should we reject all other timezones if they are specified in the query?: aggregationBuilder.timeZone()
    1  // TODO: This is mostly used for the check in runJob(), maybe move this out and make that call "metadata == null || shouldProcessRollup"
    1  // TODO: Allow filtering for [continuous, job state, metadata status, targetindex, sourceindex]
    1  // TODO: Add circuit breaker checks - [cluster healthy, utilization within limit]
    1  // TODO: Field and mappings validations of source and target index, i.e. reject a histogram agg on example_field if its not possible
    1  // TODO: Make startTime public in Job Scheduler so we can just directly check the value
    1  // TODO: Does schema_version get overwritten?
    1  // TODO: Failed shouldn't process? How to recover from failed -> how does a user retry a failed rollup
    1  // TODO: Source index could be a pattern but it's used at runtime so it could match new indices which weren't matched before
    1  // TODO: restrict it for testing
    1  // TODO: Perhaps we can do better than this for mappings... as it'll be dynamic for rest
    1  // TODO: Verify the seqNo/primterm got updated
    1  * // TODO: When FGAC is supported in transform should check the user has the correct permissions
    1  // TODO: if metadata id exists delete the metadata doc else just delete transform
    1  # TODO: Remove this before initial release, only for developmental purposes
    1  // TODO: get rid of !!
    1  // TODO: A cluster level setting to control how many rollups can run at once? Or should we be skipping when cpu/memory/jvm is high?
    1  // TODO: Clean this up
    1  // TODO: Doc counts for aggregations are showing the doc counts of the rollup docs and not the raw data which is expected...
    1  // TODO: Is schema_version always present?
    1  // TODO missing terms field
    1  // TODO: Should update metadata and disable job here instead of allowing the rollup to keep going
    1  // TODO: Should we store the value of the past successful page size (?)
    1  // TODO this can be moved to job scheduler, so that all extended plugin
    1  // TODO: Should we throw error instead?
    1  // TODO: Could make this an extension function of DateHistogram and add to some utility file
    1  // TODO: Not a fan of this.. but I can't find a way to overwrite the aggregations on the shallow copy or original
    1  // TODO: Should we attempt to create a new document instead if failed to parse, the only reason this can happen is if someone deleted
    4  // TODO: Catching general exceptions for now, can make more granular
    1  // TODO should wrap document already exists exception
    1  // TODO: Make sure the interval string is validated before getting here so we don't get errors
    1  // TODO: The use of the master transport action UpdateRollupMappingAction will prevent
    1  // TODO: Validate field types for terms
    1  // TODO: Create namespaces to group properties together
    2  // TODO: Wrap client calls in retry for transient failures
    1  // TODO: Get all ManagedIndices at once or split into searchAfter queries?
    1  .sort(dateHistogram.sourceField, SortOrder.ASC) // TODO: figure out where nulls are sorted
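Two of the items above (`// TODO: get rid of !!` and the `metadata.continuous!!.nextWindowEndTime` line) concern Kotlin's non-null assertion operator, which throws a `NullPointerException` at runtime if the value is null. A minimal sketch of one way to remove it, using hypothetical stand-in types (`Rollup`, `RollupMetadata`, and `ContinuousMetadata` here are simplified placeholders, not the plugin's real classes):

```kotlin
import java.time.Instant

// Hypothetical stand-ins for the plugin's types, only so the sketch compiles.
data class ContinuousMetadata(val nextWindowEndTime: Instant)
data class RollupMetadata(val continuous: ContinuousMetadata?)
data class Rollup(val delay: Long?)

// One way to drop the `!!`: treat a missing continuous section as
// "no next full window" instead of risking a NullPointerException.
fun hasNextFullWindow(rollup: Rollup, metadata: RollupMetadata): Boolean {
    val windowEnd = metadata.continuous?.nextWindowEndTime ?: return false
    return Instant.now().isAfter(windowEnd.plusMillis(rollup.delay ?: 0))
}
```

Returning `false` when `continuous` is null is only one possible policy; the right fix depends on whether a non-continuous job can ever reach this code path.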
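The `// TODO: Wrap client calls in retry for transient failures` item (2 occurrences) could be handled by a small generic helper. The following is a sketch under assumed names and an illustrative backoff policy, not the plugin's actual API:

```kotlin
// Hypothetical retry wrapper: re-runs `block` on exceptions the caller deems
// transient, with simple exponential backoff between attempts. Non-transient
// exceptions are rethrown immediately.
fun <T> retryOnTransientFailure(
    maxAttempts: Int = 3,
    initialBackoffMillis: Long = 50,
    isTransient: (Exception) -> Boolean,
    block: () -> T
): T {
    var backoff = initialBackoffMillis
    var lastException: Exception? = null
    repeat(maxAttempts) { attempt ->
        try {
            return block() // non-local return: success ends the retry loop
        } catch (e: Exception) {
            if (!isTransient(e)) throw e
            lastException = e
            if (attempt < maxAttempts - 1) {
                Thread.sleep(backoff)
                backoff *= 2 // double the wait before the next attempt
            }
        }
    }
    // All attempts failed with transient errors; surface the last one.
    throw lastException ?: IllegalStateException("retry exhausted without exception")
}
```

Callers would pass a predicate that recognizes transient failures (for example, timeout or circuit-breaking exceptions) and place the client call in `block`; what counts as transient is a policy decision the TODO leaves open.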