flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveParserRexNodeConverter.java (8 lines):
- line 326: // TODO: Verify if we need to use ConstantObjectInspector to unwrap data
- line 345: // TODO: is Decimal an exact numeric or approximate numeric?
- line 363: // TODO: return createNullLiteral(literal);
- line 389: // TODO: The best solution is to support NaN in expression reduction.
- line 528: // TODO: 1) Expand to other functions as needed 2) What about types other than primitive.
- line 547: // TODO: checking 2 children is useless, compare already does that.
- line 614: // TODO: Cast Function in Calcite have a bug where it infer type on cast throws
- line 775: // TODO: add method to UDFBridge to say if it is a cast func

flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/copy/HiveParserSemanticAnalyzer.java (7 lines):
- line 512: // TODO: For now only support sampling on up to two columns
- line 958: // TODO: hive doesn't break here, so we copy what's below here
- line 1425: // TODO: check view references, too
- line 1814: // TODO: make aliases unique, otherwise needless rewriting takes place
- line 1837: // TODO: Have to put in the support for AS clause
- line 1960: // TODO: excludeCols may be possible to remove using the same technique.
- line 1980: // TODO: This is fraught with peril.

flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveParserUtils.java (7 lines):
- line 603: // TODO: type conversion
- line 621: // TODO: Requiring a GenericUDAFEvaluator means we only support hive UDAFs. Need to avoid this
- line 1067: // TODO: we need a way to tell whether a function is built-in, for now just return false so that
- line 1281: // TODO: need to support overriding hive version
- line 1381: // TODO: there's a potential problem here if some table uses external schema like
- line 1615: // TODO: Does HQL allows expressions as aggregate args or can it only be projections from
- line 1626: // TODO: does arg need type cast?

flink-sql-connector-hive-3.1.3/src/main/java/org/apache/hadoop/hive/conf/HiveConf.java (5 lines):
- line 346: * TODO: Eventually auto-populate this based on prefixes. The conf variables
- line 587: // TODO: this needs to be removed; see TestReplicationScenarios* comments.
- line 3408: // TODO: Make use of this config to configure fetch size
- line 3985: // TODO Move the following 2 properties out of Configuration to a constant.
- line 3992: "TODO doc", "llap.daemon.shuffle.dir-watcher.enabled"),

flink-sql-connector-hive-3.1.3/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java (4 lines):
- line 1036: // TODO: Fix this
- line 1492: // TODO: backward compat for Hive <= 0.12. Can be removed later.
- line 1501: // TODO: in these methods, do we really need to deepcopy?
- line 2909: // TODO: we could remember if it's unsupported and stop sending calls; although, it might

flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveParserCalcitePlanner.java (4 lines):
- line 1791: // TODO: do we need to get to child?
- line 2052: // TODO: will this also fix windowing? try
- line 2340: // TODO: this can overwrite the mapping. Should this be allowed?
- line 2595: // TODO: how to decide this?

flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/copy/HiveASTParseUtils.java (3 lines):
- line 191: // We have a nested setcolref. Process that and start from scratch TODO: use
- line 229: // TODO: We could find a different from branch for the union, that might have an
- line 259: // TODO: if a side of the union has 2 columns with the same name, noone on the higher

flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveParserTypeCheckProcFactory.java (3 lines):
- line 1056: // TODO: should check SqlOperator first and ideally shouldn't be using
- line 1195: // TODO: don't do this because older version UDF only supports 2 args
- line 1207: // TODO: don't do this because older version UDF only supports 2 args

flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/copy/HiveParserBaseSemanticAnalyzer.java (2 lines):
- line 693: // The following 2 lines are exactly what MySQL does TODO: why do we do
- line 1522: // TODO: Should we use grpbyExprNDesc.getTypeInfo()? what if expr is UDF

flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/copy/HiveParserRowResolver.java (2 lines):
- line 55: // TODO: Refactor this and do in a more object oriented manner
- line 359: // TODO: 1) How to handle collisions? 2) Should we be cloning ColumnInfo or not?

flink-connector-hive/src/main/java/org/apache/flink/table/functions/hive/HiveGenericUDAF.java (2 lines):
- line 161: // TODO: investigate whether this has impact on Flink streaming job with
- line 192: * getNewAggregationBuffer(). TODO: re-evaluate how this will fit into Flink's new type
  (see the GenericUDAFEvaluator lifecycle sketch after this inventory)

flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/copy/HiveParserSqlFunctionConverter.java (2 lines):
- line 115: // TODO: this is not valid. Function names for built-in UDFs are specified in
- line 404: // TODO: Perhaps we should do this for all functions, not just +,-

flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSink.java (2 lines):
- line 327: // TODO: may append something more meaningful than a timestamp, like query ID
- line 794: // TODO: may append something more meaningful than a timestamp, like query ID

flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveParser.java (1 line):
- line 325: // TODO:

flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSource.java (1 line):
- line 367: // TODO now we assume that one hive external table has only one storage file format

flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/read/HiveMapredSplitReader.java (1 line):
- line 69: // TODO: push projection into underlying input format that supports it

flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/copy/HiveParserTypeCheckCtx.java (1 line):
- line 45: * TODO: this currently will only be able to resolve reference to parent query's column this

flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java (1 line):
- line 1124: // TODO: handle GenericCatalogPartition

flink-connector-hive/src/main/java/org/apache/flink/table/runtime/operators/hive/script/ScriptProcessBuilder.java (1 line):
- line 68: // TODO: also should prepend the path contains files added

flink-connector-hive/src/main/java/org/apache/flink/table/endpoint/hive/util/OperationExecutorFactory.java (1 line):
- line 463: // TODO: remove until Catalog listFunctions return

flink-connector-hive/src/main/java/org/apache/flink/table/endpoint/hive/util/ThriftObjectConversions.java (1 line):
- line 320: // TODO: Support accurate start offset

flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/util/HiveTableUtil.java (1 line):
- line 557: // TODO: [FLINK-12398] Support partitioned view in catalog API

flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/write/HiveWriterFactory.java (1 line):
- line 188: // TODO: support partition properties, for now assume they're same as table properties

flink-connector-hive/src/main/java/org/apache/flink/table/endpoint/hive/HiveServer2Endpoint.java (1 line):
- line 637: // TODO: support completed time / start time

flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/MRSplitsGetter.java (1 line):
- line 131: // TODO: we should consider how to calculate the splits according to minNumSplits in the

flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/copy/HiveParserDefaultGraphWalker.java (1 line):
- line 90: // TODO: rewriting the logic of those walkers to use opQueue
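
Two entries in this inventory point at the same seam between Hive and Flink aggregation: HiveParserUtils.java line 621 (aggregates are resolved through a GenericUDAFEvaluator, so only Hive UDAFs are supported) and HiveGenericUDAF.java line 192 (how getNewAggregationBuffer() should map onto Flink's type system). For orientation, here is a minimal, self-contained sketch of the Hive-side evaluator lifecycle being bridged. It is an illustration of the Hive API only, not the connector's actual code; the class name UdafLifecycleSketch is ours, and GenericUDAFSum serves as a stand-in aggregate.

```java
import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.AggregationBuffer;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;

public class UdafLifecycleSketch {
    public static void main(String[] args) throws Exception {
        // Resolve an evaluator for SUM over BIGINT arguments.
        GenericUDAFEvaluator eval =
                new GenericUDAFSum()
                        .getEvaluator(new TypeInfo[] {TypeInfoFactory.longTypeInfo});

        // COMPLETE mode: raw input in, final result out (no partial/merge phase).
        ObjectInspector inputOI = PrimitiveObjectInspectorFactory.javaLongObjectInspector;
        eval.init(GenericUDAFEvaluator.Mode.COMPLETE, new ObjectInspector[] {inputOI});

        // Per-group mutable state; this is the buffer the TODO in
        // HiveGenericUDAF.java asks how to map onto Flink accumulators.
        AggregationBuffer buf = eval.getNewAggregationBuffer();
        eval.iterate(buf, new Object[] {1L});
        eval.iterate(buf, new Object[] {2L});

        System.out.println(eval.terminate(buf)); // prints 3 (a LongWritable)
    }
}
```

In the PARTIAL1/FINAL modes the same buffer instead flows through terminatePartial() and merge(), which is the distributed half of the lifecycle that HiveGenericUDAF has to reconcile with Flink's accumulator model.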