connector/src/main/scala/com/microsoft/kusto/spark/datasink/KustoWriter.scala (4 lines):
- line 91: // TODO put data.sparkSession.sparkContext.appName in client app name
- line 358: // TODO - use a pool of two streams?
- line 394: // TODO Is it really better? (the other option is to copy the data from the stream to a new stream, which I try to avoid)
- line 417: // TODO Can I simply use csvWriter.getCounter without flush? (we count all bytes and no transformation is done) - see the counting-writer sketch after this list

connector/src/main/scala/com/microsoft/kusto/spark/datasink/FinalizeHelper.scala (3 lines):
- line 257: // TODO: should we throw?
- line 264: // TODO error code should be added to java client
- line 271: // TODO - think about this logic and other cases that should not throw all (maybe everything that starts with skip? this actually

connector/src/main/scala/com/microsoft/kusto/spark/utils/KustoConstants.scala (2 lines):
- line 38: // TODO - make it configurable; the user can then fine tune it using the SDK logs reporting fallback to queue (we should
- line 41: // TODO when this is configurable we need 3 tests: 1) max = 10 < one row size, 2) row size < max = 20 < size(all the data), 3) size(data) < max = 4 MB (a configurable-threshold sketch follows this list)

connector/src/main/scala/com/microsoft/kusto/spark/datasource/TransientStorageParameters.scala (2 lines):
- line 45: // TODO next breaking change - change to Option[String]
- line 108: // TODO next breaking change - change to "authMethod:"

connector/src/main/scala/com/microsoft/kusto/spark/utils/ExtendedKustoClient.scala (2 lines):
- line 296: // TODO: use count over the show operations
- line 440: // TODO handle specific errors

connector/src/main/scala/com/microsoft/kusto/spark/datasource/KustoRelation.scala (1 line):
- line 258: // TODO revisit this block and refactor

connector/src/main/scala/com/microsoft/kusto/spark/utils/KustoClientCache.scala (1 line):
- line 18: // TODO Clear cache after a while so that ingestClient can be closed (see the TTL-cache sketch after this list)

connector/src/main/scala/com/microsoft/kusto/spark/utils/ContainerProvider.scala (1 line):
- line 53: // TODO the only difference between this and the one in ExtendedKustoClient is the maxAttempts. Should we refactor? (see the retry-helper sketch after this list)

connector/src/main/scala/com/microsoft/kusto/spark/datasource/KustoReader.scala (1 line):
- line 70: TODO - add test

connector/src/main/scala/com/microsoft/kusto/spark/common/KustoOptions.scala (1 line):
- line 9: // TODO validate for each option given by user that it exists in the set (see the option-validation sketch after this list)

connector/src/main/scala/com/microsoft/kusto/spark/utils/KustoDataSourceUtils.scala (1 line):
- line 389: // TODO get defaults from KustoWriter()
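
The KustoWriter.scala line 417 note asks whether the byte count could come from csvWriter.getCounter instead of flushing the stream and measuring its size. A minimal sketch of that idea, assuming a wrapper Writer that tallies characters as they pass through; this is generic illustration code, not the connector's own csvWriter:

```scala
import java.io.Writer

// Sketch: count characters as they are written, so the size estimate is
// available without flushing the downstream stream just to measure it.
class CountingWriter(out: Writer) extends Writer {
  private var counter: Long = 0L
  def getCounter: Long = counter // no flush needed; every write is tallied below

  override def write(cbuf: Array[Char], off: Int, len: Int): Unit = {
    out.write(cbuf, off, len)
    counter += len
  }
  override def flush(): Unit = out.flush()
  override def close(): Unit = out.close()
}
```

For a single-byte encoding the character tally equals the byte count, which matches the comment that "we count all bytes and no transformation is done"; for other encodings the real implementation would have to count encoded bytes instead.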
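
For the KustoConstants.scala note about making the size limit configurable, one hedged approach is to let a write option override the hard-coded default. The 4 MB figure is taken from the test note on line 41; the option name and the resolve helper are hypothetical, not existing connector options:

```scala
import scala.util.Try

object IngestionSizeLimit {
  // Default taken from the "max = 4mb" test note; the real constant lives in KustoConstants.
  val DefaultMaxStreamingBytes: Long = 4L * 1024 * 1024

  // Hypothetical option name, for illustration only.
  val MaxStreamingBytesOption: String = "maxStreamingIngestionSizeInBytes"

  // Use the user's override when present and valid, otherwise fall back to the default.
  def resolve(options: Map[String, String]): Long =
    options
      .get(MaxStreamingBytesOption)
      .flatMap(v => Try(v.toLong).toOption)
      .filter(_ > 0)
      .getOrElse(DefaultMaxStreamingBytes)
}
```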
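
The KustoClientCache.scala note wants cached clients evicted after a while so the underlying ingest client can be closed. A generic sketch of time-based eviction over any Closeable value, assuming nothing about the connector's actual cache structure:

```scala
import java.io.Closeable
import java.util.concurrent.ConcurrentHashMap

// Generic sketch: entries older than the TTL are closed and dropped on access.
class TtlCache[K, V <: Closeable](ttlMillis: Long) {
  private case class Entry(value: V, createdAt: Long)
  private val entries = new ConcurrentHashMap[K, Entry]()

  def getOrElseUpdate(key: K)(create: => V): V = {
    evictExpired()
    entries.computeIfAbsent(key, _ => Entry(create, System.currentTimeMillis())).value
  }

  private def evictExpired(): Unit = {
    val now = System.currentTimeMillis()
    val it = entries.entrySet().iterator()
    while (it.hasNext) {
      val e = it.next()
      if (now - e.getValue.createdAt > ttlMillis) {
        // Remove only if the entry is still the one we inspected, then release it.
        if (entries.remove(e.getKey, e.getValue)) e.getValue.value.close()
      }
    }
  }
}
```

A real implementation would also need to make sure an evicted client is not still in use by an in-flight ingestion before closing it.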
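
ContainerProvider.scala line 53 observes that its retry block differs from the one in ExtendedKustoClient only in maxAttempts. A hedged sketch of that refactor: one shared helper parameterized by the attempt count and delay (the helper name and the call-site snippets are illustrative, not existing connector code):

```scala
import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

object RetrySupport {
  // Shared retry loop; each call site keeps its own maxAttempts and delay.
  @tailrec
  def retry[T](maxAttempts: Int, delayMillis: Long)(op: => T): T =
    Try(op) match {
      case Success(result) => result
      case Failure(_) if maxAttempts > 1 =>
        Thread.sleep(delayMillis)
        retry(maxAttempts - 1, delayMillis)(op)
      case Failure(e) => throw e
    }
}

// Illustrative call sites (method names are placeholders):
// ContainerProvider:   RetrySupport.retry(maxAttempts = 3, delayMillis = 1000)(fetchContainersFromService())
// ExtendedKustoClient: RetrySupport.retry(maxAttempts = 2, delayMillis = 1000)(runManagementCommand())
```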
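
The KustoOptions.scala note asks to validate that every user-supplied option exists in the known set. A minimal sketch, assuming the known names are gathered from the option constants the connector already defines; the set shown here is a placeholder and validateOptionNames is a hypothetical helper:

```scala
object KustoOptionValidation {
  // Placeholder set; the real list would be gathered from the constants in
  // KustoOptions / KustoSourceOptions / KustoSinkOptions.
  private val knownOptions: Set[String] =
    Set("kustoCluster", "kustoDatabase", "kustoTable")

  // Fail fast on option keys the connector does not recognize.
  def validateOptionNames(userOptions: Map[String, String]): Unit = {
    val unknown = userOptions.keySet.diff(knownOptions)
    if (unknown.nonEmpty) {
      throw new IllegalArgumentException(
        s"Unknown Kusto option(s): ${unknown.mkString(", ")}. " +
          s"Supported options: ${knownOptions.toSeq.sorted.mkString(", ")}")
    }
  }
}
```

A real implementation would likely compare keys case-insensitively, since Spark treats data source options as case-insensitive.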