      Each dialect predefines the options fieldsTerminatedBy (FT),
      fieldsEnclosedBy (FE), fieldsOptionallyEnclosed (FOE), fieldsEscapedBy
      (FESC) and linesTerminatedBy (LT) in the following manner (see the
      example after the list):

      - default: no quoting, tab-separated, lf line endings. (LT=<LF>,
        FESC='\', FT=<TAB>, FE=<empty>, FOE=false)
      - csv: optionally quoted, comma-separated, crlf line endings.
        (LT=<CR><LF>, FESC='\', FT=",", FE='"', FOE=true)
      - tsv: optionally quoted, tab-separated, crlf line endings. (LT=<CR><LF>,
        FESC='\', FT=<TAB>, FE='"', FOE=true)
      - json: one JSON document per line. (LT=<LF>, FESC=<empty>, FT=<LF>,
        FE=<empty>, FOE=false)
      - csv-unix: fully quoted, comma-separated, lf line endings. (LT=<LF>,
        FESC='\', FT=",", FE='"', FOE=false)
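
      For example, a dialect can be combined with overrides of individual
      field options in a util.importTable() call; the file, schema and table
      names below are illustrative placeholders:

      util.importTable('/tmp/products.csv', {
          'schema': 'mydb',              // placeholder schema name
          'table': 'products',           // placeholder table name
          'dialect': 'csv',              // LT=<CR><LF>, FT=",", FE='"', FOE=true
          'fieldsTerminatedBy': ';'      // overrides the dialect's FT default
      })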

      If the schema is not provided, the active schema of the global session,
      if set, will be used.

      If the input values are not necessarily enclosed within fieldsEnclosedBy,
      set fieldsOptionallyEnclosed to true.

      If you specify one separator that is the same as or a prefix of another,
      LOAD DATA INFILE cannot interpret the input properly.

      Connection options set in the global session, such as compression,
      ssl-mode, etc., are used in parallel connections.

      Each parallel connection sets the following session variables:

      - SET SQL_MODE = ''; -- Clear SQL Mode
      - SET NAMES ?; -- Set to the characterSet option if provided by the
        user.
      - SET unique_checks = 0
      - SET foreign_key_checks = 0
      - SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

//@<OUT> util loadDump help
NAME
      loadDump - Loads database dumps created by MySQL Shell.

SYNTAX
      util.loadDump(url[, options])

WHERE
      url: defines the location of the dump to be loaded
      options: Dictionary with load options

DESCRIPTION
      Depending on how the dump was created, the url identifies the location
      and, in some cases, the access method for the dump, e.g. for dumps to be
      loaded using pre-authenticated requests (PARs). Allowed values,
      illustrated below:

      - /path/to/folder - to load a dump from local storage
      - /oci/bucket/path - to load a dump from OCI Object Storage using an OCI
        profile
      - /aws/bucket/path - to load a dump from AWS S3 Object Storage using the
        AWS settings stored in the credentials and config files
      - PAR to the dump manifest - to load a dump from OCI Object Storage
        created with the ociParManifest option
      - PAR to the dump location - to load a dump from OCI Object Storage using
        a single PAR
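
      For instance (the paths and bucket names below are placeholders):

      util.loadDump('/backups/sales')          // dump in local storage
      util.loadDump('/oci/mybucket/sales')     // OCI bucket, using an OCI profile
      util.loadDump('/aws/mybucket/sales')     // AWS S3 bucket, using AWS settings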

      loadDump() will load a dump from the specified path. It transparently
      handles compressed files and directly streams data when loading from
      remote storage (currently HTTP, OCI Object Storage, AWS S3 Object Storage
      and Azure Containers). If the 'waitDumpTimeout' option is set, it will
      load a dump on-the-fly, loading table data chunks as the dumper produces
      them.

      Table data will be loaded in parallel using the configured number of
      threads (4 by default). Multiple threads per table can be used if the
      dump was created with table chunking enabled. Data loads are scheduled
      across threads in a way that tries to maximize parallelism, while also
      minimizing lock contention from concurrent loads to the same table. If
      there are more tables than threads, different tables will be loaded per
      thread, larger tables first. If there are more threads than tables, then
      chunks from larger tables will be proportionally assigned more threads.

      LOAD DATA LOCAL INFILE is used to load table data and thus, the
      'local_infile' MySQL global setting must be enabled.
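
      As a rough sketch (assuming the global session has the privileges to
      change global settings, and using a placeholder dump path):

      // make sure the required global setting is enabled on the target server
      session.runSql("SET GLOBAL local_infile = 1")

      // load with more threads than the default of 4
      util.loadDump('/backups/sales', {'threads': 8})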

      Resuming

      The load command will store progress information into a file for each
      step of the loading process, including successfully completed and
      interrupted/failed ones. If that file already exists, its contents will
      be used to skip steps that have already been completed and retry those
      that failed or didn't start yet.

      When resuming, table chunks that have started being loaded but didn't
      finish are loaded again. Duplicate rows are discarded by the server.
      Tables that do not have unique keys are truncated before the load is
      resumed.

      IMPORTANT: Resuming assumes that no changes have been made to the
      partially loaded data between the failure and the retry. Resuming after
      external changes has undefined behavior and may lead to data loss.

      The progress state file has a default name of
      load-progress.<server_uuid>.json and is written to the same location as
      the dump. If 'progressFile' is specified, progress will be written to
      either a local file at the given path, or, if the HTTP(S) scheme is used,
      to a remote file using HTTP PUT requests. Setting it to '' will disable
      progress tracking and resuming.

      If the 'resetProgress' option is enabled, progress information from
      previous load attempts of the dump to the destination server is discarded
      and the load is restarted. You may use this option to retry loading the
      whole dump from the beginning. However, changes made to the database are
      not reverted, so previously loaded objects should be manually dropped
      first.
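
      For example (with placeholder paths), a failed load can be retried with
      the same progress file, or restarted from scratch:

      // re-run after a failure; steps recorded as completed in the progress
      // file are skipped
      util.loadDump('/backups/sales', {'progressFile': '/tmp/load.json'})

      // discard previous progress and load the whole dump again
      util.loadDump('/backups/sales', {'resetProgress': true})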

      Options dictionary:

      - analyzeTables: "off", "on", "histogram" (default: off) - If 'on',
        executes ANALYZE TABLE for all tables, once loaded. If set to
        'histogram', only tables that have histogram information stored in the
        dump will be analyzed. This option can be used even if all 'load'
        options are disabled.
      - backgroundThreads: int (default not set) - Number of additional threads
        to use to fetch contents of metadata and DDL files. If not set, the
        loader will use the value of the threads option in case of a local
        dump, or four times that value in case of a non-local dump.
      - characterSet: string (default taken from dump) - Overrides the
        character set to be used for loading dump data. By default, the same
        character set used for dumping will be used (utf8mb4 if not set on
        dump).
      - createInvisiblePKs: bool (default taken from dump) - Automatically
        create an invisible Primary Key for each table which does not have one.
        By default, set to true if dump was created with create_invisible_pks
        compatibility option, false otherwise. Requires server 8.0.24 or newer.
      - deferTableIndexes: "off", "fulltext", "all" (default: fulltext) - If
        "all", creation of all indexes except PRIMARY is deferred until after
        table data is loaded, which in many cases can reduce load times. If
        "fulltext", only full-text indexes will be deferred.
      - dryRun: bool (default: false) - Scans the dump and prints everything
        that would be performed, without actually doing so.
      - excludeEvents: array of strings (default not set) - Skip loading
        specified events from the dump. Strings are in format schema.event,
        quoted using backtick characters when required.
      - excludeRoutines: array of strings (default not set) - Skip loading
        specified routines from the dump. Strings are in format schema.routine,
        quoted using backtick characters when required.
      - excludeSchemas: array of strings (default not set) - Skip loading
        specified schemas from the dump.
      - excludeTables: array of strings (default not set) - Skip loading
        specified tables from the dump. Strings are in format schema.table,
        quoted using backtick characters when required.
      - excludeTriggers: array of strings (default not set) - Skip loading
        specified triggers from the dump. Strings are in format schema.table
        (all triggers from the specified table) or schema.table.trigger (the
        individual trigger), quoted using backtick characters when required.
      - excludeUsers: array of strings (default not set) - Skip loading
        specified users from the dump. Each user is in the format of
        'user_name'[@'host']. If the host is not specified, all the accounts
        with the given user name are excluded.
      - handleGrantErrors: "abort", "drop_account", "ignore" (default: abort) -
        Specifies the action to be performed in case of errors related to
        GRANT/REVOKE statements: "abort" throws an error and aborts the load,
        "drop_account" deletes the problematic account and continues, and
        "ignore" ignores the error and continues loading the account.
      - ignoreExistingObjects: bool (default false) - Load the dump even if it
        contains user accounts or DDL objects that already exist in the target
        database. If this option is set to false, any existing object results
        in an error. Setting it to true ignores existing objects, but the
        CREATE statements are still going to be executed.
      - ignoreVersion: bool (default false) - Load the dump even if the major
        version number of the server where it was created is different from
        where it will be loaded.
      - includeEvents: array of strings (default not set) - Loads only the
        specified events from the dump. Strings are in format schema.event,
        quoted using backtick characters when required. By default, all events
        are included.
      - includeRoutines: array of strings (default not set) - Loads only the
        specified routines from the dump. Strings are in format schema.routine,
        quoted using backtick characters when required. By default, all
        routines are included.
      - includeSchemas: array of strings (default not set) - Loads only the
        specified schemas from the dump. By default, all schemas are included.
      - includeTables: array of strings (default not set) - Loads only the
        specified tables from the dump. Strings are in format schema.table,
        quoted using backtick characters when required. By default, all tables
        from all schemas are included.
      - includeTriggers: array of strings (default not set) - Loads only the
        specified triggers from the dump. Strings are in format schema.table
        (all triggers from the specified table) or schema.table.trigger (the
        individual trigger), quoted using backtick characters when required. By
        default, all triggers are included.
      - includeUsers: array of strings (default not set) - Load only the
        specified users from the dump. Each user is in the format of
        'user_name'[@'host']. If the host is not specified, all the accounts
        with the given user name are included. By default, all users are
        included.
      - loadData: bool (default: true) - Loads table data from the dump.
      - loadDdl: bool (default: true) - Executes DDL/SQL scripts in the dump.
      - loadIndexes: bool (default: true) - Use together with deferTableIndexes
        to control whether secondary indexes should be recreated at the end of
        the load. Useful when loading DDL and data separately.
      - loadUsers: bool (default: false) - Executes SQL scripts for user
        accounts, roles and grants contained in the dump. Note: statements for
        the current user will be skipped.
      - maxBytesPerTransaction: string (default taken from dump) - Specifies
        the maximum number of bytes that can be loaded from a dump data file
        per single LOAD DATA statement. Supports unit suffixes: k (kilobytes),
        M (Megabytes), G (Gigabytes). Minimum value: 4096. If this option is
        not specified explicitly, the value of the bytesPerChunk dump option is
        used, but only for files with a data size greater than 1.5 *
        bytesPerChunk.
      - progressFile: path (default: load-progress.<server_uuid>.json) - Stores
        load progress information in the given local file path or, if an
        HTTP(S) URL is given, in a remote file via HTTP PUT requests.
      - resetProgress: bool (default: false) - Discards progress information of
        previous load attempts to the destination server and loads the whole
        dump again.
      - schema: string (default not set) - Load the dump into the given schema.
        This option can only be used when loading just one schema (either only
        one schema was dumped, or the schema filters result in only one
        schema).
      - sessionInitSql: list of strings (default: []) - Execute the given list
        of SQL statements in each session that is about to load data.
      - showMetadata: bool (default: false) - Displays the metadata information
        stored in the dump files, i.e. binary log file name and position.
      - showProgress: bool (default: true if stdout is a tty, false otherwise)
        - Enable or disable import progress information.
      - skipBinlog: bool (default: false) - Disables the binary log for the
        MySQL sessions used by the loader (set sql_log_bin=0).
      - threads: int (default: 4) - Number of threads to use to import table
        data.
      - updateGtidSet: "off", "replace", "append" (default: off) - If set to a
        value other than 'off', updates GTID_PURGED by either replacing its
        contents or appending to it the GTID set present in the dump.
      - waitDumpTimeout: float (default: 0) - Loads a dump while it's still
        being created. Once all uploaded tables are processed, the command
        waits for more data until either the dump is marked as completed or
        the given timeout (in seconds) passes. A value <= 0 disables waiting.
      - osBucketName: string (default: not set) - Use specified OCI bucket for
        the location of the dump.
      - osNamespace: string (default: not set) - Specifies the namespace where
        the bucket is located; if not given, it will be obtained using the
        tenancy ID in the OCI configuration.
      - ociConfigFile: string (default: not set) - Use the specified OCI
        configuration file instead of the one at the default location.
      - ociProfile: string (default: not set) - Use the specified OCI profile
        instead of the default one.
      - s3BucketName: string (default: not set) - Name of the AWS S3 bucket to
        use. The bucket must already exist.
      - s3CredentialsFile: string (default: not set) - Use the specified AWS
        credentials file.
      - s3ConfigFile: string (default: not set) - Use the specified AWS config
        file.
      - s3Profile: string (default: not set) - Use the specified AWS profile.
      - s3Region: string (default: not set) - Use the specified AWS region.
      - s3EndpointOverride: string (default: not set) - Use the specified AWS
        S3 API endpoint instead of the default one.
      - azureContainerName: string (default: not set) - Name of the Azure
        container to use. The container must already exist.
      - azureConfigFile: string (default: not set) - Use the specified Azure
        configuration file instead of the one at the default location.
      - azureStorageAccount: string (default: not set) - The account to be used
        for the operation.
      - azureStorageSasToken: string (default: not set) - Azure Shared Access
        Signature (SAS) token, to be used for the authentication of the
        operation, instead of a key.

      Connection options set in the global session, such as compression,
      ssl-mode, etc., are inherited by load sessions.

      Examples:

      util.loadDump('sakila_dump')

      util.loadDump('mysql/sales', {
          'osBucketName': 'mybucket',    // OCI Object Storage bucket
          'waitDumpTimeout': 1800        // wait for new data for up to 30mins
      })
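
      A further illustrative call (the dump path and schema name are
      placeholders) combining several of the options documented above:

      util.loadDump('/backups/sales', {
          'includeSchemas': ['sales'],   // load a single schema from the dump
          'deferTableIndexes': 'all',    // create secondary indexes after the data
          'skipBinlog': true,            // do not write the load to the binary log
          'threads': 8                   // more parallel load threads than the default
      })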