importJson()

in unittest/scripts/auto/js_shell/validation/util_help_norecord.js [2175:2497]


      This function reads standard JSON documents from a file, however, it
      also supports converting BSON Data Types represented using the MongoDB
      Extended Json (strict mode) into MySQL values.

      The options dictionary supports the following options:

      - schema: string - name of target schema.
      - collection: string - name of collection where the data will be
        imported.
      - table: string - name of table where the data will be imported.
      - tableColumn: string (default: "doc") - name of column in target table
        where the imported JSON documents will be stored.
      - convertBsonTypes: bool (default: false) - enables the BSON data type
        conversion.
      - convertBsonOid: bool (default: the value of convertBsonTypes) - enables
        conversion of the BSON ObjectId values.
      - extractOidTime: string (default: empty) - creates a new field based on
        the ObjectID timestamp. Only valid if convertBsonOid is enabled.

      The following options are valid only when convertBsonTypes is enabled.
      They are all boolean flags. ignoreRegexOptions is enabled by default;
      the rest are disabled by default.

      - ignoreDate: disables conversion of BSON Date values.
      - ignoreTimestamp: disables conversion of BSON Timestamp values.
      - ignoreRegex: disables conversion of BSON Regex values.
      - ignoreBinary: disables conversion of BSON BinData values.
      - decimalAsDouble: causes BSON Decimal values to be imported as double
        values.
      - ignoreRegexOptions: causes regex options to be ignored when processing
        a Regex BSON value. This option is only valid if ignoreRegex is
        disabled.
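
      For example, a hypothetical call that enables BSON type conversion but
      leaves BinData values untouched might look like this (the file path,
      schema and collection names are illustrative):

      util.importJson("/tmp/sales.json", {
        schema: "test",
        collection: "sales",
        convertBsonTypes: true,
        ignoreBinary: true
      });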

      If the schema is not provided, the active schema of the global session,
      if set, will be used.

      The collection and the table options cannot be combined. If neither is
      provided, the basename of the file without the extension will be used as
      the target collection name.

      If the target collection or table does not exist, it is created;
      otherwise the data is inserted into the existing collection or table.

      The tableColumn option implies the use of the table option and cannot be
      combined with the collection option.
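
      For example, to load documents into a relational table instead of a
      collection, storing them in a column named "data" (the names below are
      illustrative):

      util.importJson("/tmp/products.json", {
        schema: "shop",
        table: "products",
        tableColumn: "data"
      });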

      BSON Data Type Processing

      If only convertBsonOid is enabled, no conversion will be done on the rest
      of the BSON Data Types.

      To use extractOidTime, it should be set to a name that will be used to
      insert an additional field into the main document. The value of the new
      field will be the timestamp obtained from the ObjectID value. Note that
      this is done only for an ObjectID value associated with the '_id' field
      of the main document.
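
      For example, assuming documents whose '_id' fields hold ObjectID values,
      the following hypothetical call adds an "idTime" field carrying the
      timestamp extracted from each '_id':

      util.importJson("/tmp/events.json", {
        schema: "test",
        collection: "events",
        convertBsonOid: true,
        extractOidTime: "idTime"
      });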

      NumberLong and NumberInt values will be converted to integer values.

      NumberDecimal values are imported as strings, unless decimalAsDouble is
      enabled.

      Regex values will be converted to strings containing the regular
      expression. The regular expression options are ignored unless
      ignoreRegexOptions is disabled; in that case the regular expression will
      be converted to the form: /<regex>/<options>.
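
      To illustrate (the field names and values below are made up), an input
      document using Extended Json (strict mode) such as:

      { "_id": {"$oid": "5bfe05f6a3a151b9e8e4e849"},
        "count": {"$numberLong": "12345"},
        "price": {"$numberDecimal": "9.99"},
        "name": {"$regex": "^a.*", "$options": "i"} }

      would, with convertBsonTypes enabled and ignoreRegexOptions disabled, be
      stored approximately as:

      { "_id": "5bfe05f6a3a151b9e8e4e849",
        "count": 12345,
        "price": "9.99",
        "name": "/^a.*/i" }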

EXCEPTIONS
      Throws ArgumentError when:

      - Option name is invalid
      - Required options are not set and cannot be deduced
      - Shell is not connected to MySQL Server using X Protocol
      - Schema is not provided and there is no active schema on the global
        session
      - Both collection and table are specified

      Throws LogicError when:

      - Path to the JSON document does not exist or is not a file

      Throws RuntimeError when:

      - The schema does not exist
      - MySQL Server returns an error

      Throws InvalidJson when:

      - JSON document is ill-formed

//@<OUT> util importTable help
NAME
      importTable - Import table dump stored in files to target table using
                    LOAD DATA LOCAL INFILE calls in parallel connections.

SYNTAX
      util.importTable(files[, options])

WHERE
      files: Path or list of paths to files with user data. Path name can
             contain a glob pattern with wildcard '*' and/or '?'. All selected
             files must be chunks of the same target table.
      options: Dictionary with import options

DESCRIPTION
      The scheme part of a filename contains information about the transport
      backend. Supported transport backends are: file://, http://, https://.
      If the scheme part of a filename is omitted, the file:// transport
      backend will be chosen.

      Supported filename formats:

      - /path/to/file - Path to a locally or remotely (e.g. in OCI Object
        Storage) accessible file or directory
      - file:///path/to/file - Path to a locally accessible file or directory
      - http[s]://host.domain[:port]/path/to/file - Location of a remote file
        accessible through HTTP(s) (importTable() only)
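
      For example, a hypothetical call loading a set of local chunk files into
      a single table might look like this:

      util.importTable(["/data/chunks/city_*.tsv"], {
        schema: "world",
        table: "city",
        threads: 4
      });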

      If the osBucketName option is given, the path argument must specify a
      plain path in that OCI (Oracle Cloud Infrastructure) Object Storage
      bucket.

      The OCI configuration profile is located through the oci.profile and
      oci.configFile global shell options and can be overridden with ociProfile
      and ociConfigFile, respectively.

      If the s3BucketName option is given, the path argument must specify a
      plain path in that AWS S3 bucket.

      If the azureContainerName option is given, the path argument must specify
      a plain path in that Azure container.

      Options dictionary:

      - schema: string (default: current shell active schema) - Name of target
        schema
      - table: string (default: filename without extension) - Name of target
        table
      - columns: array of strings and/or integers (default: empty array) - This
        option takes an array of column names as its value. The order of the
        column names indicates how to match data file columns with table
        columns. Use a non-negative integer `i` to capture a column value into
        the user variable @i. With user variables, the decodeColumns option
        enables you to perform preprocessing transformations on their values
        before assigning the result to columns.
      - fieldsTerminatedBy: string (default: "\t") - This option has the same
        meaning as the corresponding clause for LOAD DATA INFILE.
      - fieldsEnclosedBy: char (default: '') - This option has the same meaning
        as the corresponding clause for LOAD DATA INFILE.
      - fieldsEscapedBy: char (default: '\') - This option has the same meaning
        as the corresponding clause for LOAD DATA INFILE.
      - fieldsOptionallyEnclosed: bool (default: false) - Set to true if the
        input values are not necessarily enclosed within the quotation marks
        specified by the fieldsEnclosedBy option. Set to false if all fields
        are quoted by the character specified by the fieldsEnclosedBy option.
      - linesTerminatedBy: string (default: "\n") - This option has the same
        meaning as the corresponding clause for LOAD DATA INFILE. For example,
        to import Windows files that have lines terminated with carriage
        return/linefeed pairs, use linesTerminatedBy="\r\n". (You might have
        to double the backslashes, depending on the escaping conventions of
        your command interpreter.) See Section 13.2.7, "LOAD DATA INFILE
        Syntax".
      - replaceDuplicates: bool (default: false) - If true, input rows that
        have the same value for a primary key or unique index as an existing
        row will be replaced, otherwise input rows will be skipped.
      - threads: int (default: 8) - Use N threads to send file chunks to the
        server.
      - bytesPerChunk: string (minimum: "131072", default: "50M") - Send
        bytesPerChunk (+ bytes to the end of the row) in a single LOAD DATA
        call. Unit suffixes: k for Kilobytes (n * 1'000 bytes), M for
        Megabytes (n * 1'000'000 bytes), G for Gigabytes (n * 1'000'000'000
        bytes). For example, bytesPerChunk="2k" sends data chunks of about 2
        kilobytes to the MySQL server. Not available when importing multiple
        files.
      - maxBytesPerTransaction: string (default: empty) - Specifies the maximum
        number of bytes that can be loaded from a dump data file per single
        LOAD DATA statement. If the content size of a data file is bigger than
        this value, multiple LOAD DATA statements will be executed per file.
        If this option is not specified explicitly, dump data file
        sub-chunking will be disabled. Use this option with a value less than
        or equal to the global variable 'max_binlog_cache_size' to mitigate
        "MySQL Error 1197 (HY000): Multi-statement transaction required more
        than 'max_binlog_cache_size' bytes of storage". Unit suffixes: k
        (Kilobytes), M (Megabytes), G (Gigabytes). Minimum value: 4096.
      - maxRate: string (default: "0") - Limit data send throughput to maxRate
        in bytes per second per thread. maxRate="0" means no limit. Unit
        suffixes: k for Kilobytes (n * 1'000 bytes), M for Megabytes (n *
        1'000'000 bytes), G for Gigabytes (n * 1'000'000'000 bytes). For
        example, maxRate="2k" limits throughput to 2 kilobytes per second.
      - showProgress: bool (default: true if stdout is a tty, false otherwise)
        - Enable or disable import progress information.
      - skipRows: int (default: 0) - Skip the first N physical lines of each
        imported file. You can use this option to skip an initial header line
        containing column names.
      - dialect: enum (default: "default") - Set up the fields and lines
        options to match a specific data file format. Can be used as a base
        dialect and customized with the fieldsTerminatedBy, fieldsEnclosedBy,
        fieldsOptionallyEnclosed, fieldsEscapedBy and linesTerminatedBy
        options. Must be one of the following values: default, csv, tsv, json
        or csv-unix.
      - decodeColumns: map (default: not set) - A map between column names and
        SQL expressions to be applied to the loaded data. A column value
        captured in 'columns' by an integer is available as the user variable
        '@i', where `i` is that integer. Requires 'columns' to be set (see the
        example after this list).
      - characterSet: string (default: not set) - Interpret the information in
        the input file using this character set encoding. characterSet set to
        "binary" specifies "no conversion". If not set, the server will use the
        character set indicated by the character_set_database system variable
        to interpret the information in the file.
      - sessionInitSql: list of strings (default: []) - Execute the given list
        of SQL statements in each session that is about to load data.
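
      For example, the following hypothetical call captures the second and
      third file columns into the user variables @1 and @2, then combines them
      into a single table column using decodeColumns:

      util.importTable("/data/people.csv", {
        schema: "test",
        table: "people",
        dialect: "csv-unix",
        skipRows: 1,
        columns: ["id", 1, 2],
        decodeColumns: {
          "full_name": "CONCAT(@1, ' ', @2)"
        }
      });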

      OCI Object Storage Options

      - osBucketName: string (default: not set) - Name of the OCI Object
        Storage bucket to use. The bucket must already exist.
      - osNamespace: string (default: not set) - Specifies the namespace where
        the bucket is located; if not given, it will be obtained using the
        tenancy id from the OCI configuration.
      - ociConfigFile: string (default: not set) - Override oci.configFile
        shell option, to specify the path to the OCI configuration file.
      - ociProfile: string (default: not set) - Override oci.profile shell
        option, to specify the name of the OCI profile to use.
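
      For example, a hypothetical import from an OCI Object Storage bucket
      (the bucket and namespace names are placeholders):

      util.importTable("sakila/actor.tsv", {
        osBucketName: "mybucket",
        osNamespace: "mynamespace",
        schema: "sakila",
        table: "actor"
      });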

      AWS S3 Object Storage Options

      - s3BucketName: string (default: not set) - Name of the AWS S3 bucket to
        use. The bucket must already exist.
      - s3CredentialsFile: string (default: not set) - Use the specified AWS
        credentials file.
      - s3ConfigFile: string (default: not set) - Use the specified AWS config
        file.
      - s3Profile: string (default: not set) - Use the specified AWS profile.
      - s3Region: string (default: not set) - Use the specified AWS region.
      - s3EndpointOverride: string (default: not set) - Use the specified AWS
        S3 API endpoint instead of the default one.

      If the s3BucketName option is used, the data is read from the specified
      AWS S3 bucket. The connection is established using default local AWS
      configuration paths and profiles, unless overridden. The directory
      structure is simulated within the object name.

      The s3CredentialsFile, s3ConfigFile, s3Profile, s3Region and
      s3EndpointOverride options cannot be used if the s3BucketName option is
      not set or set to an empty string.
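
      For example, a hypothetical import from an AWS S3 bucket using a named
      profile (all names are placeholders):

      util.importTable("dump/data.csv", {
        s3BucketName: "mybucket",
        s3Profile: "staging",
        schema: "test",
        table: "data"
      });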

      Handling of the AWS settings

      The AWS options are evaluated in order of precedence; the first
      available value is used.

      1. Name of the AWS profile:

      - the s3Profile option
      - the AWS_PROFILE environment variable
      - the AWS_DEFAULT_PROFILE environment variable
      - the default value of default

      2. Location of the credentials file:

      - the s3CredentialsFile option
      - the AWS_SHARED_CREDENTIALS_FILE environment variable
      - the default value of ~/.aws/credentials

      3. Location of the config file:

      - the s3ConfigFile option
      - the AWS_CONFIG_FILE environment variable
      - the default value of ~/.aws/config

      4. Name of the AWS region:

      - the s3Region option
      - the AWS_REGION environment variable
      - the AWS_DEFAULT_REGION environment variable
      - the region setting from the config file for the specified profile
      - the default value of us-east-1

      5. URI of the AWS S3 API endpoint:

      - the s3EndpointOverride option
      - the default value of https://<s3BucketName>.s3.<region>.amazonaws.com
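
      For example, with s3BucketName set to "mybucket" and the region resolved
      to "us-east-1", the default endpoint is
      https://mybucket.s3.us-east-1.amazonaws.com.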

      The AWS credentials are fetched from the following providers, in the
      order of precedence:

      1. Environment variables:

      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
      - AWS_SESSION_TOKEN

      2. Settings from the credentials file for the specified profile:

      - aws_access_key_id
      - aws_secret_access_key
      - aws_session_token

      3. Process specified by the credential_process setting from the config
         file for the specified profile:

      - AccessKeyId
      - SecretAccessKey
      - SessionToken

      4. Settings from the config file for the specified profile:

      - aws_access_key_id
      - aws_secret_access_key
      - aws_session_token

      The items specified above correspond to the following credentials:

      - the AWS access key
      - the secret key associated with the AWS access key
      - the AWS session token for the temporary security credentials

      The process/command line specified by the credential_process setting must
      write a JSON object to the standard output in the following form:
      {
        "Version": 1,
        "AccessKeyId": "AWS access key",
        "SecretAccessKey": "secret key associated with the AWS access key",
        "SessionToken": "temporary AWS session token, optional",
        "Expiration": "RFC3339 timestamp, optional"
      }
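
      As a minimal sketch, the credential_process command could be a Node.js
      script that prints such an object; the values here are placeholders, and
      a real helper would obtain them from a secure source:

      #!/usr/bin/env node
      // Emit static placeholder credentials in the JSON form expected by the
      // credential_process protocol (SessionToken and Expiration are optional
      // and omitted here).
      console.log(JSON.stringify({
        Version: 1,
        AccessKeyId: "AKIAEXAMPLE",
        SecretAccessKey: "placeholder-secret-key"
      }));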