    This script takes a single argument: the JSON schema, a list of lists of
    4-element tuples.  For an example, see the Bigtop init-hcfs.json
    file.  The main elements of the JSON file are:

    A copy of init-hcfs.json ships with Bigtop distributions.

    dir: list of directories to create, with permissions.
    user: list of users whose home dirs should be set up, with permissions.
    root_user: the root owner of the distributed FS, used to run shell commands.
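
    For example, a minimal, hypothetical fragment of such a file might look
    like this (the exact tuple fields and their order shown here are
    assumptions; the init-hcfs.json that ships with Bigtop is the
    authoritative reference):

    {
      "dir": [
        ["/tmp",  "777", "hdfs", "hdfs"],
        ["/user", "755", "hdfs", "hdfs"]
      ],
      "user": [
        ["tom",   "700", "tom"],
        ["alice", "700", "alice"]
      ],
      "root_user": "hdfs"
    }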

    To run this script, first set up your environment using init-hcfs.json,
    which defines the properties above, and then invoke the script.

    Details below.

    SET UP YOUR CLUSTER ENVIRONMENT

    As mentioned above, the init-hcfs.json file is what determines the
    directories and users to set up, so first edit that file as needed.
    Some common modifications:


    - Usually the "root_user" on HDFS is just hdfs.  For other file systems
    the root user might be "root".
    - The default hadoop users found in the init-hcfs.json template
    ("tom", "alice", etc.) aren't necessarily present on all clusters.

    HOW TO INVOKE:

    1) Simple Groovy-based method:  Just manually construct a Hadoop classpath:

    groovy -classpath /usr/lib/hadoop/hadoop-common-2.0.6-alpha.jar
    :/usr/lib/hadoop/lib/guava-11.0.2.jar
    :/etc/hadoop/conf/:/usr/lib/hadoop/hadoop-common-2.0.6-alpha.jar
    :/usr/lib/hadoop/lib/commons-configuration-1.6.jar
    :/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/hadoop-auth.jar
    :/usr/lib/hadoop/lib/slf4j-api-1.6.1.jar
    :/usr/lib/hadoop-hdfs/hadoop-hdfs.jar
    :/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar /vagrant/init-hcfs.groovy
    /vagrant/init-hcfs.json
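
    Note: if the hadoop launcher script is on your PATH, you may be able to
    let it compute the classpath instead of listing jars by hand.  This is a
    sketch rather than a guarantee (wildcard entries emitted by
    "hadoop classpath" are not always expanded by the groovy launcher):

    groovy -classpath "$(hadoop classpath)" /vagrant/init-hcfs.groovy /vagrant/init-hcfs.json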

    2) Another method: Follow the instructions at groovy.codehaus.org/Running
    for setting up the Groovy runtime environment with CLASSPATH, and/or
    append those libraries to the shebang command as necessary, and then
    simply do:

    chmod +x init-hcfs.groovy
    ./init-hcfs.groovy init-hcfs.json

    *********************************************************************
"""

/**
 * The HCFS generic provisioning process:
 *
 *   1) Create a file system skeleton.
 *   2) Create users with home dirs in /user.
 *
 *   In the future we may add more optional steps (e.g. adding libs to
 *   the distributed cache, mounting FUSE over HDFS, etc.).
 **/
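
/**
 * For orientation only: a minimal sketch of what those two steps boil down
 * to against the Hadoop FileSystem API.  The paths, owners, and permissions
 * below are hypothetical examples, not this script's actual logic (which
 * follows and is driven entirely by the JSON input):
 *
 *   import org.apache.hadoop.conf.Configuration
 *   import org.apache.hadoop.fs.FileSystem
 *   import org.apache.hadoop.fs.Path
 *   import org.apache.hadoop.fs.permission.FsPermission
 *
 *   FileSystem fs = FileSystem.get(new Configuration())
 *
 *   // Step 1: a skeleton directory, e.g. a world-writable /tmp.
 *   Path tmp = new Path("/tmp")
 *   fs.mkdirs(tmp)
 *   fs.setPermission(tmp, new FsPermission((short) 0777))
 *
 *   // Step 2: a user home dir under /user, owned by that user.
 *   Path home = new Path("/user/tom")
 *   fs.mkdirs(home)
 *   fs.setOwner(home, "tom", "tom")
 **/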

// Sanity checks: a map from error message to a validation closure; a closure
// returning false indicates that the corresponding error applies.
def errors = [
    ("0: No init-hcfs.json input file provided!"): {
      LOG.info("Checking argument length: " + args.length + " " + args);
      return args.length == 1
    },