This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/fluo-uno.git


The following commit(s) were added to refs/heads/master by this push:
     new 2b15723  Separated setup into install & run command (#205)
2b15723 is described below

commit 2b1572322d0955e8ae39818ad8b7c27c37fb7057
Author: Mike Walch <[email protected]>
AuthorDate: Thu Oct 18 16:22:11 2018 -0400

    Separated setup into install & run command (#205)
    
    * Created plugin system
    * Updated source code headers for Apache
---
 AUTHORS                                            |   5 -
 NOTICE                                             |   5 +
 README.md                                          |  47 +++---
 bin/impl/fetch.sh                                  | 100 +++---------
 bin/impl/install.sh                                |  46 ++++++
 .../{setup-accumulo.sh => install/accumulo.sh}     |  51 ++----
 bin/impl/install/fluo-yarn.sh                      |  55 +++++++
 bin/impl/{setup-fluo.sh => install/fluo.sh}        |  15 +-
 bin/impl/{setup-hadoop.sh => install/hadoop.sh}    |  29 +---
 .../{setup-zookeeper.sh => install/zookeeper.sh}   |  21 ++-
 bin/impl/kill.sh                                   |  40 ++++-
 bin/impl/load-env.sh                               |  13 +-
 bin/impl/print-env.sh                              |  13 +-
 bin/impl/run.sh                                    |  49 ++++++
 bin/impl/run/accumulo.sh                           |  40 +++++
 .../spark-env.sh => bin/impl/run/fluo-yarn.sh      |  18 ++-
 conf/spark/spark-env.sh => bin/impl/run/fluo.sh    |  21 ++-
 conf/spark/spark-env.sh => bin/impl/run/hadoop.sh  |  31 +++-
 .../spark-env.sh => bin/impl/run/zookeeper.sh      |  24 ++-
 bin/impl/setup-fluo-yarn.sh                        |  58 -------
 bin/impl/setup-metrics.sh                          | 132 ----------------
 bin/impl/setup.sh                                  |  50 ++++++
 bin/impl/start.sh                                  |  29 +---
 bin/impl/stop.sh                                   |  20 +--
 bin/impl/util.sh                                   | 103 ++++++++++--
 bin/impl/version.sh                                |  13 +-
 bin/uno                                            | 176 ++++++++-------------
 conf/uno.conf                                      |  37 +++--
 .../spark-env.sh => plugins/accumulo-encryption.sh |  21 ++-
 plugins/influx-metrics.sh                          | 164 +++++++++++++++++++
 .../influx-metrics}/accumulo-dashboard.json        |   0
 .../grafana => plugins/influx-metrics}/custom.ini  |   0
 .../influx-metrics}/influxdb.conf                  |   0
 bin/impl/setup-spark.sh => plugins/spark.sh        |  27 +++-
 {conf => plugins}/spark/spark-defaults.conf        |   0
 {conf => plugins}/spark/spark-env.sh               |   0
 36 files changed, 834 insertions(+), 619 deletions(-)

diff --git a/AUTHORS b/AUTHORS
deleted file mode 100644
index d413329..0000000
--- a/AUTHORS
+++ /dev/null
@@ -1,5 +0,0 @@
-AUTHORS
--------
-
-Keith Turner - Peterson Technologies
-Mike Walch - Peterson Technologies
diff --git a/NOTICE b/NOTICE
new file mode 100644
index 0000000..5aa6d46
--- /dev/null
+++ b/NOTICE
@@ -0,0 +1,5 @@
+Apache Fluo
+Copyright 2017 The Apache Software Foundation.
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
diff --git a/README.md b/README.md
index 0315142..d829dc7 100644
--- a/README.md
+++ b/README.md
@@ -109,17 +109,16 @@ With `uno` script set up, you can now use it to download, configure, and run Flu
 The `uno fetch <component>` command fetches the tarballs of a component and its dependencies for later
 use by the `setup` command. By default, the `fetch` command downloads tarballs but you can configure it
 to build Fluo or Accumulo from a local git repo by setting `FLUO_REPO` or `ACCUMULO_REPO` in `uno.conf`.
-
-If `uno fetch all` is run, all possible components will be either downloaded or built. If you
-would like to only fetch certain components, run `uno fetch` to see a list of possible components.
+Run `uno fetch` to see a list of possible components.
 
 After the `fetch` command is run for the first time, it only needs to run again if you want to
 upgrade components and need to download/build the latest version.
 
 ## Setup command
 
-The `uno setup` command will install the downloaded tarballs to the directory set by `$INSTALL` in your
-`uno.conf` and run you local development cluster. The command can be run in several different ways:
+The `uno setup` command combines `uno install` and `uno run` into one command.  It will install the
+downloaded tarballs to the directory set by `$INSTALL` in your `uno.conf` and run your local development
+cluster. The command can be run in several different ways:
 
 1. Sets up Apache Accumulo and its dependencies of Hadoop, ZooKeeper. This starts all processes and
    will wipe Accumulo/Hadoop if this command was run previously.
@@ -137,34 +136,36 @@ The `uno setup` command will install the downloaded tarballs to the directory se
         uno setup fluo --no-deps
         uno setup accumulo --no-deps
 
-4. Sets up metrics service (InfluxDB + Grafana).
+You can confirm that everything started by checking the monitoring pages below:
 
-        uno setup metrics
+ * [Hadoop NameNode](http://localhost:50070/)
+ * [Hadoop ResourceManager](http://localhost:8088/)
+ * [Accumulo Monitor](http://localhost:9995/)
 
-5. Sets up Apache Spark and starts Spark's History Server.
+If you run some tests and then want a fresh cluster, run the `setup` command again which will
+kill all running processes, clear any data and logs, and restart your cluster.
 
-        uno setup spark
+## Plugins
 
-6. Sets up all components (Fluo, Accumulo, Hadoop, ZooKeeper, Spark, metrics service).
+Uno is focused on running Accumulo & Fluo.  Optional features and services can be run using plugins.
+These plugins can optionally execute after the `install` or `run` commands.  They are configured by
+setting `POST_INSTALL_PLUGINS` and `POST_RUN_PLUGINS` in `uno.conf`.
 
-        uno setup all
+### Post install plugins
 
-You can confirm that everything started by checking the monitoring pages below:
+These plugins can optionally execute after the `install` command for Accumulo and Fluo:
 
- * [Hadoop NameNode](http://localhost:50070/)
- * [Hadoop ResourceManager](http://localhost:8088/)
- * [Accumulo Monitor](http://localhost:9995/)
- * [Spark HistoryServer](http://localhost:18080/)
- * [Grafana](http://localhost:3000/) (optional)
- * [InfluxDB Admin](http://localhost:8083/) (optional)
+* `accumulo-encryption` - Turns on Accumulo encryption
+* `influx-metrics` - Install and run metrics service using InfluxDB & Grafana
+  * [Grafana](http://localhost:3000/)
+  * [InfluxDB Admin](http://localhost:8083/)
 
-You can verify that Fluo was installed correctly by running the `fluo` command which you can use
-to administer Fluo:
+### Post run plugins
 
-    ./install/fluo-1.0.0-beta-1/bin/fluo
+These plugins can optionally execute after the `run` command for Accumulo and Fluo:
 
-If you run some tests and then want a fresh cluster, run the `setup` command again which will
-kill all running processes, clear any data and logs, and restart your cluster.
+* `spark` - Install Apache Spark and start Spark's History server
+  * [Spark HistoryServer](http://localhost:18080/)
 
 ## Wipe command
 
diff --git a/bin/impl/fetch.sh b/bin/impl/fetch.sh
index d5ba55a..006567e 100755
--- a/bin/impl/fetch.sh
+++ b/bin/impl/fetch.sh
@@ -1,12 +1,13 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -14,35 +15,23 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-source "$UNO_HOME"/bin/impl/util.sh
-
-function download_other() {
-  local url_prefix=$1
-  local tarball=$2
-  local expected_hash=$3
-
-  wget -c -P "$DOWNLOADS" "$url_prefix/$tarball"
-  verify_exist_hash "$tarball" "$expected_hash"
-  echo "$tarball exists in downloads/ and matches expected checksum ($expected_hash)"
-}
-
-function download_apache() {
-  local url_prefix=$1
-  local tarball=$2
-  local expected_hash=$3
 
-  if [ -n "$apache_mirror" ]; then
-    wget -c -P "$DOWNLOADS" "$apache_mirror/$url_prefix/$tarball"
-  fi 
-
-  if [[ ! -f "$DOWNLOADS/$tarball" ]]; then
-    echo "Downloading $tarball from Apache archive"
-    wget -c -P "$DOWNLOADS" "https://archive.apache.org/dist/$url_prefix/$tarball"
-  fi
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 
-  verify_exist_hash "$tarball" "$expected_hash"
-  echo "$tarball exists in downloads/ and matches expected checksum ($expected_hash)"
-}
+source "$UNO_HOME"/bin/impl/util.sh
 
 function fetch_hadoop() {
   download_apache "hadoop/common/hadoop-$HADOOP_VERSION" "$HADOOP_TARBALL" "$HADOOP_HASH"
@@ -116,9 +105,6 @@ if [ -z "$apache_mirror" ]; then
 fi
 
 case "$1" in
-spark)
-  download_apache "spark/spark-$SPARK_VERSION" "$SPARK_TARBALL" "$SPARK_HASH"
-  ;;
 accumulo)
   fetch_accumulo "$2"
   ;;
@@ -142,61 +128,21 @@ fluo-yarn)
     fi
     cp "$built_tarball" "$DOWNLOADS"/
   else
-    [[ $FLUO_VERSION =~ .*-incubating ]] && apache_mirror="${apache_mirror}/incubator"
-    download_apache "fluo/fluo/$FLUO_VERSION" "$FLUO_TARBALL" "$FLUO_HASH"
+    download_apache "fluo/fluo-yarn/$FLUO_YARN_VERSION" "$FLUO_YARN_TARBALL" "$FLUO_YARN_HASH"
   fi
   ;;
 hadoop)
   fetch_hadoop
   ;;
-metrics)
-  if [[ "$OSTYPE" == "darwin"* ]]; then
-    echo "The metrics services (InfluxDB and Grafana) are not supported on Mac OS X at this time."
-    exit 1
-  fi
-
-  BUILD=$DOWNLOADS/build
-  rm -rf "$BUILD"
-  mkdir -p "$BUILD"
-  IF_DIR=influxdb-$INFLUXDB_VERSION
-  IF_PATH=$BUILD/$IF_DIR
-  GF_DIR=grafana-$GRAFANA_VERSION
-  GF_PATH=$BUILD/$GF_DIR
-
-  INFLUXDB_TARBALL=influxdb_"$INFLUXDB_VERSION"_x86_64.tar.gz
-  download_other https://s3.amazonaws.com/influxdb "$INFLUXDB_TARBALL" "$INFLUXDB_HASH"
-
-  tar xzf "$DOWNLOADS/$INFLUXDB_TARBALL" -C "$BUILD"
-  mv "$BUILD/influxdb_${INFLUXDB_VERSION}_x86_64" "$IF_PATH"
-  mkdir "$IF_PATH"/bin
-  mv "$IF_PATH/opt/influxdb/versions/$INFLUXDB_VERSION"/* "$IF_PATH"/bin
-  rm -rf "$IF_PATH"/opt
-
-  cd "$BUILD"
-  tar czf influxdb-"$INFLUXDB_VERSION".tar.gz "$IF_DIR"
-  rm -rf "$IF_PATH"
-
-  GRAFANA_TARBALL=grafana-"$GRAFANA_VERSION".linux-x64.tar.gz
-  download_other https://grafanarel.s3.amazonaws.com/builds "$GRAFANA_TARBALL" "$GRAFANA_HASH"
-
-  tar xzf "$DOWNLOADS/$GRAFANA_TARBALL" -C "$BUILD"
-
-  cd "$BUILD"
-  tar czf grafana-"$GRAFANA_VERSION".tar.gz "$GF_DIR"
-  rm -rf "$GF_PATH"
-  ;;
 zookeeper)
   fetch_zookeeper
   ;;
 *)
   echo "Usage: uno fetch <component>"
   echo -e "\nPossible components:\n"
-  echo "    all        Fetches all binary tarballs of the following components"
  echo "    accumulo   Downloads Accumulo, Hadoop & ZooKeeper. Builds Accumulo if repo set in uno.conf"
  echo "    fluo       Downloads Fluo, Accumulo, Hadoop & ZooKeeper. Builds Fluo or Accumulo if repo set in uno.conf"
   echo "    hadoop     Downloads Hadoop"
-  echo "    metrics    Downloads InfluxDB and Grafana"
-  echo "    spark      Downloads Spark"
   echo "    zookeeper  Downloads ZooKeeper"
   echo "Options:"
  echo "    --no-deps  Dependencies will be fetched unless this option is specified. Only works for fluo & accumulo components."
diff --git a/bin/impl/install.sh b/bin/impl/install.sh
new file mode 100755
index 0000000..29af8b6
--- /dev/null
+++ b/bin/impl/install.sh
@@ -0,0 +1,46 @@
+#! /usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+source "$UNO_HOME"/bin/impl/util.sh
+
+case "$1" in
+  accumulo|fluo|fluo-yarn)
+    install_component "$1" "$2"
+    ;;
+  hadoop|zookeeper)
+    install_component "$1"
+    ;;
+  *)
+    echo "Usage: uno install <component> [--no-deps]"
+    echo -e "\nPossible components:\n"
+    echo "    accumulo   Installs Apache Accumulo and its dependencies (Hadoop & ZooKeeper)"
+    echo "    fluo       Installs Apache Fluo and its dependencies (Accumulo, Hadoop, & ZooKeeper)"
+    echo "    fluo-yarn  Installs Apache Fluo YARN"
+    echo "    hadoop     Installs Apache Hadoop"
+    echo -e "    zookeeper  Installs Apache ZooKeeper\n"
+    echo "Options:"
+    echo "    --no-deps  Dependencies will be setup unless this option is specified. Only works for fluo & accumulo components."
+    exit 1
+    ;;
+esac
+
+if [[ "$?" == 0 ]]; then
+  echo "Install complete."
+else
+  echo "Install failed!"
+  false
+fi
diff --git a/bin/impl/setup-accumulo.sh b/bin/impl/install/accumulo.sh
similarity index 65%
rename from bin/impl/setup-accumulo.sh
rename to bin/impl/install/accumulo.sh
index e7ccd80..3fdc011 100755
--- a/bin/impl/setup-accumulo.sh
+++ b/bin/impl/install/accumulo.sh
@@ -1,12 +1,13 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -26,13 +27,11 @@ if [[ -z "$ACCUMULO_REPO" ]]; then
 fi
 
 if [[ $1 != "--no-deps" ]]; then
-  run_setup_script Hadoop
-  run_setup_script ZooKeeper
+  install_component Hadoop
+  install_component ZooKeeper
 fi
 
-print_to_console "Setting up Apache Accumulo $ACCUMULO_VERSION at $ACCUMULO_HOME"
-print_to_console "    * Accumulo Monitor: http://localhost:9995/"
-print_to_console "    * view logs at $ACCUMULO_LOG_DIR"
+print_to_console "Installing Apache Accumulo $ACCUMULO_VERSION at $ACCUMULO_HOME"
 
 rm -rf "$INSTALL"/accumulo-*
 rm -f "$ACCUMULO_LOG_DIR"/*
@@ -59,13 +58,6 @@ else
  $SED "s#instance[.]zookeepers=localhost:2181#instance.zookeepers=$UNO_HOST:2181#" "$conf"/accumulo-client.properties
  $SED "s#auth[.]principal=#auth.principal=$ACCUMULO_USER#" "$conf"/accumulo-client.properties
  $SED "s#auth[.]token=#auth.token=$ACCUMULO_PASSWORD#" "$conf"/accumulo-client.properties
-  if [[ "$ACCUMULO_CRYPTO" == "true" ]]; then
-    encrypt_key=$ACCUMULO_HOME/conf/data-encryption.key
-    openssl rand -out $encrypt_key 32
-    echo "instance.crypto.opts.key.provider=uri" >> "$accumulo_conf"
-    echo "instance.crypto.opts.key.location=file://$encrypt_key" >> "$accumulo_conf"
-    echo "instance.crypto.service=org.apache.accumulo.core.security.crypto.impl.AESCryptoService" >> "$accumulo_conf"
-  fi
 fi
 $SED "s#localhost#$UNO_HOST#" "$conf/masters" "$conf/monitor" "$conf/gc"
 $SED "s#export ZOOKEEPER_HOME=[^ ]*#export ZOOKEEPER_HOME=$ZOOKEEPER_HOME#" "$conf"/accumulo-env.sh
@@ -89,19 +81,6 @@ $SED "s#ACCUMULO_INSTANCE#$ACCUMULO_INSTANCE#" "$it_props"
 $SED "s#HADOOP_CONF_DIR#$HADOOP_CONF_DIR#" "$it_props"
 $SED "s#ACCUMULO_HOME#$ACCUMULO_HOME#" "$it_props"
 
-if [[ "$1" == "--with-metrics" ]]; then
-  metrics_props=hadoop-metrics2-accumulo.properties
-  cp "$conf"/templates/"$metrics_props" "$conf"/
-  $SED "/accumulo.sink.graphite/d" "$conf"/"$metrics_props"
-  {
-    echo "accumulo.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink"
-    echo "accumulo.sink.graphite.server_host=localhost"
-    echo "accumulo.sink.graphite.server_port=2004"
-    echo "accumulo.sink.graphite.metrics_prefix=accumulo"
-  } >> "$conf"/"$metrics_props"
-  run_setup_script Metrics
-fi
-
 if [[ "$ACCUMULO_USE_NATIVE_MAP" == "true" ]]; then
   if [[ $ACCUMULO_VERSION =~ ^1\..*$ ]]; then
     "$ACCUMULO_HOME"/bin/build_native_library.sh
@@ -109,13 +88,3 @@ if [[ "$ACCUMULO_USE_NATIVE_MAP" == "true" ]]; then
     "$ACCUMULO_HOME"/bin/accumulo-util build-native
   fi
 fi
-
-"$HADOOP_HOME"/bin/hadoop fs -rm -r /accumulo 2> /dev/null || true
-"$ACCUMULO_HOME"/bin/accumulo init --clear-instance-name --instance-name "$ACCUMULO_INSTANCE" --password "$ACCUMULO_PASSWORD"
-
-if [[ $ACCUMULO_VERSION =~ ^1\..*$ ]]; then
-  "$ACCUMULO_HOME"/bin/start-all.sh
-else
-  "$ACCUMULO_HOME"/bin/accumulo-cluster start
-fi
-
diff --git a/bin/impl/install/fluo-yarn.sh b/bin/impl/install/fluo-yarn.sh
new file mode 100755
index 0000000..aebd17d
--- /dev/null
+++ b/bin/impl/install/fluo-yarn.sh
@@ -0,0 +1,55 @@
+#! /usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+source "$UNO_HOME"/bin/impl/util.sh
+
+# stop if any command fails
+set -e
+
+if [[ -z "$FLUO_YARN_REPO" ]]; then
+  verify_exist_hash "$FLUO_YARN_TARBALL" "$FLUO_YARN_HASH"
+fi
+
+if [[ ! -f "$DOWNLOADS/$FLUO_YARN_TARBALL" ]]; then
+  print_to_console "WARNING: Apache Fluo YARN launcher tarball '$FLUO_YARN_TARBALL' was not found in $DOWNLOADS."
+  print_to_console "Apache Fluo YARN launcher will not be set up!"
+fi
+
+print_to_console "Setting up Apache Fluo YARN launcher at $FLUO_YARN_HOME"
+# Don't stop if pkills fail
+set +e
+pkill -f "fluo\.yarn"
+pkill -f twill.launcher
+set -e
+
+rm -rf "$INSTALL"/fluo-yarn*
+
+tar xzf "$DOWNLOADS/$FLUO_YARN_TARBALL" -C "$INSTALL"/
+
+yarn_props=$FLUO_YARN_HOME/conf/fluo-yarn.properties
+$SED "s#.*fluo.yarn.zookeepers=.*#fluo.yarn.zookeepers=$UNO_HOST/fluo-yarn#g" "$yarn_props"
+$SED "s/.*fluo.yarn.resource.manager=.*/fluo.yarn.resource.manager=$UNO_HOST/g" "$yarn_props"
+$SED "s#.*fluo.yarn.dfs.root=.*#fluo.yarn.dfs.root=hdfs://$UNO_HOST:8020/#g" "$yarn_props"
+$SED "s/.*fluo.yarn.worker.max.memory.mb=.*/fluo.yarn.worker.max.memory.mb=$FLUO_WORKER_MEM_MB/g" "$yarn_props"
+$SED "s/.*fluo.yarn.worker.instances=.*/fluo.yarn.worker.instances=$FLUO_WORKER_INSTANCES/g" "$yarn_props"
+$SED "s#FLUO_HOME=.*#FLUO_HOME=$FLUO_HOME#g" "$FLUO_YARN_HOME"/conf/fluo-yarn-env.sh
+$SED "s#HADOOP_PREFIX=.*#HADOOP_PREFIX=$HADOOP_HOME#g" "$FLUO_YARN_HOME"/conf/fluo-yarn-env.sh
+$SED "s#ZOOKEEPER_HOME=.*#ZOOKEEPER_HOME=$ZOOKEEPER_HOME#g" "$FLUO_YARN_HOME"/conf/fluo-yarn-env.sh
+
+"$FLUO_YARN_HOME"/lib/fetch.sh
+
+stty sane
diff --git a/bin/impl/setup-fluo.sh b/bin/impl/install/fluo.sh
similarity index 86%
rename from bin/impl/setup-fluo.sh
rename to bin/impl/install/fluo.sh
index 19abe54..28d5092 100755
--- a/bin/impl/setup-fluo.sh
+++ b/bin/impl/install/fluo.sh
@@ -1,12 +1,13 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -24,7 +25,7 @@ if [[ -z "$FLUO_REPO" ]]; then
 fi
 
 if [[ $1 != "--no-deps" ]]; then
-  run_setup_script Accumulo
+  install_component Accumulo
 fi
 
 if [[ -f "$DOWNLOADS/$FLUO_TARBALL" ]]; then
diff --git a/bin/impl/setup-hadoop.sh b/bin/impl/install/hadoop.sh
similarity index 70%
rename from bin/impl/setup-hadoop.sh
rename to bin/impl/install/hadoop.sh
index bd5a7fa..743c83a 100755
--- a/bin/impl/setup-hadoop.sh
+++ b/bin/impl/install/hadoop.sh
@@ -1,12 +1,13 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -24,16 +25,7 @@ set -e
 
 verify_exist_hash "$HADOOP_TARBALL" "$HADOOP_HASH"
 
-namenode_port=9870
-if [[ $HADOOP_VERSION =~ ^2\..*$ ]]; then
-  namenode_port=50070
-  export HADOOP_PREFIX=$HADOOP_HOME
-fi
-
-print_to_console "Setting up Apache Hadoop $HADOOP_VERSION at $HADOOP_HOME"
-print_to_console "    * NameNode status: http://localhost:$namenode_port/"
-print_to_console "    * ResourceManager status: http://localhost:8088/"
-print_to_console "    * view logs at $HADOOP_LOG_DIR"
+print_to_console "Installing Apache Hadoop $HADOOP_VERSION at $HADOOP_HOME"
 
 rm -rf "$INSTALL"/hadoop-*
 rm -rf "$HADOOP_LOG_DIR"/*
@@ -63,8 +55,3 @@ echo "export HADOOP_MAPRED_HOME=$HADOOP_HOME" >> "$hadoop_conf/hadoop-env.sh"
 if [[ $HADOOP_VERSION =~ ^2\..*$ ]]; then
   echo "export YARN_LOG_DIR=$HADOOP_LOG_DIR" >> "$hadoop_conf/yarn-env.sh"
 fi
-
-"$HADOOP_HOME"/bin/hdfs namenode -format
-"$HADOOP_HOME"/sbin/start-dfs.sh
-"$HADOOP_HOME"/sbin/start-yarn.sh
-
diff --git a/bin/impl/setup-zookeeper.sh b/bin/impl/install/zookeeper.sh
similarity index 60%
rename from bin/impl/setup-zookeeper.sh
rename to bin/impl/install/zookeeper.sh
index 80254a2..eb1ba89 100755
--- a/bin/impl/setup-zookeeper.sh
+++ b/bin/impl/install/zookeeper.sh
@@ -1,12 +1,13 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -23,18 +24,14 @@ set -e
 
 verify_exist_hash "$ZOOKEEPER_TARBALL" "$ZOOKEEPER_HASH"
 
-print_to_console "Setting up Apache ZooKeeper $ZOOKEEPER_VERSION at $ZOOKEEPER_HOME"
-print_to_console "    * view logs at $ZOO_LOG_DIR"
+print_to_console "Installing Apache ZooKeeper $ZOOKEEPER_VERSION at $ZOOKEEPER_HOME"
 
 rm -rf "$INSTALL"/zookeeper-*
 rm -f "$ZOO_LOG_DIR"/*
+rm -rf "$DATA_DIR"/zookeeper
 mkdir -p "$ZOO_LOG_DIR"
 
 tar xzf "$DOWNLOADS/$ZOOKEEPER_TARBALL" -C "$INSTALL"
 
 cp "$UNO_HOME"/conf/zookeeper/* "$ZOOKEEPER_HOME"/conf/
 $SED "s#DATA_DIR#$DATA_DIR#g" "$ZOOKEEPER_HOME"/conf/zoo.cfg
-
-rm -rf "$DATA_DIR"/zookeeper
-"$ZOOKEEPER_HOME"/bin/zkServer.sh start
-
diff --git a/bin/impl/kill.sh b/bin/impl/kill.sh
index 9a437c9..633d4f1 100755
--- a/bin/impl/kill.sh
+++ b/bin/impl/kill.sh
@@ -1,12 +1,29 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
-#    http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -20,6 +37,13 @@ pkill -f accumulo\\.start
 pkill -f hadoop\\.hdfs
 pkill -f hadoop\\.yarn
 pkill -f QuorumPeerMain
-pkill -f org\\.apache\\.spark\\.deploy\\.history\\.HistoryServer
-pkill -f influxdb
-pkill -f grafana-server
+
+if [[ -d "$SPARK_HOME" ]]; then
+  pkill -f org\\.apache\\.spark\\.deploy\\.history\\.HistoryServer
+fi
+if [[ -d "$INFLUXDB_HOME" ]]; then
+  pkill -f influxdb
+fi
+if [[ -d "$GRAFANA_HOME" ]]; then
+  pkill -f grafana-server
+fi
diff --git a/bin/impl/load-env.sh b/bin/impl/load-env.sh
index 7d80223..b5033ef 100755
--- a/bin/impl/load-env.sh
+++ b/bin/impl/load-env.sh
@@ -1,12 +1,13 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
diff --git a/bin/impl/print-env.sh b/bin/impl/print-env.sh
index c576da4..32419ba 100755
--- a/bin/impl/print-env.sh
+++ b/bin/impl/print-env.sh
@@ -1,12 +1,13 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
diff --git a/bin/impl/run.sh b/bin/impl/run.sh
new file mode 100755
index 0000000..f05fcb7
--- /dev/null
+++ b/bin/impl/run.sh
@@ -0,0 +1,49 @@
+#! /usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+source "$UNO_HOME"/bin/impl/util.sh
+
+[[ -n $LOGS_DIR ]] && rm -f "$LOGS_DIR"/setup/*.{out,err}
+echo "Running $1 (detailed logs in $LOGS_DIR/setup)..."
+save_console_fd
+case "$1" in
+  hadoop|zookeeper)
+    run_component "$1"
+    ;;
+  accumulo|fluo|fluo-yarn)
+    run_component "$1" "$2"
+    ;;
+  *)
+    echo "Usage: uno run <component> [--no-deps]"
+    echo -e "\nPossible components:\n"
+    echo "    accumulo   Runs Apache Accumulo and its dependencies (Hadoop & ZooKeeper)"
+    echo "    hadoop     Runs Apache Hadoop"
+    echo "    fluo       Runs Apache Fluo and its dependencies (Accumulo, Hadoop, & ZooKeeper)"
+    echo "    fluo-yarn  Runs Apache Fluo YARN and its dependencies (Fluo, Accumulo, Hadoop, & ZooKeeper)"
+    echo -e "    zookeeper  Runs Apache ZooKeeper\n"
+    echo "Options:"
+    echo "    --no-deps  Dependencies will be set up unless this option is specified. Only works for fluo & accumulo components."
+    exit 1
+    ;;
+esac
+
+if [[ "$?" == 0 ]]; then
+  echo "Run complete."
+else
+  echo "Run failed!"
+  false
+fi
diff --git a/bin/impl/run/accumulo.sh b/bin/impl/run/accumulo.sh
new file mode 100755
index 0000000..30c515a
--- /dev/null
+++ b/bin/impl/run/accumulo.sh
@@ -0,0 +1,40 @@
+#! /usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+source "$UNO_HOME"/bin/impl/util.sh
+
+pkill -f accumulo.start
+
+# stop if any command fails
+set -e
+
+if [[ $1 != "--no-deps" ]]; then
+  run_component hadoop
+  run_component zookeeper
+fi
+
+"$HADOOP_HOME"/bin/hadoop fs -rm -r /accumulo 2> /dev/null || true
+"$ACCUMULO_HOME"/bin/accumulo init --clear-instance-name --instance-name "$ACCUMULO_INSTANCE" --password "$ACCUMULO_PASSWORD"
+if [[ $ACCUMULO_VERSION =~ ^1\..*$ ]]; then
+  "$ACCUMULO_HOME"/bin/start-all.sh
+else
+  "$ACCUMULO_HOME"/bin/accumulo-cluster start
+fi
+
+print_to_console "Apache Accumulo $ACCUMULO_VERSION is running"
+print_to_console "    * Accumulo Monitor: http://localhost:9995/"
+print_to_console "    * view logs at $ACCUMULO_LOG_DIR"
diff --git a/conf/spark/spark-env.sh b/bin/impl/run/fluo-yarn.sh
similarity index 71%
copy from conf/spark/spark-env.sh
copy to bin/impl/run/fluo-yarn.sh
index 016dd17..05262e6 100755
--- a/conf/spark/spark-env.sh
+++ b/bin/impl/run/fluo-yarn.sh
@@ -1,6 +1,5 @@
-#!/usr/bin/env bash
+#! /usr/bin/env bash
 
-#
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
 # this work for additional information regarding copyright ownership.
@@ -8,17 +7,20 @@
 # (the "License"); you may not use this file except in compliance with
 # the License.  You may obtain a copy of the License at
 #
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-#
 
-# This file is sourced when running various Spark programs.
-# Copy it as spark-env.sh and edit that to configure Spark for your site.
+source "$UNO_HOME"/bin/impl/util.sh
+
+# stop if any command fails
+set -e
+
+if [[ $1 != "--no-deps" ]]; then
+  run_component fluo
+fi
 
-SPARK_DIST_CLASSPATH=$("$HADOOP_HOME"/bin/hadoop classpath)
-export SPARK_DIST_CLASSPATH
diff --git a/conf/spark/spark-env.sh b/bin/impl/run/fluo.sh
similarity index 71%
copy from conf/spark/spark-env.sh
copy to bin/impl/run/fluo.sh
index 016dd17..6dbea2e 100755
--- a/conf/spark/spark-env.sh
+++ b/bin/impl/run/fluo.sh
@@ -1,6 +1,5 @@
-#!/usr/bin/env bash
+#! /usr/bin/env bash
 
-#
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
 # this work for additional information regarding copyright ownership.
@@ -8,17 +7,23 @@
 # (the "License"); you may not use this file except in compliance with
 # the License.  You may obtain a copy of the License at
 #
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-#
 
-# This file is sourced when running various Spark programs.
-# Copy it as spark-env.sh and edit that to configure Spark for your site.
+source "$UNO_HOME"/bin/impl/util.sh
+
+pkill -f fluo.yarn
+pkill -f MiniFluo
+pkill -f twill.launcher
+
+# stop if any command fails
+set -e
 
-SPARK_DIST_CLASSPATH=$("$HADOOP_HOME"/bin/hadoop classpath)
-export SPARK_DIST_CLASSPATH
+if [[ $2 != "--no-deps" ]]; then
+  run_component accumulo
+fi
diff --git a/conf/spark/spark-env.sh b/bin/impl/run/hadoop.sh
similarity index 52%
copy from conf/spark/spark-env.sh
copy to bin/impl/run/hadoop.sh
index 016dd17..e6a752d 100755
--- a/conf/spark/spark-env.sh
+++ b/bin/impl/run/hadoop.sh
@@ -1,6 +1,5 @@
-#!/usr/bin/env bash
+#! /usr/bin/env bash
 
-#
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
 # this work for additional information regarding copyright ownership.
@@ -8,17 +7,33 @@
 # (the "License"); you may not use this file except in compliance with
 # the License.  You may obtain a copy of the License at
 #
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-#
 
-# This file is sourced when running various Spark programs.
-# Copy it as spark-env.sh and edit that to configure Spark for your site.
+source "$UNO_HOME"/bin/impl/util.sh
+
+pkill -f hadoop.hdfs
+pkill -f hadoop.yarn
+
+# stop if any command fails
+set -e
+
+"$HADOOP_HOME"/bin/hdfs namenode -format
+"$HADOOP_HOME"/sbin/start-dfs.sh
+"$HADOOP_HOME"/sbin/start-yarn.sh
+
+namenode_port=9870
+if [[ $HADOOP_VERSION =~ ^2\..*$ ]]; then
+  namenode_port=50070
+  export HADOOP_PREFIX=$HADOOP_HOME
+fi
 
-SPARK_DIST_CLASSPATH=$("$HADOOP_HOME"/bin/hadoop classpath)
-export SPARK_DIST_CLASSPATH
+print_to_console "Apache Hadoop $HADOOP_VERSION is running"
+print_to_console "    * NameNode status: http://localhost:$namenode_port/"
+print_to_console "    * ResourceManager status: http://localhost:8088/"
+print_to_console "    * view logs at $HADOOP_LOG_DIR"
diff --git a/conf/spark/spark-env.sh b/bin/impl/run/zookeeper.sh
similarity index 64%
copy from conf/spark/spark-env.sh
copy to bin/impl/run/zookeeper.sh
index 016dd17..4a6143e 100755
--- a/conf/spark/spark-env.sh
+++ b/bin/impl/run/zookeeper.sh
@@ -1,6 +1,5 @@
-#!/usr/bin/env bash
+#! /usr/bin/env bash
 
-#
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
 # this work for additional information regarding copyright ownership.
@@ -8,17 +7,26 @@
 # (the "License"); you may not use this file except in compliance with
 # the License.  You may obtain a copy of the License at
 #
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-#
 
-# This file is sourced when running various Spark programs.
-# Copy it as spark-env.sh and edit that to configure Spark for your site.
+source "$UNO_HOME"/bin/impl/util.sh
+
+pkill -f QuorumPeerMain
+
+# stop if any command fails
+set -e
+
+rm -f "$ZOO_LOG_DIR"/*
+rm -rf "$DATA_DIR"/zookeeper
+mkdir -p "$ZOO_LOG_DIR"
+
+"$ZOOKEEPER_HOME"/bin/zkServer.sh start
 
-SPARK_DIST_CLASSPATH=$("$HADOOP_HOME"/bin/hadoop classpath)
-export SPARK_DIST_CLASSPATH
+print_to_console "Apache ZooKeeper $ZOOKEEPER_VERSION is running"
+print_to_console "    * view logs at $ZOO_LOG_DIR"
diff --git a/bin/impl/setup-fluo-yarn.sh b/bin/impl/setup-fluo-yarn.sh
deleted file mode 100755
index bd98b0a..0000000
--- a/bin/impl/setup-fluo-yarn.sh
+++ /dev/null
@@ -1,58 +0,0 @@
-#! /usr/bin/env bash
-
-# Copyright 2014 Uno authors (see AUTHORS)
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-source "$UNO_HOME"/bin/impl/util.sh
-
-# stop if any command fails
-set -e
-
-if [[ -z "$FLUO_YARN_REPO" ]]; then
-  verify_exist_hash "$FLUO_YARN_TARBALL" "$FLUO_YARN_HASH"
-fi
-
-if [[ $1 != "--no-deps" ]]; then
-  run_setup_script Fluo
-fi
-
-if [[ -f "$DOWNLOADS/$FLUO_YARN_TARBALL" ]]; then
-  print_to_console "Setting up Apache Fluo YARN launcher at $FLUO_YARN_HOME"
-  # Don't stop if pkills fail
-  set +e
-  pkill -f "fluo\.yarn"
-  pkill -f twill.launcher
-  set -e
-
-  rm -rf "$INSTALL"/fluo-yarn*
-
-  tar xzf "$DOWNLOADS/$FLUO_YARN_TARBALL" -C "$INSTALL"/
-
-  yarn_props=$FLUO_YARN_HOME/conf/fluo-yarn.properties
-  $SED "s#.*fluo.yarn.zookeepers=.*#fluo.yarn.zookeepers=$UNO_HOST/fluo-yarn#g" "$yarn_props"
-  $SED "s/.*fluo.yarn.resource.manager=.*/fluo.yarn.resource.manager=$UNO_HOST/g" "$yarn_props"
-  $SED "s#.*fluo.yarn.dfs.root=.*#fluo.yarn.dfs.root=hdfs://$UNO_HOST:8020/#g" "$yarn_props"
-  $SED "s/.*fluo.yarn.worker.max.memory.mb=.*/fluo.yarn.worker.max.memory.mb=$FLUO_WORKER_MEM_MB/g" "$yarn_props"
-  $SED "s/.*fluo.yarn.worker.instances=.*/fluo.yarn.worker.instances=$FLUO_WORKER_INSTANCES/g" "$yarn_props"
-  $SED "s#FLUO_HOME=.*#FLUO_HOME=$FLUO_HOME#g" "$FLUO_YARN_HOME"/conf/fluo-yarn-env.sh
-  $SED "s#HADOOP_PREFIX=.*#HADOOP_PREFIX=$HADOOP_HOME#g" "$FLUO_YARN_HOME"/conf/fluo-yarn-env.sh
-  $SED "s#ZOOKEEPER_HOME=.*#ZOOKEEPER_HOME=$ZOOKEEPER_HOME#g" "$FLUO_YARN_HOME"/conf/fluo-yarn-env.sh
-
-  "$FLUO_YARN_HOME"/lib/fetch.sh
-
-  stty sane
-else
-  print_to_console "WARNING: Apache Fluo YARN launcher tarball '$FLUO_YARN_TARBALL' was not found in $DOWNLOADS."
-  print_to_console "Apache Fluo YARN launcher will not be set up!"
-fi
diff --git a/bin/impl/setup-metrics.sh b/bin/impl/setup-metrics.sh
deleted file mode 100755
index b992a9e..0000000
--- a/bin/impl/setup-metrics.sh
+++ /dev/null
@@ -1,132 +0,0 @@
-#! /usr/bin/env bash
-
-# Copyright 2014 Uno authors (see AUTHORS)
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-source "$UNO_HOME"/bin/impl/util.sh
-
-if [[ "$OSTYPE" == "darwin"* ]]; then
-  print_to_console "The metrics services (InfluxDB and Grafana) are not supported on Mac OS X at this time."
-  exit 1
-fi
-
-print_to_console "Killing InfluxDB & Grafana (if running)"
-pkill -f influxdb
-pkill -f grafana-server
-
-# verify downloaded tarballs
-INFLUXDB_TARBALL=influxdb_"$INFLUXDB_VERSION"_x86_64.tar.gz
-GRAFANA_TARBALL=grafana-"$GRAFANA_VERSION".linux-x64.tar.gz
-verify_exist_hash "$INFLUXDB_TARBALL" "$INFLUXDB_HASH"
-verify_exist_hash "$GRAFANA_TARBALL" "$GRAFANA_HASH"
-
-# make sure built tarballs exist
-INFLUXDB_TARBALL=influxdb-"$INFLUXDB_VERSION".tar.gz
-GRAFANA_TARBALL=grafana-"$GRAFANA_VERSION".tar.gz
-if [[ ! -f "$DOWNLOADS/build/$INFLUXDB_TARBALL" ]]; then
-  print_to_console "InfluxDB tarball $INFLUXDB_TARBALL does not exists in downloads/build/"
-  exit 1
-fi
-if [[ ! -f "$DOWNLOADS/build/$GRAFANA_TARBALL" ]]; then
-  print_to_console "Grafana tarball $GRAFANA_TARBALL does not exists in downloads/build"
-  exit 1
-fi
-
-if [[ ! -d "$FLUO_HOME" ]]; then
-  print_to_console "Fluo must be installed before setting up metrics"
-  exit 1
-fi
-
-# stop if any command fails
-set -e
-
-print_to_console "Removing previous versions of InfluxDB & Grafana"
-rm -rf "$INSTALL"/influxdb-*
-rm -rf "$INSTALL"/grafana-*
-
-print_to_console "Remove previous log and data dirs"
-rm -f "$LOGS_DIR"/metrics/*
-rm -rf "$DATA_DIR"/influxdb
-mkdir -p "$LOGS_DIR"/metrics
-
-print_to_console "Setting up metrics (influxdb + grafana)..."
-tar xzf "$DOWNLOADS/build/$INFLUXDB_TARBALL" -C "$INSTALL"
-"$INFLUXDB_HOME"/bin/influxd config -config "$UNO_HOME"/conf/influxdb/influxdb.conf > "$INFLUXDB_HOME"/influxdb.conf
-if [[ ! -f "$INFLUXDB_HOME"/influxdb.conf ]]; then
-  print_to_console "Failed to create $INFLUXDB_HOME/influxdb.conf"
-  exit 1
-fi
-$SED "s#DATA_DIR#$DATA_DIR#g" "$INFLUXDB_HOME"/influxdb.conf
-"$INFLUXDB_HOME"/bin/influxd -config "$INFLUXDB_HOME"/influxdb.conf &> "$LOGS_DIR"/metrics/influxdb.log &
-
-tar xzf "$DOWNLOADS/build/$GRAFANA_TARBALL" -C "$INSTALL"
-cp "$UNO_HOME"/conf/grafana/custom.ini "$GRAFANA_HOME"/conf/
-$SED "s#GRAFANA_HOME#$GRAFANA_HOME#g" "$GRAFANA_HOME"/conf/custom.ini
-$SED "s#LOGS_DIR#$LOGS_DIR#g" "$GRAFANA_HOME"/conf/custom.ini
-mkdir "$GRAFANA_HOME"/dashboards
-cp "$FLUO_HOME"/contrib/grafana/* "$GRAFANA_HOME"/dashboards/
-cp "$UNO_HOME"/conf/grafana/accumulo-dashboard.json "$GRAFANA_HOME"/dashboards/
-"$GRAFANA_HOME"/bin/grafana-server -homepath="$GRAFANA_HOME" 2> /dev/null &
-
-print_to_console "Configuring Fluo to send metrics to InfluxDB"
-if [[ $FLUO_VERSION =~ ^1\.[0-1].*$ ]]; then
-  FLUO_PROPS=$FLUO_HOME/conf/fluo.properties
-else
-  FLUO_PROPS=$FLUO_HOME/conf/fluo-app.properties
-fi
-
-$SED "/fluo.metrics.reporter.graphite/d" "$FLUO_PROPS"
-{
-  echo "fluo.metrics.reporter.graphite.enable=true"
-  echo "fluo.metrics.reporter.graphite.host=$UNO_HOST"
-  echo "fluo.metrics.reporter.graphite.port=2003"
-  echo "fluo.metrics.reporter.graphite.frequency=30"
-} >> "$FLUO_PROPS"
-
-print_to_console "Configuring InfluxDB..."
-sleep 10
-"$INFLUXDB_HOME"/bin/influx -import -path "$FLUO_HOME"/contrib/influxdb/fluo_metrics_setup.txt
-
-# allow commands to fail
-set +e
-
-print_to_console "Configuring Grafana..."
-
-sleep 5
-
-function add_datasource() {
-  retcode=1
-  while [[ $retcode != 0 ]];  do
-    curl 'http://admin:admin@localhost:3000/api/datasources' -X POST -H 'Content-Type: application/json;charset=UTF-8' \
-      --data-binary "$1"
-    retcode=$?
-    if [[ $retcode != 0 ]]; then
-      print_to_console "Failed to add Grafana data source. Retrying in 5 sec.."
-      sleep 5
-    fi
-  done
-  print_to_console ""
-}
-
-accumulo_data='{"name":"accumulo_metrics","type":"influxdb","url":"http://'
-accumulo_data+=$UNO_HOST
-accumulo_data+=':8086","access":"direct","isDefault":true,"database":"accumulo_metrics","user":"accumulo","password":"secret"}'
-add_datasource $accumulo_data
-
-fluo_data='{"name":"fluo_metrics","type":"influxdb","url":"http://'
-fluo_data+=$UNO_HOST
-fluo_data+=':8086","access":"direct","isDefault":false,"database":"fluo_metrics","user":"fluo","password":"secret"}'
-add_datasource $fluo_data
-
-stty sane
diff --git a/bin/impl/setup.sh b/bin/impl/setup.sh
new file mode 100755
index 0000000..b84318c
--- /dev/null
+++ b/bin/impl/setup.sh
@@ -0,0 +1,50 @@
+#! /usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+source "$UNO_HOME"/bin/impl/util.sh
+
+[[ -n $LOGS_DIR ]] && rm -f "$LOGS_DIR"/setup/*.{out,err}
+echo "Beginning setup (detailed logs in $LOGS_DIR/setup)..."
+save_console_fd
+
+case "$1" in
+  accumulo|fluo)
+    setup_component "$1" "$2"
+    ;;
+  hadoop|zookeeper|fluo-yarn)
+    setup_component "$1"
+    ;;
+  *)
+    echo "Usage: uno setup <component> [--no-deps]"
+    echo -e "\nPossible components:\n"
+    echo "    accumulo   Sets up Apache Accumulo and its dependencies (Hadoop & ZooKeeper)"
+    echo "    hadoop     Sets up Apache Hadoop"
+    echo "    fluo       Sets up Apache Fluo and its dependencies (Accumulo, Hadoop, & ZooKeeper)"
+    echo "    fluo-yarn  Sets up Apache Fluo YARN and its dependencies (Fluo, Accumulo, Hadoop, & ZooKeeper)"
+    echo -e "    zookeeper  Sets up Apache ZooKeeper\n"
+    echo "Options:"
+    echo "    --no-deps  Dependencies will be set up unless this option is specified. Only works for fluo & accumulo components."
+    exit 1
+    ;;
+esac
+
+if [[ "$?" == 0 ]]; then
+  echo "Setup complete."
+else
+  echo "Setup failed!"
+  false
+fi
diff --git a/bin/impl/start.sh b/bin/impl/start.sh
index 6117e13..969126a 100755
--- a/bin/impl/start.sh
+++ b/bin/impl/start.sh
@@ -1,12 +1,13 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -76,26 +77,11 @@ case "$1" in
     else echo "ZooKeeper   already running at: $tmp"
     fi
     ;;
-  metrics)
-    tmp="$(pgrep -f influxd | tr '\n' ' ')"
-    if [[ -z "$tmp" ]]; then
-      "$INFLUXDB_HOME"/bin/influxd -config "$INFLUXDB_HOME"/influxdb.conf &> "$LOGS_DIR"/metrics/influxdb.log &
-    else echo "InfluxDB already running at: $tmp"
-    fi
-    tmp="$(pgrep -f grafana-server | tr '\n' ' ')"
-    if [[ -z "$tmp" ]]; then
-      "$GRAFANA_HOME"/bin/grafana-server -homepath="$GRAFANA_HOME" 2> /dev/null &
-    else echo "Grafana already running at: $tmp"
-    fi
-    ;;
 
   # NYI
   # fluo)
   #   
   #   ;;
-  # spark)
-  #   
-  #   ;;
 
   *)
     echo "Usage: uno start <component> [--no-deps]"
@@ -103,7 +89,6 @@ case "$1" in
     echo "    accumulo   Start Apache Accumulo plus dependencies: Hadoop, ZooKeeper"
     echo "    hadoop     Start Apache Hadoop"
     echo "    zookeeper  Start Apache ZooKeeper"
-    echo "    metrics    Start InfluxDB and Grafana"
     echo "Options:"
     echo "    --no-deps  Dependencies will start unless this option is specified. Only works for accumulo component."
     exit 1
diff --git a/bin/impl/stop.sh b/bin/impl/stop.sh
index ac4af5c..aa85ebb 100755
--- a/bin/impl/stop.sh
+++ b/bin/impl/stop.sh
@@ -1,12 +1,13 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -67,13 +68,6 @@ case "$1" in
   # fluo)
   #   
   #   ;;
-  # spark)
-  #   
-  #   ;;
-  # metrics)
-  #   
-  #   ;;
-
   *)
     echo "Usage: uno stop <component> [--no-deps]"
     echo -e "\nPossible components:\n"
diff --git a/bin/impl/util.sh b/bin/impl/util.sh
index 14c75e8..7556e52 100755
--- a/bin/impl/util.sh
+++ b/bin/impl/util.sh
@@ -1,12 +1,13 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -52,12 +53,63 @@ function check_dirs() {
   done
 }
 
-function run_setup_script() {
-  local SCRIP; SCRIP=$(echo "$1" | tr '[:upper:] ' '[:lower:]-')
-  local L_DIR; L_DIR="$LOGS_DIR/setup"
-  mkdir -p "$L_DIR"
+function post_install_plugins() {
+  for plugin in $POST_INSTALL_PLUGINS
+  do
+    echo "Executing post install plugin: $plugin"
+    plugin_script="${UNO_HOME}/plugins/${plugin}.sh"
+    if [[ ! -f "$plugin_script" ]]; then
+      echo "Plugin does not exist: $plugin_script"
+      exit 1
+    fi
+    $plugin_script
+  done  
+}
+
+function post_run_plugins() {
+  for plugin in $POST_RUN_PLUGINS
+  do
+    echo "Executing post run plugin: $plugin"
+    plugin_script="${UNO_HOME}/plugins/${plugin}.sh"
+    if [[ ! -f "$plugin_script" ]]; then
+      echo "Plugin does not exist: $plugin_script"
+      exit 1
+    fi
+    $plugin_script
+  done  
+}
+
+function install_component() {
+  local component; component=$(echo "$1" | tr '[:upper:] ' '[:lower:]-')
   shift
-  "$UNO_HOME/bin/impl/setup-$SCRIP.sh" "$@" 1>"$L_DIR/$SCRIP.stdout" 2>"$L_DIR/$SCRIP.stderr"
+  "$UNO_HOME/bin/impl/install/$component.sh" "$@"
+  case "$component" in
+    accumulo|fluo)
+      post_install_plugins
+      ;;
+    *)
+      ;;
+  esac
+}
+
+function run_component() {
+  local component; component=$(echo "$1" | tr '[:upper:] ' '[:lower:]-')
+  local logs; logs="$LOGS_DIR/setup"
+  mkdir -p "$logs"
+  shift
+  "$UNO_HOME/bin/impl/run/$component.sh" "$component" "$@" 1>"$logs/${component}.out" 2>"$logs/${component}.err"
+  case "$component" in
+    accumulo|fluo)
+      post_run_plugins
+      ;;
+    *)
+      ;;
+  esac
+}
+
+function setup_component() {
+  install_component $1
+  run_component $1
 }
 
 function save_console_fd {
@@ -76,3 +128,32 @@ function print_to_console {
     echo "$@" >&${UNO_CONSOLE_FD}
   fi
 }
+
+function download_tarball() {
+  local url_prefix=$1
+  local tarball=$2
+  local expected_hash=$3
+
+  wget -c -P "$DOWNLOADS" "$url_prefix/$tarball"
+  verify_exist_hash "$tarball" "$expected_hash"
+  echo "$tarball exists in downloads/ and matches expected checksum ($expected_hash)"
+}
+
+function download_apache() {
+  local url_prefix=$1
+  local tarball=$2
+  local expected_hash=$3
+
+  if [ -n "$apache_mirror" ]; then
+    wget -c -P "$DOWNLOADS" "$apache_mirror/$url_prefix/$tarball"
+  fi 
+
+  if [[ ! -f "$DOWNLOADS/$tarball" ]]; then
+    echo "Downloading $tarball from Apache archive"
+    wget -c -P "$DOWNLOADS" "https://archive.apache.org/dist/$url_prefix/$tarball"
+  fi
+
+  verify_exist_hash "$tarball" "$expected_hash"
+  echo "$tarball exists in downloads/ and matches expected checksum ($expected_hash)"
+}
+
diff --git a/bin/impl/version.sh b/bin/impl/version.sh
index fb0e551..1235704 100755
--- a/bin/impl/version.sh
+++ b/bin/impl/version.sh
@@ -1,12 +1,13 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
diff --git a/bin/uno b/bin/uno
index d856b29..fdc456a 100755
--- a/bin/uno
+++ b/bin/uno
@@ -1,12 +1,13 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -28,105 +29,66 @@ source "$bin"/impl/load-env.sh "$1"
 source "$UNO_HOME"/bin/impl/util.sh
 
 case "$1" in
-fetch)
-  hash mvn 2>/dev/null || { echo >&2 "Maven must be installed & on PATH. Aborting."; exit 1; }
-  hash wget 2>/dev/null || { echo >&2 "wget must be installed & on PATH. Aborting."; exit 1; }
-  if [[ "$2" == "all" ]]; then
-    "$bin"/impl/fetch.sh fluo && \
-    "$bin"/impl/fetch.sh spark && \
-    "$bin"/impl/fetch.sh metrics
-  else
-    "$bin"/impl/fetch.sh "$2" "$3"
-  fi
-       ;;
-setup)
-  [[ -n $LOGS_DIR ]] && rm -f "$LOGS_DIR"/setup/*.std{out,err}
-  echo "Beginning setup (detailed logs in $LOGS_DIR/setup)..."
-  save_console_fd
-  case "$2" in
-    all)
-      run_setup_script Fluo
-      run_setup_script Spark
-      run_setup_script Metrics
-      ;;
-    accumulo)
-      run_setup_script Accumulo "$3"
-      ;;
-    fluo)
-      run_setup_script Fluo "$3"
-      ;;
-    fluo-yarn)
-      run_setup_script "Fluo Yarn" "$3"
-      ;;
-    spark)
-      run_setup_script Spark
-      ;;
-    metrics)
-      run_setup_script Metrics
-      ;;
-    *)
-      echo "Usage: uno setup <component> [--no-deps]"
-      echo -e "\nPossible components:\n"
-      echo "    all        Sets up all of the following components"
-      echo "    accumulo   Sets up Apache Accumulo and its dependencies (Hadoop & ZooKeeper)"
-      echo "    spark      Sets up Apache Spark"
-      echo "    fluo       Sets up Apache Fluo and its dependencies (Accumulo, Hadoop, & ZooKeeper)"
-      echo "    fluo-yarn  Sets up Apache Fluo YARN and its dependencies (Fluo, Accumulo, Hadoop, & ZooKeeper)"
-      echo -e "    metrics    Sets up metrics service (InfluxDB + Grafana)\n"
-      echo "Options:"
-      echo "    --no-deps  Dependencies will be setup unless this option is specified. Only works for fluo & accumulo components."
-      exit 1
-      ;;
-  esac
-  if [[ "$?" == 0 ]]; then
-    echo "Setup complete."
-  else
-    echo "Setup failed!"
-    false
-  fi
-  ;;
-kill)
-  "$bin"/impl/kill.sh "${@:2}"
-       ;;
-ashell)
-  check_dirs ACCUMULO_HOME
-  "$ACCUMULO_HOME"/bin/accumulo shell -u "$ACCUMULO_USER" -p "$ACCUMULO_PASSWORD" "${@:2}"
-       ;;
-start)
-  "$bin"/impl/start.sh "${@:2}"
-  ;;
-stop)
-  "$bin"/impl/stop.sh "${@:2}"
-  ;;
-env)
-  "$bin"/impl/print-env.sh "${@:2}"
-  ;;
-version)
-  "$bin"/impl/version.sh "${@:2}"
-  ;;
-wipe)
-  "$bin"/impl/kill.sh
-  if [[ -d "$INSTALL" ]]; then
-    echo "removing $INSTALL"
-    rm -rf "$INSTALL"
-  fi
-  ;;
-*)
-  echo -e "Usage: uno <command> (<argument>)\n"
-  echo -e "Possible commands:\n"
-  echo "  fetch <component>      Fetches binary tarballs of component and it dependencies by either building or downloading"
-  echo "                         the tarball (as configured by uno.conf). Run 'uno fetch all' to fetch all binary tarballs."
-  echo "                         Run 'uno fetch' for a list of possible components."
-  echo "  setup <component>      Sets up component and its dependencies (clearing any existing data)"
-  echo "                         Run 'uno setup' for list of components."
-  echo "  start <component>      Start ZooKeeper, Hadoop, Accumulo, if not running."
-  echo "  stop  <component>      Stop Accumulo, Hadoop, ZooKeeper, if running."
-  echo "  kill                   Kills all processes"
-  echo "  ashell                 Runs the Accumulo shell"
-  echo "  env                    Prints out shell configuration for PATH and 
common environment variables."
-  echo "                         Add '--paths' or '--vars' command to limit 
what is printed."
-  echo "  version <dep>          Prints out configured version for dependency"
-  echo "  wipe                   Kills all processes and clears install 
directory"
-  echo " "
-  exit 1
+  ashell)
+    check_dirs ACCUMULO_HOME
+    "$ACCUMULO_HOME"/bin/accumulo shell -u "$ACCUMULO_USER" -p "$ACCUMULO_PASSWORD" "${@:2}"
+    ;;
+  env)
+    "$bin"/impl/print-env.sh "${@:2}"
+    ;;
+  fetch)
+    hash mvn 2>/dev/null || { echo >&2 "Maven must be installed & on PATH. Aborting."; exit 1; }
+    hash wget 2>/dev/null || { echo >&2 "wget must be installed & on PATH. Aborting."; exit 1; }
+    if [[ "$2" == "all" ]]; then
+      "$bin"/impl/fetch.sh fluo
+    else
+      "$bin"/impl/fetch.sh "$2" "$3"
+    fi
+    ;;
+  install)
+    "$bin"/impl/install.sh "${@:2}"
+    ;;
+  kill)
+    "$bin"/impl/kill.sh "${@:2}"
+    ;;
+  run)
+    "$bin"/impl/run.sh "${@:2}"
+    ;;
+  setup)
+    "$bin"/impl/setup.sh "${@:2}"
+    ;;
+  start)
+    "$bin"/impl/start.sh "${@:2}"
+    ;;
+  stop)
+    "$bin"/impl/stop.sh "${@:2}"
+    ;;
+  version)
+    "$bin"/impl/version.sh "${@:2}"
+    ;;
+  wipe)
+    "$bin"/impl/kill.sh
+    if [[ -d "$INSTALL" ]]; then
+      echo "removing $INSTALL"
+      rm -rf "$INSTALL"
+    fi
+    ;;
+  *)
+    echo -e "Usage: uno <command> (<argument>)\n"
+    echo -e "Possible commands:\n"
+    echo "  fetch <component>      Fetches binary tarballs of component and its dependencies by either building or downloading"
+    echo "                         the tarball (as configured by uno.conf). Run 'uno fetch all' to fetch all binary tarballs."
+    echo "  install <component>    Installs component and its dependencies (clearing any existing data)"
+    echo "  run <component>        Runs component and its dependencies (clearing any existing data)"
+    echo "  setup <component>      Installs and runs component and its dependencies (clearing any existing data)"
+    echo "  start <component>      Start ZooKeeper, Hadoop, Accumulo, if not running."
+    echo "  stop  <component>      Stop Accumulo, Hadoop, ZooKeeper, if running."
+    echo "  kill                   Kills all processes"
+    echo "  ashell                 Runs the Accumulo shell"
+    echo "  env                    Prints out shell configuration for PATH and common environment variables."
+    echo "                         Add '--paths' or '--vars' command to limit what is printed."
+    echo "  version <dep>          Prints out configured version for dependency"
+    echo -e "  wipe                   Kills all processes and clears install directory\n"
+    echo "Possible components: accumulo, fluo, fluo-yarn, hadoop, zookeeper"
+    exit 1
 esac
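[Editorial note: the case statement above routes each `uno` subcommand to a script under `bin/impl/`. A minimal, self-contained sketch of that dispatch pattern, with a stand-in `uno` function and echoed output instead of real `exec` calls (the function body is illustrative, not code from this commit):]

```shell
#!/usr/bin/env bash
# Sketch of the new command dispatch: install/run/setup each forward their
# remaining arguments to a matching script under bin/impl/.
uno() {
  local cmd=$1
  case "$cmd" in
    install|run|setup) echo "would exec bin/impl/$cmd.sh ${*:2}" ;;
    *) echo "Usage: uno <command> (<argument>)" >&2; return 1 ;;
  esac
}

uno install accumulo   # prints: would exec bin/impl/install.sh accumulo
```

With this split, `uno setup <component>` is simply `install` followed by `run`.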
diff --git a/conf/uno.conf b/conf/uno.conf
index 3b6fd53..035a67a 100644
--- a/conf/uno.conf
+++ b/conf/uno.conf
@@ -6,7 +6,6 @@
 export HADOOP_VERSION=${HADOOP_VERSION:-3.1.1}
 export ZOOKEEPER_VERSION=${ZOOKEEPER_VERSION:-3.4.13}
 export ACCUMULO_VERSION=${ACCUMULO_VERSION:-2.0.0-alpha-1}
-export SPARK_VERSION=${SPARK_VERSION:-1.6.3}
 export FLUO_VERSION=${FLUO_VERSION:-1.2.0}
 export FLUO_YARN_VERSION=${FLUO_YARN_VERSION:-1.0.0}
 
@@ -14,14 +13,11 @@ export FLUO_YARN_VERSION=${FLUO_YARN_VERSION:-1.0.0}
 # --------------
 # Hashes below match default versions above. If you change a version above,
 # you must also change the hash below.
-export FLUO_HASH=037f89cd2bfdaf76a1368256c52de46d6b9a85c9c1bfc776ec4447d02c813fb2
-export FLUO_YARN_HASH=c6220d35cf23127272f3b5638c44586504dc17a46f5beecdfee5027b5ff874b0
 export HADOOP_HASH=$(grep -F hadoop:${HADOOP_VERSION}: $UNO_HOME/conf/checksums | cut -d : -f 3)
 export ZOOKEEPER_HASH=$(grep -F zookeeper:${ZOOKEEPER_VERSION}: $UNO_HOME/conf/checksums | cut -d : -f 3)
 export ACCUMULO_HASH=$(grep -F accumulo:${ACCUMULO_VERSION}: $UNO_HOME/conf/checksums | cut -d : -f 3)
-export SPARK_HASH=d13358a2d45e78d7c8cf22656d63e5715a5900fab33b3340df9e11ce3747e314
-export INFLUXDB_HASH=fe4269500ae4d3d936b1ccdd9106c5e82c56751bcf0625ed36131a51a20a1c0c
-export GRAFANA_HASH=d3eaa2c45ae9f8e7424a7b0b74fa8c8360bd25a1f49545d8fb5a874ebf0530fe
+export FLUO_HASH=037f89cd2bfdaf76a1368256c52de46d6b9a85c9c1bfc776ec4447d02c813fb2
+export FLUO_YARN_HASH=c6220d35cf23127272f3b5638c44586504dc17a46f5beecdfee5027b5ff874b0
 
 # Network configuration
 # ---------------------
@@ -36,7 +32,6 @@ export DOWNLOADS=$UNO_HOME/downloads
 export ACCUMULO_TARBALL=accumulo-$ACCUMULO_VERSION-bin.tar.gz
 export HADOOP_TARBALL=hadoop-"$HADOOP_VERSION".tar.gz
 export ZOOKEEPER_TARBALL=zookeeper-"$ZOOKEEPER_VERSION".tar.gz
-export SPARK_TARBALL=spark-$SPARK_VERSION-bin-without-hadoop.tgz
 export FLUO_TARBALL=fluo-$FLUO_VERSION-bin.tar.gz
 export FLUO_YARN_TARBALL=fluo-yarn-$FLUO_YARN_VERSION-bin.tar.gz
 
@@ -124,7 +119,6 @@ export DATA_DIR=$INSTALL/data
 export ZOOKEEPER_HOME=$INSTALL/zookeeper-$ZOOKEEPER_VERSION
 export HADOOP_HOME=$INSTALL/hadoop-$HADOOP_VERSION
 export ACCUMULO_HOME=$INSTALL/accumulo-$ACCUMULO_VERSION
-export SPARK_HOME=$INSTALL/spark-$SPARK_VERSION-bin-without-hadoop
 export FLUO_HOME=$INSTALL/fluo-$FLUO_VERSION
 export FLUO_YARN_HOME=$INSTALL/fluo-yarn-$FLUO_YARN_VERSION
 # Config directories
@@ -143,20 +137,31 @@ export ACCUMULO_INSTANCE=uno
 export ACCUMULO_USER=root
 # Accumulo password
 export ACCUMULO_PASSWORD=secret
-# Accumulo crypto option, 'true' to run with encryption, 'false' to run without
-export ACCUMULO_CRYPTO=false
 
-# Metrics configuration
-# ---------------------
-# Metrics can only be set up on Linux. Mac OS X is not supported.
+# Plugin configuration
+# --------------------
+# Post-install plugins. Example: "influx-metrics accumulo-encryption"
+export POST_INSTALL_PLUGINS=""
+# Post-run plugins. Example: "spark"
+export POST_RUN_PLUGINS=""
+# Configuration for 'spark' plugin
+export SPARK_VERSION=${SPARK_VERSION:-1.6.3}
+export SPARK_HOME=$INSTALL/spark-$SPARK_VERSION-bin-without-hadoop
+export SPARK_TARBALL=spark-${SPARK_VERSION}-bin-without-hadoop.tgz
+export SPARK_HASH=d13358a2d45e78d7c8cf22656d63e5715a5900fab33b3340df9e11ce3747e314
+# Configuration for 'influx-metrics' plugin
+# InfluxDB metrics can only be set up on Linux. Mac OS X is not supported.
 export INFLUXDB_VERSION=0.9.4.2
 export INFLUXDB_HOME=$INSTALL/influxdb-"$INFLUXDB_VERSION"
+export INFLUXDB_TARBALL=influxdb-"$INFLUXDB_VERSION".tar.gz
+export INFLUXDB_HASH=fe4269500ae4d3d936b1ccdd9106c5e82c56751bcf0625ed36131a51a20a1c0c
 export GRAFANA_VERSION=2.5.0
 export GRAFANA_HOME=$INSTALL/grafana-"$GRAFANA_VERSION"
+export GRAFANA_TARBALL=grafana-"$GRAFANA_VERSION".tar.gz
+export GRAFANA_HASH=d3eaa2c45ae9f8e7424a7b0b74fa8c8360bd25a1f49545d8fb5a874ebf0530fe
 
-#Performance Profiles
-#--------------------
-
+# Performance Profiles
+# --------------------
 PERFORMACE_PROFILE=8GX2
 
 case "$PERFORMACE_PROFILE" in
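[Editorial note: the plugin hooks above replace the removed `metrics` and `spark` setup commands. A sketch of how they might be enabled in `uno.conf`; the plugin names come from the examples in the diff, while the echo loop is an assumption about how install.sh consumes the list, not code from this commit:]

```shell
#!/usr/bin/env bash
# Enable plugins by listing their script names (minus .sh) from plugins/:
export POST_INSTALL_PLUGINS="influx-metrics accumulo-encryption"
export POST_RUN_PLUGINS="spark"

# Illustrative: each name resolves to a script under plugins/.
for plugin in $POST_INSTALL_PLUGINS; do
  echo "post-install hook: plugins/$plugin.sh"
done
```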
diff --git a/conf/spark/spark-env.sh b/plugins/accumulo-encryption.sh
similarity index 57%
copy from conf/spark/spark-env.sh
copy to plugins/accumulo-encryption.sh
index 016dd17..876aa2d 100755
--- a/conf/spark/spark-env.sh
+++ b/plugins/accumulo-encryption.sh
@@ -1,6 +1,5 @@
-#!/usr/bin/env bash
+#! /usr/bin/env bash
 
-#
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
 # this work for additional information regarding copyright ownership.
@@ -8,17 +7,23 @@
 # (the "License"); you may not use this file except in compliance with
 # the License.  You may obtain a copy of the License at
 #
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-#
 
-# This file is sourced when running various Spark programs.
-# Copy it as spark-env.sh and edit that to configure Spark for your site.
+source "$UNO_HOME"/bin/impl/util.sh
+
+if [[ $ACCUMULO_VERSION =~ ^1\..*$ ]]; then
+  echo "Encryption cannot be enabled for Accumulo 1.x"
+  exit 1
+fi
 
-SPARK_DIST_CLASSPATH=$("$HADOOP_HOME"/bin/hadoop classpath)
-export SPARK_DIST_CLASSPATH
+accumulo_conf=$ACCUMULO_HOME/conf/accumulo.properties
+encrypt_key=$ACCUMULO_HOME/conf/data-encryption.key
+openssl rand -out $encrypt_key 32
+echo "instance.crypto.opts.key.uri=file://$encrypt_key" >> "$accumulo_conf"
+echo "instance.crypto.service=org.apache.accumulo.core.security.crypto.impl.AESCryptoService" >> "$accumulo_conf"
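[Editorial note: the plugin above generates a 32-byte random key and points Accumulo's crypto service at it via a `file://` URI. An isolated, runnable sketch of that key-generation step, using `/dev/urandom` as a stand-in for `openssl rand` and a temp file instead of `$ACCUMULO_HOME/conf`:]

```shell
#!/usr/bin/env bash
# Generate a 32-byte key file and emit the property the plugin appends.
key=$(mktemp)
head -c 32 /dev/urandom > "$key"   # plugin uses: openssl rand -out "$key" 32
key_size=$(wc -c < "$key")
echo "instance.crypto.opts.key.uri=file://$key"
rm -f "$key"
```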
diff --git a/plugins/influx-metrics.sh b/plugins/influx-metrics.sh
new file mode 100755
index 0000000..23a887b
--- /dev/null
+++ b/plugins/influx-metrics.sh
@@ -0,0 +1,164 @@
+#! /usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+source "$UNO_HOME"/bin/impl/util.sh
+
+if [[ "$OSTYPE" == "darwin"* ]]; then
+  echo "The metrics services (InfluxDB and Grafana) are not supported on Mac OS X at this time."
+  exit 1
+fi
+
+pkill -f influxdb
+pkill -f grafana-server
+
+# stop if any command fails
+set -e
+
+BUILD=$DOWNLOADS/build
+
+if [[ ! -f "$BUILD/$INFLUXDB_TARBALL" ]]; then
+  IF_DIR=influxdb-$INFLUXDB_VERSION
+  IF_PATH=$BUILD/$IF_DIR
+  influx_tarball=influxdb_"$INFLUXDB_VERSION"_x86_64.tar.gz
+  download_tarball https://s3.amazonaws.com/influxdb "$influx_tarball" "$INFLUXDB_HASH"
+  tar xzf "$DOWNLOADS/$influx_tarball" -C "$BUILD"
+  mv "$BUILD/influxdb_${INFLUXDB_VERSION}_x86_64" "$IF_PATH"
+  mkdir "$IF_PATH"/bin
+  mv "$IF_PATH/opt/influxdb/versions/$INFLUXDB_VERSION"/* "$IF_PATH"/bin
+  rm -rf "$IF_PATH"/opt
+  cd "$BUILD"
+  tar czf influxdb-"$INFLUXDB_VERSION".tar.gz "$IF_DIR"
+  rm -rf "$IF_PATH"
+fi
+
+if [[ ! -f "$BUILD/$GRAFANA_TARBALL" ]]; then
+  GF_DIR=grafana-$GRAFANA_VERSION
+  GF_PATH=$BUILD/$GF_DIR
+  graf_tarball=grafana-"$GRAFANA_VERSION".linux-x64.tar.gz
+  download_tarball https://grafanarel.s3.amazonaws.com/builds "$graf_tarball" "$GRAFANA_HASH"
+  tar xzf "$DOWNLOADS/$graf_tarball" -C "$BUILD"
+  cd "$BUILD"
+  tar czf grafana-"$GRAFANA_VERSION".tar.gz "$GF_DIR"
+  rm -rf "$GF_PATH"
+fi
+
+rm -rf "$INSTALL"/influxdb-*
+rm -rf "$INSTALL"/grafana-*
+rm -f "$LOGS_DIR"/metrics/*
+rm -rf "$DATA_DIR"/influxdb
+mkdir -p "$LOGS_DIR"/metrics
+
+echo "Installing InfluxDB $INFLUXDB_VERSION to $INFLUXDB_HOME"
+
+tar xzf "$DOWNLOADS/build/$INFLUXDB_TARBALL" -C "$INSTALL"
+"$INFLUXDB_HOME"/bin/influxd config -config "$UNO_HOME"/plugins/influx-metrics/influxdb.conf > "$INFLUXDB_HOME"/influxdb.conf
+if [[ ! -f "$INFLUXDB_HOME"/influxdb.conf ]]; then
+  print_to_console "Failed to create $INFLUXDB_HOME/influxdb.conf"
+  exit 1
+fi
+$SED "s#DATA_DIR#$DATA_DIR#g" "$INFLUXDB_HOME"/influxdb.conf
+
+echo "Installing Grafana $GRAFANA_VERSION to $GRAFANA_HOME"
+
+tar xzf "$DOWNLOADS/build/$GRAFANA_TARBALL" -C "$INSTALL"
+cp "$UNO_HOME"/plugins/influx-metrics/custom.ini "$GRAFANA_HOME"/conf/
+$SED "s#GRAFANA_HOME#$GRAFANA_HOME#g" "$GRAFANA_HOME"/conf/custom.ini
+$SED "s#LOGS_DIR#$LOGS_DIR#g" "$GRAFANA_HOME"/conf/custom.ini
+mkdir "$GRAFANA_HOME"/dashboards
+
+if [[ -d "$ACCUMULO_HOME" ]]; then
+  echo "Configuring Accumulo metrics"
+  cp "$UNO_HOME"/plugins/influx-metrics/accumulo-dashboard.json "$GRAFANA_HOME"/dashboards/
+  conf=$ACCUMULO_HOME/conf
+  metrics_props=hadoop-metrics2-accumulo.properties
+  cp "$conf"/templates/"$metrics_props" "$conf"/
+  $SED "/accumulo.sink.graphite/d" "$conf"/"$metrics_props"
+  {
+    echo "accumulo.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink"
+    echo "accumulo.sink.graphite.server_host=localhost"
+    echo "accumulo.sink.graphite.server_port=2004"
+    echo "accumulo.sink.graphite.metrics_prefix=accumulo"
+  } >> "$conf"/"$metrics_props"
+fi
+
+if [[ -d "$FLUO_HOME" ]]; then
+  echo "Configuring Fluo metrics"
+  cp "$FLUO_HOME"/contrib/grafana/* "$GRAFANA_HOME"/dashboards/
+  if [[ $FLUO_VERSION =~ ^1\.[0-1].*$ ]]; then
+    FLUO_PROPS=$FLUO_HOME/conf/fluo.properties
+  else
+    FLUO_PROPS=$FLUO_HOME/conf/fluo-app.properties
+  fi
+  $SED "/fluo.metrics.reporter.graphite/d" "$FLUO_PROPS"
+  {
+    echo "fluo.metrics.reporter.graphite.enable=true"
+    echo "fluo.metrics.reporter.graphite.host=$UNO_HOST"
+    echo "fluo.metrics.reporter.graphite.port=2003"
+    echo "fluo.metrics.reporter.graphite.frequency=30"
+  } >> "$FLUO_PROPS"
+fi
+
+"$INFLUXDB_HOME"/bin/influxd -config "$INFLUXDB_HOME"/influxdb.conf &> "$LOGS_DIR"/metrics/influxdb.log &
+
+"$GRAFANA_HOME"/bin/grafana-server -homepath="$GRAFANA_HOME" 2> /dev/null &
+
+sleep 10
+
+if [[ -d "$FLUO_HOME" ]]; then
+  "$INFLUXDB_HOME"/bin/influx -import -path "$FLUO_HOME"/contrib/influxdb/fluo_metrics_setup.txt
+fi
+
+# allow commands to fail
+set +e
+
+sleep 5
+
+function add_datasource() {
+  retcode=1
+  while [[ $retcode != 0 ]];  do
+    curl 'http://admin:admin@localhost:3000/api/datasources' -X POST -H 'Content-Type: application/json;charset=UTF-8' \
+      --data-binary "$1"
+    retcode=$?
+    if [[ $retcode != 0 ]]; then
+      print_to_console "Failed to add Grafana data source. Retrying in 5 sec.."
+      sleep 5
+    fi
+  done
+  echo ""
+}
+
+if [[ -d "$ACCUMULO_HOME" ]]; then
+  accumulo_data='{"name":"accumulo_metrics","type":"influxdb","url":"http://'
+  accumulo_data+=$UNO_HOST
+  accumulo_data+=':8086","access":"direct","isDefault":true,"database":"accumulo_metrics","user":"accumulo","password":"secret"}'
+  add_datasource $accumulo_data
+fi
+
+if [[ -d "$FLUO_HOME" ]]; then
+  fluo_data='{"name":"fluo_metrics","type":"influxdb","url":"http://'
+  fluo_data+=$UNO_HOST
+  fluo_data+=':8086","access":"direct","isDefault":false,"database":"fluo_metrics","user":"fluo","password":"secret"}'
+  add_datasource $fluo_data
+fi
+
+stty sane
+
+print_to_console "InfluxDB $INFLUXDB_VERSION is running"
+print_to_console "Grafana $GRAFANA_VERSION is running"
+print_to_console "    * UI: http://$UNO_HOST:3000/"
+
+stty sane
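[Editorial note: `add_datasource` above retries a curl POST until it exits zero. The same retry-until-success pattern, isolated into a runnable sketch; `flaky` is a hypothetical stand-in for the curl call that fails twice and then succeeds (the plugin also sleeps 5s between attempts, omitted here):]

```shell
#!/usr/bin/env bash
# Retry a command until its exit code is 0, counting attempts.
n=0
flaky() { n=$((n+1)); [ "$n" -ge 3 ]; }   # fails twice, succeeds on 3rd call

attempts=0
retcode=1
while [ "$retcode" != 0 ]; do
  flaky
  retcode=$?
  attempts=$((attempts+1))
done
echo "succeeded after $attempts attempts"   # prints: succeeded after 3 attempts
```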
diff --git a/conf/grafana/accumulo-dashboard.json b/plugins/influx-metrics/accumulo-dashboard.json
similarity index 100%
rename from conf/grafana/accumulo-dashboard.json
rename to plugins/influx-metrics/accumulo-dashboard.json
diff --git a/conf/grafana/custom.ini b/plugins/influx-metrics/custom.ini
similarity index 100%
rename from conf/grafana/custom.ini
rename to plugins/influx-metrics/custom.ini
diff --git a/conf/influxdb/influxdb.conf b/plugins/influx-metrics/influxdb.conf
similarity index 100%
rename from conf/influxdb/influxdb.conf
rename to plugins/influx-metrics/influxdb.conf
diff --git a/bin/impl/setup-spark.sh b/plugins/spark.sh
similarity index 51%
rename from bin/impl/setup-spark.sh
rename to plugins/spark.sh
index efea054..941ed53 100755
--- a/bin/impl/setup-spark.sh
+++ b/plugins/spark.sh
@@ -1,12 +1,13 @@
 #! /usr/bin/env bash
 
-# Copyright 2014 Uno authors (see AUTHORS)
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
 #
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
+#     http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -16,6 +17,14 @@
 
 source "$UNO_HOME"/bin/impl/util.sh
 
+if [[ ! -f "$DOWNLOADS/$SPARK_TARBALL" ]]; then
+  apache_mirror=$(curl -sk https://apache.org/mirrors.cgi?as_json | grep preferred | cut -d \" -f 4)
+  if [ -z "$apache_mirror" ]; then
+    echo "Failed querying apache.org for best download mirror!"
+  fi
+  download_apache "spark/spark-$SPARK_VERSION" "$SPARK_TARBALL" "$SPARK_HASH"
+fi
+
 verify_exist_hash "$SPARK_TARBALL" "$SPARK_HASH"
 
 if [[ ! -d "$HADOOP_HOME" ]]; then
@@ -23,7 +32,7 @@ if [[ ! -d "$HADOOP_HOME" ]]; then
   exit 1
 fi
 
-print_to_console "Setting up Apache Spark at $SPARK_HOME"
+print_to_console "Installing Apache Spark at $SPARK_HOME"
 
 pkill -f org.apache.spark.deploy.history.HistoryServer
 
@@ -38,10 +47,12 @@ mkdir -p "$DATA_DIR"/spark/events
 
 tar xzf "$DOWNLOADS/$SPARK_TARBALL" -C "$INSTALL"
 
-cp "$UNO_HOME"/conf/spark/* "$SPARK_HOME"/conf
+cp "$UNO_HOME"/plugins/spark/* "$SPARK_HOME"/conf
 $SED "s#DATA_DIR#$DATA_DIR#g" "$SPARK_HOME"/conf/spark-defaults.conf
 $SED "s#LOGS_DIR#$LOGS_DIR#g" "$SPARK_HOME"/conf/spark-defaults.conf
 
 export SPARK_LOG_DIR=$LOGS_DIR/spark
 "$SPARK_HOME"/sbin/start-history-server.sh
 
+print_to_console "Apache Spark History Server is running"
+print_to_console "    * view at http://localhost:18080/"
diff --git a/conf/spark/spark-defaults.conf b/plugins/spark/spark-defaults.conf
similarity index 100%
rename from conf/spark/spark-defaults.conf
rename to plugins/spark/spark-defaults.conf
diff --git a/conf/spark/spark-env.sh b/plugins/spark/spark-env.sh
similarity index 100%
rename from conf/spark/spark-env.sh
rename to plugins/spark/spark-env.sh
