I tried out Spark on YARN recently. It is very straightforward. If you are planning to use MLlib, make sure you have gcc-gfortran installed on every cluster node.
1) First, you have to download and install spark-1.0.0-bin-hadoop2
2) Set the YARN_CONF_DIR environment variable to point to the directory containing your Hadoop configuration files.
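For example, if your Hadoop configuration lives under /etc/hadoop/conf (an assumed location; adjust for your installation), the setting looks like this:

```shell
# Point Spark at the Hadoop/YARN configuration directory.
# /etc/hadoop/conf is an assumed path; use wherever your
# core-site.xml and yarn-site.xml actually live.
export YARN_CONF_DIR=/etc/hadoop/conf
```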
You can now start the Spark shell as follows:
./bin/spark-shell --master yarn-client --driver-java-options "-Dspark.executor.memory=2g"
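Once the shell comes up, a quick sanity check is to run a small distributed job; this is just a sketch typed at the spark-shell prompt (it assumes the YARN executors have launched successfully):

```scala
// Inside spark-shell: distribute a small range across the
// executors and sum it, confirming the cluster is doing work.
val rdd = sc.parallelize(1 to 1000)
println(rdd.sum())  // should print 500500.0
```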