I am trying to follow the examples on the Apache Spark documentation site: https://spark.apache.org/docs/2.0.0-preview/submitting-applications.html
I started a Spark standalone cluster and want to run the example Python application. I am in my spark-2.0.0-bin-hadoop2.7 directory and ran the following command:
./bin/spark-submit \
--master spark://207.184.161.138:7077 \
examples/src/main/python/pi.py \
1000
However, I get the following error:
jupyter: '/Users/MyName/spark-2.0.0-bin-hadoop2.7/examples/src/main/python/pi.py' is not a Jupyter command
This is what my .bash_profile looks like:
#setting path for Spark
export SPARK_PATH=~/spark-2.0.0-bin-hadoop2.7
export PYSPARK_DRIVER_PYTHON="jupyter"
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
alias snotebook='$SPARK_PATH/bin/pyspark --master local[2]'
What am I doing wrong?
Because your .bash_profile exports PYSPARK_DRIVER_PYTHON="jupyter", spark-submit tries to launch the Python driver through Jupyter, which is why the script path is rejected as "not a Jupyter command". Unset PYSPARK_DRIVER_PYTHON and PYSPARK_DRIVER_PYTHON_OPTS before submitting.
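A minimal way to do that in the current shell, reusing the master URL and example script from the question:

# Clear the Jupyter driver settings for this shell session
unset PYSPARK_DRIVER_PYTHON
unset PYSPARK_DRIVER_PYTHON_OPTS

# Then re-run the submit command from the question
./bin/spark-submit \
--master spark://207.184.161.138:7077 \
examples/src/main/python/pi.py \
1000

If you want to keep the Jupyter notebook workflow without it leaking into spark-submit, one option is to scope the variables to the alias instead of exporting them globally:

# The env vars apply only to this pyspark invocation, not the whole shell
alias snotebook='PYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS=notebook $SPARK_PATH/bin/pyspark --master local[2]'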