In Java I was able to expose a DataFrame as a temp table and read the table content via beeline (just like a regular Hive table).
I haven't posted the entire program (on the assumption that you already know how to create DataFrames):
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;
import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2;

// sc is the JavaSparkContext; sc.sc() returns the underlying SparkContext
HiveContext sqlContext = new HiveContext(sc.sc());
DataFrame orgDf = sqlContext.createDataFrame(orgPairRdd.values(), OrgMaster.class);
orgPairRdd is a JavaPairRDD; orgPairRdd.values() contains the OrgMaster objects (one per row fetched from HBase).
OrgMaster is a serializable Java bean class.
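For reference, a minimal sketch of what such a bean could look like (the field names here are assumptions for illustration, not from my actual program). createDataFrame(RDD, Class) derives the table schema via JavaBean reflection, so the class needs a no-arg constructor and public getters/setters for every column:

```java
import java.io.Serializable;

// Hypothetical OrgMaster bean: each getter/setter pair becomes a column
// in the resulting DataFrame (orgId, orgName).
public class OrgMaster implements Serializable {
    private String orgId;
    private String orgName;

    // No-arg constructor required for bean reflection
    public OrgMaster() {}

    public String getOrgId() { return orgId; }
    public void setOrgId(String orgId) { this.orgId = orgId; }

    public String getOrgName() { return orgName; }
    public void setOrgName(String orgName) { this.orgName = orgName; }
}
```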
orgDf.registerTempTable("spark_org_master_table");
HiveThriftServer2.startWithContext(sqlContext);
I submitted the program with --master local (since no Hive Thrift Server was already listening on port 10000 on that machine):
hadoop_classpath=$(hadoop classpath)
HBASE_CLASSPATH=$(hbase classpath)

spark-1.5.2/bin/spark-submit --name tempSparkTable --class packageName.SparkCreateOrgMasterTableFile \
  --master local[4] --num-executors 4 --executor-cores 4 --executor-memory 8G \
  --conf "spark.executor.extraClassPath=${HBASE_CLASSPATH}:${hadoop_classpath}" \
  --conf "spark.driver.extraClassPath=${HBASE_CLASSPATH}:${hadoop_classpath}" \
  --jars /path/programName-SNAPSHOT-jar-with-dependencies.jar \
  /path/programName-SNAPSHOT.jar

Note that the HBase and Hadoop classpaths are joined into a single spark.executor.extraClassPath value; passing --conf twice with the same key would make the second value override the first.
In another terminal, start beeline and point it at the Thrift service started by this Spark program:
/opt/hive/hive-1.2/bin/beeline -u jdbc:hive2://<ipaddressofMachineWhereSparkPgmRunninglocally>:10000 -n anyUsername
show tables; will display the table that you registered from Spark. You can also describe it; in this example:
describe spark_org_master_table;
Then you can run regular queries in beeline against this table, for example:

select * from spark_org_master_table limit 10;

The table remains queryable until you kill the Spark program.