
I'm trying to read data into R from HDFS. One thing I'm struggling with when using sparklyr is deciphering the error messages, because I am not a Java programmer.

Consider this example:

DO THIS IN R

# create the abalone dataframe - abalone is a dataset used for
# machine learning examples

# load the PivotalR package, which contains the abalone data
if (!require(PivotalR)) {
  install.packages("PivotalR")
  library(PivotalR)
}

data(abalone)

# sample of the data
head(abalone)

# export the data to a CSV file
if (!require(readr)) {
  install.packages("readr")
  library(readr)
}
write_csv(abalone, 'abalone.csv')
DO THIS AT THE COMMAND LINE
hdfs dfs -put abalone.csv abalone.csv
#check to see that the file is in HDFS
hdfs dfs -ls

DO THIS IN R. This is set up to use your current version of Spark; you may have to change spark_home.

library(sparklyr)
library(SparkR)
sc <- spark_connect(master = 'yarn-client',
                    spark_home = '/usr/hdp/current/spark-client',
                    app_name = 'sparklyr',
                    config = list(
                      "sparklyr.shell.executor-memory" = "1G",
                      "sparklyr.shell.driver-memory"   = "4G",
                      "spark.driver.maxResultSize"     = "2G" # may need to transfer a lot of data into R
                    ))

Read in the abalone file that we just wrote to HDFS. You will have to change the path to match your own.

df <- spark_read_csv(sc, name = 'abalone',
                     path = 'hdfs://pnhadoop/user/stc004/abalone.csv',
                     delimiter = ",", header = TRUE)
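For reference, `spark_read_csv()` ultimately calls the JVM-side `DataFrameReader.csv` method, which was only added in Spark 2.0, so a connection to an older Spark could plausibly produce a method-lookup failure like the one below. A quick sketch (using sparklyr's own `spark_version()` on the `sc` connection from above) to check what the cluster is actually running:

```r
# check the Spark version behind the sparklyr connection;
# a version below 2.0.0 has no DataFrameReader.csv method,
# which would be consistent with an "invalid method csv" error
spark_version(sc)
```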

I'm getting the following error:

Error: java.lang.IllegalArgumentException: invalid method csv for object 63
        at sparklyr.Invoke$.invoke(invoke.scala:113)
        at sparklyr.StreamHandler$.handleMethodCall(stream.scala:89)
        at sparklyr.StreamHandler$.read(stream.scala:55)
        at sparklyr.BackendHandler.channelRead0(handler.scala:49)
        at sparklyr.BackendHandler.channelRead0(handler.scala:14)
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:745)

No idea what's going on. I've used spark_read_csv previously without error, and I don't know how to decipher the Java errors. Thoughts?

  • The first thing I'd check is access rights to the file: is read access granted? Commented Jun 1, 2017 at 14:06

1 Answer


Spark 2.1.0

sparkR.session(sparkConfig = list(), enableHiveSupport = FALSE)
df1 <- read.df(path = "hdfs://<yourpath>/*", source = "csv",
               na.strings = "NA", delimiter = "\u0001")
head(df1)
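For completeness, here is a sparklyr-side sketch of the same read, assuming `spark_home` points at a Spark 2.x install (on 2.x the JVM-side csv reader method exists, so the call in the question should succeed; the path is the one from the question and would need to match your own cluster):

```r
library(sparklyr)

# connect against a Spark 2.x installation (adjust spark_home as needed)
sc <- spark_connect(master = 'yarn-client',
                    spark_home = '/usr/hdp/current/spark-client')

# same read as in the question; succeeds once the connected Spark
# version provides DataFrameReader.csv (Spark 2.0+)
df <- spark_read_csv(sc, name = 'abalone',
                     path = 'hdfs://pnhadoop/user/stc004/abalone.csv',
                     delimiter = ",", header = TRUE)
head(df)
```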