
When executing code that reads a Spark DataFrame from HDFS and then converts it to a pandas DataFrame,

spark_df = spark.read.parquet(*data_paths)
# other code in the process like filtering, groupby, etc.
# ...
# write the Spark DataFrame to Hadoop, taking n rows if specified
if n:
    spark_df.limit(n).write.csv(tmpfoldername, sep=csv_sep, quote=csv_quote)
else:
    spark_df.write.csv(tmpfoldername, sep=csv_sep, quote=csv_quote)

I get an error:

/home/sarah/anaconda3/envs/py27/lib/python2.7/site-packages/dspipeline/core/wf_spark.pyc in to_pd(spark_df, n, save_csv, csv_sep, csv_quote, quick)
    215         # write sparkdf to hadoop, get n rows if specified
    216         if n:
--> 217             spark_df.limit(n).write.csv(tmpfoldername, sep=csv_sep, quote=csv_quote)
    218         else:
    219             spark_df.write.csv(tmpfoldername, sep=csv_sep, quote=csv_quote)

/opt/spark-2.3.0-SNAPSHOT-bin-spark-master/python/pyspark/sql/dataframe.py in limit(self, num)
    472         []
    473         """
--> 474         jdf = self._jdf.limit(num)
    475         return DataFrame(jdf, self.sql_ctx)
    476 

/opt/spark-2.3.0-SNAPSHOT-bin-spark-master/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1131         answer = self.gateway_client.send_command(command)
   1132         return_value = get_return_value(
-> 1133             answer, self.gateway_client, self.target_id, self.name)
   1134 
   1135         for temp_arg in temp_args:

/opt/spark-2.3.0-SNAPSHOT-bin-spark-master/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/opt/spark-2.3.0-SNAPSHOT-bin-spark-master/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    321                 raise Py4JError(
    322                     "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
--> 323                     format(target_id, ".", name, value))
    324         else:
    325             raise Py4JError(

Py4JError: An error occurred while calling o1086.limit. Trace:
py4j.Py4JException: Method limit([class java.lang.String]) does not exist
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:318)
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:326)
    at py4j.Gateway.invoke(Gateway.java:272)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)

Since I found the function limit(num) in the PySpark documentation, I guess the reason is that I'm not using it correctly. Any help?


3 Answers


The exception is pretty clear here:

Method limit([class java.lang.String]) does not exist

The n you are trying to pass to limit is not an int but a str.

You should go back to the point where n is defined and make sure it is an integer.
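For example, a minimal sketch of such a fix, assuming n arrived as a string (e.g. parsed from a command-line argument or a config file); the helper name coerce_limit is hypothetical:

```python
def coerce_limit(n):
    # Hypothetical helper: ensure the row cap handed to DataFrame.limit()
    # is an int (or None), since the JVM side only exposes limit(int).
    return None if n is None else int(n)

# If n came from argparse or a config file, it may well be a str:
n = coerce_limit("10")  # 10 as an int, safe to pass to spark_df.limit(n)
```

With n coerced to an int, the Py4J call resolves to the existing JVM method limit(int) instead of the nonexistent limit(String).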




DataFrame does have a .limit method: if you want the first n rows of a DataFrame, you can call .limit(n), but the parameter n must be an integer. Example:

df.limit(10)

If you pass anything else, like df.limit('10'), an error will occur: py4j.Py4JException: Method limit([class java.lang.String]) does not exist.



If you pass a number greater than 2,147,483,647 you will get the same error because limit expects an IntegerType.

IntegerType: Represents 4-byte signed integer numbers. The range of numbers is from -2147483648 to 2147483647.
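A sketch of a guard for this case, assuming you want to fail fast with a clearer message before calling limit; the function name and error text are illustrative, but the bound comes straight from the IntegerType range quoted above:

```python
INT32_MAX = 2**31 - 1  # 2147483647, the largest value limit() can accept

def validate_limit(n):
    # Illustrative guard: reject values outside the 4-byte signed
    # integer range before they ever reach DataFrame.limit().
    n = int(n)
    if not (0 <= n <= INT32_MAX):
        raise ValueError("limit must fit in a 32-bit signed int, got %d" % n)
    return n
```

Calling validate_limit(2**31) raises immediately on the Python side, instead of surfacing as an opaque Py4JException from the JVM.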

