
I use Spark 2.1.0 (Scala 2.11), Kafka kafka_2.11-0.10.0.0, and spark-streaming-kafka-0-8_2.11 version 2.1.0.

spark-submit --version
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8, Java HotSpot(TM) 64-Bit Server VM, 1.7.0_80
Branch 
Compiled by user jenkins on 2016-12-16T02:04:48Z

I use maven to build the project. The following are the dependencies.

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.1.0</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.11</artifactId>
  <version>2.1.0</version>
  <scope>provided</scope>
</dependency> 
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
  <version>2.1.0</version>
 </dependency>

The application runs correctly when launched from Eclipse:

right click -> run as -> maven build -> clean install -> run

However, when I spark-submit the application as follows:

spark-submit \
  --jars=/opt/ibudata/binlogSparkStreaming/kafka_2.11-0.8.2.2.jar,/opt/ibudata/binlogSparkStreaming/kafka-clients-0.8.2.2.jar,/opt/ibudata/binlogSparkStreaming/metrics-core-2.2.0.jar,/opt/ibudata/binlogSparkStreaming/spark-streaming-kafka-0-8_2.11-2.1.0.jar,/opt/ibudata/binlogSparkStreaming/zkclient-0.3.jar \
  --class com.br.sparkStreaming.wordcount \
  --master spark://m20p183:7077 \
  --executor-memory 2g \
  --num-executors 3 \
  /opt/ibudata/binlogSparkStreaming/jars/wordcounttest8-0.0.1-SNAPSHOT.jar

...it fails with the following error:

> io.netty.handler.codec.EncoderException: java.lang.NoSuchMethodError:
> io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V  at
> io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:107)
>   at
> io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:658)
>   at
> io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:716)
>   at
> io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:651)
>   at
> io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:266)
>   at
> io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:658)
>   at
> io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:716)
>   at
> io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:706)
>   at
> io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:741)
>   at
> io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:895)
>   at
> io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:240)
>   at
> org.apache.spark.network.server.TransportRequestHandler.respond(TransportRequestHandler.java:194)
>   at
> org.apache.spark.network.server.TransportRequestHandler.processStreamRequest(TransportRequestHandler.java:150)
>   at
> org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
>   at
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
>   at
> org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
>   at
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>   at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>   at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>   at
> io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
>   at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>   at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>   at
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
>   at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>   at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>   at
> org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
>   at
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>   at
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>   at
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
>   at
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
>   at
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
>   at
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>   at
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)     at
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>   at
> io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>   at java.lang.Thread.run(Thread.java:745) Caused by:
> java.lang.NoSuchMethodError:
> io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V  at
> org.apache.spark.network.buffer.FileSegmentManagedBuffer.convertToNetty(FileSegmentManagedBuffer.java:133)
>   at
> org.apache.spark.network.protocol.MessageEncoder.encode(MessageEncoder.java:54)
>   at
> org.apache.spark.network.protocol.MessageEncoder.encode(MessageEncoder.java:33)
>   at
> io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
>   ... 36 more

Any suggestion would be appreciated.

After all kinds of attempts with no good result, I tried the example given by the Spark Streaming + Kafka Integration Guide (Kafka broker version 0.8.2.1 or higher), running it with: bin/run-example streaming.JavaDirectKafkaWordCount 172.18.30.22:9092 \test

but it reported the same error: java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>

So I suspect some jars on the classpath are causing this problem. My spark-env.sh:

export JAVA_HOME=//opt/jdk1.7
export SCALA_HOME=/opt/scala
export SPARK_HOME=/opt/spark
export HADOOP_HOME=/opt/hadoop2.7.3
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop  
#export SPARK_MASTER_IP=master1  
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=m20p180:2181,m20p181:2181,m20p182:2181 -Dspark.deploy.zookeeper.dir=/spark"  
export SPARK_WORKER_MEMORY=1g  
export SPARK_EXECUTOR_MEMORY=1g  
export SPARK_DRIVER_MEMORY=1g  
export SPARK_WORKDER_CORES=4
export HIVE_CONF_DIR=/opt/hadoop2.7.3/hive/conf
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar

As you can see, I added netty-all-4.1.12.Final.jar to the classpath, but it didn't work.
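Since an older netty jar elsewhere on the classpath can shadow the one you add, it can help to ask the JVM which jar a class was actually loaded from. Below is a minimal diagnostic sketch (the class name `WhichJar` is mine, not from the post); on the cluster you would pass `io.netty.channel.DefaultFileRegion` as the argument:

```java
import java.security.CodeSource;

public class WhichJar {
    // Returns the code-source location (typically the jar path) of the named
    // class, or a marker for bootstrap-loaded classes, which have no code source.
    public static String describe(String className) throws ClassNotFoundException {
        Class<?> c = Class.forName(className);
        CodeSource src = c.getProtectionDomain().getCodeSource();
        return src == null ? "bootstrap classloader" : src.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // On the cluster, run with e.g. "io.netty.channel.DefaultFileRegion".
        String target = args.length > 0 ? args[0] : "java.lang.String";
        System.out.println(target + " -> " + describe(target));
    }
}
```

Running this on each node (driver and every worker) would show whether netty-all-4.1.12.Final actually wins, or whether a jar pulled in from the Hadoop/Hive directories shadows it.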

I also started the example with: SPARK_PRINT_LAUNCH_COMMAND=1 ./bin/run-example streaming.JavaDirectKafkaWordCount 172.18.30.22:9092 \test

The output:

Spark Command: //opt/jdk1.7/bin/java -cp /opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/:/opt/hadoop2.7.3/share/hadoop/yarn/lib/:/opt/hadoop2.7.3/share/hadoop/common/:/opt/hadoop2.7.3/share/hadoop/common/lib/:/opt/hadoop2.7.3/share/hadoop/hdfs/:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/:/opt/hadoop2.7.3/share/hadoop/mapreduce/:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/:/opt/hadoop2.7.3/share/hadoop/tools/lib/:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/:/opt/hadoop2.7.3/share/hadoop/yarn/lib/:/opt/hadoop2.7.3/share/hadoop/common/:/opt/hadoop2.7.3/share/hadoop/common/lib/:/opt/hadoop2.7.3/share/hadoop/hdfs/:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/:/opt/hadoop2.7.3/share/hadoop/mapreduce/:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/:/opt/hadoop2.7.3/share/hadoop/tools/lib/:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/ha
doop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar:/opt/spark/conf/:/opt/spark/jars/*:/opt/hadoop2.7.3/etc/hadoop/ -Xmx1g -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --jars /opt/spark/examples/jars/spark-examples_2.11-2.1.0.jar,/opt/spark/examples/jars/scopt_2.11-3.3.0.jar --class org.apache.spark.examples.streaming.JavaDirectKafkaWordCount spark-internal 172.18.30.22:9092 test
========================================
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2017-07-04 18:37:01,123 INFO  [main] spark.SparkContext (Logging.scala:logInfo(54)) - Running Spark version 2.1.0
2017-07-04 18:37:01,129 WARN  [main] spark.SparkContext (Logging.scala:logWarning(66)) - Support for Java 7 is deprecated as of Spark 2.0.0
2017-07-04 18:37:02,304 WARN  [main] spark.SparkConf (Logging.scala:logWarning(66)) - 
SPARK_CLASSPATH was detected (set to ':/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/ha
doop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath

2017-07-04 18:37:02,308 WARN  [main] spark.SparkConf (Logging.scala:logWarning(66)) - Setting 'spark.executor.extraClassPath' to ':/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3
/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar' as a work-around.
2017-07-04 18:37:02,309 WARN  [main] spark.SparkConf (Logging.scala:logWarning(66)) - Setting 'spark.driver.extraClassPath' to ':/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn:/opt/hadoop2.7.3/share/hadoop/yarn/lib:/opt/hadoop2.7.3/share/hadoop/common:/opt/hadoop2.7.3/share/hadoop/common/lib:/opt/hadoop2.7.3/share/hadoop/hdfs:/opt/hadoop2.7.3/share/hadoop/hdfs/lib:/opt/hadoop2.7.3/share/hadoop/mapreduce:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib:/opt/hadoop2.7.3/share/hadoop/tools/lib::/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/share/hadoop/yarn/*:/opt/hadoop2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop2.7.3/share/hadoop/common/*:/opt/hadoop2.7.3/share/hadoop/common/lib/*:/opt/hadoop2.7.3/share/hadoop/hdfs/*:/opt/hadoop2.7.3/s
hare/hadoop/hdfs/lib/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/*:/opt/hadoop2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop2.7.3/share/hadoop/tools/lib/*:/opt/hadoop2.7.3/hive/lib/mysql-connector-java-5.1.40-bin.jar:/opt/ibudata/binlogSparkStreaming/netty-all-4.1.12.Final.jar' as a work-around.
2017-07-04 18:37:02,524 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - Changing view acls to: ibudata
2017-07-04 18:37:02,526 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - Changing modify acls to: ibudata
2017-07-04 18:37:02,528 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - Changing view acls groups to: 
2017-07-04 18:37:02,530 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - Changing modify acls groups to: 
2017-07-04 18:37:02,532 INFO  [main] spark.SecurityManager (Logging.scala:logInfo(54)) - SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(ibudata); groups with view permissions: Set(); users  with modify permissions: Set(ibudata); groups with modify permissions: Set()
2017-07-04 18:37:03,091 INFO  [main] util.Utils (Logging.scala:logInfo(54)) - Successfully started service 'sparkDriver' on port 35480.
2017-07-04 18:37:03,127 INFO  [main] spark.SparkEnv (Logging.scala:logInfo(54)) - Registering MapOutputTracker
2017-07-04 18:37:03,162 INFO  [main] spark.SparkEnv (Logging.scala:logInfo(54)) - Registering BlockManagerMaster
2017-07-04 18:37:03,166 INFO  [main] storage.BlockManagerMasterEndpoint (Logging.scala:logInfo(54)) - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2017-07-04 18:37:03,167 INFO  [main] storage.BlockManagerMasterEndpoint (Logging.scala:logInfo(54)) - BlockManagerMasterEndpoint up
2017-07-04 18:37:03,185 INFO  [main] storage.DiskBlockManager (Logging.scala:logInfo(54)) - Created local directory at /tmp/blockmgr-20f80f78-0e27-462d-b5a4-1e0067308861
2017-07-04 18:37:03,214 INFO  [main] memory.MemoryStore (Logging.scala:logInfo(54)) - MemoryStore started with capacity 408.9 MB
2017-07-04 18:37:03,330 INFO  [main] spark.SparkEnv (Logging.scala:logInfo(54)) - Registering OutputCommitCoordinator
2017-07-04 18:37:03,458 INFO  [main] util.log (Log.java:initialized(186)) - Logging initialized @4162ms
2017-07-04 18:37:03,623 INFO  [main] server.Server (Server.java:doStart(327)) - jetty-9.2.z-SNAPSHOT
2017-07-04 18:37:03,652 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@ef93f0{/jobs,null,AVAILABLE}
2017-07-04 18:37:03,653 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@70d9720a{/jobs/json,null,AVAILABLE}
2017-07-04 18:37:03,653 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@53ce2867{/jobs/job,null,AVAILABLE}
2017-07-04 18:37:03,654 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@3bead2d{/jobs/job/json,null,AVAILABLE}
2017-07-04 18:37:03,654 INFO  [main] handler.ContextHandler (ContextHandler.java:doStart(744)) - Started o.s.j.s.ServletContextHandler@5b5b6746{/stages,null,AVAILABLE}
.....................
client.TransportClientFactory (TransportClientFactory.java:createClient(250)) - Successfully created connection to /192.168.22.197:35480 after 64 ms (0 ms spent in bootstraps)
2017-07-04 18:37:07,261 INFO  [Executor task launch worker-0] util.Utils (Logging.scala:logInfo(54)) - Fetching spark://192.168.22.197:35480/jars/spark-examples_2.11-2.1.0.jar to /tmp/spark-5110e687-a732-4762-8d74-a7c13a035681/userFiles-69ba7cd2-0014-40d7-8ac8-6b73cd07ce41/fetchFileTemp1907532202323588721.tmp
2017-07-04 18:37:07,367 ERROR [shuffle-server-3-2] server.TransportRequestHandler (TransportRequestHandler.java:operationComplete(201)) - Error sending result StreamResponse{streamId=/jars/spark-examples_2.11-2.1.0.jar, byteCount=1950712, body=FileSegmentManagedBuffer{file=/opt/spark/examples/jars/spark-examples_2.11-2.1.0.jar, offset=0, length=1950712}} to /192.168.22.197:41069; closing connection
io.netty.handler.codec.EncoderException: java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V
    at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:107)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:658)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:716)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:651)
    at io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:266)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:658)
    at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:716)
    at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:706)
    at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:741)
    at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:895)
    at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:240)
    at org.apache.spark.network.server.TransportRequestHandler.respond(TransportRequestHandler.java:194)
    at org.apache.spark.network.server.TransportRequestHandler.processStreamRequest(TransportRequestHandler.java:150)
    at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:119)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V
    at org.apache.spark.network.buffer.FileSegmentManagedBuffer.convertToNetty(FileSegmentManagedBuffer.java:133)
    at org.apache.spark.network.protocol.MessageEncoder.encode(MessageEncoder.java:54)
    at org.apache.spark.network.protocol.MessageEncoder.encode(MessageEncoder.java:33)
    at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
    ... 36 more
2017-07-04 18:37:07,379 ERROR [shuffle-client-6-1] client.TransportResponseHandler (TransportResponseHandler.java:channelInactive(126)) - Still have 1 requests outstanding when connection from /192.168.22.197:35480 is closed
2017-07-04 18:37:08,054 INFO  [JobGenerator] scheduler.JobScheduler (Logging.scala:logInfo(54)) - Added jobs for time 1499164628000 ms
  • How did you start the Spark Standalone? What's the version? Can you paste the master's web UI at m20p183:8080? Commented Jun 30, 2017 at 13:54
  • This problem has not been solved. Not only my program but also the example provided by the Spark Streaming + Kafka Integration Guide reports the same error. I added the newest netty-all-4.1.12.Final to spark/jars in place of the other netty jars, but it made no difference; this really confused me a lot! Commented Jul 4, 2017 at 8:31
  • How did you run the example? Can you edit your question and add the output of env? Can you run the example using run-example with SPARK_PRINT_LAUNCH_COMMAND=1 ./bin/run-example and paste the output to the question? Thanks! Commented Jul 4, 2017 at 9:26
  • Can you make sure that you don't set SPARK_CLASSPATH? Please re-run run-example with SPARK_CLASSPATH= before the command, i.e. SPARK_CLASSPATH= run-example. I think SPARK_CLASSPATH is the culprit. Commented Jul 4, 2017 at 11:14
  • I tried the command with SPARK_CLASSPATH cleared: SPARK_CLASSPATH= ./bin/run-example streaming.JavaDirectKafkaWordCount 172.18.30.22:9092 \test. It reported another error: java.lang.AbstractMethodError Commented Jul 4, 2017 at 11:21

3 Answers


After all kinds of attempts, it was finally resolved by adding netty-4.0.42.Final to the Spark classpath. Remember that Spark runs as a cluster: change the jar not only on the master but also on the slaves; that is what blocked me for a long time. At last, many thanks to Jacek Laskowski, you're very kind.


4 Comments

Glad I could help. I'm still puzzled why you made so many changes to your environment that in the end required the extra netty-4.0.42.Final jar?!
I'm also puzzled; it was the operations engineer who did that.
Talk to the engineer, since Spark may have been broken by how it was installed. Unless there are valid reasons, I'd avoid operating such a Spark installation.
Thanks, man. For 3 days I was trying to fix this issue; this post helped!

You definitely don't want to spark-submit with the following jars:

--jars=/opt/ibudata/binlogSparkStreaming/kafka_2.11-0.8.2.2.jar,/opt/ibudata/binlogSparkStreaming/kafka-clients-0.8.2.2.jar,/opt/ibudata/binlogSparkStreaming/metrics-core-2.2.0.jar,/opt/ibudata/binlogSparkStreaming/spark-streaming-kafka-0-8_2.11-2.1.0.jar,/opt/ibudata/binlogSparkStreaming/zkclient-0.3.jar

You only want to include spark-streaming-kafka-0-8_2.11-2.1.0.jar, and even that could be too new compared to your deployment environment.

--jars=/opt/ibudata/binlogSparkStreaming/spark-streaming-kafka-0-8_2.11-2.1.0.jar

You should remove the other jars from --jars in spark-submit.

I'd start with a local deployment environment first and only when it runs spark-submit the Spark application to Hadoop YARN.

Try the following first and get it working:

spark-submit \
  --jars /opt/ibudata/binlogSparkStreaming/spark-streaming-kafka-0-8_2.11-2.1.0.jar \
  --class com.br.sparkStreaming.wordcount \
  /opt/ibudata/binlogSparkStreaming/jars/wordcounttest8-0.0.1-SNAPSHOT.jar

Note that --jars does not use = to specify its parameters (I didn't know the = form would be accepted).

My guess is that you spark-submit to an environment with a different Spark version, below 2.1.0, that is incompatible with what you bundled in your uber jar (I suspect you assembled an uber jar that you eventually spark-submit).

As you can see in the stack trace the error is due to:

java.lang.NoSuchMethodError: io.netty.channel.DefaultFileRegion.<init>(Ljava/io/File;JJ)V
  at org.apache.spark.network.buffer.FileSegmentManagedBuffer.convertToNetty(FileSegmentManagedBuffer.java:133)

That particular line 133 was changed quite recently in [SPARK-15178][CORE] "Remove LazyFileRegion instead use netty's DefaultFileRegion" and is only available in 2.1.0 and higher, which you happen to use.

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.1.0</version>
</dependency>
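The failure mode itself can be reproduced in miniature: a `NoSuchMethodError` at runtime means the class that was actually loaded lacks the exact constructor signature the caller was compiled against. A hedged sketch of that check (the class name `CtorCheck` is mine; it probes JDK classes only, since netty is not assumed to be on the classpath here):

```java
import java.io.File;

public class CtorCheck {
    // True if className has a public constructor with exactly these parameter types.
    public static boolean hasConstructor(String className, Class<?>... params) {
        try {
            Class.forName(className).getConstructor(params);
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Spark 2.1.0's transport layer expects DefaultFileRegion(File, long, long);
        // an older netty on the classpath lacks that constructor, hence the error.
        // Demonstrated here against JDK classes:
        System.out.println(hasConstructor("java.io.FileInputStream", File.class)); // this constructor exists
        System.out.println(hasConstructor("java.io.FileInputStream", int.class));  // this one does not
    }
}
```

The same probe, pointed at io.netty.channel.DefaultFileRegion with (File.class, long.class, long.class) on the cluster, would tell you whether the netty that wins on each node is new enough.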


0

Use this dependency; do not skip the version tag:

<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.0.42.Final</version>
</dependency>
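If transitive dependencies still pull in a different netty, Maven's `<dependencyManagement>` section can pin the version project-wide (a sketch; the coordinates are taken from the dependency above, adjust to your build):

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-all</artifactId>
      <version>4.0.42.Final</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```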

