This repository has been archived by the owner on Dec 20, 2022. It is now read-only.

Error in accept call on a passive RdmaChannel #31

Open · rmunoz527 opened this issue May 2, 2019 · 23 comments · May be fixed by #32

@rmunoz527

Hi,

I am currently evaluating this library and have not done any specific configuration of the InfiniBand network that the Spark nodes interconnect on. Can you point me in the right direction on what might be the cause of this issue? See the config and stack trace below.

spark2-submit -v --num-executors 10 --executor-cores 5 --executor-memory 4G --conf spark.driver.extraClassPath=/opt/mellanox/spark-rdma-3.1.jar --conf spark.executor.extraClassPath=/opt/mellanox/spark-rdma-3.1.jar --conf spark.shuffle.manager=org.apache.spark.shuffle.rdma.RdmaShuffleManager --class com.github.ehiggs.spark.terasort.TeraSort /tmp/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar /tmp/data/terasort_in /tmp/data/terasort_out

Parsed arguments:
master yarn
deployMode client
executorMemory 4G
executorCores 5
totalExecutorCores null
propertiesFile /opt/cloudera/parcels/SPARK2-2.2.0.cloudera1-1.cdh5.12.0.p0.142354/lib/spark2/conf/spark-defaults.conf
driverMemory null
driverCores null
driverExtraClassPath /opt/mellanox/spark-rdma-3.1.jar
driverExtraLibraryPath /opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/lib/native
driverExtraJavaOptions null
supervise false
queue null
numExecutors 10
files null
pyFiles null
archives null
mainClass com.github.ehiggs.spark.terasort.TeraSort
primaryResource file:/tmp/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar
name com.github.ehiggs.spark.terasort.TeraSort
childArgs [/tmp/data/terasort_in /tmp/data/terasort_out]
jars null
packages null
packagesExclusions null
repositories null

(spark.shuffle.manager,org.apache.spark.shuffle.rdma.RdmaShuffleManager)
(spark.executor.extraLibraryPath,/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/lib/native)
(spark.authenticate,false)
(spark.yarn.jars,local:/opt/cloudera/parcels/SPARK2-2.2.0.cloudera1-1.cdh5.12.0.p0.142354/lib/spark2/jars/)
(spark.driver.extraLibraryPath,/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/lib/native)
(spark.yarn.historyServer.address,
(spark.yarn.am.extraLibraryPath,/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/lib/native)
(spark.eventLog.enabled,true)
(spark.dynamicAllocation.schedulerBacklogTimeout,1)
(spark.yarn.config.gatewayPath,/opt/cloudera/parcels)
(spark.ui.killEnabled,true)
(spark.dynamicAllocation.maxExecutors,148)
(spark.serializer,org.apache.spark.serializer.KryoSerializer)
(spark.shuffle.service.enabled,true)
(spark.hadoop.yarn.application.classpath,)
(spark.dynamicAllocation.minExecutors,0)
(spark.dynamicAllocation.executorIdleTimeout,60)
(spark.yarn.config.replacementPath,{{HADOOP_COMMON_HOME}}/../../..)
(spark.sql.hive.metastore.version,1.1.0)
(spark.submit.deployMode,client)
(spark.shuffle.service.port,7337)
(spark.executor.extraClassPath,/opt/mellanox/spark-rdma-3.1.jar)
(spark.hadoop.mapreduce.application.classpath,)
(spark.eventLog.dir,
(spark.master,yarn)
(spark.dynamicAllocation.enabled,true)
(spark.sql.catalogImplementation,hive)
(spark.sql.hive.metastore.jars,${env:HADOOP_COMMON_HOME}/../hive/lib/
:${env:HADOOP_COMMON_HOME}/client/*)
(spark.driver.extraClassPath,/opt/mellanox/spark-rdma-3.1.jar)

19/05/02 13:54:17 WARN spark.SparkContext: Using an existing SparkContext; some configuration may not take effect.
[Stage 0:> (0 + 17) / 45]19/05/02 13:54:18 ERROR rdma.RdmaNode: Error in accept call on a passive RdmaChannel: java.io.IOException: createCQ() failed
java.lang.NullPointerException
at org.apache.spark.shuffle.rdma.RdmaChannel.processRdmaCmEvent(RdmaChannel.java:345)
at org.apache.spark.shuffle.rdma.RdmaChannel.stop(RdmaChannel.java:894)
at org.apache.spark.shuffle.rdma.RdmaNode.lambda$new$0(RdmaNode.java:176)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "RdmaNode connection listening thread" java.lang.RuntimeException: Exception in RdmaNode listening thread java.lang.NullPointerException
at org.apache.spark.shuffle.rdma.RdmaNode.lambda$new$0(RdmaNode.java:210)
at java.lang.Thread.run(Thread.java:748)
19/05/02 13:54:20 WARN scheduler.TaskSetManager: Lost task 9.0 in stage 0.0 (TID 3, , executor 3): java.lang.ArithmeticException: / by zero
at org.apache.spark.shuffle.rdma.RdmaNode.getNextCpuVector(RdmaNode.java:278)
at org.apache.spark.shuffle.rdma.RdmaNode.getRdmaChannel(RdmaNode.java:301)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.org$apache$spark$shuffle$rdma$RdmaShuffleManager$$getRdmaChannel(RdmaShuffleManager.scala:314)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.getRdmaChannelToDriver(RdmaShuffleManager.scala:322)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.publishMapTaskOutput(RdmaShuffleManager.scala:410)
at org.apache.spark.shuffle.rdma.writer.wrapper.RdmaWrapperShuffleWriter.stop(RdmaWrapperShuffleWriter.scala:118)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:97)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

@rmunoz527 (Author)

More notes

-- IbvContext supports only -1333102304 CPU cores? Why negative?

19/05/02 15:53:42 WARN rdma.RdmaNode: IbvContext supports only -1333102304 CPU cores, while there are 88 CPU cores in the system. This may lead to under-utilization of the system's CPU cores. This limitation may be adjustable in the RDMA device configuration.
19/05/02 15:53:53 WARN spark.SparkContext: Using an existing SparkContext; some configuration may not take effect.
[Stage 0:> (0 + 41) / 45]java.lang.ArithmeticException: / by zero
at org.apache.spark.shuffle.rdma.RdmaNode.getNextCpuVector(RdmaNode.java:278)
at org.apache.spark.shuffle.rdma.RdmaNode.lambda$new$0(RdmaNode.java:158)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "RdmaNode connection listening thread" java.lang.RuntimeException: Exception in RdmaNode listening thread java.lang.ArithmeticException: / by zero
at org.apache.spark.shuffle.rdma.RdmaNode.lambda$new$0(RdmaNode.java:210)
at java.lang.Thread.run(Thread.java:748)
19/05/02 15:53:56 WARN scheduler.TaskSetManager: Lost task 7.0 in stage 0.0 (TID 6, bda65node05.core.pimcocloud.net, executor 3): java.lang.ArithmeticException: / by zero
at org.apache.spark.shuffle.rdma.RdmaNode.getNextCpuVector(RdmaNode.java:278)
at org.apache.spark.shuffle.rdma.RdmaNode.getRdmaChannel(RdmaNode.java:301)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.org$apache$spark$shuffle$rdma$RdmaShuffleManager$$getRdmaChannel(RdmaShuffleManager.scala:314)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.getRdmaChannelToDriver(RdmaShuffleManager.scala:322)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.publishMapTaskOutput(RdmaShuffleManager.scala:410)
at org.apache.spark.shuffle.rdma.writer.wrapper.RdmaWrapperShuffleWriter.stop(RdmaWrapperShuffleWriter.scala:118)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:97)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
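
A minimal sketch of the likely failure path, assuming getNextCpuVector round-robins over a list sized from the device-reported completion-vector count (the class and field names below are hypothetical; only the method name and the exception come from the stack trace above):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical reconstruction, not the actual RdmaNode source.
class CpuVectorSketch {
    private final List<Integer> cpuVectors = new ArrayList<>();
    private final AtomicInteger nextVector = new AtomicInteger();

    CpuVectorSketch(int numCompVectors) {
        // With numCompVectors == -1333102304 this loop never runs,
        // so the list stays empty.
        for (int i = 0; i < numCompVectors; i++) {
            cpuVectors.add(i);
        }
    }

    int getNextCpuVector() {
        // An empty list makes size() == 0, reproducing
        // java.lang.ArithmeticException: / by zero
        return cpuVectors.get(nextVector.getAndIncrement() % cpuVectors.size());
    }

    public static void main(String[] args) {
        new CpuVectorSketch(-1333102304).getNextCpuVector();
    }
}

That reading would also explain the warning text above: the raw device-reported value is printed without any sign check.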

@petro-rudenko (Member)

Hi, it seems like you're using the Spark external shuffle service for dynamic allocation (spark.shuffle.service.enabled). SparkRDMA currently does not support it.
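
For reference, turning both features off amounts to the following stock Spark settings (the keys also appear in the config dump above; values shown are illustrative):

--conf spark.shuffle.service.enabled=false \
--conf spark.dynamicAllocation.enabled=false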

@rmunoz527 (Author)

Hi Petro, thanks for reviewing my config. I tried disabling spark.shuffle.service as well as dynamicAllocation in the config, and it still throws the same error:


Spark properties used, including those specified through --conf and those from the properties file /opt/cloudera/parcels/SPARK2-2.2.0.cloudera1-1.cdh5.12.0.p0.142354/lib/spark2/conf/spark-defaults.conf:
(spark.shuffle.manager,org.apache.spark.shuffle.rdma.RdmaShuffleManager)
(spark.executor.extraLibraryPath,/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/lib/native)
(spark.authenticate,false)
(spark.yarn.jars,local:/opt/cloudera/parcels/SPARK2-2.2.0.cloudera1-1.cdh5.12.0.p0.142354/lib/spark2/jars/)
(spark.driver.extraLibraryPath,/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/lib/native)
(spark.yarn.historyServer.address,)
(spark.yarn.am.extraLibraryPath,/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/lib/native)
(spark.shuffle.rdma.driverPort,3037)
(spark.eventLog.enabled,true)
(spark.dynamicAllocation.schedulerBacklogTimeout,1)
(spark.yarn.config.gatewayPath,/opt/cloudera/parcels)
(spark.ui.killEnabled,true)
(spark.dynamicAllocation.maxExecutors,148)
(spark.serializer,org.apache.spark.serializer.KryoSerializer)
(spark.shuffle.service.enabled,false)
(spark.hadoop.yarn.application.classpath,)
(spark.dynamicAllocation.minExecutors,0)
(spark.dynamicAllocation.executorIdleTimeout,60)
(spark.yarn.config.replacementPath,{{HADOOP_COMMON_HOME}}/../../..)
(spark.shuffle.rdma.executorPort,4037)
(spark.sql.hive.metastore.version,1.1.0)
(spark.submit.deployMode,client)
(spark.shuffle.service.port,7337)
(spark.hadoop.mapreduce.application.classpath,)
(spark.executor.extraClassPath,/opt/mellanox/spark-rdma-3.1.jar)
(spark.eventLog.dir,
(spark.master,yarn)
(spark.dynamicAllocation.enabled,false)
(spark.sql.catalogImplementation,hive)
(spark.sql.hive.metastore.jars,${env:HADOOP_COMMON_HOME}/../hive/lib/
:${env:HADOOP_COMMON_HOME}/client/*)
(spark.driver.extraClassPath,/opt/mellanox/spark-rdma-3.1.jar)

Main class:
com.github.ehiggs.spark.terasort.TeraSort
Arguments:
/tmp/data/terasort_in
/tmp/data/terasort_out
System properties:
(spark.shuffle.manager,org.apache.spark.shuffle.rdma.RdmaShuffleManager)
(spark.executor.extraLibraryPath,/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/lib/native)
(spark.executor.memory,4G)
(spark.authenticate,false)
(spark.executor.instances,10)
(spark.driver.extraLibraryPath,/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/lib/native)
(spark.yarn.jars,local:/opt/cloudera/parcels/SPARK2-2.2.0.cloudera1-1.cdh5.12.0.p0.142354/lib/spark2/jars/)
(spark.yarn.historyServer.address,
(spark.yarn.am.extraLibraryPath,/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/lib/native)
(spark.shuffle.rdma.driverPort,3037)
(spark.eventLog.enabled,true)
(spark.dynamicAllocation.schedulerBacklogTimeout,1)
(SPARK_SUBMIT,true)
(spark.yarn.config.gatewayPath,/opt/cloudera/parcels)
(spark.ui.killEnabled,true)
(spark.dynamicAllocation.maxExecutors,148)
(spark.serializer,org.apache.spark.serializer.KryoSerializer)
(spark.app.name,com.github.ehiggs.spark.terasort.TeraSort)
(spark.dynamicAllocation.executorIdleTimeout,60)
(spark.dynamicAllocation.minExecutors,0)
(spark.shuffle.service.enabled,false)
(spark.hadoop.yarn.application.classpath,)
(spark.yarn.config.replacementPath,{{HADOOP_COMMON_HOME}}/../../..)
(spark.shuffle.rdma.executorPort,4037)
(spark.sql.hive.metastore.version,1.1.0)
(spark.jars,file:/tmp/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar)
(spark.submit.deployMode,client)
(spark.shuffle.service.port,7337)
(spark.executor.extraClassPath,/opt/mellanox/spark-rdma-3.1.jar)
(spark.hadoop.mapreduce.application.classpath,)
(spark.eventLog.dir,
(spark.master,yarn)
(spark.dynamicAllocation.enabled,false)
(spark.sql.catalogImplementation,hive)
(spark.executor.cores,5)
(spark.sql.hive.metastore.jars,${env:HADOOP_COMMON_HOME}/../hive/lib/
:${env:HADOOP_COMMON_HOME}/client/*)
(spark.driver.extraClassPath,/opt/mellanox/spark-rdma-3.1.jar)
Classpath elements:
file:/tmp/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar

19/05/03 17:43:00 WARN rdma.RdmaNode: IbvContext supports only -704911072 CPU cores, while there are 88 CPU cores in the system. This may lead to under-utilization of the system's CPU cores. This limitation may be adjustable in the RDMA device configuration.
19/05/03 17:43:14 WARN spark.SparkContext: Using an existing SparkContext; some configuration may not take effect.
[Stage 0:> (0 + 17) / 45]java.lang.ArithmeticException: / by zero
at org.apache.spark.shuffle.rdma.RdmaNode.getNextCpuVector(RdmaNode.java:278)
at org.apache.spark.shuffle.rdma.RdmaNode.lambda$new$0(RdmaNode.java:158)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "RdmaNode connection listening thread" java.lang.RuntimeException: Exception in RdmaNode listening thread java.lang.ArithmeticException: / by zero
at org.apache.spark.shuffle.rdma.RdmaNode.lambda$new$0(RdmaNode.java:210)
at java.lang.Thread.run(Thread.java:748)
19/05/03 17:43:17 WARN scheduler.TaskSetManager: Lost task 39.0 in stage 0.0 (TID 10, , executor 7): java.lang.ArithmeticException: / by zero
at org.apache.spark.shuffle.rdma.RdmaNode.getNextCpuVector(RdmaNode.java:278)
at org.apache.spark.shuffle.rdma.RdmaNode.getRdmaChannel(RdmaNode.java:301)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.org$apache$spark$shuffle$rdma$RdmaShuffleManager$$getRdmaChannel(RdmaShuffleManager.scala:314)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.getRdmaChannelToDriver(RdmaShuffleManager.scala:322)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.publishMapTaskOutput(RdmaShuffleManager.scala:410)
at org.apache.spark.shuffle.rdma.writer.wrapper.RdmaWrapperShuffleWriter.stop(RdmaWrapperShuffleWriter.scala:118)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:97)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

@petro-rudenko (Member)

OK, it seems like there's an overflow when requesting completion vectors from disni. Strange. Can you please try to run with these prebuilt disni libraries, which should log something like j2c::getContextNumCompVectors: obj_id 25435344654, num_comp_vectors: 1234
disni_log.tar.gz

@petro-rudenko (Member)

Do you use Mellanox OFED?

@rmunoz527 (Author)

rmunoz527 commented May 6, 2019 via email

@rmunoz527 (Author)

Hi Petro,

I am not sure if OFED is installed. Here is what I can share:

sudo service rdma status
Low level hardware support loaded:
mlx4_ib mlx4_core

Upper layer protocol modules:
mlx4_vnic rds_rdma rds ib_ipoib

User space access modules:
rdma_ucm ib_ucm ib_uverbs ib_umad

Connection management modules:
rdma_cm ib_cm iw_cm

Configured IPoIB interfaces: ib0 ib1

Currently active IPoIB interfaces: ib0 ib1 bondib0

@petro-rudenko (Member)

Can you please run with the debug disni library and send the Spark logs?

@rmunoz527 (Author)

Hi Petro,

This is the output after using the prebuilt disni libraries:

j2c::createEventChannel: obj_id 140621149451072
j2c::createId: ret 0, obj_id 0x7fe4e9a92fe0
j2c::bind: ret 0, cm_listen_id 0x7fe4e9a92fe0
j2c::getContext: obj_id 140621149466816
j2c::getContextNumCompVectors: obj_id 140621149466816, num_comp_vectors -2042881760
19/05/07 10:31:11 WARN rdma.RdmaNode: IbvContext supports only -2042881760 CPU cores, while there are 88 CPU cores in the system. This may lead to under-utilization of the system's CPU cores. This limitation may be adjustable in the RDMA device configuration.
j2c::listen: ret 0
j2c::allocPd: obj_id 140621149494720
19/05/07 10:31:23 WARN spark.SparkContext: Using an existing SparkContext; some configuration may not take effect.
j2c::regMr: obj_id 0x7fe11000f3b0, mr 0x7fe11000f3b0
[Stage 0:> (0 + 16) / 45]java.lang.ArithmeticException: / by zero
at org.apache.spark.shuffle.rdma.RdmaNode.getNextCpuVector(RdmaNode.java:278)
at org.apache.spark.shuffle.rdma.RdmaNode.lambda$new$0(RdmaNode.java:158)
at java.lang.Thread.run(Thread.java:748)

@petro-rudenko (Member), May 7, 2019

Thanks, it seems like we need to check for the negative value. Can you please try a run with the following Spark config:
spark.shuffle.rdma.cpuList 0-87
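
Applied to the submit command above, that would be --conf spark.shuffle.rdma.cpuList=0-87 (the key appears verbatim in the config dump below). The negative-value check being described could look like the following minimal sketch; the class and method names are hypothetical, and this is not the code of the eventual fix:

// Hypothetical guard on the device-reported completion-vector count;
// not the actual SparkRDMA patch.
final class CompVectorGuard {
    static int sanitize(int reportedCompVectors, int systemCpuCores) {
        // A non-positive count would leave the CPU-vector list empty and
        // make the round-robin modulo in getNextCpuVector divide by zero.
        if (reportedCompVectors <= 0) {
            return 1; // fall back to a single completion vector
        }
        return Math.min(reportedCompVectors, systemCpuCores);
    }

    public static void main(String[] args) {
        System.out.println(sanitize(-2042881760, 88)); // prints 1
        System.out.println(sanitize(4, 88));           // prints 4
    }
}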

@rmunoz527 (Author)

Hi Petro,

I have tested using that config and receive the same error. I have attached the YARN logs for your consideration.
--- Spark config
com.github.ehiggs.spark.terasort.TeraSort
Arguments:
/tmp/data/terasort_in
/tmp/data/terasort_out
System properties:
(spark.shuffle.rdma.cpuList,0-87)
(spark.shuffle.manager,org.apache.spark.shuffle.rdma.RdmaShuffleManager)
(spark.executor.extraLibraryPath,/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/lib/native)
(spark.executor.memory,4G)
(spark.executor.instances,10)
(spark.authenticate,false)
(spark.driver.extraLibraryPath,/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/lib/native)
(spark.yarn.jars,local:/opt/cloudera/parcels/SPARK2-2.2.0.cloudera1-1.cdh5.12.0.p0.142354/lib/spark2/jars/)
(spark.yarn.am.extraLibraryPath,/opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/lib/native)
(spark.yarn.historyServer.address,http:https://:18089)
(spark.shuffle.rdma.driverPort,3037)
(spark.eventLog.enabled,true)
(spark.dynamicAllocation.schedulerBacklogTimeout,1)
(SPARK_SUBMIT,true)
(spark.yarn.config.gatewayPath,/opt/cloudera/parcels)
(spark.ui.killEnabled,true)
(spark.dynamicAllocation.maxExecutors,148)
(spark.serializer,org.apache.spark.serializer.KryoSerializer)
(spark.app.name,com.github.ehiggs.spark.terasort.TeraSort)
(spark.dynamicAllocation.executorIdleTimeout,60)
(spark.dynamicAllocation.minExecutors,0)
(spark.hadoop.yarn.application.classpath,)
(spark.shuffle.service.enabled,false)
(spark.yarn.config.replacementPath,{{HADOOP_COMMON_HOME}}/../../..)
(spark.shuffle.rdma.executorPort,4037)
(spark.sql.hive.metastore.version,1.1.0)
(spark.jars,file:/tmp/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar)
(spark.submit.deployMode,client)
(spark.shuffle.service.port,7337)
(spark.executor.extraClassPath,/opt/mellanox/spark-rdma-3.1.jar)
(spark.hadoop.mapreduce.application.classpath,)
(spark.eventLog.dir,hdfs:https://user/spark/spark2ApplicationHistory)
(spark.master,yarn)
(spark.dynamicAllocation.enabled,false)
(spark.sql.catalogImplementation,hive)
(spark.executor.cores,5)
(spark.sql.hive.metastore.jars,${env:HADOOP_COMMON_HOME}/../hive/lib/
:${env:HADOOP_COMMON_HOME}/client/*)
(spark.driver.extraClassPath,/opt/mellanox/spark-rdma-3.1.jar)
Classpath elements:
file:/tmp/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar
-- Yarn log
yarn_log_application_1532355992130_44904.zip

@rmunoz527 (Author), May 8, 2019

I'm not sure if this helps, but I see a different num_comp_vectors value each time I run:

j2c::createEventChannel: obj_id 140671346315408
j2c::createId: ret 0, obj_id 0x7ff099a08d30
j2c::bind: ret 0, cm_listen_id 0x7ff099a08d30
j2c::getContext: obj_id 140671346331184
j2c::getContextNumCompVectors: obj_id 140671346331184, num_comp_vectors 489359648
j2c::listen: ret 0
j2c::allocPd: obj_id 140671346340240
19/05/08 15:59:08 WARN spark.SparkContext: Using an existing SparkContext; some configuration may not take effect.
j2c::regMr: obj_id 0x7fed540131c0, mr 0x7fed540131c0
[Stage 0:> (0 + 15) / 45]j2c::createCompChannel: obj_id 140658232833904
j2c::createCQ: ibv_create_cq failed
19/05/08 15:59:09 ERROR rdma.RdmaNode: Error in accept call on a passive RdmaChannel: java.io.IOException: createCQ() failed
java.lang.NullPointerException
at org.apache.spark.shuffle.rdma.RdmaChannel.processRdmaCmEvent(RdmaChannel.java:345)
at org.apache.spark.shuffle.rdma.RdmaChannel.stop(RdmaChannel.java:894)
at org.apache.spark.shuffle.rdma.RdmaNode.lambda$new$0(RdmaNode.java:176)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "RdmaNode connection listening thread" java.lang.RuntimeException: Exception in RdmaNode listening thread java.lang.NullPointerException
at org.apache.spark.shuffle.rdma.RdmaNode.lambda$new$0(RdmaNode.java:210)
at java.lang.Thread.run(Thread.java:748)
19/05/08 15:59:11 WARN scheduler.TaskSetManager: Lost task 20.0 in stage 0.0 (TID 4, bda65node06.core.pimcocloud.net, executor 3): java.lang.ArithmeticException: / by zero
at org.apache.spark.shuffle.rdma.RdmaNode.getNextCpuVector(RdmaNode.java:278)
at org.apache.spark.shuffle.rdma.RdmaNode.getRdmaChannel(RdmaNode.java:301)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.org$apache$spark$shuffle$rdma$RdmaShuffleManager$$getRdmaChannel(RdmaShuffleManager.scala:314)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.getRdmaChannelToDriver(RdmaShuffleManager.scala:322)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.publishMapTaskOutput(RdmaShuffleManager.scala:410)
at org.apache.spark.shuffle.rdma.writer.wrapper.RdmaWrapperShuffleWriter.stop(RdmaWrapperShuffleWriter.scala:118)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:97)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

@petro-rudenko (Member)

I made a PR to fix this issue. Can you please try to run with the attached jar:
spark-rdma-3.1-for-spark-2.2.0.zip

@petro-rudenko (Member)

Can you please run ofed_info?

@rmunoz527 (Author)

Using the provided jar, I receive the following error:

19/05/13 15:00:20 ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.SparkEnv$.instantiateClass$1(SparkEnv.scala:266)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:325)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:175)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:257)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:432)
at com.github.ehiggs.spark.terasort.TeraSort$.main(TeraSort.scala:58)
at com.github.ehiggs.spark.terasort.TeraSort.main(TeraSort.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NoClassDefFoundError: com/ibm/disni/rdma/verbs/RdmaEventChannel
at org.apache.spark.shuffle.rdma.RdmaNode.<init>(RdmaNode.java:64)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.<init>(RdmaShuffleManager.scala:137)
... 20 more
Caused by: java.lang.ClassNotFoundException: com.ibm.disni.rdma.verbs.RdmaEventChannel
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 22 more
Exception in thread "main" java.lang.NoClassDefFoundError: com/ibm/disni/rdma/verbs/RdmaEventChannel
at org.apache.spark.shuffle.rdma.RdmaNode.<init>(RdmaNode.java:64)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.<init>(RdmaShuffleManager.scala:137)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.SparkEnv$.instantiateClass$1(SparkEnv.scala:266)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:325)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:175)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:257)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:432)
at com.github.ehiggs.spark.terasort.TeraSort$.main(TeraSort.scala:58)
at com.github.ehiggs.spark.terasort.TeraSort.main(TeraSort.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.ibm.disni.rdma.verbs.RdmaEventChannel
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 22 more

@rmunoz527 (Author)

I am not able to run ofed_info at the moment; the command is not found on the IB switches. I am working with support for more information on that.

@rmunoz527 (Author)

lsmod | grep ipoib
ib_ipoib 114688 1 rds_rdma
ib_cm 61440 4 rds_rdma,ib_ipoib,ib_ucm,rdma_cm
ib_sa 40960 6 ib_ipoib,rdma_ucm,rdma_cm,ib_cm,mlx4_vnic,mlx4_ib
ib_core 102400 14 rds_rdma,ib_sdp,ib_ipoib,rdma_ucm,ib_ucm,ib_uverbs,ib_umad,rdma_cm,ib_cm,iw_cm,mlx4_vnic,mlx4_ib,ib_sa,ib_mad

@petro-rudenko (Member)

Ah, sorry, wrong jar. Here's the correct one:
spark-rdma-3.1-for-spark-2.2.0.zip
Can you run the ib_read_bw test in your environment?
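
For reference, a typical perftest invocation for this check (hostname placeholder; -d pins the device, which the output below shows as mlx4_0):

ib_read_bw -d mlx4_0                   # on the server node
ib_read_bw -d mlx4_0 <server-hostname> # on the client node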

@rmunoz527 (Author)

ib_read_bw test:

Server:
ib_read_bw

* Waiting for client to connect... *


                RDMA_Read BW Test

Dual-port : OFF Device : mlx4_0
Number of qps : 1 Transport type : IB
Connection type : RC Using SRQ : OFF
CQ Moderation : 100
Mtu : 2048[B]
Link type : IB
Outstand reads : 128
rdma_cm QPs : OFF
Data ex. method : Ethernet

local address: LID 0x01 QPN 0x0294 PSN 0xd50c26 OUT 0x80 RKey 0x74a0c00 VAddr 0x007f9174aac000
remote address: LID 0x0e QPN 0x025b PSN 0x7122ee OUT 0x80 RKey 0x740e600 VAddr 0x007f0475c4b000

#bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps]
ethernet_read_keys: Couldn't read remote address
Unable to read to socket/rdam_cm
Failed to exchange data between server and clients

Client:
RDMA_Read BW Test
Dual-port : OFF Device : mlx4_0
Number of qps : 1 Transport type : IB
Connection type : RC Using SRQ : OFF
TX depth : 128
CQ Moderation : 100
Mtu : 2048[B]
Link type : IB
Outstand reads : 128
rdma_cm QPs : OFF
Data ex. method : Ethernet

local address: LID 0x0e QPN 0x025b PSN 0x7122ee OUT 0x80 RKey 0x740e600 VAddr 0x007f0475c4b000
remote address: LID 0x01 QPN 0x0294 PSN 0xd50c26 OUT 0x80 RKey 0x74a0c00 VAddr 0x007f9174aac000

#bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps]
Conflicting CPU frequency values detected: 1290.179000 != 1237.156000
Can't produce a report
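
Reading this output: both ends did print their local and remote addresses, so the initial key exchange succeeded. The client then aborts at perftest's CPU-frequency sanity check ("Conflicting CPU frequency values"), and the server's "Couldn't read remote address" failure likely follows from the client going away mid-exchange. perftest's standard -F flag relaxes the CPU-frequency check so the run can complete (a stock perftest option, not one suggested in this thread).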

@rmunoz527 (Author), May 16, 2019

Petro,

I got an error using the latest attached jar:

ERROR rdma.RdmaNode: Error in accept call on a passive RdmaChannel: java.io.IOException: createCQ() failed
java.lang.NullPointerException
at org.apache.spark.shuffle.rdma.RdmaChannel.processRdmaCmEvent(RdmaChannel.java:345)
at org.apache.spark.shuffle.rdma.RdmaChannel.stop(RdmaChannel.java:894)
at org.apache.spark.shuffle.rdma.RdmaNode.lambda$new$0(RdmaNode.java:176)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "RdmaNode connection listening thread" java.lang.RuntimeException: Exception in RdmaNode listening thread java.lang.NullPointerExceptio
at org.apache.spark.shuffle.rdma.RdmaNode.lambda$new$0(RdmaNode.java:210)
at java.lang.Thread.run(Thread.java:748)
j2c::regMr: obj_id 0x7fa614001b80, mr 0x7fa614001b80

Exception in thread "main" org.apache.spark.SparkException: Job aborted.
at org.apache.spark.internal.io.SparkHadoopMapReduceWriter$.write(SparkHadoopMapReduceWriter.scala:107)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1085)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1084)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:1003)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:994)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:982)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$1.apply(PairRDDFunctions.scala:982)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$1.apply(PairRDDFunctions.scala:982)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:981)
at com.github.ehiggs.spark.terasort.TeraSort$.main(TeraSort.scala:63)
at com.github.ehiggs.spark.terasort.TeraSort.main(TeraSort.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 38 in stage 0.0 failed 4 times, most recent failure: Lost task 38.3 in s
at org.apache.spark.shuffle.rdma.RdmaNode.getNextCpuVector(RdmaNode.java:278)
at org.apache.spark.shuffle.rdma.RdmaNode.getRdmaChannel(RdmaNode.java:301)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.org$apache$spark$shuffle$rdma$RdmaShuffleManager$$getRdmaChannel(RdmaShuffleManager.scala:314)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.getRdmaChannelToDriver(RdmaShuffleManager.scala:322)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.publishMapTaskOutput(RdmaShuffleManager.scala:410)
at org.apache.spark.shuffle.rdma.writer.wrapper.RdmaWrapperShuffleWriter.stop(RdmaWrapperShuffleWriter.scala:118)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:97)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1499)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1487)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1486)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1486)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:814)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1714)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2043)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2075)
at org.apache.spark.internal.io.SparkHadoopMapReduceWriter$.write(SparkHadoopMapReduceWriter.scala:88)
... 32 more
Caused by: java.lang.ArithmeticException: / by zero
at org.apache.spark.shuffle.rdma.RdmaNode.getNextCpuVector(RdmaNode.java:278)
at org.apache.spark.shuffle.rdma.RdmaNode.getRdmaChannel(RdmaNode.java:301)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.org$apache$spark$shuffle$rdma$RdmaShuffleManager$$getRdmaChannel(RdmaShuffleManager.scala:314)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.getRdmaChannelToDriver(RdmaShuffleManager.scala:322)
at org.apache.spark.shuffle.rdma.RdmaShuffleManager.publishMapTaskOutput(RdmaShuffleManager.scala:410)
at org.apache.spark.shuffle.rdma.writer.wrapper.RdmaWrapperShuffleWriter.stop(RdmaWrapperShuffleWriter.scala:118)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:97)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

@petro-rudenko (Member)

So you have an unconfigured fabric. Please make sure that the network is configured correctly and that ib_read_bw works.

@rmunoz527 (Author)

Thanks Petro,

What are your recommendations for configuring the fabric? This is the first time I have come across this issue on Oracle Linux; the system is OL6. From the results of ib_read_bw, are you confirming the fabric is unconfigured?

@petro-rudenko (Member)

@rmunoz527 you need to follow the OFED installation tutorial (assuming you're using a Mellanox product). Make sure that both ibv_devinfo and ib_read_bw work.
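
A quick verification pass could look like this (all standard OFED/rdma-core utilities; the device name is taken from the ib_read_bw output above, and the hostname is a placeholder):

ofed_info -s        # installed Mellanox OFED version, if present
ibv_devinfo         # RDMA devices and port states (expect state: PORT_ACTIVE)
ibstat              # per-port link state and rate
ib_read_bw -d mlx4_0                   # server node
ib_read_bw -d mlx4_0 <server-hostname> # client node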
