This repository has been archived by the owner on Dec 20, 2022. It is now read-only.

spark rdma error #11

Open
li7hui opened this issue Sep 22, 2018 · 15 comments

@li7hui

li7hui commented Sep 22, 2018

Hi,
I was trying to run the SparkRDMA TeraSort code. The plain Spark TeraSort finishes successfully, but the SparkRDMA TeraSort run produces the errors below:
[screenshot: error9]

I used Spark 2.1.0

@petro-rudenko
Member

Can you please provide the command you use to submit the Spark TeraSort job? Do you use YARN or standalone cluster deployment mode? Can you please check the logs for that BlockManager ID?
Thanks,
Peter

@li7hui
Author

li7hui commented Sep 25, 2018

Hi,
I used the following command to submit the Spark TeraSort job:
./bin/spark-submit --class com.github.ehiggs.spark.terasort.TeraSort
--master spark://rdma21:7077 /root/spark-terasort-1.0-SNAPSHOT-jar-with-dependencies.jar
hdfs:///terasort_in hdfs:///terasort_out

I used the standalone mode.
I do not know where to get the BlockManager ID. I searched the logs folder but didn't find any BlockManager ID there.

@petro-rudenko
Member

OK, can you please check for errors in the Spark log directory: grep -i error $SPARK_HOME/logs/*. What dataset size are you running on?

@li7hui
Author

li7hui commented Sep 25, 2018

Hi, attached is a zip file which contains the error messages.

The dataset size I used is 1g.

spark-root-org.apache.spark.deploy.worker.Worker-1-rdma21.zip

@petro-rudenko
Member

Sorry, the logs for the executors are in the work directory.

@li7hui
Author

li7hui commented Sep 25, 2018

Hi,
Here are the error logs from the work directory.
stderr.zip

@petro-rudenko
Member

In the work directory there should be a folder for each application, and within it a separate folder for each executor. You need to collect the executor logs from the machines or use NFS. I tried to reproduce your case, and it works for me:

  1. Teragen 1g of data:
spark/bin/spark-submit -v --num-executors 10 --executor-cores 20 --executor-memory 24G --master yarn --class com.github.ehiggs.spark.terasort.TeraGen /hpc/scrap/users/peterr/spark-terasort/target/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar 1g /terasort-input-1g
  2. Run terasort:
$ cat /hpc/scrap/users/swat/jenkins/spark/spark2.conf

spark.driver.extraJavaOptions   -Djava.library.path=/hpc/scrap/users/swat/jenkins/disni/
spark.executor.extraClassPath   /hpc/scrap/users/swat/jenkins//spark_rdma_artifacts/spark-rdma-2.0-for-spark-2.1.0-jar-with-dependencies.jar
spark.driver.extraClassPath     /hpc/scrap/users/swat/jenkins//spark_rdma_artifacts/spark-rdma-2.0-for-spark-2.1.0-jar-with-dependencies.jar
spark.executor.extraJavaOptions -Djava.library.path=/hpc/scrap/users/swat/jenkins/disni/
spark.shuffle.manager org.apache.spark.shuffle.rdma.RdmaShuffleManager
spark.executor.instances 16

$ bin/spark-submit -v --executor-cores 3 --properties-file /hpc/scrap/users/swat/jenkins/spark/spark2.conf --executor-memory 124G --master  spark://clx-orion-011:7077 --class com.github.ehiggs.spark.terasort.TeraSort /hpc/scrap/users/peterr/spark-terasort/target/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar /terasort-input-1g /terasort-output-1g

Could you please also try generating bigger data? You are running 15 executors for 1 GB of input data (<100 MB per executor). Or try running with a smaller number of executors to make sure everything is working.
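As a sanity check on the sizing above, the per-executor share of the input can be computed directly (plain arithmetic based on the numbers in this thread, not a SparkRDMA API):

```python
def per_executor_mb(input_gb: float, num_executors: int) -> float:
    """Average input size per executor in MB."""
    return input_gb * 1024 / num_executors

# 1 GB of input spread over 15 executors: roughly 68 MB each,
# well under the 100 MB mentioned above.
print(per_executor_mb(1, 15))
```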

@li7hui
Author

li7hui commented Sep 27, 2018

Hi,
Thanks for your reply.
I used 4 nodes to start the Spark job, and now the TeraSort Spark job starts and finishes successfully.
However, while the TeraSort SparkRDMA job is running, my Zabbix system only sees TCP traffic, and the RDMA traffic is zero. This is very strange, because according to stderr, the com.ibm.disni library has been loaded during processing.

@petro-rudenko
Member

Do you use InfiniBand or RoCE? Is your monitoring system configured to monitor RDMA traffic? You can check how to monitor RDMA traffic here: https://community.mellanox.com/docs/DOC-2416
You can also run some IB perf tests from the perftest package and make sure Zabbix captures the RDMA traffic.
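If the monitoring system isn't wired up for RDMA yet, the port counters can also be read straight from sysfs. A minimal sketch, assuming the standard rdma-core sysfs layout (the device name `mlx5_0` is a placeholder; note `port_rcv_data`/`port_xmit_data` count 4-byte words, not bytes):

```python
import os

def rdma_port_bytes(dev="mlx5_0", port=1, sysfs="/sys/class/infiniband"):
    """Read RX/TX data counters for one RDMA port from sysfs.

    port_rcv_data and port_xmit_data are reported in units of
    4 bytes (octets / 4), so multiply by 4 to get bytes.
    """
    base = os.path.join(sysfs, dev, "ports", str(port), "counters")
    counters = {}
    for name in ("port_rcv_data", "port_xmit_data"):
        with open(os.path.join(base, name)) as f:
            counters[name] = int(f.read().strip()) * 4
    return counters

# Usage (on a host with an RDMA NIC):
#   rdma_port_bytes("mlx5_0", 1)
```

Polling this before and after a perftest run is a quick way to confirm whether traffic is actually going over RDMA.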

@li7hui
Author

li7hui commented Sep 29, 2018

Hi,
I use a RoCE network. I ran some OFED perftest jobs, and Zabbix can see the RDMA traffic.

What command do you use to run the SparkRDMA TeraSort? I don't know whether there is some difference in the commands used to run the SparkRDMA TeraSort program.

@petro-rudenko
Member

petro-rudenko commented Oct 1, 2018

Here's how I run ehiggs' version of TeraSort with SparkRDMA:
Teragen 1g of data:

spark/bin/spark-submit -v --num-executors 10 --executor-cores 20 --executor-memory 24G --master yarn --class com.github.ehiggs.spark.terasort.TeraGen /hpc/scrap/users/peterr/spark-terasort/target/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar 1g /terasort-input-1g

Run terasort:

$ cat /hpc/scrap/users/swat/jenkins/spark/spark2.conf

spark.driver.extraJavaOptions   -Djava.library.path=/hpc/scrap/users/swat/jenkins/disni/
spark.executor.extraClassPath   /hpc/scrap/users/swat/jenkins//spark_rdma_artifacts/spark-rdma-2.0-for-spark-2.1.0-jar-with-dependencies.jar
spark.driver.extraClassPath     /hpc/scrap/users/swat/jenkins//spark_rdma_artifacts/spark-rdma-2.0-for-spark-2.1.0-jar-with-dependencies.jar
spark.executor.extraJavaOptions -Djava.library.path=/hpc/scrap/users/swat/jenkins/disni/
spark.shuffle.manager org.apache.spark.shuffle.rdma.RdmaShuffleManager
spark.executor.instances 16

$ bin/spark-submit -v --executor-cores 3 --properties-file /hpc/scrap/users/swat/jenkins/spark/spark2.conf --executor-memory 124G --master  spark://clx-orion-011:7077 --class com.github.ehiggs.spark.terasort.TeraSort /hpc/scrap/users/peterr/spark-terasort/target/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar /terasort-input-1g /terasort-output-1g

You can find how to run the HiBench TeraSort version here, but the approach is the same. Basically you need to set spark.shuffle.manager to org.apache.spark.shuffle.rdma.RdmaShuffleManager.

@li7hui
Author

li7hui commented Oct 8, 2018

Hi,
I think I found the problem. PFC is set up for the RoCE network on the server and the switch. The RoCE traffic should go in queue 5, while other traffic, such as TCP, goes in queue 0. Do you know how to set up DiSNI so its traffic goes in queue 5?
Best,

@petro-rudenko
Member

DiSNI is just a wrapper over the verbs API. If you set up PFC so that RDMA traffic goes in queue 5, it will go there. We've updated our wiki documentation; you can check "Advanced forms of flow control".
Let me know if you have questions. BTW, we've released a new version of SparkRDMA. Give it a try ;) It has several performance improvements, bug fixes, more verbose error messages, etc.
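For reference, under the common "trust DSCP" convention (priority = DSCP >> 3, IPv4 ToS = DSCP << 2), the DSCP and ToS values that land traffic in priority queue 5 can be derived as below. This is a sketch of that standard mapping only; it is not a SparkRDMA or DiSNI setting, and your switch's actual trust mode may differ:

```python
def dscp_range_for_priority(prio: int) -> range:
    """DSCP values mapping to a priority under prio = dscp >> 3."""
    return range(prio * 8, prio * 8 + 8)

def tos_for_dscp(dscp: int) -> int:
    """IPv4 ToS byte carrying a given DSCP (ToS = DSCP << 2)."""
    return dscp << 2

# Priority 5 corresponds to DSCP 40..47; e.g. DSCP 40 gives ToS 160.
print(list(dscp_range_for_priority(5)), tos_for_dscp(40))
```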

@RummySugar

Our problem is that libdisni is never found:
19/08/28 16:57:44 ERROR RdmaNode: libdisni not found! It must be installed within the java.library.path on each Executor and Driver instance
We have checked the configuration repeatedly and cannot find the problem. Could you share the detailed steps of your installation?

@tobegit3hub

@RummySugar You need to install libdisni.so on each server, or upload it with spark-submit.
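A quick way to see why that `libdisni not found` error fires is to scan the same directories the JVM would. A small sketch that checks each entry of a `java.library.path`-style list for `libdisni.so` (the paths in the usage line are hypothetical; on each node they should match your `-Djava.library.path` setting):

```python
import os

def find_libdisni(library_path: str, libname: str = "libdisni.so"):
    """Return the first directory on a separator-delimited path list
    that contains libname, or None if it is missing everywhere."""
    for d in library_path.split(os.pathsep):
        if d and os.path.isfile(os.path.join(d, libname)):
            return d
    return None

# Usage, mirroring e.g. -Djava.library.path=/usr/local/lib:/opt/disni
# find_libdisni("/usr/local/lib:/opt/disni")
```

If this returns None on any executor or driver host, the JVM will not find the library either.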
