Training stuck after "Successfully opened dynamic library libcublas.so.10" #79

Open
akshathaarodi opened this issue Mar 31, 2021 · 3 comments


@akshathaarodi

Hi,

I am trying to train the BERT-base model on a new dataset. Training gets stuck after printing "Successfully opened dynamic library libcublas.so.10". Following this discussion, I replaced the calls to parallel_interleave with interleave, but training is still stuck. I am using a 32GB GPU with TensorFlow 1.14. I am not sure how to debug this. Has anyone else run into this problem? Is there any workaround?
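For reference, a minimal sketch of that parallel_interleave-to-interleave substitution in a TF 1.14 tf.data pipeline; the TFRecordDataset reader, file pattern, and cycle_length below are illustrative, not taken from this repo's code:

import tensorflow as tf  # TF 1.14

# Illustrative file pattern; substitute the repo's actual input files.
filenames = tf.data.Dataset.list_files("train-*.tfrecord")

# Deprecated form (tf.contrib-era):
# dataset = filenames.apply(
#     tf.data.experimental.parallel_interleave(
#         tf.data.TFRecordDataset, cycle_length=4))

# Replacement using the core Dataset.interleave API:
dataset = filenames.interleave(
    tf.data.TFRecordDataset,
    cycle_length=4,
    num_parallel_calls=tf.data.experimental.AUTOTUNE)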

2021-03-31 00:07:04.751806: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2021-03-31 00:07:04.751837: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
2021-03-31 00:07:04.751881: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10
2021-03-31 00:07:04.751905: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10
2021-03-31 00:07:04.751931: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10
2021-03-31 00:07:04.751972: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10
2021-03-31 00:07:04.752003: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2021-03-31 00:07:04.755988: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2021-03-31 00:07:04.756628: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
2021-03-31 00:07:04.931587: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-03-31 00:07:04.931625: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2021-03-31 00:07:04.932043: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2021-03-31 00:07:04.942135: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30591 MB memory) -> physical GPU (device: 0, name: Tesla V100-SXM2-32GB-LS, pci bus id: 0000:06:00.0, compute capability: 7.0)
2021-03-31 00:07:04.950214: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5611aa9bf160 executing computations on platform CUDA. Devices:
2021-03-31 00:07:04.950233: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Tesla V100-SXM2-32GB-LS, Compute Capability 7.0
2021-03-31 00:07:30.820369: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10

Thank you!

@josubg

josubg commented Apr 4, 2021

Hello, I am getting the exact same problem with TensorFlow 1.14, in a Docker container based on the tensorflow:1.14.0-gpu image with CUDA 10.0 and a GPU with 10 GB of RAM. The execution also stops at:

Successfully opened dynamic library libcublas.so.10

Is there any additional information we could provide to help identify this issue?

Thanks in advance.
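One piece of information that could help: a dump of the Python thread stacks at the moment the process is stuck, to show whether it is blocked in the tf.data input pipeline or inside session.run. A minimal sketch; adding this near the top of train.py is an assumption (any entry point works), and it requires a Unix signal:

import faulthandler
import signal

# Dump all Python thread stacks to stderr when the process receives SIGUSR1,
# e.g. run `kill -USR1 <pid>` once training appears to hang.
faulthandler.register(signal.SIGUSR1, all_threads=True)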

@nupoorgandhi

This is probably not the issue for others, but I was also seeing training hang with TensorFlow 1.14 after
Successfully opened dynamic library libcublas.so.10
For me, the problem was that my train set was empty, so it might be helpful to double-check that.
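A quick way to rule that out, assuming the training data is a .jsonlines file with one document per line (the filename below is illustrative):

# Count non-empty documents in the training file before launching training.
with open("train.english.jsonlines") as f:
    num_docs = sum(1 for line in f if line.strip())
print("training documents:", num_docs)
assert num_docs > 0, "train set is empty"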

@aymen-souid-github

Has anyone resolved this issue after "Successfully opened dynamic library libcublas.so.10"?

2022-02-01 15:06:35.916081: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2022-02-01 15:07:28.841615: E tensorflow/stream_executor/cuda/cuda_blas.cc:428] failed to run cuBLAS routine: CUBLAS_STATUS_EXECUTION_FAILED
2022-02-01 15:07:28.848147: W tensorflow/core/kernels/queue_base.cc:277] _0_padding_fifo_queue: Skipping cancelled enqueue attempt with queue not closed
Traceback (most recent call last):
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
return fn(*args)
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
(0) Internal: Blas GEMM launch failed : a.shape=(30, 20), b.shape=(20, 3000), m=30, n=3000, k=20
[[{{node width_scores/xw_plus_b/MatMul}}]]
[[strided_slice_6/_1889]]
(1) Internal: Blas GEMM launch failed : a.shape=(30, 20), b.shape=(20, 3000), m=30, n=3000, k=20
[[{{node width_scores/xw_plus_b/MatMul}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 58, in
tf_loss, tf_global_step, _ = session.run([model.loss, model.global_step, model.train_op])
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 950, in run
run_metadata_ptr)
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1173, in _run
feed_dict_tensor, options, run_metadata)
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
run_metadata)
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
(0) Internal: Blas GEMM launch failed : a.shape=(30, 20), b.shape=(20, 3000), m=30, n=3000, k=20
[[node width_scores/xw_plus_b/MatMul (defined at /home/souid/coref/util.py:109) ]]
[[strided_slice_6/_1889]]
(1) Internal: Blas GEMM launch failed : a.shape=(30, 20), b.shape=(20, 3000), m=30, n=3000, k=20
[[node width_scores/xw_plus_b/MatMul (defined at /home/souid/coref/util.py:109) ]]
0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node width_scores/xw_plus_b/MatMul:
width_scores/hidden_weights_0/read (defined at /home/souid/coref/util.py:107)
span_width_prior_embeddings/read (defined at /home/souid/coref/independent.py:379)

Input Source operations connected to node width_scores/xw_plus_b/MatMul:
width_scores/hidden_weights_0/read (defined at /home/souid/coref/util.py:107)
span_width_prior_embeddings/read (defined at /home/souid/coref/independent.py:379)

Original stack trace for 'width_scores/xw_plus_b/MatMul':
File "train.py", line 26, in
model = util.get_model(config)
File "/home/souid/coref/util.py", line 21, in get_model
return independent.CorefModel(config)
File "/home/souid/coref/independent.py", line 54, in __init__
self.predictions, self.loss = self.get_predictions_and_loss(*self.input_tensors)
File "/home/souid/coref/independent.py", line 283, in get_predictions_and_loss
candidate_mention_scores = self.get_mention_scores(candidate_span_emb, candidate_starts, candidate_ends)
File "/home/souid/coref/independent.py", line 382, in get_mention_scores
width_scores = util.ffnn(span_width_emb, self.config["ffnn_depth"], self.config["ffnn_size"], 1, self.dropout) # [W, 1]
File "/home/souid/coref/util.py", line 109, in ffnn
current_outputs = tf.nn.relu(tf.nn.xw_plus_b(current_inputs, hidden_weights, hidden_bias))
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py", line 4066, in xw_plus_b
mm = math_ops.matmul(x, weights)
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/util/dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py", line 2647, in matmul
a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/ops/gen_math_ops.py", line 5925, in mat_mul
name=name)
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
op_def=op_def)
File "/home/souid/anaconda3/envs/arabic_coref/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
self._traceback = tf_stack.extract_stack()
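"Blas GEMM launch failed" / CUBLAS_STATUS_EXECUTION_FAILED under TF 1.x is often a symptom of the GPU running out of memory, or being shared with another process, rather than a cuBLAS bug. A commonly suggested mitigation is to let TensorFlow allocate GPU memory on demand; this is a minimal sketch of that config, assuming the session is created in train.py, and it is not confirmed to fix this particular run:

import tensorflow as tf  # TF 1.x API

# Allocate GPU memory on demand instead of pre-allocating the whole card.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as session:
    # build / restore the model and run the training loop here
    pass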
