
[CI] skip vllm_example #36665

Merged
merged 1 commit into from
Jun 21, 2023
Conversation

@scv119 (Contributor) commented Jun 21, 2023

Why are these changes needed?

vLLM requires CUDA to be built, which is not available in our CI GPU Docker image.



No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
Traceback (most recent call last):
  File "/opt/miniconda/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
    main()
  File "/opt/miniconda/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
    json_out['return_val'] = hook(**hook_input['kwargs'])
  File "/opt/miniconda/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
    return hook(config_settings)
  File "/tmp/pip-build-env-b4hl6g1_/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 341, in get_requires_for_build_wheel
    return self._get_build_requires(config_settings, requirements=['wheel'])
  File "/tmp/pip-build-env-b4hl6g1_/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 323, in _get_build_requires
    self.run_setup()
  File "/tmp/pip-build-env-b4hl6g1_/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 338, in run_setup
    exec(code, locals())
  File "<string>", line 24, in <module>
RuntimeError: Cannot find CUDA at CUDA_HOME: /usr/local/cuda. CUDA must be available in order to build the package.
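The RuntimeError at the bottom of the trace is raised by the package's setup.py during the PEP 517 build hook, before any compilation starts. A minimal sketch of that kind of pre-build guard (hypothetical, not vLLM's actual source) looks like:

```python
import os

# Hypothetical pre-build guard: fail fast when no CUDA toolkit is installed,
# mirroring the error message seen in the CI log above.
def check_cuda_home(path: str) -> None:
    """Raise RuntimeError if `path` does not look like a CUDA toolkit install."""
    if not os.path.isdir(path):
        raise RuntimeError(
            f"Cannot find CUDA at CUDA_HOME: {path}. "
            "CUDA must be available in order to build the package."
        )

# In a setup.py this would run at import time, e.g.:
#   check_cuda_home(os.environ.get("CUDA_HOME", "/usr/local/cuda"))
```

Because the check runs while pip merely collects build requirements, the failure surfaces inside `get_requires_for_build_wheel`, as in the traceback.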


There is also a bug in the build rule: cc983fc#r119162333
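This PR skips the example at the CI build-rule level; the same intent expressed as a test-level guard (a sketch; `cuda_toolkit_available` and `test_vllm_example` are illustrative names, not Ray's actual code) could look like:

```python
import os
import shutil
import unittest

def cuda_toolkit_available() -> bool:
    """Best-effort check: CUDA_HOME exists on disk, or nvcc is on PATH."""
    cuda_home = os.environ.get("CUDA_HOME", "/usr/local/cuda")
    return os.path.isdir(cuda_home) or shutil.which("nvcc") is not None

class TestVLLMExample(unittest.TestCase):
    @unittest.skipUnless(
        cuda_toolkit_available(),
        "vLLM needs the CUDA toolkit to build; not present in the CI image",
    )
    def test_vllm_example(self):
        # Would install and exercise the vLLM example here.
        pass
```

Skipping in the build rule is preferable here, though, since the failure happens while pip builds the dependency, before any test code runs.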

Related issue number

Checks

  • I've signed off every commit (using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

@scv119 scv119 marked this pull request as ready for review June 21, 2023 20:40
@scv119 scv119 requested a review from a team as a code owner June 21, 2023 20:40
@scv119 scv119 merged commit 6b599ba into ray-project:master Jun 21, 2023
89 of 101 checks passed
SongGuyang pushed a commit to alipay/ant-ray that referenced this pull request Jul 12, 2023
Signed-off-by: 久龙 <[email protected]>
harborn pushed a commit to harborn/ray that referenced this pull request Aug 17, 2023
Signed-off-by: harborn <[email protected]>
harborn pushed a commit to harborn/ray that referenced this pull request Aug 17, 2023
arvind-chandra pushed a commit to lmco/ray that referenced this pull request Aug 31, 2023
Signed-off-by: e428265 <[email protected]>