This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

docs: Fix a few typos (#21094)
There are small typos in:
- docs/python_docs/python/tutorials/getting-started/crash-course/7-use-gpus.md
- docs/static_site/src/pages/api/cpp/docs/tutorials/multi_threaded_inference.md
- example/README.md

Fixes:
- Should read `specifying` rather than `specifieing`.
- Should read `multi threaded` rather than `iultithreaded`.
- Should read `provisioning` rather than `provisionning`.
timgates42 committed Aug 16, 2022
1 parent 1058369 commit 7748ae7
Showing 3 changed files with 3 additions and 3 deletions.
@@ -36,7 +36,7 @@ npx.num_gpus() #This command provides the number of GPUs MXNet can access

## Allocate data to a GPU

-MXNet's ndarray is very similar to NumPy's. One major difference is that MXNet's ndarray has a `device` attribute specifieing which device an array is on. By default, arrays are stored on `npx.cpu()`. To change it to the first GPU, you can use the following code, `npx.gpu()` or `npx.gpu(0)` to indicate the first GPU.
+MXNet's ndarray is very similar to NumPy's. One major difference is that MXNet's ndarray has a `device` attribute specifying which device an array is on. By default, arrays are stored on `npx.cpu()`. To change it to the first GPU, you can use the following code, `npx.gpu()` or `npx.gpu(0)` to indicate the first GPU.

```{.python .input}
gpu = npx.gpu() if npx.num_gpus() > 0 else npx.cpu()
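# Hypothetical illustration (not part of the original tutorial or this diff):
# the conditional above is a simple GPU-with-CPU-fallback pattern. The helper
# below restates that pattern with plain strings so it runs without MXNet
# installed; `pick_device` is an invented name, not MXNet API.
def pick_device(num_gpus):
    # Prefer the first GPU when any are available, else fall back to CPU.
    return "gpu(0)" if num_gpus > 0 else "cpu(0)"

print(pick_device(0))  # cpu(0)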
@@ -30,7 +30,7 @@ A long standing request from MXNet users has been to invoke parallel inference o
With this use case in mind, the threadsafe version of CachedOp was added to provide a way for customers to do multi-threaded inference for MXNet users.
This doc attempts to do the following:
1. Discuss the current state of thread safety in MXNet
-2. Explain how one can use C API and thread safe version of cached op, along with CPP package to achieve iultithreaded inference. This will be useful for end users as well as frontend developers of different language bindings
+2. Explain how one can use C API and thread safe version of cached op, along with CPP package to achieve multi threaded inference. This will be useful for end users as well as frontend developers of different language bindings
3. Discuss the limitations of the above approach
4. Future Work
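To make the parallel-inference goal in item 2 concrete, here is a hedged, framework-agnostic sketch (not part of the original document, and not the MXNet C API): a stateless `predict` placeholder shared across Python threads, standing in for a thread-safe CachedOp handle.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a thread-safe, stateless inference handle; `predict` is a
# placeholder for illustration only, not a real MXNet function.
def predict(x):
    return x * 2  # dummy "model": doubles its input

# Issue inference requests from several threads against the shared handle.
inputs = [1, 2, 3, 4]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(predict, inputs))

print(results)  # [2, 4, 6, 8]
```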

example/README.md (1 addition, 1 deletion)
@@ -81,7 +81,7 @@ As part of making sure all our tutorials are running correctly with the latest v

Add your own test here `tests/tutorials/test_tutorials.py`. (If you forget, don't worry your PR will not pass the sanity check).

-If your tutorial depends on specific packages, simply add them to this provisionning script: `ci/docker/install/ubuntu_tutorials.sh`
+If your tutorial depends on specific packages, simply add them to this provisioning script: `ci/docker/install/ubuntu_tutorials.sh`

## <a name="list-of-examples"></a>List of examples

