[docs] fix code indentation (ray-project#37012)
code snippets for Batch Inference and Hyperparameter Tuning needed minor fixes (typo, indentation)
cc: @kamil-kaczmarek

Signed-off-by: 久龙 <[email protected]>
angelinalg authored and SongGuyang committed Jul 12, 2023
1 parent 9a955a0 commit 37cee43
doc/source/index.md — 3 additions, 3 deletions
```diff
@@ -101,7 +101,8 @@ class HuggingFacePredictor:
     # Logic for inference on 1 batch of data.
     def __call__(self, batch: Dict[str, np.ndarray]) -> Dict[str, list]:
         # Get the predictions from the input batch.
-        predictions = self.model(list(batch["data"]), max_length=20, num_return_sequences=1)
+        predictions = self.model(
+            list(batch["data"]), max_length=20, num_return_sequences=1)
         # `predictions` is a list of length-one lists. For example:
         # [[{'generated_text': 'output_1'}], ..., [{'generated_text': 'output_2'}]]
         # Modify the output to get it into the following format instead:
```
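The snippet above comes from Ray's batch-inference example, where a callable class maps a batch dict to a batch dict. As a rough illustration of that calling convention without Ray or a real Hugging Face model, here is a minimal sketch in which `StubPredictor` and its echo "model" are hypothetical stand-ins, not Ray or Transformers APIs:

```python
from typing import Dict
import numpy as np


class StubPredictor:
    """Hypothetical stand-in for the HuggingFacePredictor in the diff:
    a callable class that maps a batch dict to a batch dict."""

    def __init__(self):
        # Stub "model": for each input text, return a length-one list of
        # dicts, mimicking the shape of a text-generation pipeline's output.
        self.model = lambda texts: [
            [{"generated_text": f"out:{t}"}] for t in texts
        ]

    def __call__(self, batch: Dict[str, np.ndarray]) -> Dict[str, list]:
        # Get the predictions from the input batch.
        predictions = self.model(list(batch["data"]))
        # `predictions` is a list of length-one lists; flatten it into a
        # plain list of generated strings, one per input row.
        batch["output"] = [seqs[0]["generated_text"] for seqs in predictions]
        return batch


batch = {"data": np.array(["a", "b"])}
out = StubPredictor()(batch)
print(out["output"])  # ['out:a', 'out:b']
```

In the real example the class is passed to Ray Data's `map_batches`, which calls it once per batch of rows.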
```diff
@@ -181,8 +182,7 @@ trainer = LightGBMTrainer(
 tuner = tune.Tuner(
     trainer=trainer,
     param_space=hyper_param_space,
-    tune_config=tune.TuneConfig(num_sa
-    les=1000),
+    tune_config=tune.TuneConfig(num_samples=1000),
 )
 # Step 3: run distributed HPO with 1000 trials; each trial runs on 64 CPUs
```
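In the snippet being fixed, `num_samples=1000` tells Ray Tune how many trials to draw from the search space. To illustrate that semantics without Ray, here is a tiny hypothetical random-search loop (`run_hpo` is an illustrative helper, not a Ray API, and the search space values are made up):

```python
import random


def run_hpo(param_space, num_samples, trainable):
    """Draw `num_samples` configs from a discrete search space, evaluate
    each with `trainable`, and keep the best (score, config) pair."""
    best = None
    for _ in range(num_samples):
        # One "trial": sample a config and evaluate it.
        config = {k: random.choice(v) for k, v in param_space.items()}
        score = trainable(config)
        if best is None or score > best[0]:
            best = (score, config)
    return best


random.seed(0)
space = {"num_leaves": [15, 31, 63], "learning_rate": [0.01, 0.1]}
best_score, best_cfg = run_hpo(
    space,
    num_samples=8,  # analogous to TuneConfig(num_samples=...)
    trainable=lambda c: c["num_leaves"] * c["learning_rate"],
)
```

Ray Tune layers schedulers and search algorithms on top of this basic idea; `num_samples` still bounds the total number of trials.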
