Merge pull request microsoft#41 from madhavMathur/madhavMathur/fix-typo
Update README.md
jindongwang committed Dec 26, 2023
2 parents c5ea99e + e68eae2 commit 789b669
Showing 1 changed file with 6 additions and 6 deletions.
12 changes: 6 additions & 6 deletions README.md
@@ -97,12 +97,12 @@
## Installation

### Install via `pip`
- We provide a Python package *promptbench* for users who want to start evaluation quickly. Simply run
+ We provide a Python package *promptbench* for users who want to start evaluation quickly. Simply run:
```sh
pip install promptbench
```

- Note that the pip installation could be behind the recent updates. So, if you want to use the latest features or develop based on our code, you should intall via Github.
+ Note that the pip installation could be behind the recent updates. So, if you want to use the latest features or develop based on our code, you should install via GitHub.

### Install via GitHub

@@ -129,12 +129,12 @@ then use pip to install required packages:
pip install -r requirements.txt
```
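For the GitHub route, the full sequence might look like the sketch below; the repository URL is inferred from the upstream `microsoft` org referenced in this PR and should be verified against the repository itself:

```sh
# Clone the repository (URL assumed from the upstream org in this PR)
git clone https://github.com/microsoft/promptbench.git
cd promptbench
# Install the base Python dependencies
pip install -r requirements.txt
```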

- Note that this only installed basic python packages. For Prompt Attacks, it requires to install textattacks.
+ Note that this only installed basic python packages. For Prompt Attacks, you will also need to install [TextAttack](https://github.com/QData/TextAttack).
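Since TextAttack is an optional extra rather than part of the base requirements, a stdlib-only availability check can be a useful sketch (the import name `textattack` is an assumption here):

```python
import importlib.util

def has_textattack() -> bool:
    """Return True if the optional TextAttack package can be imported."""
    # find_spec probes the import system without actually importing the package
    return importlib.util.find_spec("textattack") is not None

print("TextAttack available:", has_textattack())
```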


## Usage

- promptbench is easy to use and extend. Going through the bellowing examples will help you familiar with promptbench for quick use, evaluate an existing datasets and LLMs, or creating your own datasets and models.
+ promptbench is easy to use and extend. Going through the examples below will help you get familiar with promptbench for quick use, evaluate existing datasets and LLMs, or create your own datasets and models.


Please see [Installation](#installation) to install promptbench first.
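As an illustrative sketch only, a guarded quick-start might look like the following; the `DatasetLoader` and `LLMModel` names and their arguments are assumptions, not confirmed API, so defer to the repository's own examples:

```python
import importlib.util

# Hypothetical quick-start: class names and arguments below are assumptions;
# check the promptbench repository's examples for the authoritative usage.
if importlib.util.find_spec("promptbench") is not None:
    import promptbench as pb
    dataset = pb.DatasetLoader.load_dataset("sst2")    # assumed dataset key
    model = pb.LLMModel(model="google/flan-t5-large")  # assumed model id
else:
    print("promptbench is not installed; see the Installation section above.")
```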
@@ -212,7 +212,7 @@ Please refer to our [benchmark website](https://llm-eval.github.io/) for benchma

## Acknowledgements

- - [textattacks](https://github.com/textattacks)
+ - [TextAttack](https://github.com/QData/TextAttack)
- [README Template](https://github.com/othneildrew/Best-README-Template)
- We thank the volunteers Hanyuan Zhang, Lingrui Li, and Yating Zhou for conducting the semantic-preserving experiment in the Prompt Attack benchmark.

@@ -232,7 +232,7 @@ Please refer to our [benchmark website](https://llm-eval.github.io/) for benchma

## Citing promptbench and other research papers

- Please cite us if you fine this project helpful for your project/paper:
+ Please cite us if you find this project helpful for your project/paper:

```
@article{zhu2023promptbench2,
