- Overview
- Quick-start guide
- Component overview
- Terminology
- Solvers
- The evaluation UI
- Question sets
- Feedback
- History
Aristo mini is a light-weight question answering system that can quickly evaluate Aristo science questions with an evaluation web server and the provided baseline solvers. You can also extend the provided solvers with your own implementations to try out new approaches and compare results.
To experiment you'll need Python 3.6. We recommend you create a dedicated virtual environment for aristo-mini and its dependencies.
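For example, using the built-in venv module (the environment name below is just an illustration):

```bash
# Create and activate a dedicated environment (name is illustrative).
python3.6 -m venv aristo-mini-env
source aristo-mini-env/bin/activate
```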
Then follow these steps:
- Clone this repo:

  ```bash
  git clone git@github.com:allenai/aristo-mini.git
  cd aristo-mini
  ```
- Install the requirements:

  ```bash
  pip install -r requirements.txt
  ```
- Add the project to your PYTHONPATH:

  ```bash
  export PYTHONPATH=$PYTHONPATH:`pwd`
  ```
- Run the random solver in one terminal window:

  ```bash
  python aristomini/solvers/randomguesser.py
  ```
- Run the evaluation web UI in another terminal window:

  ```bash
  python aristomini/evalui/evalui.py
  ```
- Try the UI in your browser at http://localhost:9000/
Included are these components:
- Simple solvers: Simple example solvers with JSON APIs that can answer multiple choice questions.
- Simple evaluation system: A web UI to a simple evaluation process that pairs questions with a solver to produce a score.
- Question sets: A subset of Aristo's science questions are included for convenience.
Consider a question that might be represented on an exam like this:
What is the color of the sky?
(A) blue
(B) green
(C) red
(D) black
Parts of this question are named like this:
- Question stem: The non-choices part of the question. Example: `What is the color of the sky?`
- Answer key: The correct answer's choice label. Example: `A`
- Choice: One of the possible answers, consisting of a choice label (e.g., `A`) and choice text (e.g., `blue`).
These are modeled as `NamedTuple`s in aristomini/common/models.py.
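For orientation, a sketch of what those models might look like is below; apart from MultipleChoiceQuestion (which the question sets reference), the class and field names here are illustrative guesses, so check aristomini/common/models.py for the real definitions.

```python
# Illustrative sketch only -- not copied from aristomini/common/models.py.
from typing import List, NamedTuple

class Choice(NamedTuple):
    label: str   # e.g. "A"
    text: str    # e.g. "blue"

class MultipleChoiceQuestion(NamedTuple):
    stem: str                    # e.g. "What is the color of the sky?"
    choices: List[Choice]
    answerKey: str = ""          # assumed field name for the answer key
```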
Several solvers are included in this distribution of Aristo mini. You can run one solver at a time for the evaluation UI to use.

The random guesser solver answers questions randomly. It illustrates the question-answer interface for a solver.
As above, you can start it with

```bash
python aristomini/solvers/randomguesser.py
```

Then you can go to http://localhost:8000/solver-info to confirm that it is running.
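You can also check it from the command line (assuming the default port above):

```bash
curl http://localhost:8000/solver-info
```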
To answer a question, you can POST to `/answer`. To try it on the command line:
- Make a JSON file with the question, structured like this:

  ```bash
  % cat question.json
  {
    "stem": "What color is the sky?",
    "choices": [
      { "label": "A", "text": "red" },
      { "label": "B", "text": "green" },
      { "label": "C", "text": "blue" }
    ]
  }
  ```
- Submit the request with `curl`:

  ```bash
  % curl -H "Content-Type: application/json" --data @question.json http://localhost:8000/answer
  ```
- Look at the response:

  ```json
  {
    "multipleChoiceAnswer": {
      "choiceConfidences": [
        { "choice": { "text": "red", "label": "A" }, "confidence": 0.398084282084622 },
        { "choice": { "text": "green", "label": "B" }, "confidence": 0.984916549460303 },
        { "choice": { "text": "blue", "label": "C" }, "confidence": 0.13567292440745 }
      ]
    },
    "solverInfo": "RandomGuesser"
  }
  ```
For the text search solver, see aristomini/solvers/textsearch.md for setup and running instructions.
For the word vector similarity solver, use the scripts/train_word2vec_model.py script to train a Word2Vec model from a text file of sentences (one per line). For instance, you could use the same sentences as the text search solver. Then start the solver with the path to the word2vec model:

```bash
python aristomini/solvers/wordvectorsimilarity.py /path/to/word2vec/model
```
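If you'd rather train a model outside the provided script, gensim can do it in a few lines; this is an assumption about the approach (the bundled script may differ), and sentences.txt is a placeholder file name:

```python
# Train a word2vec model from a file with one sentence per line (gensim).
# This is a sketch; scripts/train_word2vec_model.py may use different settings.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

sentences = LineSentence("sentences.txt")   # placeholder corpus path
model = Word2Vec(sentences)                 # default hyperparameters for brevity
model.save("/path/to/word2vec/model")
```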
To write your own solver, modify aristomini/solvers/mysolver.py. It has two TODOs marking the parts you need to update.
Your solver has to be an HTTP server that responds to the `GET /solver-info` and `POST /answer` APIs. The `POST /answer` API has to consume a JSON-formatted question document and must produce a JSON-formatted response document with the answer. You can start reading at aristomini/common/solver.py (which is extended by the provided solvers) to understand the input and output document structures.

Since a solver is just an HTTP server, you can write it in any language you like. You should follow the existing solvers for the input and output JSON formats.
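For illustration, here is a minimal standalone solver sketch in Python using Flask; the framework choice and the solver name are assumptions for the example (the bundled solvers build on aristomini/common/solver.py instead), and this one simply always prefers choice A:

```python
# Minimal standalone solver sketch (assumes Flask; not the repository's implementation).
from flask import Flask, jsonify, request

app = Flask(__name__)
SOLVER_NAME = "AlwaysA"  # hypothetical solver name

@app.route("/solver-info", methods=["GET"])
def solver_info():
    return SOLVER_NAME

@app.route("/answer", methods=["POST"])
def answer():
    question = request.get_json()
    choice_confidences = [
        {"choice": choice, "confidence": 1.0 if choice["label"] == "A" else 0.0}
        for choice in question["choices"]
    ]
    return jsonify({
        "multipleChoiceAnswer": {"choiceConfidences": choice_confidences},
        "solverInfo": SOLVER_NAME,
    })

if __name__ == "__main__":
    app.run(port=8000)  # the evaluation UI expects a solver on localhost:8000
```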
Once started (see above), you can go to http://localhost:9000/ and click around.

The UI is hard-coded to connect to a solver on localhost:8000. If you started a solver as above, it will be used automatically. You can restart solvers (on localhost:8000) while the evaluation UI remains running.
Several question sets are provided in the questions/ directory.
These question sets are written in the JSONL format, each line corresponding to an instance of MultipleChoiceQuestion.
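A single line might look roughly like this; the stem/choices structure follows the request JSON above, while the answerKey field name is an assumption, so check the bundled files for the exact schema:

```json
{"stem": "What color is the sky?", "answerKey": "C", "choices": [{"label": "A", "text": "red"}, {"label": "B", "text": "green"}, {"label": "C", "text": "blue"}]}
```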
To try other question sets in this format, add them to the questions/ directory above and restart the evaluation UI.
AI2 provides more questions at https://allenai.org/data.html.
Please tell us what you think!
- If you have a question or suggestion for a change, take a look at the existing issues or file a new issue.
- If you'd like to propose a change to this code, please submit a pull request.
- November, 2016: Initial public release, version 1.
- February, 2018: Delete all Scala code.
- March, 2018: Update README.