
Simple grading python API #1371

Open
vpozdnyakov opened this issue Sep 16, 2020 · 4 comments


vpozdnyakov commented Sep 16, 2020

Hello there!

I use nbgrader to run competitions at my university. Each student submits an .ipynb file, and then an external system (EvalAI) starts the evaluation via nbgrader. I don't need to maintain gradebook.db, the course structure and so on, because EvalAI already handles this. I only need a Python interface that lets me grade a student's submission, nothing more. It could be an interface of the form

def autograde(source_ipynb, solution_ipynb):
    ...
    return {'cell_1': 1.0, 'cell_2': 0.0, 'cell_3': 2.0, ...}

Unfortunately there is no such interface, so I have to recreate a course structure with a random username in each iteration of the evaluation, and delete it afterwards to free disk space. It looks like:

from os import makedirs
from shutil import copyfile, rmtree

from nbgrader.apps import NbGraderAPI

def autograde():
    # Recreate a minimal course structure for a single throwaway "student".
    makedirs('source/practice/', exist_ok=True)
    copyfile('source.ipynb', 'source/practice/assignment_1.ipynb')
    username = random_username()  # helper that generates a unique student id
    makedirs('submitted/{}/practice/'.format(username), exist_ok=True)
    copyfile('solution.ipynb', 'submitted/{}/practice/assignment_1.ipynb'.format(username))
    # Autograde and produce feedback through the high-level API.
    api = NbGraderAPI()
    api.autograde('practice', username, force=True, create=True)
    api.generate_feedback('practice', username, force=True)
    copyfile('feedback/{}/practice/assignment_1.html'.format(username), destination_path)
    score = api.get_student_submissions(username)[0]['code_score']
    # Clean up everything generated for this run.
    rmtree('autograded/{}'.format(username), ignore_errors=True)
    rmtree('feedback/{}'.format(username), ignore_errors=True)
    rmtree('submitted/{}'.format(username), ignore_errors=True)
    return score

I suggest developing an interface that allows grading submissions (and also generating the assignment/feedback) without gradebook.db, a course structure, and so on.
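For concreteness, something like the wrapper below is what I have in mind. It is only a sketch built on the current API: it assumes a throwaway course directory created with tempfile, a hypothetical assignment name 'practice' and student id 'student', and that CourseDirectory.root / CourseDirectory.db_url can be set via the config passed to NbGraderAPI (the exact trait names may differ between nbgrader versions).

import os
import shutil
import tempfile

from traitlets.config import Config
from nbgrader.apps import NbGraderAPI

def autograde(source_ipynb, solution_ipynb):
    """Grade one submission in a throwaway course directory and return its code score."""
    with tempfile.TemporaryDirectory() as course_root:
        config = Config()
        # Assumption: these traits control the course root and database location.
        config.CourseDirectory.root = course_root
        config.CourseDirectory.db_url = 'sqlite:///' + os.path.join(course_root, 'gradebook.db')

        os.makedirs(os.path.join(course_root, 'source', 'practice'))
        shutil.copyfile(source_ipynb, os.path.join(course_root, 'source', 'practice', 'assignment_1.ipynb'))
        os.makedirs(os.path.join(course_root, 'submitted', 'student', 'practice'))
        shutil.copyfile(solution_ipynb, os.path.join(course_root, 'submitted', 'student', 'practice', 'assignment_1.ipynb'))

        api = NbGraderAPI(config=config)
        api.autograde('practice', 'student', force=True, create=True)
        return api.get_student_submissions('student')[0]['code_score']

Everything lives under the temporary directory, so nothing needs to be cleaned up afterwards and the real course tree is untouched.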

@perllaghu
Contributor

That's because the grading system requires the gradebook.db file.

The workflow is thus:

  • We start with a source notebook, which has solutions and hidden tests.
  • The Generate button copies the source notebook into the release directory and removes the “solutions” and “hidden tests” sections.
  • Release copies the notebooks to the exchange service.
  • Submitted notebooks may have cell output in the document.
  • Collected notebooks are copies of the submitted notebooks, in the instructor's storage space.
  • Autograde modifies the submitted notebook:
    • The notebook is copied into an autograded directory and the hidden tests are re-inserted into the notebook.
    • The new notebook is then run [akin to using the double-headed arrow in the notebook toolbar]. “Autograded test” cells that run with no errors are given the points defined for them (stored in the db).
    • The notebook is saved with its output.

..... so the gradebook.db database is integral to the process.
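(For reference, the scores that autograde writes into that database can be read back with nbgrader's Gradebook class. A minimal sketch, using placeholder assignment/student ids:)

from nbgrader.api import Gradebook

# Open the course database that autograde populated (default location shown).
gb = Gradebook('sqlite:///gradebook.db')
try:
    submission = gb.find_submission('practice', 'student')  # assignment id, student id
    print('total:', submission.score, '/', submission.max_score)
    print('code score:', submission.code_score)
finally:
    gb.close()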

There was a proposal to create an external grader..... however I can't find the reference for that.

@vpozdnyakov
Author

@perllaghu thanks for the answer. As I understand it, there is nothing preventing this from being done in memory or with temp files. At least my code above does something like this, but with a few extra steps to maintain a gradebook.db and course structure that I don't actually use.

@perllaghu
Contributor

sqlite in memory.... no problem :)
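(For what it's worth, Gradebook accepts any SQLAlchemy URL, so an in-memory database is just "sqlite://". A quick sketch; the same URL could in principle be supplied to the autograder via its db_url config, though I haven't verified the whole pipeline against an in-memory database:)

from nbgrader.api import Gradebook

# "sqlite://" with no path is SQLAlchemy's in-memory SQLite URL:
# nothing touches disk and the data disappears when the process exits.
gb = Gradebook('sqlite://')
gb.add_student('student_1')
print([s.id for s in gb.students])
gb.close()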


ajaykushwaha commented Jan 31, 2023

> The new notebook is then run [akin to using the double-headed arrow in the notebook toolbar]. “Autograded test” cells that run with no errors are given the points defined for them (stored in the db).

@perllaghu, can you point out which files are used for this? More specifically, I am looking for the code that decides whether a cell ran with or without an error.
