
Milestone 4 Feedback #77

ArmaghanSarvar opened this issue Mar 23, 2023 · 5 comments

Comments

@ArmaghanSarvar

Congratulations on finishing milestone 4! We can see you put a lot of work into this project; nice work. Below we list some specific feedback you can use to improve your project.

  • Submission instructions: mechanics: 4/4
    o The information requested for the submission on Canvas includes what was modified, the source, the team member, and the commit(s).

  • Improving your app: mechanics: 20/20
    o There is one improvement to the application by each team member, and the changes are not only minor fixes.

  • Reproducibility: 12/20
    o Testing: rubric={mechanics: 1/5, accuracy: 5/5}
      • The tests are not passing in the GitHub Actions workflow. (mechanics: 1/5)
      • There are 3 test_that() statements in the file tests/testthat/test-shinytest2.R; see the illustrative sketch after this list. (accuracy: 5/5)
    o Continuous Integration and Deployment: rubric={mechanics: 5/5, accuracy: 1/5}
      • There is one workflow badge in the README.md. (mechanics: 5/5)
      • The deployment workflow deploy_app.yaml is not passing. (accuracy: 1/5)

  • Tie it all together: rubric={mechanics: 20/20, accuracy: 20/20, viz: 6/10}
    o The main plots take time to load. (-4)

  • Reflection: reasoning: 6/6
    o There is a reflection-milestone4.md file.
    o reflection-milestone4.md is included in the docs folder.
    o When comparing reflection-milestone4.md with the milestone 1 sketch, it is easy to comprehend the alterations that were made and the reasons behind them.
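For context on the testing criterion, a test_that() block in tests/testthat/test-shinytest2.R typically looks like the following. This is a generic shinytest2 sketch with illustrative widget IDs and an assumed app path, not the group's actual tests.

```r
# Generic shinytest2 smoke test; the widget IDs and app path are
# illustrative assumptions, not this project's actual code.
library(shinytest2)

test_that("app launches and responds to the plot button", {
  # Drive the app in a headless browser (app assumed at the repo root)
  app <- AppDriver$new(app_dir = "../../", name = "smoke-test")

  # Hypothetical inputs; a real test uses the app's actual IDs
  app$set_inputs(year_range = c(2000, 2010))
  app$click("plot_sliders")

  # Snapshot the current input/output values against a stored baseline
  app$expect_values()
})
```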

@samson-bakos
Collaborator

samson-bakos commented Mar 28, 2023

Hello @ArmaghanSarvar, to clarify:

For "The main plots take time to load": is that referring to the plots not rendering immediately when the app opens, or to the plots taking time to update once the button is pressed?

For Reproducibility: where are you seeing the failed workflows? We had failing workflows in previous builds while we were debugging, but all workflows appear to be passing in our repo at the moment.

@ArmaghanSarvar
Author

Hi.

  1. I believe both are true. The plots take a relatively long time to load in general.
  2. I am referring to the following:

@samson-bakos
Collaborator

Regarding the plots taking a long time to load: that doesn't really have anything to do with our implementation. We're hosting this on a free external server, and that is what makes it slow; the plots render almost instantly in the local version, and there's no way to speed up the shinyapps.io deployment without a paid plan. The plots not displaying immediately is fair, as it's a limitation of our original design that we acknowledge in our reflection, but we shouldn't lose marks for the shinyapps.io deployment being slow.

Regarding the workflows: I don't understand what you mean by 'change the files'. We were debugging issues with our workflows and code, which resulted in failed runs in the history, but all workflows pass in the most recent deployment, which is what matters. It doesn't make sense for us to lose marks for having failed runs while debugging, especially when we were eventually able to fix those bugs.
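For reference, the deploy step in a workflow like deploy_app.yaml usually reduces to a single rsconnect call along these lines. This is a generic sketch, not our exact file; the account and app names are placeholders, and the token and secret would come from repository secrets.

```r
# Generic sketch of a shinyapps.io deploy step; names are placeholders.
library(rsconnect)

rsconnect::setAccountInfo(
  name   = "example-account",             # shinyapps.io account (assumed)
  token  = Sys.getenv("SHINYAPPS_TOKEN"), # injected from GitHub secrets
  secret = Sys.getenv("SHINYAPPS_SECRET")
)

rsconnect::deployApp(
  appDir      = ".",           # directory containing the app (assumed)
  appName     = "example-app", # placeholder application name
  forceUpdate = TRUE           # redeploy without an interactive prompt
)
```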

@ArmaghanSarvar
Author

ArmaghanSarvar commented Mar 29, 2023

  1. Yes, I am aware of the server deployment, and the corresponding rubric set by the instructor assumes deployment on the server.
    I have just realized that the user needs to figure out that they must click the "plot sliders" button in order to see the visualizations; before that, it seemed like nothing was loading. Given this, I will add back the 4 points deducted for load time. However, I will unfortunately have to deduct 3 points for user interpretability: when a first-time user opens the app, they will not see any visualization even though some default parameter values have already been selected, and this is not user-friendly (one possible fix is sketched after this list).

  2. That's right, but what I mean is your GitHub release. These criteria are what we have applied in the rubric for every group.
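For illustration, one common Shiny pattern that avoids the blank first view is letting the button-gated reactive evaluate once at startup via ignoreNULL = FALSE. This is a minimal generic sketch with hypothetical input and output IDs, not the app's actual code.

```r
library(shiny)

ui <- fluidPage(
  sliderInput("year", "Year", min = 2000, max = 2020, value = 2010),
  actionButton("plot_sliders", "Plot sliders"),
  plotOutput("main_plot")
)

server <- function(input, output, session) {
  # ignoreNULL = FALSE lets this reactive evaluate once at startup,
  # so a default plot appears before the button is ever clicked.
  plot_data <- eventReactive(input$plot_sliders, {
    rnorm(100, mean = input$year)  # placeholder for the app's real data
  }, ignoreNULL = FALSE)

  output$main_plot <- renderPlot(hist(plot_data()))
}

shinyApp(ui, server)
```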

@samson-bakos
Collaborator

Okay, thank you. The -3 for interpretability is very understandable.

I am still confused about the workflows, as the GitHub release has the same final commit and workflow history as the current repo state, but if it is rubric-based I will ask Florencia about it.
