Implement success criteria metrics for lesson checkpointing #4042

Closed
BenHenning opened this issue Dec 14, 2021 · 2 comments · Fixed by #5336
Assignees
Labels
- enhancement - End user-perceivable enhancements.
- Impact: Medium - Moderate perceived user impact (non-blocking bugs and general improvements).
- Work: Medium - The means to find the solution is clear, but it isn't at good-first-issue level yet.
- Z-ibt - Temporary label for Ben to keep track of issues he's triaged.

Comments

@BenHenning
Sponsor Member

Is your feature request related to a problem? Please describe.
Lesson checkpointing was implemented as a GSoC project last summer, but we didn't have time to implement the success criteria metrics expected for the feature here.

Describe the solution you'd like
The metrics should be implemented using the existing analytics framework. Before actually starting implementation, this feature should be documented, at least in this issue, with a high-level overview of the changes that need to be made in the code. A design document is not needed.

Describe alternatives you've considered
N/A

Additional context
None

@BenHenning
Sponsor Member Author

BenHenning commented Dec 14, 2021

@rishidyno this is actually a higher-priority feature than #3825 and isn't blocked on UX (though it isn't a new UI). Might you be interested in working on this issue?

@rishidyno rishidyno self-assigned this Feb 15, 2022
@rishidyno rishidyno removed their assignment May 26, 2022
@BenHenning BenHenning added this to the Beta MR2 milestone Jun 11, 2022
@Broppia Broppia added Impact: Medium Moderate perceived user impact (non-blocking bugs and general improvements). issue_type_infrastructure labels Jun 13, 2022
@BenHenning BenHenning added Issue: Needs Break-down Indicates that an issue is too large and should be broken into smaller chunks. Z-ibt Temporary label for Ben to keep track of issues he's triaged. issue_user_developer and removed mini-project labels Sep 14, 2022
@BenHenning BenHenning removed this from the Beta MR2 milestone Sep 16, 2022
@seanlip seanlip added enhancement End user-perceivable enhancements. and removed issue_type_infrastructure labels Mar 28, 2023
@adhiamboperes adhiamboperes added the Work: Medium The means to find the solution is clear, but it isn't at good-first-issue level yet. label Jul 31, 2023
@adhiamboperes adhiamboperes added this to the 1.0 Global availability milestone Oct 6, 2023
@theMr17
Collaborator

theMr17 commented Feb 6, 2024

Implementation

The metrics will be logged using the existing analytics framework. Each of the following events will be logged along with the ExplorationContext. As discussed with @adhiamboperes, these are the places where each event log will be triggered.

  1. Progress saving success count
    This count tracks the number of times the user's progress is saved successfully. The event will be triggered in ExplorationProgressController, at the point where we receive a response that the checkpoint was successfully saved. Here.

  2. Progress saving failure count
    This count tracks the number of times saving the user's progress fails. The event will be triggered in ExplorationProgressController, at the point where we receive a response that the checkpoint failed to save. Here.

  3. Lessons saved advertently count
    This count tracks the number of times the user exits the exploration advertently, i.e. by using the 'X' icon or the back button. The event will be triggered in ExplorationActivityPresenter, inside the backButtonPressed function. Here.

  4. Lessons saved inadvertently count
    This metric is better derived from other counts, as shown below, rather than logged directly, because logging the event would be complex in this situation: it covers app crashes, updates, topic deletions and reinstalls, which are difficult to separate from total saving attempts. (See the sketch after this list for how the derived counts combine.)

    Lessons saved inadvertently count = Total saving attempt count - Lessons saved advertently count

  5. Total saving attempt count

    Total saving attempt count = Progress saving success count + Progress saving failure count

  6. Start over count
    Already implemented as a part of Learner Analytics. Here.

  7. Continuation count
    Already implemented as a part of Learner Analytics. Here.

  8. Correct answer count after returning to the lesson
    A new variable will be introduced in ExplorationProgressController.ControllerState so that the event is logged only when a resumed lesson is being played. The event will be triggered in ExplorationProgressController, inside the ControllerState.submitAnswerImpl function. Here.

  9. Incorrect answer count after returning to the lesson
    This count is the same as above, but will be logged when an incorrect answer is submitted.
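
As a concrete illustration of how the derived counts in items 4 and 5 combine, here is a minimal Kotlin sketch. The types and names below are illustrative stand-ins, not the actual Oppia Android classes; the real implementation logs individual events through the existing analytics framework rather than aggregating them in-app.

```kotlin
// Hypothetical event kinds for checkpoint-related logging (illustrative only).
enum class CheckpointEvent { SAVE_SUCCESS, SAVE_FAILURE, SAVED_ADVERTENTLY }

// Aggregated counts, including the derived metrics from items 4 and 5 above.
data class CheckpointMetrics(
  val progressSavingSuccessCount: Int,
  val progressSavingFailureCount: Int,
  val lessonsSavedAdvertentlyCount: Int
) {
  // Item 5: Total saving attempt count = success count + failure count.
  val totalSavingAttemptCount: Int
    get() = progressSavingSuccessCount + progressSavingFailureCount

  // Item 4: Lessons saved inadvertently count = total attempts - advertent saves.
  val lessonsSavedInadvertentlyCount: Int
    get() = totalSavingAttemptCount - lessonsSavedAdvertentlyCount
}

// Aggregates a stream of logged events into the counts above.
fun aggregateCheckpointMetrics(events: List<CheckpointEvent>): CheckpointMetrics =
  CheckpointMetrics(
    progressSavingSuccessCount = events.count { it == CheckpointEvent.SAVE_SUCCESS },
    progressSavingFailureCount = events.count { it == CheckpointEvent.SAVE_FAILURE },
    lessonsSavedAdvertentlyCount = events.count { it == CheckpointEvent.SAVED_ADVERTENTLY }
  )
```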

File Changes

The following tables list the files that will be modified in each of the modules.

App module

| File | Change |
| --- | --- |
| ExplorationActivityPresenter.kt | Log lessons saved advertently. |

Domain module

| File | Change |
| --- | --- |
| ExplorationProgressController.kt | Log progress save success & failure, correct & incorrect answer. |
| LearnerAnalyticsLogger.kt | Log the events via AnalyticsController. |

Model module

| File | Change |
| --- | --- |
| oppia_logger.proto | Introduce context fields for above events. |

Utility module

| File | Change |
| --- | --- |
| EventBundleCreator.kt | Convert new fields to ActivityContext. |
| StandardEventTypeToHumanReadableNameConverterImpl.kt | Add human readable names for above events. |
| KenyaAlphaEventTypeToHumanReadableNameConverterImpl.kt | Add human readable names for above events. |
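
To make the converter rows in the Utility module table concrete, here is a hedged sketch of the kind of event-type-to-name mapping those implementations add. The enum and function below are hypothetical stand-ins; the real classes operate on the generated proto types and implement an existing Oppia interface.

```kotlin
// Hypothetical stand-in for the new checkpoint-related event kinds (illustrative only).
enum class CheckpointEventType {
  PROGRESS_SAVING_SUCCESS,
  PROGRESS_SAVING_FAILURE,
  LESSON_SAVED_ADVERTENTLY,
  CORRECT_ANSWER_IN_RESUMED_LESSON,
  INCORRECT_ANSWER_IN_RESUMED_LESSON
}

// Maps each new event type to a stable, human-readable event name, mirroring
// the pattern the converter implementations follow for existing event types.
fun convertToHumanReadableName(eventType: CheckpointEventType): String =
  when (eventType) {
    CheckpointEventType.PROGRESS_SAVING_SUCCESS -> "progress_saving_success"
    CheckpointEventType.PROGRESS_SAVING_FAILURE -> "progress_saving_failure"
    CheckpointEventType.LESSON_SAVED_ADVERTENTLY -> "lesson_saved_advertently"
    CheckpointEventType.CORRECT_ANSWER_IN_RESUMED_LESSON ->
      "correct_answer_in_resumed_lesson"
    CheckpointEventType.INCORRECT_ANSWER_IN_RESUMED_LESSON ->
      "incorrect_answer_in_resumed_lesson"
  }
```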

@adhiamboperes adhiamboperes removed the Issue: Needs Break-down Indicates that an issue is too large and should be broken into smaller chunks. label Apr 5, 2024
adhiamboperes added a commit that referenced this issue May 8, 2024
(#5336)

## Explanation

Fixes #4042

This PR implements the success criteria metrics required for lesson
checkpointing as described
[here](https://docs.google.com/document/d/1d8yjwz76mngtsPRxC7fubgLKg8mfA7kG1sWRWdbiaVw/edit#bookmark=id.2zyjd5vygmcv).
A high-level overview of the implementation is documented on the issue
thread [here](#4042 (comment)).

#### Changes to FakeAnalyticsEventLogger
- `fun getOldestEvents(count: Int): List<EventLog>` retrieves a pre-defined number of the oldest events from all logged events.
- `fun getLoggedEvent(predicate: (EventLog) -> Boolean): EventLog?` acquires a reference to a logged event when the event's context is known but its index is uncertain or the event is surrounded by other events, making extraction by index difficult.
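
For illustration, the sketch below shows the intent of these two helpers over a simplified in-memory fake. It is not the actual FakeAnalyticsEventLogger; the generic parameter stands in for the real EventLog proto type.

```kotlin
// Simplified stand-in for a fake event logger that records events in order.
class FakeEventLoggerSketch<EventLog> {
  private val loggedEvents = mutableListOf<EventLog>()

  // Records an event; events are kept in logging order (oldest first).
  fun logEvent(event: EventLog) {
    loggedEvents += event
  }

  // Returns up to `count` of the oldest logged events.
  fun getOldestEvents(count: Int): List<EventLog> = loggedEvents.take(count)

  // Returns the first logged event matching the predicate, or null if none matches.
  fun getLoggedEvent(predicate: (EventLog) -> Boolean): EventLog? =
    loggedEvents.firstOrNull(predicate)
}
```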

## Essential Checklist
- [x] The PR title and explanation each start with "Fix #bugnum: " (If
this PR fixes part of an issue, prefix the title with "Fix part of
#bugnum: ...".)
- [x] Any changes to
[scripts/assets](https://github.com/oppia/oppia-android/tree/develop/scripts/assets)
files have their rationale included in the PR explanation.
- [x] The PR follows the [style
guide](https://github.com/oppia/oppia-android/wiki/Coding-style-guide).
- [x] The PR does not contain any unnecessary code changes from Android
Studio
([reference](https://github.com/oppia/oppia-android/wiki/Guidance-on-submitting-a-PR#undo-unnecessary-changes)).
- [x] The PR is made from a branch that's **not** called "develop" and
is up-to-date with "develop".
- [x] The PR is **assigned** to the appropriate reviewers
([reference](https://github.com/oppia/oppia-android/wiki/Guidance-on-submitting-a-PR#clarification-regarding-assignees-and-reviewers-section)).

## For UI-specific PRs only
If your PR includes UI-related changes, then:
- Add screenshots for portrait/landscape for both a tablet & phone of
the before & after UI changes
- For the screenshots above, include both English and pseudo-localized
(RTL) screenshots (see [RTL
guide](https://github.com/oppia/oppia-android/wiki/RTL-Guidelines))
- Add a video showing the full UX flow with a screen reader enabled (see
[accessibility
guide](https://github.com/oppia/oppia-android/wiki/Accessibility-A11y-Guide))
- For PRs introducing new UI elements or color changes, both light and
dark mode screenshots must be included
- Add a screenshot demonstrating that you ran affected Espresso tests
locally & that they're passing

---------

Co-authored-by: Adhiambo Peres <[email protected]>