chore(api): handle performance-metrics package not existing #14922
Conversation
Closes EXEC-186

If the gantry is not homed and a power cycle occurs, the drop-tip wizard cannot proceed with its flows. An error is raised during the flow, and ultimately a home command is dispatched that has the side effect of potentially aspirating liquid into the pipette, damaging it. We special-case home errors to prevent this. The primary functional difference is that any time an error occurs, exiting the wizard via the header no longer homes the gantry. Homing as a result of an error should only occur when the "Confirm removal and home" button is presented and clicked.
```python
try:
    # Use the real tracker when performance-metrics is installed.
    from performance_metrics import RobotContextTracker

    return RobotContextTracker
except ImportError:
    # Fall back to the no-op stub when the package is unavailable.
    return StubbedTracker
```
Should I check opentrons.config.IS_ROBOT here?
```python
if IS_ROBOT:
    try:
        from performance_metrics import RobotContextTracker

        return RobotContextTracker
    except ImportError:
        raise  # or log here
else:
    return StubbedTracker
```
nah I think this is totally fine
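(For illustration only: a minimal self-contained sketch of why the plain try/except suffices without an `IS_ROBOT` check. The wrapper name `_get_tracker_class` and the bare `StubbedTracker` body are assumptions made for this sketch, not names confirmed by the diff.)

```python
class StubbedTracker:
    """Placeholder for this sketch; the real stub lives in performance_helpers.py."""


def _get_tracker_class() -> type:
    try:
        # On-robot, performance-metrics is installed, so the real tracker loads.
        from performance_metrics import RobotContextTracker

        return RobotContextTracker
    except ImportError:
        # Off-robot (e.g. a plain PyPI install), the import fails and the stub
        # loads instead, so no explicit IS_ROBOT check is required.
        return StubbedTracker


print(_get_tracker_class().__name__)  # "StubbedTracker" when the package is absent
```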
# Overview

Closes https://opentrons.atlassian.net/browse/EXEC-399

Evaluate robot context as `In Analysis` when `_analyze` is running in `opentrons.cli.analyze`. A hedged sketch of the decorator pattern follows this description.

# Changelog

- Add `performance_metrics_data` directory
- Inside of performance_helpers.py:
  - Add logic to determine whether tracking should be enabled (based off a feature flag)
  - Add `track_analysis` factory decorator (I think that is the right name for it lol)
- Wrap `opentrons.cli.analyze._analyze` with the `track_analysis` decorator
- Change attribute naming on `SupportsTracking` to `storage_location`. Whether the passed Path is treated as a directory that needs a file name defined, or as a direct file path, is an implementation detail.
- Have `RobotContextTracker` define its own file name

# Test Plan

- Verifying storage with tests in test_cli.py. A follow-up PR will implement storing to a file on the robot; at that point, testing will be done on the robot.
- Did verify that the `performance_metrics_data` directory is getting created

# Review requests

None

# Risk assessment

Low. Although this wraps production code, it is gated by a feature flag that defaults to false.
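(Illustrative only: a minimal sketch of a feature-flag-gated decorator in the shape the changelog describes. `_should_track` and the placeholder prints are assumptions; only the `track_analysis` name, the `_analyze` target, and gating by a default-off flag come from the description above.)

```python
import functools
from typing import Any, Callable, TypeVar

F = TypeVar("F", bound=Callable[..., Any])


def _should_track() -> bool:
    """Stand-in for the feature-flag check; the real flag defaults to False."""
    return False


def track_analysis(func: F) -> F:
    """Wrap a function so its run is recorded as 'In Analysis' when tracking is on."""

    @functools.wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        if not _should_track():
            # Flag off: behave exactly like the undecorated function.
            return func(*args, **kwargs)
        # Flag on: the real code would hand timing to a RobotContextTracker;
        # this placeholder only marks entry and exit.
        print("entering 'In Analysis' context")
        try:
            return func(*args, **kwargs)
        finally:
            print("leaving 'In Analysis' context")

    return wrapper  # type: ignore[return-value]


# Usage mirrors the PR: the decorator wraps opentrons.cli.analyze._analyze.
@track_analysis
def _analyze() -> None:
    ...
```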
# Overview

Closes https://opentrons.atlassian.net/browse/EXEC-407

We do not want performance-metrics to be a required dependency for the publicly available opentrons package on PyPI, but we do want to utilize performance-metrics when running the opentrons package on the robot itself. The issue is that since the performance-metrics package uses decorators to track running code, the decorators always need to exist. This PR handles the case where the performance-metrics package does not exist and injects stubs where necessary; a hedged sketch of the resulting pieces follows this description.

# Changelog

- Created the `SupportsTracking` mypy Protocol to define an interface that both `RobotContextTracker` and the stubbed class, `StubbedTracker`, implement
- Added performance-metrics as a dev dependency so tests using performance-metrics can still run
- Created `performance_helpers.py`:
  - Contains the `StubbedTracker` definition
  - Handles loading `StubbedTracker` if the performance-metrics library fails to load
  - Provides the `_get_robot_context_tracker` private singleton function for eventual public-facing functions to use

# Test Plan

- Testing to ensure the stubbed `track` decorator returns the decorated function unchanged
- Validate singleton logic of `_get_robot_context_tracker`

# Risk assessment

Low, still not actually being used anywhere

---------

Co-authored-by: Jethary Rader <[email protected]>
Co-authored-by: Jamey Huffnagle <[email protected]>
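(Illustrative only: a minimal sketch, assuming details not shown in the diff, of how the Protocol, stub, and singleton accessor could fit together. The names `SupportsTracking`, `StubbedTracker`, `_get_robot_context_tracker`, and `storage_location` come from the descriptions above; the method signatures, the `_get_tracker_class` helper, and the storage path are assumptions.)

```python
from pathlib import Path
from typing import Any, Callable, Optional, Protocol, TypeVar

F = TypeVar("F", bound=Callable[..., Any])


class SupportsTracking(Protocol):
    """Interface that both RobotContextTracker and StubbedTracker implement."""

    def __init__(self, storage_location: Path) -> None:
        ...

    def track(self, func: F) -> F:
        ...


class StubbedTracker:
    """No-op tracker injected when performance-metrics is not installed."""

    def __init__(self, storage_location: Path) -> None:
        self._storage_location = storage_location

    def track(self, func: F) -> F:
        # The stubbed decorator returns the decorated function unchanged,
        # matching the behavior verified in the test plan above.
        return func


def _get_tracker_class() -> type:
    try:
        # Use the real tracker when the package is available (e.g. on-robot).
        from performance_metrics import RobotContextTracker

        return RobotContextTracker
    except ImportError:
        return StubbedTracker


_tracker: Optional[SupportsTracking] = None


def _get_robot_context_tracker() -> SupportsTracking:
    """Singleton accessor: construct the tracker once, then reuse it."""
    global _tracker
    if _tracker is None:
        # The storage path here is a placeholder, not the real location.
        _tracker = _get_tracker_class()(storage_location=Path("performance_metrics_data"))
    return _tracker
```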