
Addition of tests to check build status #8

Closed
Milind220 opened this issue Feb 21, 2022 · 12 comments · Fixed by #120
Assignees
Labels
help wanted Extra attention is needed

Comments

@Milind220
Collaborator

The package currently has no test coverage at all, so there is no way to check build status with CI/CD.
At a minimum, the core functionality should have test coverage.

@Milind220 Milind220 added the help wanted Extra attention is needed label Feb 21, 2022
@Milind220 Milind220 self-assigned this Feb 24, 2022
@Milind220
Collaborator Author

I'll write the tests for the current core functionality.

@Samxx97
Copy link
Contributor

Samxx97 commented Mar 15, 2022

Hey @Milind220, can I take this off your plate?
I've always wanted to implement a test suite, and I feel like this will give me the opportunity to learn more about GitHub Actions, workflows, CI/CD, hooks, and such.

@Milind220
Collaborator Author

@Sam-damn Hahaha sure

I've been meaning to get to this for quite a while now, but I've been bogged down with work. Have fun working on it - after it's complete we can open a new issue to connect the tests to a continuous testing workflow, so that you can learn that too!

@Samxx97
Contributor

Samxx97 commented Mar 17, 2022

Great! Very excited for this :)

@Milind220 Milind220 assigned Samxx97 and unassigned Milind220 Mar 19, 2022
@Samxx97
Contributor

Samxx97 commented Mar 22, 2022

@Milind220 A little update on this: I have been planning how to do the tests, and I've decided that we should mock the API request calls, for the following reasons:

  1. We aren't trying to test the API itself; rather, we are testing our own logic, which is a wrapper around the data returned by the API.
  2. API calls are unpredictable and can be slow; they also depend on an internet connection, and tests should always be predictable.

PS: by mocking I mean creating artificial request responses that we know should produce certain behavior from our methods, and then patching (intercepting and replacing) any requests made inside our methods with those mock objects (artificial responses) to test the behavior of said methods. A rough sketch of this is below.
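For illustration, here is a minimal sketch of that approach with pytest-style tests and unittest.mock. The import path, the patch target, the Ozone constructor, the method name, and the response shape are all assumptions for the sake of the example, not necessarily the package's actual API.

```python
from unittest.mock import patch

import ozone  # the package under test (import path is an assumption)

# A hand-crafted payload shaped roughly like a WAQI city-feed response.
# The exact structure here is illustrative, not the real schema.
FAKE_RESPONSE = {
    "status": "ok",
    "data": {"aqi": 42, "city": {"name": "London"}},
}


@patch("ozone.ozone.requests.get")  # patch target path is an assumption
def test_city_air_uses_mocked_response(mock_get):
    # The mock stands in for the real HTTP call, so the test is fast,
    # deterministic, and independent of the network.
    mock_get.return_value.status_code = 200
    mock_get.return_value.json.return_value = FAKE_RESPONSE

    o3 = ozone.Ozone("dummy-token")  # hypothetical constructor
    o3.get_city_air("London")        # hypothetical public method

    # We only verify that our wrapper hit the patched call, i.e. we are
    # exercising our own logic rather than the live API.
    mock_get.assert_called_once()
```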

Having said that, we should definitely include a test to check whether our URL endpoints are exactly what they should be, because a contributor might change one of those URLs in the future, and if that goes unnoticed it should be picked up by our tests.
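A check like that could be as simple as comparing a module constant against a hard-coded expected value; the attribute name and the exact URL below are placeholders, not necessarily what the codebase defines.

```python
import ozone


def test_search_url_is_unchanged():
    # Guard against an accidental edit to the endpoint. Both the attribute
    # name and the expected URL are placeholders for whatever Ozone defines.
    assert ozone.Ozone._search_aqi_url == "https://api.waqi.info/feed/"
```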

And one last thing: I have been reading a bit on the subject of testing private methods, or more accurately "internal" methods, i.e. methods that aren't meant to be used by a consumer of Ozone and are only called internally by other methods, like _parse_data. Methods like these change quite often and aren't stable, so providing tests for them can be counterintuitive: any change to them would require a change to the tests, which would prove cumbersome. That line of reasoning doesn't apply to all private methods, though.

I will still of course provide tests for the private methods, but I would like to hear your opinion on this, since the general consensus seems to be that only the public methods (the Ozone API) should be tested and not the private ones. I realize this isn't an ironclad rule that fits all cases; there are always exceptions, but it does seem to be a general guideline.

I will be making a pull request soon, after I finish the rest of the tests, where we can discuss this further, but for now I would like to hear your opinion on all of this :)

@Milind220
Collaborator Author

Milind220 commented Mar 23, 2022

@Sam-damn

  • Creating a mock API response to feed into Ozone's methods makes sense. The WAQI API is pretty stable and unlikely to change anyway, so testing our own code is all we need to do. We should still stay aware of any changes to the WAQI API, but perhaps that's a separate issue.
  • Definitely include some simple tests to check that the URLs are what they should be; all our functionality is built on this.
  • We can begin by testing only the public methods (see the sketch after this list). If the public methods work, we can be confident that the private methods are functioning fine, and if the public methods fail we can trace the failure back to the private methods.
  • We can always add private-method tests afterwards in a separate PR.
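For example, a public-method test along the lines of the test_bad_data_format cases that later show up in the suite exercises the private validation helpers indirectly. The constructor, the parameter name, and the raised exception type here are assumptions, not the package's confirmed API.

```python
import pytest

import ozone


def test_bad_data_format_is_rejected():
    o3 = ozone.Ozone("dummy-token")  # hypothetical constructor
    # Calling the public method with an unsupported output format should fail,
    # which implicitly covers whatever private helper validates the format.
    with pytest.raises(Exception):
        o3.get_city_air("London", data_format="not-a-real-format")
```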

Thanks for working on this! Your points all made a lot of sense. Can't wait for us to have a continuous testing system for Ozone!

@lahdjirayhan
Contributor

Looking forward to your PR @Sam-damn :)

@Milind220
Collaborator Author

@Sam-damn Hey! Are you still working on the unit tests? We definitely need this ASAP, so let us know if you're too busy to work on it.

@Samxx97
Contributor

Samxx97 commented Apr 16, 2022

@Milind220 I apologize, I got a little busy with my job. I think it would be best if someone else is assigned to this.

@Samxx97 Samxx97 removed their assignment Apr 16, 2022
@lahdjirayhan
Contributor

I'll take this and try.

Hold my beer, I got this.

@lahdjirayhan lahdjirayhan mentioned this issue Apr 21, 2022
@Milind220 Milind220 linked a pull request Apr 21, 2022 that will close this issue
@Milind220
Collaborator Author

Milind220 commented Apr 22, 2022

@lahdjirayhan Tried the tests! They work perfectly :) Can you confirm that you also get this output?

[Screenshot of the test run output, 2022-04-23]

@lahdjirayhan
Contributor

@Milind220 Yes. This is my output:

=========================== short test summary info ===========================
FAILED tests/test_get_city_air.py::test_bad_data_format - AssertionError: ass...
FAILED tests/test_get_city_air.py::test_correct_data_format - AssertionError:...
FAILED tests/test_get_city_forecast.py::test_bad_data_format_input - Assertio...
FAILED tests/test_get_city_forecast.py::test_correct_data_format_input - Asse...
FAILED tests/test_get_coordinate_air.py::test_bad_coordinates - vcr.errors.Ca...
FAILED tests/test_get_coordinate_air.py::test_bad_data_format - AssertionErro...
FAILED tests/test_get_coordinate_air.py::test_output_formats - AssertionError...
FAILED tests/test_get_historical_data.py::test_warnings_on_input_combo - Fail...
FAILED tests/test_get_historical_data.py::test_correct_data_format - Assertio...
FAILED tests/test_get_multiple_city_air.py::test_return_value_and_format - Va...
FAILED tests/test_get_multiple_city_air.py::test_column_expected_contents - V...
FAILED tests/test_get_multiple_city_air.py::test_column_types - ValueError: c...
FAILED tests/test_get_multiple_city_air.py::test_excluded_params - ValueError...
FAILED tests/test_get_multiple_city_air.py::test_bad_city - ValueError: could...
FAILED tests/test_get_multiple_city_air.py::test_bad_data_format - AssertionE...
FAILED tests/test_get_multiple_city_air.py::test_correct_data_format - ValueE...
FAILED tests/test_get_multiple_coordinate_air.py::test_bad_coordinates - Exce...
FAILED tests/test_get_multiple_coordinate_air.py::test_bad_data_format - Asse...
FAILED tests/test_get_multiple_coordinate_air.py::test_output_formats - Asser...
FAILED tests/test_get_range_coordinates_air.py::test_return_value_and_format
FAILED tests/test_get_range_coordinates_air.py::test_column_expected_contents
FAILED tests/test_get_range_coordinates_air.py::test_column_types - Exception...
FAILED tests/test_get_range_coordinates_air.py::test_excluded_params - Except...
FAILED tests/test_get_range_coordinates_air.py::test_nonexistent_requested_params
FAILED tests/test_get_range_coordinates_air.py::test_bad_data_format - Assert...
FAILED tests/test_get_range_coordinates_air.py::test_output_formats - Excepti...
================== 26 failed, 31 passed in 679.10s (0:11:19) ==================

We can incrementally work towards making all the tests pass.
