JSON schema and validation #182
Conversation
Define a JSON schema and use `ajv` to load/compile it. Then, use this schema to validate the response returned by the API. If the response does not validate the schema we throw an exception and don't load any of the addons. I plan to modify this to validate each part of the response by the addon itself and disable a particular addon if the validation fails. This will allow having a broken addon without interfering with the rest. Related #149
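A minimal sketch of that validate-or-bail flow. To keep the snippet dependency-free, a hand-rolled check stands in for the `ajv`-compiled validator; the required sections and the addon-listing behavior are illustrative, not the actual client code:

```javascript
// Stand-in for an ajv-compiled validator: checks that the API response
// has the top-level sections the addons expect (illustrative list).
function validateResponse(config) {
  const requiredSections = ["projects", "versions", "addons"];
  return (
    config !== null &&
    typeof config === "object" &&
    requiredSections.every(
      (key) => config[key] !== null && typeof config[key] === "object",
    )
  );
}

function loadAddons(config) {
  // If the response does not validate, throw and load no addons at all.
  if (!validateResponse(config)) {
    throw new Error("JSON response does not validate.");
  }
  // Illustrative: in the real client each addon would be instantiated here.
  return Object.keys(config.addons);
}
```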
I love this approach. I've used it in the past and it's a great way to keep backend and frontend in sync. It's also a great way to write some tests to keep checking if the API response conforms to the schema or not.
src/index.js
  "https://readthedocs.org/schemas/response.json",
);
if (!validate(config)) {
  throw new Error("JSON response does not validate.");
Should we just kill all the addons if it doesn't validate? I'm thinking that if we ever change the schema, there might be some discrepancies. Should we maybe let the addons resolve as much as possible anyway? Or maybe validate per add-on?
Or maybe validate per add-on?
Yes. This is my end goal. I want to be able to enable addons if the data required for that addon validates, even if there are parts of the data returned by the API that are corrupted.
In fact, I don't think we will end up validating the data returned by the API "as a whole", since we may not care about all of its attributes.
Following this approach, what I have in mind for the analytics addon, for example, is that it will validate:
- required `projects.current.slug` (string)
- required `versions.current.slug` (string)
- optional `projects.current.language.code` (string)
- optional `projects.current.programming_language.code` (string)
- optional `readthedocs.analytics.code` (string)
To be able to enable the analytics addon, we don't care if `builds.current.slug` is invalid, for example, or any other field outside of the ones mentioned here 👍🏼
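The per-addon idea described above can be sketched without `ajv` at all: check only the dotted paths an addon declares as required, and ignore every other field in the response. The helper names here are hypothetical; the paths come from the list above:

```javascript
// Look up a dotted path like "projects.current.slug" in a config object,
// returning undefined if any intermediate key is missing.
function getPath(config, path) {
  return path
    .split(".")
    .reduce((obj, key) => (obj == null ? undefined : obj[key]), config);
}

// An addon is enabled only if all of its required paths resolve to strings;
// optional paths and unrelated fields (e.g. builds.current.slug) are ignored.
function isAddonEnabled(config, requiredPaths) {
  return requiredPaths.every((path) => typeof getPath(config, path) === "string");
}

// Required paths for the analytics addon, per the list above.
const analyticsRequiredPaths = ["projects.current.slug", "versions.current.slug"];
```

With this shape, a corrupted `builds` section disables nothing: only the paths an addon actually depends on can take it down.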
It also changes the pattern used inside the `render()` method. The `config` object is assigned to the addon _only if_ it validates via the `isEnabled()` static method. I think this pattern is better since we don't have to validate the `config` object on each render of the addon.
This way we ensure that everything we need from the addons is present.
I updated this PR and I'm happier with the approach I'm proposing here. I will continue working on this PR to apply this approach to all the addons. I will also take a look at the testing suite to check how the approach works together with it.
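A rough sketch of that pattern. The class name, the `setupAddon` helper, and the hand-rolled check inside `isEnabled()` are all hypothetical (the real code would run the addon's compiled JSON schema there); the point is that `config` is validated once, before construction, so `render()` can trust it:

```javascript
class SearchAddon {
  // Validate once, up front. In the real client this would run the
  // addon's ajv-compiled schema instead of this hand-rolled check.
  static isEnabled(config) {
    return typeof config?.projects?.current?.slug === "string";
  }

  constructor(config) {
    // Safe: instances are only created after isEnabled() passed.
    this.config = config;
  }

  render() {
    // No re-validation needed on each render.
    return `Search for ${this.config.projects.current.slug}`;
  }
}

function setupAddon(AddonClass, config) {
  // A broken config disables this addon without touching the others.
  return AddonClass.isEnabled(config) ? new AddonClass(config) : null;
}
```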
This approach is looking great, and per-addon validation seems to make the most sense so far.
I'm getting this error when running tests and I'm not able to track it down. @agjohnson I'd appreciate it if you have any clues that could help me debug this.
I found this issue: ajv-validator/ajv#2306
I understand this issue is because of UMD and ES modules, and I understand we are doing some kind of module-type conversion when running the tests via Rollup, but I'm not sure I understand all of this 🙃
It seems the key issue is that the Rollup plugin is doing something that may not be 100% accurate. Commenting out these lines https://github.com/readthedocs/readthedocs-client/blob/14a3b874d7f067b8fd69bc451ff0d957459cdb94/web-test-runner.config.mjs#L24-L27 I get a different error:
I finished this PR regarding the JSON schema and validation. I'm happy with the current pattern. It's strict enough to help us find issues with invalid data, while keeping the other addons sane when the data is invalid for only one addon. I'd like to get some help making the test suite run properly, without adding more tests for now, but adapting the existing ones to follow the current approach. I will add tests in a follow-up PR after merging this one.
So, the reason why these types of errors come up is generally because of cyclical exports in the library. For the most part, you can get around unclear exports by using:
However, in this case, this doesn't quite solve the issue, as the exports from the built bundle are a little off already. This is from the built bundle:
This is the start of the cyclical export that Rollup and the commonjs plugin seem to have trouble with. This probably doesn't help explain the solution too much yet, and is mostly a review of what I found setting up WTR. I'll experiment a bit more first, but we should be able to resolve this on our side.
I took another swing at this. See #185 for the fix to the WTR config, which makes our configuration play a bit nicer with both ajv and visual-dom-diff.

Ultimately, I found the issue: most of the library's built exports look like `applicator_1 = {default: getApplicator}`. Instead, with Rollup's commonjs plugin, they come out as `applicator_1 = getApplicator`. Specifically, the module exports were squashed because the intention was never to use the default export.

I got new test failures, but they look like exactly what I would expect. The new failures are because we are only passing in partial configuration objects, and with your change we can no longer do that.
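For reference, the interop knob in `@rollup/plugin-commonjs` that controls this wrapping is `requireReturnsDefault`. A hedged sketch of what the config adjustment might look like; the exact option values used in #185 may differ, and this is a fragment, not the project's actual configuration:

```javascript
import commonjs from "@rollup/plugin-commonjs";

export default {
  plugins: [
    commonjs({
      // "preferred" makes require() calls return the default export when
      // one exists, avoiding the {default: ...} wrapping described above.
      requireReturnsDefault: "preferred",
    }),
  ],
};
```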
Preferred default export mode is not required anymore, and this plays nicely with other TS libraries. Refs #182
… humitos/json-schema-validator
Re-use these static methods across all the addons by simply defining some variables in the addons class.
This PR is ready to review and merge. I'm happy with the pattern and I'm feeling more comfortable with it. There are things we will want to refactor in the near future but, as I said before, I wanted to be pretty explicit at this point since it's pretty easy to make mistakes with the JSON schema. In the meantime, I will open another PR based on this one to write some extra test cases 👍🏼
This all looks great and it all feels easier to work with now 👍
Just a note here, I didn't inspect the actual schemas here deeply. I think we can adjust if they don't match what we are outputting from the application and don't need to be strict about ensuring they are strictly accurate yet.
Are we planning to add some backend tests to see if the API response conforms to the schema(s)?
Just a note here, I didn't inspect the actual schemas here deeply. I think we can adjust if they don't match what we are outputting from the application and don't need to be strict about ensuring they are strictly accurate yet.
Yes. Currently, they are "good enough" but not perfect. I've been finding issues while writing the test cases, so I expect to find more as we keep working on the addons. We already have the pattern defined, so updating them shouldn't be too hard.
Are we planning to add some backend tests to see if the API response conforms to the schema(s)?
This is something that I'd eventually want, but we can't prioritize it right now. We will need to find a way to share the JSON schema between the two repositories and execute the tests on both platforms with the same schema. "Sharing the schema" probably means writing the schema in JSON instead of JS, which is something I tried to avoid because JSON doesn't support comments, among other painful things, like not supporting trailing commas 🙃
Do you have experience doing this?
The way I did it in the past was to have the schema in JSON and then use the `description` and `$comment` keywords to provide context where needed.
It's been a few years since I did this, so I can't remember how exactly I shared it between the frontend and backend repos. Let me try to find that information actually.
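A small sketch of that style, with `description` carrying user-facing context and `$comment` carrying maintainer-facing notes that JS comments would otherwise hold. The field names and wording here are illustrative, not taken from the actual schemas:

```json
{
  "type": "object",
  "properties": {
    "projects": {
      "type": "object",
      "description": "Project-level data consumed by the addons.",
      "$comment": "Keep in sync with the APIv3 serializer on the backend."
    }
  },
  "required": ["projects"]
}
```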