
[wip] feat: add mango-based VDU endpoints #1898

Open
wants to merge 232 commits into base: main
Conversation

@garbados garbados (Contributor) commented Feb 3, 2019

Overview

This work-in-progress PR tracks my efforts to implement the features described in #1554, namely Mango-based update handlers under the /{db}/_update endpoint.

As of this writing, I have scaffolded only a few functions and have written no tests. I'm mainly fumbling around, copying code from where it seems to make sense, in anticipation of further guidance.

As described in the issue thread, this PR attempts to implement the following endpoints:

  • GET /{db}/_update: Retrieve a database's update handlers, similar to GET /{db}/_index.
  • POST /{db}/_update: Submit a document for validation. If it passes validation, the update is accepted. Otherwise, it is rejected.
  • PUT /{db}/_update: Create a new update handler.
  • DELETE /{db}/_update: Delete an update handler.

Although not yet in the code, I expect update handlers to have at least two attributes: a guard selector, which indicates which documents this handler applies to; and a condition selector, which validates the document. If a document passes the guard selector but not the condition selector, it is rejected. If a document submitted for validation matches no known guard selectors, it should be rejected too.
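The guard/condition flow described above can be sketched roughly as follows. This is a hypothetical Python model, not code from the PR: `matches` is a stand-in for real Mango selector matching, and the handler shape (`guard`/`condition` keys) is my assumption based on the paragraph above.

```python
def matches(selector, doc):
    """Minimal stand-in for Mango selector matching: supports only
    top-level field equality, e.g. {"type": "user"}."""
    return all(doc.get(field) == value for field, value in selector.items())

def validate_update(handlers, doc):
    """Guard/condition logic as described in the PR text:
    - a handler applies to a doc if its guard selector matches;
    - an applicable handler accepts the doc only if its condition matches;
    - a doc matching no guard selector is rejected outright."""
    applicable = [h for h in handlers if matches(h["guard"], doc)]
    if not applicable:
        return False  # no handler claims this doc: reject
    # The doc must satisfy the condition of every handler whose guard it matches.
    return all(matches(h["condition"], doc) for h in applicable)

handlers = [
    {"guard": {"type": "user"}, "condition": {"active": True}},
]
print(validate_update(handlers, {"type": "user", "active": True}))   # True
print(validate_update(handlers, {"type": "user", "active": False}))  # False: guard passes, condition fails
print(validate_update(handlers, {"type": "widget"}))                 # False: no guard matches
```

Whether multiple applicable handlers must all pass, or any one suffices, is not settled in the PR; the sketch assumes all must pass.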

Testing recommendations

As of this writing, there are no tests in this PR.

Related Issues or Pull Requests

Checklist

  • Code is written and works correctly;
  • Changes are covered by tests;
  • Documentation reflects the changes;

rnewson and others added 30 commits May 6, 2017 09:32
Test does not pass yet.
* remove dependency on config
* make checks optional
* support HS256
and make everything truly optional.
* Improve pubkey not found error handling

When the public key identified by the {Alg, KID} tuple is not found on
the IAM keystore server, it's possible to see errors like:

([email protected])140> epep:jwt_decode(SampleJWT).
** exception error: no function clause matching
                    public_key:do_verify(<<"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjIwMTcwNTIwLTAwOjAwOjAwIn0.eyJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjEyMzIx"...>>,
                                         sha256,
                                         <<229,188,162,247,201,233,118,32,115,206,156,
                                           169,17,221,78,157,161,147,46,179,42,219,66,
                                           15,139,91,...>>,
                                         {error,not_found}) (public_key.erl, line 782)
     in function  jwtf:public_key_verify/4 (src/jwtf.erl, line 212)
     in call from jwtf:decode/3 (src/jwtf.erl, line 30)

Modify key/1 and public_key_not_found_test/0 to account for keystore
changing from returning an error tuple to throwing one.
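The change described above — a keystore that used to return an error tuple now throws one — has a direct analogue in most languages: callers that pattern-match on a return value must be rewritten to catch instead. A rough Python sketch of the caller-side adjustment (all names here are illustrative, not taken from jwtf):

```python
class KeyNotFound(Exception):
    """Raised when no key matches the (alg, kid) pair."""

KEYSTORE = {("RS256", "kid-1"): "pem-data"}

def get_key(alg, kid):
    # After the change: raise instead of returning an error marker,
    # so the bad value can never flow into signature verification
    # (the failure mode shown in the shell transcript above).
    try:
        return KEYSTORE[(alg, kid)]
    except KeyError:
        raise KeyNotFound((alg, kid))

def verify(alg, kid):
    # Caller updated to catch the exception rather than check a sentinel.
    try:
        key = get_key(alg, kid)
    except KeyNotFound:
        return "error: key not found"
    return f"verified with {key}"

print(verify("RS256", "kid-1"))    # verified with pem-data
print(verify("RS256", "missing"))  # error: key not found
```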
Tolerate 5 crashes per 10 seconds
tonysun83 and others added 26 commits August 28, 2020 09:58
Previously, we passed in the unpacked version of the bookmark, with
the cursor inside the options field. This worked fine for _find because
we didn't need to return it to the user. But for _explain, we returned
the value back as an unpacked tuple instead of a string, and jiffy:encode/1
complained. Now we correctly extract the bookmark out of options, unpack
it, and then pass it separately in its own field. This way options
retains its original string form for the user, so that invalid_ejson
is not thrown.
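The fix amounts to keeping only the opaque string form of the bookmark inside options, and carrying the unpacked cursor in its own field. A rough Python analogue under those assumptions (`json` stands in for jiffy; field names are illustrative):

```python
import json

def unpack_bookmark(packed):
    # Opaque client-facing string -> internal cursor state.
    return json.loads(packed)

def build_cursor(options):
    # Unpack the bookmark into a separate field; options keeps the
    # original string form, so encoding the _explain response still works.
    cursor = unpack_bookmark(options["bookmark"])
    return {"opts": options, "cursor": cursor}

result = build_cursor({"limit": 25, "bookmark": '[3, "doc-17"]'})
print(json.dumps(result["opts"]))  # options remain JSON-encodable
print(result["cursor"])            # internal, unpacked cursor
```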
In some situations where a design document created by a customer for a
search index is not valid, the _search_cleanup endpoint stops cleaning
up, leaving orphaned search indexes behind. This change allows search
index cleanup to continue even if an invalid design document for search
is present.
…id-ddoc

Allow search index cleanup to continue even if there is an invalid design document
…r_handling

Improve jwtf keystore error handling
When set, every response is sent once fully generated on the server
side. This increases memory usage on the nodes but simplifies error
handling for the client as it eliminates the possibility that the
response will be deliberately terminated midway through due to a
timeout.

The config value can be changed at runtime without impacting any
in-flight responses.
Add option to delay responses until the end
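The buffering behavior described above — generate the whole body first, then send — trades memory for all-or-nothing delivery. A minimal sketch of the idea, with Python generators standing in for chunked response streaming (not the actual CouchDB implementation):

```python
def stream_response(chunks, buffer_response=False):
    """Yield response chunks to the client.

    With buffer_response=True, fully materialize the body first; any
    error while generating raises *before* a single byte is sent, so
    the client never sees a deliberately truncated response."""
    if buffer_response:
        body = list(chunks)   # may raise here, before sending anything
        yield from body
    else:
        yield from chunks     # an error mid-generation truncates output

def rows():
    yield "row1\n"
    yield "row2\n"

print("".join(stream_response(rows(), buffer_response=True)))
```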
Previously, an error was thrown that prevented emitting _scheduler/docs
responses. Instead of throwing an error, return `null` if the URL cannot be
parsed.
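The pattern — return a null-like value instead of throwing when a URL will not parse, so the rest of the response can still be emitted — looks roughly like this in Python (`urlsplit` stands in for the actual parser; the function name is made up for illustration):

```python
from urllib.parse import urlsplit

def safe_source_url(url):
    # Return None instead of raising, so the surrounding _scheduler/docs
    # response can still be emitted with a null field for this URL.
    try:
        parts = urlsplit(url)
        if not parts.scheme or not parts.netloc:
            return None
        return parts.geturl()
    except ValueError:
        return None

print(safe_source_url("http://example.com/db"))  # http://example.com/db
print(safe_source_url("not a url"))              # None
```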
Smoosh monitors the compactor pid to determine when a compaction job
finishes, and uses this for its idea of concurrency. However, this isn't
accurate in the case where the compaction job has to re-spawn to catch up on
intervening changes, since the same logical compaction job continues with
another pid and smoosh is not aware. In such cases, a smoosh channel with
concurrency one can start arbitrarily many additional database compaction jobs.

To solve this problem, we added a check in `start_compact` to see if a
compaction PID already exists for a db. But we need to add another check,
because that check only applies to shards coming off the queue. So the
following can still occur:

1. Enqueue a bunch of stuff into channel with concurrency 1
2. Begin highest priority job, Shard1, in channel
3. Compaction finishes, discovers compaction file is behind main file
4. Smoosh-monitored PID for Shard1 exits, a new one starts to finish the job
5. Smoosh receives the 'DOWN' message, begins the next highest priority job,
Shard2
6. Channel concurrency is now 2, not 1

This change adds another check to the 'DOWN' message handling so that it
checks for that specific shard. If a compaction PID exists, it means a new
process was spawned; we simply monitor that one and add the shard back to the
queue. The length of the queue does not change, and therefore we won't spawn
new compaction jobs.
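The 'DOWN'-handler check described above can be modeled roughly as follows. This is a simplified Python stand-in for smoosh internals — the channel, queue, and pid bookkeeping are all invented for illustration:

```python
def on_down(channel, shard, compaction_pids):
    """Handle a monitored compactor pid exiting.

    If a new compactor pid already exists for the same shard, the logical
    job re-spawned to catch up on intervening changes: monitor the new pid
    and put the shard back on the queue instead of starting the next queued
    job, so the channel's effective concurrency does not grow."""
    new_pid = compaction_pids.get(shard)
    if new_pid is not None:
        channel["monitored"].add(new_pid)
        channel["queue"].append(shard)  # same logical job, still tracked
        return "requeued"
    # Genuine completion: start the next highest-priority job, if any.
    if channel["queue"]:
        nxt = channel["queue"].pop(0)
        return f"started {nxt}"
    return "idle"

channel = {"monitored": set(), "queue": ["Shard2"]}
# Shard1's pid went down, but a fresh compactor for Shard1 is running,
# so Shard2 must NOT start yet (concurrency stays at 1):
print(on_down(channel, "Shard1", {"Shard1": 4242}))  # requeued
# Later, Shard1 goes down with no replacement pid: the next job starts.
print(on_down(channel, "Shard1", {}))                # started Shard2
```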
We need to call StartFun as it might add headers, etc.
This fixes a94e693, because a race
condition existed where the 'DOWN' message could be received
before the compactor pid was spawned. Adding a synchronous call to
get the compactor pid guarantees that the couch_db_updater process
has handled finish_compaction.
Some elixir test cases don't have an actual module tag. Add tags to
help include or exclude them in CI tests.
@wohali wohali (Member) commented Oct 9, 2020

@garbados Any progress on this PR? I am preparing to mass-close old PRs that never got merged.
