RFC: Describe Symbolicator Caching Infrastructure #5

Merged · 3 commits · Nov 2, 2022
Changes from 1 commit
finish up diagram
Swatinem committed Sep 2, 2022
commit 081ffd60e28374a3b63aa5227695e1e8a87ab1ba
60 changes: 39 additions & 21 deletions text/0005-symbolicator-caching.md
@@ -24,22 +24,11 @@ On the other hand we want to be confident to roll out changes that refresh cache

# Current architecture

- TODO: maybe draft a mermaid diagram showing the control flow of how an event is processed, etc...
+ The following diagram highlights the current architecture for fetching files and computing caches based on them,
+ as well as all the control flow related to request coalescing.

- ```
+ ```mermaid
graph TD
- subgraph symcache [SymCache]
- construct-candidates[Construct Candidates list]
- fetch-candidates[Fetch all Candidates]
- pick-candidate[Pick best Candidate]

- construct-candidates --> fetch-candidates

- fetch-candidates -. for each candidate .-> get-cached-file

- fetch-candidates --> pick-candidate
- end

subgraph computation [Compute Cache File]
compute([Compute Cache File])
run-computation[Run Computation]
@@ -91,14 +80,43 @@

load-cache([Load Cached File])
end
- ```

- # Drawbacks
+ subgraph get-file-graph [Get File]
+ get-file([Get File])

+ construct-candidates[Construct Candidates list]
+ fetch-candidates[Fetch all Candidates]
+ pick-candidate([Pick best Candidate])

+ get-file --> construct-candidates --> fetch-candidates

+ fetch-candidates -. for each candidate .-> get-cached-file

+ fetch-candidates --> pick-candidate
+ end

+ subgraph get-symcache [Get SymCache]
+ find-object[Find Object]

+ get-bcsymbolmap[Fetch BCSymbolMap]
+ get-il2cpp-map[Fetch Il2Cpp Mapping]

+ compute-symcache[Compute SymCache]

+ find-object --> get-bcsymbolmap --> compute-symcache
+ find-object --> get-il2cpp-map --> compute-symcache

+ find-object -..-> get-file
+ pick-candidate -..-> find-object
+ compute-symcache -..-> get-cached-file
+ load-cache -..-> compute-symcache
+ end
+ ```
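To illustrate the request-coalescing edges in the diagram above: concurrent requests for the same cache key should share a single in-flight computation rather than each starting their own. The following is a minimal, hypothetical Rust sketch; the `Coalescer` type, its fields, and the use of `futures::future::Shared` are assumptions for illustration, not Symbolicator's actual code.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

use futures::future::{BoxFuture, FutureExt, Shared};

/// A computation whose result can be awaited by many concurrent callers.
type SharedComputation = Shared<BoxFuture<'static, Arc<Vec<u8>>>>;

/// Hypothetical sketch: coalesce concurrent requests for the same cache key
/// onto a single shared in-flight computation.
#[derive(Default, Clone)]
struct Coalescer {
    in_flight: Arc<Mutex<HashMap<String, SharedComputation>>>,
}

impl Coalescer {
    /// Returns the in-flight computation for `key`, starting one only if
    /// none exists yet. All callers await the same `Shared` future.
    fn get_or_compute<F>(&self, key: &str, compute: F) -> SharedComputation
    where
        F: FnOnce() -> BoxFuture<'static, Arc<Vec<u8>>>,
    {
        let mut in_flight = self.in_flight.lock().unwrap();
        in_flight
            .entry(key.to_owned())
            .or_insert_with(|| compute().shared())
            .clone()
    }
}
```

A real implementation would additionally evict finished entries from the map and propagate errors instead of returning plain bytes; both are omitted here for brevity.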

- Why should we not do this? What are the drawbacks of this RFC or a particular option if
- multiple options are presented.
+ # Drawbacks of the Current Design

- # Unresolved questions
+ There is currently one main drawback related to "lazy cache computation": the lazy computations have a
+ concurrency limit, but no queue. This works well to limit the impact of cache refreshes on the infrastructure,
+ but it can lead to very long tails until a refresh is fully rolled out.

- - What parts of the design do you expect to resolve through this RFC?
- - What issues are out of scope for this RFC but are known?
+ This also has customer-visible impact, as we cannot say for certain when a targeted fix has actually been rolled out.
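
To make the "concurrency limit, but no queue" behavior concrete, here is a minimal, hypothetical sketch assuming a tokio `Semaphore`; the `LazyRefresher` name and structure are illustrative and not Symbolicator's actual code. When no permit is free, the refresh is skipped rather than queued, so outdated cache entries keep being served until some later request happens to acquire a permit, which is what produces the long tail described above.

```rust
use std::sync::Arc;

use tokio::sync::Semaphore;

/// Hypothetical sketch of "a concurrency limit, but no queue" for lazy
/// cache recomputation.
struct LazyRefresher {
    permits: Arc<Semaphore>,
}

impl LazyRefresher {
    fn new(max_concurrent_refreshes: usize) -> Self {
        Self {
            permits: Arc::new(Semaphore::new(max_concurrent_refreshes)),
        }
    }

    /// Kicks off a background cache refresh only if we are below the limit.
    /// If all permits are taken, the refresh is skipped entirely rather than
    /// queued, so the outdated cache entry stays in use for now.
    fn maybe_refresh(&self, refresh: impl FnOnce() + Send + 'static) {
        match Arc::clone(&self.permits).try_acquire_owned() {
            Ok(permit) => {
                tokio::task::spawn_blocking(move || {
                    refresh();
                    // Release the concurrency slot once the refresh is done.
                    drop(permit);
                });
            }
            Err(_) => {
                // Over the limit: keep serving the existing (outdated) cache
                // and rely on a later request to trigger the refresh.
            }
        }
    }
}
```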