
Add LSP benchmark mimicking the one on quick-lint-js #13365

Merged · 15 commits · Jan 18, 2022

Conversation

@ry (Member) commented Jan 13, 2022

See #13022

https://github.com/quick-lint/quick-lint-js/blob/35207e6616267c6c81be63f47ce97ec2452d60df/benchmark/benchmark-lsp/lsp-benchmarks.cpp#L223-L268

Currently I'm getting something like 128ms per iteration (which is faster than the expected 200ms minimum), so I'm not sure if something is broken.

# cargo bench  -p deno --bench lsp_bench_standalone
   Compiling deno v1.17.2 (/Users/ry/src/deno/cli)
    Finished bench [optimized] target(s) in 6.83s
     Running unittests (target/release/deps/lsp_bench_standalone-c6010a0d3f29fcda)

running 1 test
test incremental_change_wait ... bench: 128,334,504 ns/iter (+/- 7,344,853)

test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured
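For readers unfamiliar with what the benchmark measures, the timed loop has roughly the shape sketched below, using Rust's nightly #[bench] harness. The LspClient type and its two methods are placeholders standing in for whatever test-harness client cli/bench actually uses; only the structure (send an incremental textDocument/didChange, then block until the server publishes diagnostics) reflects the benchmark being discussed.

```rust
#![feature(test)]
extern crate test;

use test::Bencher;

// Placeholder LSP test client; the real helpers in cli/bench differ.
struct LspClient;

impl LspClient {
  fn did_change(&mut self, _new_text: &str) {
    // send a textDocument/didChange notification to the server
  }
  fn wait_for_diagnostics(&mut self) {
    // block until a textDocument/publishDiagnostics notification arrives
  }
}

#[bench]
fn incremental_change_wait(b: &mut Bencher) {
  // In the real benchmark the server is started and the document opened
  // (textDocument/didOpen) before the timed loop begins.
  let mut client = LspClient;
  b.iter(|| {
    // One iteration: apply a small edit, then wait for diagnostics.
    // With a 200ms debounce in the server, each iteration should take
    // at least ~200ms, which is why 128ms looks suspicious.
    client.did_change("console.log(\"hello\");");
    client.wait_for_diagnostics();
  });
}
```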

@ry ry requested a review from dsherret as a code owner January 13, 2022 19:50
@kitsonk (Contributor) left a comment

I'm not sure why you are seeing 128ms instead of the 200ms minimum; looking at the code, from the time the change is processed it should be at least 200ms before we start an update_diagnostics...
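To make the timing argument concrete, the debounce being described behaves roughly like the sketch below (the names are illustrative, not the actual deno/cli/lsp code): every incoming change restarts a 200ms window, and update_diagnostics only runs once the window elapses with no newer change, so a publish should never happen sooner than 200ms after the last change was processed.

```rust
use std::time::Duration;
use tokio::{select, sync::mpsc, time::sleep};

const DEBOUNCE: Duration = Duration::from_millis(200);

// Illustrative debounce loop: `changes` delivers document-change events;
// `update_diagnostics` is invoked only after 200ms of quiet.
async fn debounce_diagnostics(mut changes: mpsc::Receiver<()>) {
  loop {
    // Wait for the first change.
    if changes.recv().await.is_none() {
      return; // channel closed, shut down
    }
    // Keep restarting the 200ms window while more changes arrive.
    loop {
      select! {
        maybe_change = changes.recv() => {
          if maybe_change.is_none() {
            return;
          }
          // another change: fall through and wait another 200ms
        }
        _ = sleep(DEBOUNCE) => {
          update_diagnostics().await;
          break;
        }
      }
    }
  }
}

async fn update_diagnostics() {
  // placeholder for the lint / deps / tsc diagnostic passes
}
```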

cli/bench/lsp_bench_standalone.rs — review thread (outdated, resolved)
@ry (Member, Author) commented Jan 14, 2022

This still isn't right because I'm using diagnosticsMessagesToIgnore=1 and the numbers I'm getting are 287ms instead of 200ms. But I think it's getting closer to quick-lint-js's benchmark.

bench: 287,436,395 ns/iter (+/- 12,993,664)

@kitsonk (Contributor) commented Jan 14, 2022

> This still isn't right because I'm using diagnosticsMessagesToIgnore=1 and the numbers I'm getting are 287ms instead of 200ms. But I think it's getting closer to quick-lint-js's benchmark.
>
> bench: 287,436,395 ns/iter (+/- 12,993,664)

Well, it will be 200ms + tsc diagnostic request time + maybe blocking time if there is some other request in tsc. The whole purpose of the debounce was to allow other "high priority" tsc requests to get in front of the diagnostic request, but requests are processed in the order they are received:

deno/cli/lsp/tsc.rs

Lines 100 to 111 in eda6e58

while let Some((req, state_snapshot, tx)) = rx.recv().await {
  if !started {
    // TODO(@kitsonk) need to reflect the debug state of the lsp here
    start(&mut ts_runtime, false, &state_snapshot)
      .expect("could not start tsc");
    started = true;
  }
  let value = request(&mut ts_runtime, state_snapshot, req);
  if tx.send(value).is_err() {
    warn!("Unable to send result to client.");
  }
}

Given the test case, though, I can't think of a specific other request that might be going on, but there could be one.
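The caller side of that queue is a plain request/response pair over channels; a simplified model of the pattern (stand-in types, not the actual deno/cli/lsp/tsc.rs signatures) looks like this:

```rust
use tokio::sync::{mpsc, oneshot};

// Simplified stand-ins for the real request, snapshot and result types.
type Request = String;
type StateSnapshot = ();
type Value = String;

// Each queued item carries a oneshot sender so the worker loop shown
// above can hand the result back to whoever queued the request. With a
// single receiver draining the queue, requests are answered strictly in
// the order they were sent, so a slow diagnostics request delays any
// request already queued behind it.
async fn send_request(
  queue: &mpsc::UnboundedSender<(Request, StateSnapshot, oneshot::Sender<Value>)>,
  req: Request,
  snapshot: StateSnapshot,
) -> Option<Value> {
  let (value_tx, value_rx) = oneshot::channel();
  queue.send((req, snapshot, value_tx)).ok()?;
  value_rx.await.ok()
}
```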

I did check on a reasonably sized project and I am getting:

measurement               time
update_diagnostics_deps   10ms
update_diagnostics_lint   10ms
update_diagnostics_ts     69ms

So getting 287ms for a tsc diagnostics publish on a large file is not unreasonable or unexpected.

The first one being < 200ms is, I believe, another side effect altogether: a design limitation of the test harness. We tried and tried to find a non-blocking way to "poll" for messages but haven't found one, so any unread unsolicited messages will be read and returned; if you try to poll for a message that isn't there, though, it will just hang. So you are likely reading something that was previously sent.
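One general way around that limitation (not what the current harness does) is to push the blocking reads onto a dedicated thread and let the test drain a channel, so polling for a message that never arrives returns None instead of hanging. A rough sketch, assuming the harness has access to the server's stdout:

```rust
use std::io::{BufRead, BufReader, Read};
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Forward each raw LSP message body from the server's stdout to a
// channel on a background thread; the test side can then poll with
// recv_timeout / try_recv without ever blocking on the pipe itself.
fn spawn_reader<R: Read + Send + 'static>(stdout: R) -> mpsc::Receiver<String> {
  let (tx, rx) = mpsc::channel();
  thread::spawn(move || {
    let mut reader = BufReader::new(stdout);
    'messages: loop {
      // Read headers until the blank line separating them from the body.
      let mut content_length: Option<usize> = None;
      loop {
        let mut line = String::new();
        if reader.read_line(&mut line).unwrap_or(0) == 0 {
          break 'messages; // server exited
        }
        let line = line.trim_end();
        if line.is_empty() {
          break;
        }
        if let Some(v) = line.strip_prefix("Content-Length: ") {
          content_length = v.parse().ok();
        }
      }
      let len = match content_length {
        Some(len) => len,
        None => continue,
      };
      // Read exactly `len` bytes of JSON body and forward it.
      let mut body = vec![0u8; len];
      if reader.read_exact(&mut body).is_err() {
        break;
      }
      if tx.send(String::from_utf8_lossy(&body).into_owned()).is_err() {
        break; // the test dropped the receiver
      }
    }
  });
  rx
}

// A poll that gives up after 50ms instead of hanging forever.
fn poll_message(rx: &mpsc::Receiver<String>) -> Option<String> {
  rx.recv_timeout(Duration::from_millis(50)).ok()
}
```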

@ry (Member, Author) commented Jan 18, 2022

I've updated the benchmark to wait until we get the first "deno-lint" diagnostic (as indicated by the source field of the diagnostics). After this lands I'll send a PR to quick-lint-js to change their benchmark.
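In message terms, waiting for the first deno-lint diagnostic means watching incoming textDocument/publishDiagnostics notifications and checking the source field of the diagnostics they carry. A rough sketch of that check with serde_json (the field names follow the LSP spec; the surrounding harness code is assumed, not shown):

```rust
use serde_json::Value;

// True if this notification is a publishDiagnostics message carrying at
// least one diagnostic whose `source` is "deno-lint".
fn is_deno_lint_diagnostics(msg: &Value) -> bool {
  if msg.get("method").and_then(Value::as_str)
    != Some("textDocument/publishDiagnostics")
  {
    return false;
  }
  msg
    .pointer("/params/diagnostics")
    .and_then(Value::as_array)
    .map(|diags| {
      diags
        .iter()
        .any(|d| d.get("source").and_then(Value::as_str) == Some("deno-lint"))
    })
    .unwrap_or(false)
}
```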

It's surprising but each iteration is still quite a bit longer than 200ms.

# cargo bench --bench lsp_bench_standalone
   Compiling deno v1.17.3 (/Users/ry/src/deno/cli)
    Finished bench [optimized] target(s) in 7.03s
     Running unittests (target/release/deps/lsp_bench_standalone-50969af938e69a9b)

running 1 test
test incremental_change_wait ... bench: 288,701,433 ns/iter (+/- 6,529,414)

@kitsonk (Contributor) commented Jan 18, 2022

> It's surprising but each iteration is still quite a bit longer than 200ms.

There are other factors at play that try to keep this from becoming a "blocking" thing, but as I mentioned above, I think it can be refactored so that lint and "deno" diagnostics are not debounced at all and only the TypeScript ones are. Then we would get a "real" picture of how long it takes to issue lint diagnostics, as well as a more responsive experience for users.
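Sketched out, that refactor would split the single debounced update into an immediate lint/deps pass and a debounced tsc pass. The names below are illustrative only, not the actual deno code, and a real implementation would also cancel the pending tsc task when another change arrives:

```rust
use std::time::Duration;
use tokio::time::sleep;

const TS_DEBOUNCE: Duration = Duration::from_millis(200);

// Lint and "deno" (dependency) diagnostics are cheap (~10ms each in the
// measurements above), so they are published as soon as a change is
// processed; only the expensive tsc pass waits out a debounce window so
// rapid keystrokes don't pile up redundant tsc requests.
async fn on_change() {
  update_diagnostics_lint().await;
  update_diagnostics_deps().await;

  tokio::spawn(async {
    sleep(TS_DEBOUNCE).await;
    // A real implementation would cancel/reset this task if another
    // change arrived before the window elapsed.
    update_diagnostics_ts().await;
  });
}

async fn update_diagnostics_lint() { /* deno_lint pass */ }
async fn update_diagnostics_deps() { /* dependency pass */ }
async fn update_diagnostics_ts() { /* tsc pass */ }
```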

@kitsonk (Contributor) left a comment

LGTM noting a couple nits

cli/bench/lsp_bench_standalone.rs — 2 review threads (outdated, resolved)
@ry ry merged commit ce52bfc into denoland:main Jan 18, 2022
@ry ry deleted the lsp-bench branch January 18, 2022 11:58