[pull] main from sourcegraph:main #48
Merged
In the prototype, "external" links are marked with a React logo to make it clear that these links take you out of the prototype. This currently doesn't work on S2 because the React logo wasn't included in the list of source files for Bazel.
* dev/sg/autocomplete: sync completion scripts from upstream
* Correctly close sg completion script

Co-authored-by: Jean-Hadrien Chabran <[email protected]>
My main goal was to add an empty state for the search results page, but I took the opportunity to adopt and test some of Taiyab's ideas for the search results page (a left sidebar with type filters, a more condensed results display). It's only a crude approximation, since his work is still in flux; it wasn't the goal to strictly follow the design.
This change adds the external URL to the exported events payload. We send it once at the beginning of the stream as part of the stream metadata, and in Telemetry Gateway this gets joined onto each event payload, per request: https://sourcegraph.slack.com/archives/C05BGNBEPKL/p1702584292491359?thread_ts=1702583517.559349&cid=C05BGNBEPKL
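A minimal Go sketch of the join described above; the `event` type and field names are illustrative assumptions, not the actual Telemetry Gateway proto schema:

```go
package main

import "fmt"

// event is a hypothetical exported telemetry event; these fields are
// illustrative, not the real proto payload.
type event struct {
	Name        string
	ExternalURL string
}

// attachMetadata sketches the per-request join: metadata sent once at the
// start of the stream (the instance's external URL) is copied onto every
// event payload in that request.
func attachMetadata(externalURL string, events []event) []event {
	out := make([]event, len(events))
	for i, e := range events {
		e.ExternalURL = externalURL
		out[i] = e
	}
	return out
}

func main() {
	evs := attachMetadata("https://sourcegraph.example.com",
		[]event{{Name: "search"}, {Name: "view"}})
	fmt.Println(evs[0].ExternalURL, evs[1].Name)
}
```

Sending the URL once per stream instead of once per event keeps the wire payload small while still letting each exported row carry its origin.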
Telemetry Gateway currently accepts remote spans as parents. Because these come from Sourcegraph instances we often do not control or don't have access to traces for, this makes Cloud Trace difficult to use, since the root span will always be missing. This change adapts `internal/grpc/defaults` to allow us to build defaults a bit more tailored towards public-facing gRPC services. There are currently a _lot_ of helpers in `internal/grpc`, so it might be helpful to stay integrated for now.
The top-level monitoring spec works as a zero value and does not need to be explicitly provided, but right now a JSON schema error is raised if you don't provide it. This change makes the field optional for the purposes of JSON schema validation.
The embeddings policy framework attempts to rerun a repo job even if a previous run failed at the exact same revision. This means that when a job failed, for example because of rate limits or a problematic file, it would immediately be rescheduled and fail again. This can be expensive and noisy. Now the policy framework does **not** rerun failed jobs unless the revision changes. An admin can always kick off a job manually if they want to rerun it at the same revision. This reduces noise and feels like a better trade-off.
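The scheduling rule above can be sketched in a few lines of Go; `repoJob` and `shouldSchedule` are hypothetical names for illustration, not Sourcegraph's actual policy API:

```go
package main

import "fmt"

// repoJob is a hypothetical record of the most recent embeddings job for
// a repository.
type repoJob struct {
	Revision string
	Failed   bool
}

// shouldSchedule sketches the policy decision: schedule a first-ever job,
// skip a repo whose last job failed at the same revision, and re-embed
// only when the revision has moved.
func shouldSchedule(last *repoJob, currentRev string) bool {
	if last == nil {
		return true // never embedded before
	}
	if last.Failed && last.Revision == currentRev {
		return false // don't immediately retry a failure at the same revision
	}
	return last.Revision != currentRev
}

func main() {
	fmt.Println(shouldSchedule(&repoJob{Revision: "abc", Failed: true}, "abc"))
	fmt.Println(shouldSchedule(&repoJob{Revision: "abc", Failed: true}, "def"))
}
```

A manual admin-triggered run would simply bypass this check.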
Adds a `telemetry { exportedEvents { ... } }` query that allows a site admin to view recently exported events before they are removed from the queue by the queue cleaner after `TELEMETRY_GATEWAY_EXPORTER_EXPORTED_EVENTS_RETENTION`, which defaults to 24 hours.

This is very different from `event_logs` because it provides a `protojson` rendering of the "true" event payload: the data shown is exactly the data that was exported (except that the export happens over proto), whereas the `event_logs` equivalent is translated from the raw data and may be missing some things. The new query and resolver support pagination.

This is useful in local development for seeing events without interrogating the telemetry-gateway's local dev logging mode, or connecting to a real telemetry-gateway and querying BigQuery. It can also be useful if a customer wants to see what is getting exported: right now there's no easy way to do so without asking someone at Sourcegraph to check BigQuery, or having the customer parse the raw proto payloads in the database themselves. I have a feeling this ask will eventually arise as we roll out v2 telemetry adoption more broadly.

Closes #57027

## Test plan

Unit and integration tests, and some manual testing with `sg start` and running some searches:

![image](https://github.com/sourcegraph/sourcegraph/assets/23356519/ab39d9ad-829f-475a-b093-411edbcdf579)

Co-authored-by: Joe Chen <[email protected]>
* added log for when siteadmin verifies a user's email
* fixed URL and added siteconfig change logging action
* logging list and create action for licenses, differentiating between redacted vs unredacted siteconfig
* adding more logging locations
* adding logging on email addition and license
* adding a function to call for logging events
* changes to function per feedback
* using a function to log security event
* function changes
* cleanup and comments
* fix failing test
* adding error checks and testing
* putting log expansion behind feature flag
* prettier
* bazel configure
* updating feature flag name
* fix misspelling and wrong parameter passed
* removing logging of args of siteconfig
* function reverted to original iteration
* sg generate go
* cleanup
* update unit test
* feedback

Co-authored-by: Vincent Ruijter <[email protected]>
bug fixes and optimizations
This commit approximates another part of Taiyab's design: the language, repo, and path filters. As discussed in our huddle today, we want to explore having filters be a separate query parameter instead of modifying the user's input. This commit enables this feature: it takes the filters that are currently returned by the streaming API and generates links for them that update the search filter query parameter.

A couple of things to note:

- It's currently not possible to include multiple filters of the same type.
- Clicking a filter again will remove it.
- Because the filters are implemented as links, we get data preloading on hover, which results in a very smooth experience for queries that are fast.
- For now the "type" filter is still implemented by modifying the query input.
It looks like the existing event, `ExternalAuthSignupSucceeded`, is referenced in the legacy Cloud exporter, so I've opted to use `teestore.WithoutV1(ctx)` to preserve the old events while also exporting V2 events. The very large diff includes new generated mocks for `TelemetryEventsExportQueueStore`. I've been trying to avoid doing this, to encourage callsites to provide an `EventRecorder` and mock that instead (a smaller interface with more predictable input), but for convenience it's hard to diverge from the existing pattern of "mock everything", so I think it's better to just have it.
Currently, we do not respect the `search.contextLines` setting in the backend: the search backend always yields results with zero lines of context. One effect of this is that, in order to display matches with context lines, any client needs to make a follow-up request to fetch the full contents of the file. This makes the UI feel laggy, because results filter in slowly as you scroll; it is exacerbated by the fact that we load the highlighted code, and highlighting can be unpredictable, sometimes taking a couple of seconds to return.

We already stream the matched chunk back to the client, so this change just updates the backend so that the streamed results include the number of context lines the user requested. Zoekt already supports this, so it was just a matter of taking advantage of that setting and updating searcher to do the same.
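A simplified Go sketch of what honoring a context-lines setting means for a matched chunk; `chunkWithContext` is an illustrative helper, not the actual searcher or Zoekt code:

```go
package main

import "fmt"

// chunkWithContext expands a match spanning lines [start, end) of a file
// by n context lines on each side, clamped to the file bounds. This is
// the window the backend would stream instead of the bare match.
func chunkWithContext(lines []string, start, end, n int) []string {
	lo := start - n
	if lo < 0 {
		lo = 0
	}
	hi := end + n
	if hi > len(lines) {
		hi = len(lines)
	}
	return lines[lo:hi]
}

func main() {
	file := []string{"a", "b", "c", "d", "e"}
	// match on line index 2 ("c") with one context line on each side
	fmt.Println(chunkWithContext(file, 2, 3, 1)) // [b c d]
}
```

Streaming this expanded window up front is what removes the need for the client's follow-up full-file fetch.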
This is some setup for getting rid of all the cruft from extensions. The first step is to make the browser extension use the codeintel packages directly, rather than requiring codeintel extensions running in a web worker. This change extracts and simplifies a few functions to prepare for that.
Created by pull[bot]