forked from sourcegraph/sourcegraph
[pull] main from sourcegraph:main #9
Open: pull wants to merge 10,000 commits into frikke:main from sourcegraph:main
+1,561,321
−735,630
Conversation
Previously, this took users to `https://sourcegraph.com/https://sourcegraph.com/cody` because it used `<Navigate />` incorrectly. Now it correctly takes you to https://sourcegraph.com/cody.

## Test plan

Test locally in dotcom mode.
## Test plan

Visit the search contexts page and ensure it renders as expected.
Turn off 2 low-value eslint rules that add a lot of noise:

- `@typescript-eslint/no-explicit-any`: because if you mean to use `any` then you probably mean to. We still have eslint warnings for `any` member access, which is indeed riskier.
- `@typescript-eslint/no-non-null-assertion`: because if you type `!` then you mean it.

Both of these have hundreds of warnings in our current codebase, so this significantly reduces the eslint noise and makes the higher-value eslint rules more noticeable.
These CTAs are all inaccessible on dotcom anyway and would never show up.

## Test plan

Visit the pages where the CTAs were deleted and ensure those pages still work.
I noticed that the regexp toggle doesn't work anymore if `"search.defaultPatternType": "regexp"`. This is related to a recent change, #63410. We also append `patternType:keyword` in that case, which I don't think we want, because we have a UI element to indicate that keyword search is active.

The question I don't know how to answer is: what should happen if `regexp` is the default and the user toggles keyword search off? Should we go back to `regexp` or to `standard`?

Test plan:

- new unit test
- manual testing with the default pattern type set to "keyword" and "regexp"
These are more frequently erroneous than helpful. See https://sourcegraph.slack.com/archives/C04MYFW01NV/p1719209633005499. This eliminates a source of frustration and flakiness in pull requests and removes a lot of code and Bazel complexity. If we want to revive them, we can revert this commit.

Note that `client/web-sveltekit` does not use Percy, and if we want it to, we can always revert this commit or start over from scratch if that's easier.

## Test plan

CI

Co-authored-by: Jean-Hadrien Chabran <[email protected]>
This is part of the Keyword GA project. Instead of hardcoding the pattern type to "standard", we use the default pattern type. This really only affects highlighting, because users are already forced to explicitly state a pattern type by the GraphQL API.

Test plan:

- new unit test
- manual testing of the following workflows:
  - creating a saved search from the search results page
  - creating a saved search from the user menu
  - editing an existing saved search after the default patternType had changed
"lucky" was an experimental pattern type we added about 2 years ago. Judging from the git history and the current code, it was at some point replaced by "smart search" and "search mode", which we also plan to remove soon. See #43140 for more context.

Test plan: CI
…nges credentials (#63517)
Working towards a standard Appliance deployment; currently none exist. As part of [REL-13](https://linear.app/sourcegraph/issue/REL-13/appliance-can-be-installed-by-helm) we need to create a Docker container that hosts Appliance ([REL-201](https://linear.app/sourcegraph/issue/REL-201/create-appliance-container-through-bazel)), which this PR resolves.

## Test plan

`sg images build appliance`

## Changelog

- feat(appliance): create docker container
Studying Joe's work a bit more in depth, I noticed that our API representation of this role ("Cody Analytics admin") does not line up with our internal representation ("customer admin"). Since we're already here, it's probably better to just align on "customer admin" as the role everywhere, and figure out more granular roles if we need them later.

Once it's rolled out and usages are migrated (sourcegraph/cody-analytics#83), we can remove the deprecated enum entirely (#63502).

## Test plan

CI
Makes destructive updates usable in automation, such as GitHub Actions.

## Test plan

```
sg enterprise subscription update-membership -subscription-instance-domain='bobheadxi.dev' --auto-approve '...'
```
Fixes [SRCH-627](https://linear.app/sourcegraph/issue/SRCH-627/cody-web-includes-false-code-context-preamble-is-unusable-for-many)

`cody-web-experimental` 0.1.4 includes fixes for how the intro preamble is included, both when we have and when we don't have initial context for the Cody chat ([commit with fix](sourcegraph/cody@a0f1c8b)).

## Test plan

- Check that in the blob UI, Cody's chat consistently has context based on the repository which is currently open.
- Check that on the Cody Chat standalone page, the chat has no predefined context and therefore builds the prompt without any preamble (so it can answer generic questions properly).
Currently the matrix is hardcoded in the msp repo, and service operators can forget to add or remove their service from the list. GitHub supports dynamically generating the matrix from a previous job's output ([example](https://josh-ops.com/posts/github-actions-dynamic-matrix/)).

This PR adds an `sg msp subscription-matrix` command which will generate the matrix we need.

Part of CORE-202

## Test plan

Output:

```
{"service":[{"id":"cloud-ops","env":"prod","category":"internal"},{"id":"gatekeeper","env":"prod","category":"internal"},{"id":"linearhooks","env":"prod","category":"internal"}]}
```
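The matrix object above can be generated with a few lines of Go. This is a hedged sketch, not the actual `sg msp subscription-matrix` implementation; the `serviceEntry` type and `buildMatrix` function are hypothetical names:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// serviceEntry mirrors one element of the GitHub Actions matrix.
type serviceEntry struct {
	ID       string `json:"id"`
	Env      string `json:"env"`
	Category string `json:"category"`
}

// buildMatrix renders the {"service": [...]} object that GitHub Actions
// expects as the output of a matrix-generating job.
func buildMatrix(services []serviceEntry) (string, error) {
	b, err := json.Marshal(map[string][]serviceEntry{"service": services})
	return string(b), err
}

func main() {
	m, err := buildMatrix([]serviceEntry{
		{ID: "cloud-ops", Env: "prod", Category: "internal"},
		{ID: "gatekeeper", Env: "prod", Category: "internal"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(m)
}
```

A consuming workflow job would then read this string via `fromJSON()` on the producing job's output to populate `strategy.matrix`.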
Uses the guidance in https://openfga.dev/docs/modeling/testing to craft some rudimentary IAM model tests for Enterprise Portal IAM.

Not automated for now; the model tests must be run manually:

```
go run github.com/openfga/cli/cmd/fga@latest model test --tests='cmd/enterprise-portal/service/iam_model.fga.yml'
```

If we end up changing the model more, I'll ask around in dev-infra to see how we should automate this.

## Test plan

CI and:

```
go run github.com/openfga/cli/cmd/fga@latest model test --tests='cmd/enterprise-portal/service/iam_model.fga.yml'
```

Co-authored-by: James Cotter <[email protected]>
Using `append` on a shared slice variable surprisingly causes nondeterministic behaviour in the flags, because the commands end up appending into the same backing array. This makes the shared flag set a function, so that each command gets its own slice to append to.

## Test plan

`sg enterprise subscription list -h` now has the correct flags.
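The aliasing bug and the fix can be sketched in a few lines. The flag names and the `sharedFlags` helper are hypothetical, chosen only to illustrate the pattern; the real CLI uses a flag library rather than raw string slices:

```go
package main

import "fmt"

// sharedFlags returns a fresh slice on every call, so each command can
// append its own flags without clobbering another command's backing array.
func sharedFlags() []string {
	return []string{"-v", "-config"}
}

func main() {
	// Buggy pattern: both commands append into the SAME backing array.
	shared := make([]string, 2, 4) // spare capacity makes the aliasing visible
	copy(shared, []string{"-v", "-config"})
	listFlags := append(shared, "-list-only")
	updateFlags := append(shared, "-auto-approve") // overwrites "-list-only" in place!
	fmt.Println(listFlags[2], updateFlags[2])      // both now read "-auto-approve"

	// Fixed pattern: a function hands each command its own slice.
	list := append(sharedFlags(), "-list-only")
	update := append(sharedFlags(), "-auto-approve")
	fmt.Println(list[2], update[2])
}
```

Which command "wins" in the buggy version depends on evaluation order, which is why the symptom looked nondeterministic across commands.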
…only (#63325)

This PR adds another option to the sensitive metadata allowlist by simplifying the input requirements: it now accepts a `feature` and the allowlisted `privateMetadata` keys for that feature. This change is particularly beneficial when a `feature` has multiple associated `action`(s) and the `privateMetadata` key needs to be allowed for all events related to that feature.

Building upon this initial PR:

- https://github.com/sourcegraph/sourcegraph/pull/62830/files

## Test plan

CI and unit tests
…rch results. (#63524)

Due to changes in the code base, the search extension code when run from `main` shows file names only in the search results page, with no matches in the files. This is a regression from the behavior in the deployed extension.

Deployed extension:

<img width="1504" alt="Screenshot 2024-06-27 at 10 04 37" src="https://github.com/sourcegraph/sourcegraph/assets/129280/edd97903-d03f-4612-98c8-c8f286f0cb3b">

Running from `main`:

<img width="1502" alt="Screenshot 2024-06-27 at 10 11 17" src="https://github.com/sourcegraph/sourcegraph/assets/129280/d7aefcfe-3a25-4486-9fa6-a5e6bc7c6a8e">

It turns out the reason is that some shared code expects chunk matches, while the search queries were all returning line matches. Added support for line matches in the shared code, and then fixed an issue with the search results display not keeping up with `MatchGroups`.

## Test plan

Build and run locally.

### Build

```
git switch peterguy/vscode-bring-back-matched-lines-in-search-results
cd client/vscode
pnpm run build
```

### Run

- Launch the extension in VS Code: open the `Run and Debug` sidebar view in VS Code, then select `Launch VS Code Extension` from the dropdown menu.
- Run a search using the search bar.
- See that the results contain matched lines in the files and not just a list of file names. Compare to the currently-deployed extension; the search results should look generally the same.
…thoritative (#63502)

Closes https://linear.app/sourcegraph/issue/CORE-199.

AIP generally implies `Update` RPCs are authoritative, which means that we should be deleting all role memberships not provided to `UpdateEnterpriseSubscriptionMembership`. The most important outcome here is that we can actually remove roles from users by assigning them an empty role set `[]`.

Later we can add a "get roles" RPC to safely make these updates, and introduce a purely additive RPC if needed. It's not a huge deal right now because we only have one role ("customer admin").

Also removes the deprecated value from #63501.

## Test plan

Unit tests, expanded with better table-driven cases and expanded assertions.
Closes https://linear.app/sourcegraph/issue/CORE-157

Potentially controversial decision: storing license data (the JSON data encoded into license keys, and the license key itself) in JSONB. The data in there isn't (and shouldn't be) interesting to query on; it's only used in the context of a single license, so you would never use it in query conditions. The only one that might be useful is "license key substring", for parity with what we have today, but that's an internal-only use case and shouldn't be in the hot path for anything super performance-sensitive.

The conditions table is similar to the one introduced in #63453.

## Test plan

CI
Improves the Cody PLG management page to have a more prominent link to Cody Web. Also renames `Ask Cody` to `Cody` for simplicity.

Closes https://linear.app/sourcegraph/issue/PRIME-396/improve-web-chat-link-on-cody-manage-page

<img width="1226" alt="image" src="https://github.com/sourcegraph/sourcegraph/assets/1976/8f74f8f6-c5b8-41f1-abb6-2da5e02a25aa">

## Test plan

View in dotcom mode and ensure the section looks nice.
This cleans up the revision picker design, adds a copy button, and adds it to the file page.
Follow-up to #63484. Got feedback from teammates that it shouldn't be `warning` but `info`: a warning usually indicates something is wrong, and that is not the case here.

## Test plan

n/a
This integrates the new occurrences API into the Svelte webapp. This fixes a number of issues where syntax highlighting data was not an accurate way to determine hoverable tokens. It is currently behind the setting `experimentalFeatures.enablePreciseOccurrences`.
This is a bit of refactoring as a follow-up to #63217. The goal is to unify how we use `Occurrences` so that we can use the same interface for both syntax highlighting data and occurrence data from the new API. An additional goal is to encode more of the invariants in the type system so it's more difficult to use incorrectly.
* Once all the hooks have finished, we now call `os.Exit`, ensuring anything else non-process-related quits.
* Reduce the max interrupt count from 5 to 2, restoring what it was previously. This might lead to dangling processes.

[Issue](https://linear.app/sourcegraph/issue/DINF-74/sg-address-sg-hanging-around-after-ctrlc)

## Test plan

Tested locally

## Changelog

* sg - Always call `os.Exit` once shutdown hooks have completed
* sg - Reduce max interrupt count from 5 to 2 to hard exit
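The interrupt-handling policy described above can be sketched as follows. This is an illustrative sketch only; `onInterrupt` and `runHooks` are hypothetical names, not the actual `sg` internals:

```go
package main

import "fmt"

const maxInterrupts = 2 // reduced from 5; the second Ctrl+C hard-exits

// onInterrupt decides what the CLI should do for the nth interrupt:
// run shutdown hooks on the first one, and hard-exit (possibly leaving
// dangling processes) once the count reaches maxInterrupts.
func onInterrupt(count int) string {
	if count >= maxInterrupts {
		return "hard-exit"
	}
	return "run-hooks"
}

// runHooks executes all shutdown hooks in order. In the real CLI,
// os.Exit(0) would follow the loop, so that nothing unrelated to the
// hooks can keep the process alive; it is omitted here so the sketch
// stays testable.
func runHooks(hooks []func()) {
	for _, h := range hooks {
		h()
	}
}

func main() {
	fmt.Println(onInterrupt(1))
	fmt.Println(onInterrupt(2))
	runHooks([]func(){func() { fmt.Println("cleaning up") }})
}
```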
Fixes SRCH-589

This implements the 'y' shortcut to navigate to the permalink page, just like we have in the React app. According to ARIA guidelines it should be possible to disable single-character shortcuts, but this never actually worked for the 'y' shortcut because it was implemented differently. So adding it without the option to turn it off is at least not a regression.

For reference, this is the code handling the shortcut in the React app: https://sourcegraph.sourcegraph.com/github.com/sourcegraph/sourcegraph@d9dff1191a3bad812aa5b50315b8e77ee0e40e55/-/blob/client/web/src/repo/actions/CopyPermalinkAction.tsx?L64-77

When initially implementing this, I noticed that there is a slight delay between pressing 'y' and the URL/UI updating. That's because SvelteKit has to wait for the `ResolveRepoRevision` call to resolve before updating the page. A new request is made because the URL changes, which triggers the data loader. But we already know that such a request will return the same data, so the request is unnecessary. To avoid the second call, I added another caching layer.

I also noticed that the last commit info would be reloaded, which is unnecessary. I changed the implementation to use the resolved revision instead of the revision from the URL. Now the request will be properly cached on the commit ID and the implementation is also much simpler.

## Test plan

Manual testing.
Missing bit for the minor release version bump.

## Test plan

CI
Closes SRCH-494

This adds the search query syntax introduction component to the search homepage. I tried to replicate the React version as closely as possible. I originally wanted to reuse the logic to generate the example sections, but since it had dependencies on wildcard I duplicated it instead.

Notable additional changes:

- Added a `value` method to the temporary settings store to make it easier to get the current value of the settings store. It only resolves (or rejects) once the data is loaded.
- Extended the tabs component to not show the tab header if there is only a single panel. This makes it easier for consumers to render tabs conditionally.
- Added the `ProductStatusBadge` component
- Various style adjustments

For reference, the relevant parts of the React version are in https://sourcegraph.sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/client/branded/src/search-ui/components/useQueryExamples.tsx and https://sourcegraph.sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/client/branded/src/search-ui/components/QueryExamples.tsx

## Test plan

Manual testing. I manually set the value received from temporary settings to `null` (in code) to force-trigger the compute logic.
This PR adds more unit tests for the "Chat Completions" HTTP endpoint. The goal is to have unit tests for more of the one-off quirks that we support today, so that we can catch any regressions when refactoring this code.

This PR adds _another layer_ of test infrastructure to streamline writing completion tests. (Since they are kinda involved, and are mocking out multiple interactions, it's kinda necessary.) It introduces a new data type `completionsRequestTestData` which contains all of the "inputs" to the test case, as well as some of the things we want to validate.

```go
type completionsRequestTestData struct {
	SiteConfig                   schema.SiteConfiguration
	UserCompletionRequest        types.CodyCompletionRequestParameters
	WantRequestToLLMProvider     map[string]any
	WantRequestToLLMProviderPath string
	ResponseFromLLMProvider      map[string]any
	WantCompletionResponse       types.CompletionResponse
}
```

Then to run one of these tests, you just call the new function:

```go
func runCompletionsTest(t *testing.T, infra *apiProviderTestInfra, data completionsRequestTestData) {
```

With this, the new pattern for completion tests is of the form:

```go
func TestProviderX(t *testing.T) {
	// Return a valid site configuration, and the expected API request body
	// we will send to the LLM API provider X.
	getValidTestData := func() completionsRequestTestData { ... }

	t.Run("TestDataIsValid", func(t *testing.T) {
		// Just confirm that the stock test data works as expected,
		// without any test-specific modifications.
		data := getValidTestData()
		runCompletionsTest(t, infra, data)
	})
}
```

And then, for more sophisticated tests, we would just overwrite whatever subset of fields is necessary from the stock test data.
For example, testing the way AWS Bedrock provisioned-throughput ARNs get reflected in the completions API can be done by creating a function to return the specific site configuration data, and then:

```go
t.Run("Chat", func(t *testing.T) {
	data := getValidTestData()
	data.SiteConfig.Completions = getProvisionedThroughputSiteConfig()
	// The chat model is using provisioned throughput, so the
	// URLs are different.
	data.WantRequestToLLMProviderPath = "/model/arn:aws:bedrock:us-west-2:012345678901:provisioned-model/abcdefghijkl/invoke"
	runCompletionsTest(t, infra, data)
})

t.Run("FastChat", func(t *testing.T) {
	data := getValidTestData()
	data.SiteConfig.Completions = getProvisionedThroughputSiteConfig()
	data.UserCompletionRequest.Fast = true
	// The fast chat model does not have provisioned throughput, and
	// so the request path to Bedrock just has the model's name. (No ARN.)
	data.WantRequestToLLMProviderPath = "/model/anthropic.claude-v2-fastchat/invoke"
	runCompletionsTest(t, infra, data)
})
```

## Test plan

Added more unit tests.

## Changelog

N/A
A couple of minor changes to minimize the diff for the "large completions API refactoring". Most changes are just a refactoring of the `openai` completions provider, which I apparently missed in #63731. (There are still some smaller tweaks that can be made to the `fireworks` or `google` completion providers, but they aren't as meaningful.)

This PR also removes a couple of unused fields and methods, e.g. `types.CompletionRequestParameters::Prompt`. There was a comment to the effect of it being long since deprecated, and it is no longer read anywhere on the server side. So I'm assuming that a green CI/CD build means it is safe to remove.

## Test plan

CI/CD

## Changelog

N/A
This corrects the upgrade path for MVU plan creation.

## Test plan

CI test

Co-authored-by: Release Bot <[email protected]>
Co-authored-by: Jean-Hadrien Chabran <[email protected]>
Co-authored-by: Anish Lakhwara <[email protected]>
Removes the `sg telemetry` command that pertains to the legacy V1 exporter specific to Cloud instances. I got asked about this recently, and especially with the new `sg analytics` for usage of the `sg` CLI, this has the potential to be pretty confusing.

Part of https://linear.app/sourcegraph/issue/CORE-104

## Test plan

n/a

## Changelog

- `sg`: the deprecated `sg telemetry` command for allowlisting export of V1 telemetry from Cloud instances has been removed. Use telemetry V2 instead.
This PR fixes the following:

- Handles source range translation in the occurrences API (fixes https://linear.app/sourcegraph/issue/GRAPH-705)
- Handles range translation when comparing with document occurrences in the search-based and syntactic usagesForSymbol implementations

Throwing this PR up in its current state, as I think adding the bulk conversion API will be a somewhat complex task, so we should split them into separate PRs anyway, and I don't have time to continue working on this right now.

Some design notes:

- We want to avoid passing around full CompletedUpload and RequestState objects, which is why I chose to create a smaller UploadSummary type and decided to pass around GitTreeTranslator, as that is the minimal thing we need to handle range re-mapping.
- Yes, this PR increases the surface of the UploadLike type, but I think it's still quite manageable.

## Test plan

Manual testing, existing tests on GitTreeTranslator

Co-authored-by: Christoph Hegemann <[email protected]>
This commit removes files/dependencies that we are not using (anymore). In the case of `@sourcegraph/wildcard`, we never want to import dependencies from it, but we have done so accidentally in the past. I hope that we can prevent this by removing it from the dependencies (we don't need it there anyway).

## Test plan

`pnpm build` and CI
Instead of fetching the file for every node, this passes in a request-scoped cache to minimize the number of gitserver roundtrips. It does no fetching at all if `surroundingContent` is not requested by the caller.
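A request-scoped cache like the one described can be sketched as a small memoizing wrapper around the fetch function. The `fileCache` type and its methods are hypothetical names for illustration, not the actual implementation:

```go
package main

import "fmt"

// fileCache memoizes file contents for the lifetime of one request, so a
// set of nodes that reference the same files triggers at most one fetch
// (e.g. one gitserver roundtrip) per distinct path. It is not safe for
// concurrent use; a request-scoped cache typically doesn't need to be.
type fileCache struct {
	fetch func(path string) string
	seen  map[string]string
}

func newFileCache(fetch func(string) string) *fileCache {
	return &fileCache{fetch: fetch, seen: map[string]string{}}
}

func (c *fileCache) get(path string) string {
	if v, ok := c.seen[path]; ok {
		return v
	}
	v := c.fetch(path)
	c.seen[path] = v
	return v
}

func main() {
	calls := 0
	c := newFileCache(func(path string) string {
		calls++ // count simulated gitserver roundtrips
		return "contents of " + path
	})
	c.get("a.go")
	c.get("a.go") // served from cache
	c.get("b.go")
	fmt.Println("fetches:", calls) // one fetch per distinct path
}
```

Skipping the fetch entirely when `surroundingContent` isn't requested would then just mean never calling `get` on that code path.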
Fixes SRCH-717

This commit fixes the line numbers for unified diff views, which are used on the commit page and for inline diffs.

## Test plan

Manual testing.

| Before | After |
|--------|-------|
| ![2024-07-11_10-40](https://github.com/sourcegraph/sourcegraph/assets/179026/170ac815-d038-4239-80fe-7d35cecfa832) | ![2024-07-11_10-38](https://github.com/sourcegraph/sourcegraph/assets/179026/3606cb34-ad87-43bf-9664-414bf9250fa4) |
When `internal/env/env.Get` detects a difference between already-registered descriptions, it panics (good). But the error message doesn't tell you what the difference is, and you're left to put a few prints in the code yourself to try to understand what's wrong.

See also: #63786

Before:

<img width="1109" alt="image" src="https://github.com/sourcegraph/sourcegraph/assets/10151/56b2d65c-ef87-4134-bfc0-67248aa48350">

After:

![CleanShot 2024-07-11 at 15 26 13@2x](https://github.com/sourcegraph/sourcegraph/assets/10151/406bca90-2b87-481d-aad3-6550afaca29f)

## Test plan

CI + local run

## Changelog

- When conflicting env vars are detected, print both to ease debugging.
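The shape of the improved error can be sketched with a minimal registration function that reports both conflicting descriptions instead of a bare panic message. This is an assumption-laden sketch: `register` and the package-level `registry` map are hypothetical, not the `internal/env` internals:

```go
package main

import "fmt"

// registry tracks the description each env var was registered with, so a
// second registration with a different description can be reported with
// BOTH values, instead of leaving the reader to hunt for the mismatch.
var registry = map[string]string{}

func register(name, description string) error {
	if prev, ok := registry[name]; ok && prev != description {
		return fmt.Errorf(
			"env var %q registered twice with conflicting descriptions:\n  first:  %q\n  second: %q",
			name, prev, description)
	}
	registry[name] = description
	return nil
}

func main() {
	_ = register("REDIS_ENDPOINT", "redis address for gateway")
	if err := register("REDIS_ENDPOINT", "fallback redis address"); err != nil {
		fmt.Println(err) // both descriptions appear in the message
	}
}
```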
This PR fixes an important bug in #62976, where we didn't properly map the symbol line match to the return type. Instead, we accidentally treated symbol matches like file matches and returned the start of the file.

## Test plan

Added a new unit test for symbol match conversion. Extensive manual testing.
…3779)

If we fail to get a secret via a tool, we return a CommandErr which contains a SecretErr. If we fail to get a secret via Google, we return a GoogleSecretErr which contains a SecretErr. Depending on the error we get while trying to persist analytics, we suggest different fixes the user can try.

Below is how it looks when we get a GoogleSecretErr:

![Screenshot 2024-07-11 at 11 11 40](https://github.com/sourcegraph/sourcegraph/assets/1001709/12479561-c1f5-4de7-b00e-01a1fbb49ece)

## Test plan

Tested locally
**chore(appliance): extract constant for configmap name**

To the reconciler, this is just a value, but to higher-level packages like appliance, there is a single configmap that is an entity. Let's make sure all high-level orchestration packages can reference our name for it. This could itself be extracted to injected config if there were a motivation for it.

**chore(appliance): extract NewRandomNamespace() in k8senvtest**

Extracted from the reconciler tests, so that we can reuse it in self-update tests.

**feat(appliance): self-update**

Add a worker thread to the appliance that periodically polls the release registry for newer versions and updates its own Kubernetes deployment. If the APPLIANCE_DEPLOYMENT_NAME environment variable is not set, this feature is disabled. This PR will be accompanied by one to the appliance's helm chart to add this variable by default.

**fix(appliance): only self-update 2 minor versions above deployed SG**

**chore(appliance): self-update integration test extra case**

Check that self-update doesn't run when SG is not yet deployed.

https://linear.app/sourcegraph/issue/REL-212/appliance-can-self-upgrade
`REDIS_ENDPOINT` is now registered by gateway, but it is also registered in `internal/redispool` as the fallback for when the other values are not set. The real fix would be to not have env vars in that package at all, and instead have each service create one instance of each of those two in its `cmd/`, but that's a lot of work, so short-term we fix it by reading the fallback using `os.Getenv`.

Test plan: `sg run cody-gateway` doesn't panic.

Co-authored-by: Jean-Hadrien Chabran <[email protected]>
…#63790)

The OTEL upgrade #63171 bumps the `prometheus/common` package too far via transitive deps, causing us to generate configuration for Alertmanager that Alertmanager doesn't accept, at least until the Alertmanager project cuts a new release with a newer version of `prometheus/common`. For now, we forcibly downgrade with a replace directive. Everything still builds, so we should be good to go.

## Test plan

`sg start` and `sg run prometheus`. On `main`, editing `observability.alerts` will cause Alertmanager to refuse to accept the generated configuration. With this patch, all is well: config changes go through as expected. This is a similar test plan to #63329.

## Changelog

- Fix Prometheus Alertmanager configuration failing to apply `observability.alerts` from site config
A couple of tweaks to the commit / diff view:

- Link both file paths in the header for renamed files
- Collapse renamed file diffs without changes by default
- Move "no changes" out of `FileDiffHunks` to not render a border around the text
- Add a description for binary files
- Adjust line height and font size to match what we use in the file view
- Added the `visibly-hidden` utility class to render content for a11y purposes (I didn't test the changes I made with a screen reader though)

Contributes to SRCH-523

## Test plan

Manual testing
…itly (#63782)

I went through all call sites of the 3 search APIs (Stream API, GQL API, SearchClient (internal)) and made sure that the query syntax version is set to "V3".

Why? Without this change, a new default search syntax version might have caused a change in behavior for some of the call sites.

## Test plan

- No functional change, so relying mostly on CI
- The codeintel GQL queries set the patternType explicitly, so this change is a NOP. I tested manually:
  - search-based code intel sends GQL requests with version "V3"
  - the repo badge still works
  - compute GQL returns results
The background publisher was started regardless of whether analytics was disabled. This PR makes it so that we only publish analytics if it is enabled. To make this work without duplicating the disabled-analytics check, I moved the usershell + background context creation to happen earlier.

## Test plan

CI and tested locally

## Changelog

* sg - only start the analytics background publisher when analytics are enabled

Co-authored-by: Jean-Hadrien Chabran <[email protected]>
This PR overhauls the UI a bunch to make it look more in line with other pages, and fixes various smaller papercuts and bugs.

Closes SRC-377

Test plan: Added storybooks and made sure existing ones still look good; created, updated, and deleted various webhooks locally.
…-box after successful completion (#59645)

## Linked Issues

- Closes #38348

## Motivation and Context

- Improves the UX of the password reset flow

## Changes Made

- Made changes to the following 2 flows:
  - On the new password entry screen:
    - Added text which displays the email of the account for which the password change request has been raised
    - Added a back button to allow users to go back to the previous email entry screen if the account they want to reset the password for is different
  - On the sign-in screen which comes after successful completion of the password reset request:
    - Made changes to auto-populate the email text box with the email linked to the account on which the password reset request was recently completed

## Type of change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Refactoring (altering code without changing its external behaviour)
- [ ] Documentation change
- [ ] Other

## Checklist

- [x] Development completed
- [ ] Comments added to code (wherever necessary)
- [ ] Documentation updated (if applicable)
- [x] Tested changes locally

## Follow-up tasks (if any)

- None

## Test Plan

- Set up the codebase locally, configuring a custom SMTP server to enable email delivery with the help of this documentation: https://docs.sourcegraph.com/admin/config/email#configuring-sourcegraph-to-send-email-using-another-provider
- Tested the entire flow locally end-to-end.
- Screen recording of the password reset screen where the current email ID is shown along with a back button: https://github.com/sourcegraph/sourcegraph/assets/51479159/a79fc338-ace0-4281-86d2-de7cc68eae20
- Screen recording of the sign-in screen after the password reset is successfully done, where the email ID is auto-populated in the text box: https://github.com/sourcegraph/sourcegraph/assets/51479159/be7db65d-9421-4621-a1e9-a04a546b9757

## Additional Comments

- Please let me know if I need to make any further design changes on the frontend side or any API contract-related changes on the backend side.

Co-authored-by: Vincent <[email protected]>
Co-authored-by: Shivasurya <[email protected]>
We register ctrl+backspace to go to the repository root, but that shortcut should not trigger while an input field, such as the fuzzy finder, is focused. Fixes SRCH-681

## Test plan
Manual testing.
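The guard described above can be sketched roughly as follows. This is an illustrative helper, not the actual Sourcegraph implementation; the name `isEditableTarget` and the `TargetLike` shape are assumptions for the example.

```typescript
// Minimal shape of what we inspect on an event target (a DOM element in
// practice); kept structural so the helper is easy to test in isolation.
type TargetLike = {
    tagName: string
    isContentEditable?: boolean
}

// Returns true when the event target is a form field or contenteditable
// element, i.e. when global shortcuts like ctrl+backspace should NOT fire.
function isEditableTarget(target: TargetLike | null): boolean {
    if (!target) {
        return false
    }
    const tag = target.tagName.toUpperCase()
    return tag === 'INPUT' || tag === 'TEXTAREA' || tag === 'SELECT' || target.isContentEditable === true
}
```

A shortcut handler would bail out early when `isEditableTarget(event.target)` is true, so typing backspace in the fuzzy finder's text box never navigates away.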
The "Exclude" options in the filter panels are very useful, but many are specific to Go. This change generalizes them so they apply in many more cases:

* All files with the suffix `_test` plus an extension (covers Go, Python, some Ruby, C++, C, and more)
* All files with the suffix `.test` plus an extension (covers JavaScript, some Ruby)
* Ruby specs
* Third-party folders (a common general naming pattern)

Relates to SPLF-70
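The four generalized patterns above might look roughly like this as regular expressions. The names and exact expressions are illustrative assumptions, not the actual filter-panel code:

```typescript
// Hypothetical exclusion patterns, one per case listed above.
const excludePatterns: { [name: string]: RegExp } = {
    // Files like handler_test.go or util_test.py — suffix `_test` plus extension
    suffixUnderscoreTest: /_test\.[^/.]+$/,
    // Files like App.test.tsx — suffix `.test` plus extension
    suffixDotTest: /\.test\.[^/.]+$/,
    // Ruby specs: *_spec.rb files or anything under a spec/ directory
    rubySpec: /(_spec\.rb$|(^|\/)spec\/)/,
    // Common third-party / vendored directory names
    thirdParty: /(^|\/)(vendor|node_modules|third_party|thirdparty)\//,
}

// True when any exclusion pattern matches the repo-relative path.
function isExcluded(path: string): boolean {
    return Object.values(excludePatterns).some(re => re.test(path))
}
```

Each pattern is language-agnostic: `_test\.[^/.]+$` matches the `_test` suffix regardless of whether the extension is `.go`, `.py`, or `.cc`, which is what makes the generalization work.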
This adds two new components for the common situation of needing to display a styled path.

- `DisplayPath` handles splitting, coloring, spacing, slotting in a file icon, adding a copy button, and ensuring that no spaces are introduced when copying the path manually
- `ShrinkablePath` is built on top of `DisplayPath` and adds the ability to collapse path elements into a dropdown menu

These are used in three places:

- The file header. There should be no change in behavior except perhaps a simplified DOM. This uses the "Shrinkable" version of the component.
- The file search result header. This required carefully ensuring that the text content of the node is exactly equal to the path so that the character offsets are correct.
- The file popover, where it is used for both the repo name (unlinkified version) and the file name (linkified version).

Fixes SRCH-718
Fixes SRCH-690
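The invariant called out for the search result header — the rendered text content must equal the path exactly — can be illustrated with a small sketch. This is not the actual `DisplayPath` code; the function names are hypothetical:

```typescript
// One styled piece of a path: either a segment or a separator. Styling is
// applied per-part, but no part's text is ever padded with extra spaces.
interface PathPart {
    text: string
    isSeparator: boolean
}

// Split a path into independently-stylable parts.
function splitDisplayPath(path: string): PathPart[] {
    const parts: PathPart[] = []
    for (const segment of path.split('/')) {
        if (parts.length > 0) {
            parts.push({ text: '/', isSeparator: true })
        }
        parts.push({ text: segment, isSeparator: false })
    }
    return parts
}

// Invariant: concatenating the parts reproduces the input exactly, so
// character offsets computed against the raw path stay correct.
function joinParts(parts: PathPart[]): string {
    return parts.map(p => p.text).join('')
}
```

Because the concatenation round-trips, match-highlight offsets computed against the raw path line up with the rendered DOM, and copying the rendered text yields the original path with no stray whitespace.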
Closes https://linear.app/sourcegraph/issue/CODY-2847/change-experimental-labels-to-beta

## Test plan
- Check that the Cody web page and the Cody web side panel show Beta badges
…ore models) (#63797)

This PR is what the past dozen or so [cleanup](#63359), [refactoring](#63731), and [test](#63761) PRs were all about: using the new `modelconfig` system for the completion APIs. This will enable users to:

- Use the new site config schema for specifying LLM configuration, added in #63654. Sourcegraph admins who use these new site config options will be able to support many more LLM models and providers than is possible using the older "completions" site config.
- For Cody Enterprise users, we no longer ignore the `CodyCompletionRequest.Model` field, and we now support users specifying any LLM model (provided it is "supported" by the Sourcegraph instance).

Beyond those two things, everything should continue to work as before, with any existing "completions" configuration data being converted into the `modelconfig` system (see #63533).

## Overview

To understand how this all fits together, I'd suggest reviewing this PR commit-by-commit.

### [Update internal/completions to use modelconfig](e6b7eb1)

The first change was to update the code we use to serve LLM completions (the various implementations of the `types.CompletionsProvider` interface). The key changes were:

1. Update the `CompletionRequest` type to include the `ModelConfigInfo` field (to make the new Provider- and Model-specific configuration data available).
2. Rename the `CompletionRequest.Model` field to `CompletionRequest.RequestedModel` (with a JSON annotation to maintain compatibility with existing callers). This is to catch any bugs related to using the field directly, since that is now almost guaranteed to be a mistake. (See below.)

With these changes, all of the `CompletionProvider`s were updated to reflect the following:

- Anywhere we used `CompletionRequest.Parameters.RequestedModel` should now refer to `CompletionRequest.ModelConfigInfo.Model.ModelName`, the "model name" being the thing that should be passed to the API provider, e.g. `gpt-3.5-turbo`.
- In some situations (`azureopenai`) we needed to rely on the Model ID as a more human-friendly identifier. This isn't 100% accurate, but it matches the behavior we have today. A long doc comment calls out the details of what is wrong with that.
- In other situations (`awsbedrock`, `azureopenai`) we read the new `modelconfig` data to configure the API provider (e.g. `Azure.UseDeprecatedAPI`) or surface model-specific metadata (e.g. AWS Provisioned Throughput ARNs).

While the code is a little clunky to avoid larger refactoring, this is the heart and soul of how we will write new completion providers in the future: taking specific configuration bags with whatever data is required.

### [Fix bugs in modelconfig](75a51d8)

While we had lots of tests for converting the existing "completions" site config data into the `modelconfig.ModelConfiguration` structure, there were a couple of subtle bugs that I found while testing the larger change. The updated unit tests and comments should make that clear.

### [Update frontend/internal/httpapi/completions to use modelconfig](084793e)

The final step was to update the HTTP endpoints that serve the completion requests. There weren't any logic changes here, just refactoring how we look up the required data (e.g. converting the user's requested model into an actual model found in the site configuration). We support Cody clients sending either "legacy mrefs" of the form `provider/model` like before, or the newer mref `provider::apiversion::model`, although it will likely be a while before Cody clients are updated to only use the newer-style model references.

The existing unit tests for the completions APIs just worked, which was the plan. For the few changes that were required, I've added comments to explain the situation.

### [Fix: Support requesting models just by their ID](99715fe)

> ... We support Cody clients sending either "legacy mrefs" of the form `provider/model` like before ...

Yeah, so apparently I lied 😅. After doing more testing, the extension _also_ sends requests where the requested model is just `"model"` (without the provider prefix). So that now works too, and we just blindly match "gpt-3.5-turbo" to the first mref with the matching model ID, such as "anthropic::unknown::gpt-3.5-turbo".

## Test plan

Existing unit tests pass, and a few tests were added. I also manually tested my Sourcegraph instance configured to act both in "dotcom" mode and as a prototypical Cody Enterprise instance.

## Changelog

Update the Cody APIs for chat and code completions to use the "new style" model configuration. This allows for greater flexibility in configuring LLM providers and exposing new models, and it also allows Cody Enterprise users to select different models for chats. This will warrant a longer, more detailed changelog entry for the patch release next week, as it unlocks many other exciting features.
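The three request shapes described above — new-style mref, legacy `provider/model`, and bare model ID — can be sketched as a single resolution function. The real implementation lives in the Go `modelconfig` code; this TypeScript version and its names are purely illustrative:

```typescript
// Resolve a client-requested model string against the instance's configured
// model references ("mrefs" of the form provider::apiversion::model).
// Returns the matching mref, or undefined if no configured model matches.
function resolveModelRef(requested: string, knownMrefs: string[]): string | undefined {
    // 1. Exact new-style mref, e.g. "anthropic::unknown::gpt-3.5-turbo"
    if (knownMrefs.includes(requested)) {
        return requested
    }
    // 2. Legacy "provider/model" form
    for (const mref of knownMrefs) {
        const [provider, , modelID] = mref.split('::')
        if (requested === `${provider}/${modelID}`) {
            return mref
        }
    }
    // 3. Bare model ID: blindly match the FIRST mref with that model ID,
    //    mirroring the behavior described in the fix commit above.
    return knownMrefs.find(mref => mref.split('::')[2] === requested)
}
```

Note the deliberate "first match wins" behavior for bare model IDs: if two providers expose the same model ID, the earlier entry in the configuration wins, which is the blind matching the PR description acknowledges.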