[CON-144] OpenAIConnectionError probably caused by failed SSL certificates #218
Yes, getting the same error when following the tutorial to use our own OpenAI API, as described here: https://continue.dev/docs/customization#change-the-default-llm.
@sestinj @TyDunn This is generalized now, it seems. My `~/.continue/config.py` is as follows:

```python
"""
This is the Continue configuration file.
See https://continue.dev/docs/customization to learn more.
"""

import subprocess

from continuedev.src.continuedev.core.main import Step
from continuedev.src.continuedev.core.sdk import ContinueSDK
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.core.config import SlashCommand, ContinueConfig
from continuedev.src.continuedev.plugins.context_providers.google import (
    GoogleContextProvider,
)
from continuedev.src.continuedev.libs.llm.openai import OpenAI
from continuedev.src.continuedev.plugins.policies.default import DefaultPolicy
from continuedev.src.continuedev.plugins.steps.open_config import OpenConfigStep
from continuedev.src.continuedev.plugins.steps.clear_history import ClearHistoryStep
from continuedev.src.continuedev.plugins.steps.feedback import FeedbackStep
from continuedev.src.continuedev.plugins.steps.comment_code import CommentCodeStep
from continuedev.src.continuedev.plugins.steps.main import EditHighlightedCodeStep
from continuedev.src.continuedev.plugins.context_providers.search import (
    SearchContextProvider,
)
from continuedev.src.continuedev.plugins.context_providers.diff import (
    DiffContextProvider,
)


class CommitMessageStep(Step):
    """
    This is a Step, the building block of Continue.
    It can be used below as a slash command, so that
    run will be called when you type '/commit'.
    """

    async def run(self, sdk: ContinueSDK):
        # Get the root directory of the workspace
        dir = sdk.ide.workspace_directory
        # Run git diff in that directory
        diff = subprocess.check_output(["git", "diff"], cwd=dir).decode("utf-8")
        # Ask the LLM to write a commit message,
        # and set it as the description of this step
        self.description = await sdk.models.default.complete(
            f"{diff}\n\nWrite a short, specific (less than 50 chars) commit message about the above changes:"
        )


OPENAI_API_KEY = "sk-xxx"

config = ContinueConfig(
    # If set to False, we will not collect any usage data
    # See here to learn what anonymous data we collect: https://continue.dev/docs/telemetry
    allow_anonymous_telemetry=True,
    models=Models(
        default=OpenAI(model="gpt-4", api_key=OPENAI_API_KEY),
        medium=OpenAI(model="gpt-3.5-turbo", api_key=OPENAI_API_KEY),
    ),
    # Set a system message with information that the LLM should always keep in mind
    # E.g. "Please give concise answers. Always respond in Spanish."
    system_message=None,
    # Set temperature to any value between 0 and 1. Higher values will make the LLM
    # more creative, while lower values will make it more predictable.
    temperature=0.0,
    # Custom commands let you map a prompt to a shortened slash command
    # They are like slash commands, but more easily defined - write just a prompt instead of a Step class
    # Their output will always be in chat form
    custom_commands=[
        # CustomCommand(
        #     name="test",
        #     description="Write unit tests for the highlighted code",
        #     prompt="Write a comprehensive set of unit tests for the selected code. It should set up, run tests that check for correctness including important edge cases, and tear down. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
        # )
    ],
    # Slash commands let you run a Step from a slash command
    slash_commands=[
        # SlashCommand(
        #     name="commit",
        #     description="This is an example slash command. Use /config to edit it and create more",
        #     step=CommitMessageStep,
        # )
        SlashCommand(
            name="edit",
            description="Edit code in the current file or the highlighted code",
            step=EditHighlightedCodeStep,
        ),
        SlashCommand(
            name="config",
            description="Customize Continue - slash commands, LLMs, system message, etc.",
            step=OpenConfigStep,
        ),
        SlashCommand(
            name="comment",
            description="Write comments for the current file or highlighted code",
            step=CommentCodeStep,
        ),
        SlashCommand(
            name="feedback",
            description="Send feedback to improve Continue",
            step=FeedbackStep,
        ),
        SlashCommand(
            name="clear",
            description="Clear step history",
            step=ClearHistoryStep,
        ),
    ],
    # Context providers let you quickly select context by typing '@'
    # Uncomment the following to
    # - quickly reference GitHub issues
    # - show Google search results to the LLM
    context_providers=[
        # GitHubIssuesContextProvider(
        #     repo_name="<your github username or organization>/<your repo name>",
        #     auth_token="<your github auth token>"
        # ),
        GoogleContextProvider(
            serper_api_key="yyy"
        ),
        SearchContextProvider(),
        DiffContextProvider(),
    ],
    # Policies hold the main logic that decides which Step to take next
    # You can use them to design agents, or deeply customize Continue
    policy=DefaultPolicy(),
)
```
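As a side note on the config above: rather than hardcoding the API key in `~/.continue/config.py`, it can be read from the environment. This is just a sketch; `OPENAI_API_KEY` is the conventional variable name, not something Continue requires.

```python
import os

# Read the key from the environment instead of hardcoding it in config.py.
# Falls back to an empty string if the variable is unset.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")
```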
I also have
@mysticaltech @Geczy I've been able to reproduce, definitely something going on here. Will update you once I figure it out.
@mysticaltech @Geczy I've just published a new version and tested it myself (v0.0.278), and it should solve the problem. Let me know if for some reason it doesn't. The issue was that certificate bundles don't automatically get packaged when using PyInstaller, so I manually added them to the bundle and then set the appropriate environment variable. (Relevant commit here)

Also, I'll be fixing the readability issues in just a minute. We try to match the VS Code theme, but missed a spot there :)
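For anyone hitting the same symptom in other frozen Python apps: the fix described above amounts to pointing `SSL_CERT_FILE` at a CA bundle the process can actually find. A minimal stdlib-only sketch of that idea (not the actual commit), reusing wherever OpenSSL looks by default:

```python
import os
import ssl

# PyInstaller bundles don't automatically include CA certificates, so a frozen
# app can fail TLS verification. Point SSL_CERT_FILE at a bundle the process
# can find; here we reuse OpenSSL's default search locations.
paths = ssl.get_default_verify_paths()
bundle = paths.cafile or paths.openssl_cafile
if bundle and os.path.exists(bundle):
    os.environ["SSL_CERT_FILE"] = bundle
```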
@sestinj Thanks for the quick fix, very much appreciated, will try it now.

Edit: Works like a charm! 🥳 🙏
Awesome! I'll leave this issue open until @Geczy also confirms that everything is okay.

Edit: See below, oops
@Geczy so sorry! I missed your earlier message when I sent this. It looks like tiktoken might look elsewhere for the SSL certificates. Back to working on this, I'll give an update soon.
Alright, thanks for the quick update. Can I ask a few questions to help pin this down? I'm curious whether you're on any sort of Wi-Fi network or VPN that would be relevant to SSL certificates, for example work Wi-Fi behind a firewall. The other thing I wonder is whether somehow we failed to start the newest server version. You might attempt to kill the server and reload VS Code to see if it works (though this is an unlikely solution).
Yeah, I do have a work VPN on; turning it off yields the same issue. My work laptop also has something called Netskope installed that probably proxies all requests through it.
@Geczy You do need to close VS Code before trying again. Also, try rebooting just in case the Continue server was not restarted with the latest version; that should work. And make sure your config.py looks more or less like mine.
@Geczy I finally got around to this, sorry it took so long. For a while it wasn't clear to me that this was just an explicit blocking of the tiktoken URL on many firewalls. I ended up updating token counting to fall back to counting characters, which is a bit unsafe, but better than the alternative, which was complete failure. This is ready in v0.0.341 and should solve your problem. I'll close the issue in about 2 days or after I hear back from you that this works, whichever comes first.
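The character-counting fallback described above could look roughly like this. This is a sketch of the idea, not Continue's actual implementation; the ~4-characters-per-token ratio is a common rule of thumb for English text.

```python
def count_tokens(text: str, encoder=None) -> int:
    """Count tokens with a tiktoken-style encoder if one is available;
    otherwise fall back to a rough character-based estimate, which is
    imprecise but never fails (e.g. when the encoder download is blocked
    by a firewall)."""
    if encoder is not None:
        try:
            return len(encoder.encode(text))
        except Exception:
            pass  # encoder unavailable: fall through to the estimate
    return max(1, len(text) // 4)
```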
microsoft/semantic-kernel#627
https://community.openai.com/t/ssl-certificate-verify-failed/32442/39
Seems like there's something weird going on on OpenAI's side. I want to make sure this is the actual issue; then we can consider making the request directly, though I don't know whether I expect that to work. The other alternative is to download the OpenAI certificates, but people are claiming that doesn't work, so I don't really want to spend time on it.
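To tell a trust-store failure apart from a blocked endpoint, a quick stdlib probe can help. The helper name and defaults below are illustrative, not part of Continue; pass `cafile` to trial a manually downloaded CA bundle.

```python
import socket
import ssl
from typing import Optional


def probe_tls(host: str, port: int = 443,
              cafile: Optional[str] = None, timeout: float = 5.0):
    """Attempt a TLS handshake against host:port. Returns None on success,
    or the exception on failure (SSLError suggests a certificate/trust
    problem; other OSErrors suggest the endpoint is blocked or unreachable)."""
    ctx = ssl.create_default_context(cafile=cafile)
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return None
    except (ssl.SSLError, OSError) as exc:
        return exc
```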