feat(settings): make openai /v1/chat/completion endpoint(basepath) configurable. (#110)

Signed-off-by: scbizu <[email protected]>
scbizu committed Jun 1, 2023
1 parent e6adaae commit b284838
Showing 3 changed files with 23 additions and 8 deletions.
10 changes: 7 additions & 3 deletions README.md
@@ -39,7 +39,7 @@ You can click and drag or shift+click to select multiple blocks to use as input

If you are not in a block, the plugin won't add any additional input text to your prompt, and will append the results of the prompt to the bottom of the page.

After selecting the prompt and generating a response, a preview of the response will be shown in the popup. You can click the `Insert` button or press the enter key to insert the response into the page.

You can also click the `Replace` button to replace the selected block with the response.

@@ -62,7 +62,7 @@ There are a number of built in prompt templates that you can use to generate tex
![](docs/ask-questions.gif)
### User prompt templates
You can also create your own custom prompt templates.
To do this, you create a block with the `prompt-template::` property. The template will be added to the list of templates in the gpt popup.


The `prompt-template::` property is the name of the prompt template.
@@ -141,7 +141,7 @@ You can adjust the `chatPrompt` setting to adjust how ChatGPT should respond to

You can add guidance such as "respond in chinese" or "respond in spanish" to the prompt to get the model to respond in a different language.

This setting is only used when the model is set to `gpt-3.5-turbo`.

If you use a reverse-proxy server to access the OpenAI service, you can set `chatCompletionEndpoint` to your reverse-proxy endpoint. The default is `https://api.openai.com/v1`.

**WARNING: If you use a reverse-proxy endpoint, make sure you trust it, and always keep your data and privacy safe!**
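
For reference, the configured endpoint simply becomes the `basePath` of the OpenAI SDK configuration (see the `src/lib/openai.ts` change below). A minimal standalone sketch, assuming the openai v3 Node SDK and a hypothetical proxy URL:

```typescript
import { Configuration, OpenAIApi } from "openai";

// "https://my-proxy.example.com/v1" is a placeholder reverse-proxy endpoint;
// omit basePath entirely to use the default https://api.openai.com/v1.
const configuration = new Configuration({
  basePath: "https://my-proxy.example.com/v1",
  apiKey: process.env.OPENAI_API_KEY ?? "",
});
const openai = new OpenAIApi(configuration);

async function main() {
  // Requests go to <basePath>/chat/completions instead of the default host.
  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Say hello" }],
  });
  console.log(completion.data.choices[0].message?.content);
}

main();
```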
### Inject Prefix

Allows you to inject a prefix into the GPT-3 output before it is inserted into the block, such as a [[gpt3]] tag or markdown formatting like `>` for a blockquote. This is useful for identifying blocks that were generated by GPT-3.
12 changes: 7 additions & 5 deletions src/lib/openai.ts
@@ -10,6 +10,7 @@ export interface OpenAIOptions {
maxTokens?: number;
dalleImageSize?: DalleImageSize;
chatPrompt?: string;
completionEndpoint?: string;
}

const OpenAIDefaults = (apiKey: string): OpenAIOptions => ({
@@ -51,12 +52,12 @@ const retryOptions = {
export async function whisper(file: File,openAiOptions:OpenAIOptions): Promise<string> {
const apiKey = openAiOptions.apiKey;
const model = 'whisper-1';

// Create a FormData object and append the file
const formData = new FormData();
formData.append('model', model);
formData.append('file', file);

// Send a request to the OpenAI API using a form post
const response = await backOff(

@@ -67,17 +68,17 @@ export async function whisper(file: File,openAiOptions:OpenAIOptions): Promise<s
},
body: formData,
}), retryOptions);

// Check if the response status is OK
if (!response.ok) {
throw new Error(`Error transcribing audio: ${response.statusText}`);
}

// Parse the response JSON and extract the transcription
const jsonResponse = await response.json();
return jsonResponse.text;
}

export async function dallE(
prompt: string,
openAiOptions: OpenAIOptions
@@ -112,6 +113,7 @@ export async function openAI(
const engine = options.completionEngine!;

const configuration = new Configuration({
basePath: options.completionEndpoint,
apiKey: options.apiKey,
});

9 changes: 9 additions & 0 deletions src/lib/settings.ts
@@ -21,6 +21,13 @@ export const settingsSchema: SettingSchemaDesc[] = [
title: "OpenAI Completion Engine",
description: "See Engines in OpenAI docs.",
},
{
key: "chatCompletionEndpoint",
type: "string",
default: "http:https://api.openai.com/v1/",
title: "OpenAI API Completion Endpoint",
description: "The endpoint to use for OpenAI API completion requests. You shouldn't need to change this."
},
{
key: "chatPrompt",
type: "string",
@@ -90,6 +97,7 @@ export function getOpenaiSettings(): PluginOptions {
logseq.settings!["dalleImageSize"]
) as DalleImageSize;
const chatPrompt = logseq.settings!["chatPrompt"];
const completionEndpoint = logseq.settings!["chatCompletionEndpoint"];
return {
apiKey,
completionEngine,
@@ -98,5 +106,6 @@
dalleImageSize,
injectPrefix,
chatPrompt,
completionEndpoint,
};
}
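
Taken together, the three changes wire the new setting from the Logseq settings panel through `getOpenaiSettings` into the SDK's `basePath`. A condensed sketch of that flow (the `openAIKey` setting name is an assumption, not shown in this diff):

```typescript
import "@logseq/libs";
import { Configuration, OpenAIApi } from "openai";

function buildOpenAIApi(): OpenAIApi {
  // "chatCompletionEndpoint" is the key added in settings.ts above;
  // "openAIKey" is assumed here as the existing API-key setting name.
  const completionEndpoint = logseq.settings!["chatCompletionEndpoint"] as string | undefined;
  const apiKey = logseq.settings!["openAIKey"] as string;

  // When completionEndpoint is undefined, the openai v3 SDK keeps its built-in
  // default base path (https://api.openai.com/v1); otherwise all requests go
  // to the configured endpoint.
  const configuration = new Configuration({
    basePath: completionEndpoint,
    apiKey,
  });
  return new OpenAIApi(configuration);
}
```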
