`npm start` loads an OpenAI API key and organization, creates the OpenAI agent, and starts a Deno REPL with a `Chat` object loaded that allows you to:
- Set a configuration
- Set a model:
> Chat.setModel(<model>)
- Set a temperature:
> Chat.setTemp(0.1)
- Set message memory:
> Chat.setMemory(20)
- Ask a question in the REPL:
> var code = Chat.getPrompt('./examples/main.go', 'utf8')
> var answer = await Chat.complete('What does the following program do?\n' + code)
- Get history:
> Chat.getHistory()
> Chat.getLast()
- Convert web pages to smaller docs that GPT models can use for context
> var doc = await Chat.makeDoc('https://code.visualstudio.com/api/extension-guides/webview')
- Do anything else you can do in the Node.js REPL:
  - Read/write files: the modules `fs` and `path` are loaded on initialization:
    > Chat.saveFile('./completions/vscode-ext-docs/guides/webview.md', doc)
  - Import and use modules
  - Feed computational output into prompts
- An OpenAI API key and organization ID
- A recent version of Deno

- Clone this repository:
  $ cd gpt-my-repl
- Create a .env file with the following lines:
  OPENAI_API_KEY=<your API key>
  ORGANIZATION=<your organization id>
- Install dependencies and start the REPL:
  $ npm install
  $ npm start
Chat.setDefaultConfig()
: Reset to the initial configuration:
  - Model: gpt-3.5-turbo
  - Temperature: 0
  - Top P: 1
  - Frequency Penalty: 0
  - Presence Penalty: 0
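The defaults above map onto a plain request-options object. A sketch, using the snake_case parameter names of the OpenAI chat completions API (the project's internal shape may differ):

```javascript
// Sketch of the initial configuration restored by Chat.setDefaultConfig().
// Property names follow the OpenAI chat completions API; the project's
// internal representation may differ.
const defaultConfig = {
  model: 'gpt-3.5-turbo',
  temperature: 0,      // deterministic output
  top_p: 1,            // no nucleus-sampling cutoff
  frequency_penalty: 0,
  presence_penalty: 0,
};
```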
Chat.getConfig()
: Print the current configuration

Chat.setModel(<model: string>)
: Set a valid model

Chat.setTemp(<temperature: float>)
: Set a valid temperature

Chat.setMemory(<steps: integer>)
: Set the number of messages from history sent with each new prompt

Chat.getMemory()
: Since memory is not part of the API params, this is a separate getter that returns the current memory size in messages
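Memory here is a sliding window over past messages. A minimal sketch of the idea (a hypothetical helper, not the project's actual implementation): keep the last N history messages and prepend them to each new prompt.

```javascript
// Sketch: apply a message-memory window of `steps` before sending a prompt.
// Hypothetical helper illustrating the idea, not part of the Chat API.
function buildMessages(history, steps, prompt) {
  const remembered = steps > 0 ? history.slice(-steps) : [];
  return [...remembered, { role: 'user', content: prompt }];
}

const history = [
  { role: 'user', content: 'Q1' },
  { role: 'assistant', content: 'A1' },
  { role: 'user', content: 'Q2' },
  { role: 'assistant', content: 'A2' },
];

// With a memory of 2, only the last two messages accompany the new prompt.
const msgs = buildMessages(history, 2, 'Q3');
```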
await Chat.complete(<prompt: string>)
: Returns a completion with the current configuration

Chat.getLast() -> object
: Returns the last completion in history

Chat.getHistory() -> object[]
: Returns an array of the history of completions in reverse-chronological order

Chat.saveHistory(<filePath: string>)
: Save all chat history to a file

Chat.loadHistory(<filePath: string>)
: Load messages from a JSON file into the history array

Chat.clearHistory()
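Saving and loading history amounts to a JSON round-trip. A minimal self-contained sketch (hypothetical helpers, not the project's code; the actual file format may differ):

```javascript
import { promises as fs } from 'fs';

// Hypothetical sketch of a history save/load round-trip.
async function saveHistory(filePath, history) {
  await fs.writeFile(filePath, JSON.stringify(history, null, 2));
}

async function loadHistory(filePath) {
  return JSON.parse(await fs.readFile(filePath, 'utf8'));
}
```

Because completions are plain message objects, a file saved this way can be reloaded into a fresh session.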
: Clear the messages in history

Chat.tokenUsage(<messages: object[]>) -> object
: Calculates tokens used in the provided message history array. Returns an object.
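Chat.tokenUsage presumably counts with the model's real tokenizer; as a rough illustration of the idea only, a hypothetical estimator can approximate tokens from character counts (about 4 characters per token for English text):

```javascript
// Rough token estimate: ~4 characters per token for English text.
// Hypothetical sketch; the real Chat.tokenUsage would use an actual tokenizer.
function estimateTokenUsage(messages) {
  const chars = messages.reduce((n, m) => n + m.content.length, 0);
  return { messages: messages.length, approxTokens: Math.ceil(chars / 4) };
}

const usage = estimateTokenUsage([
  { role: 'user', content: 'What does this program do?' },   // 26 chars
  { role: 'assistant', content: 'It prints a greeting.' },   // 21 chars
]);
// 47 chars total -> approxTokens of 12
```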
Chat.help()
: Print the available methods on the Chat object

Chat.setWorkDir(<dirPath: string>)
: Set the working directory in the REPL

Chat.saveFile(<filePath: string>, <completion: string | array | object>)
: Save data as a string to a file

Chat.loadFile(<filePath: string>)
: Load utf8 string data (text, JSON) from a file

await Chat.fetch(<url: string>) -> string
: Fetch a web page's HTML

await Chat.extractFromHtml(<html: string>, <url: string>) -> object
: Uses the npm package @extractus/article-extractor to extract content from a web page and create stripped-down article HTML (like Safari's Reader mode). Returns an object with properties such as title, description, and content.

Chat.htmlToMarkdown(<html: string>) -> string
: Translates HTML to markdown and returns the markdown

await Chat.makeDoc(<url: string>, <type: string> = 'markdown') -> string
: Fetches the contents of a web page, extracts the article (text content), and converts it to the specified type (e.g. markdown, text). Returns the article in its final format.

await Chat.concatFiles(<dir: string>, <extensions: Array<string>>) -> Promise<string>
: Asynchronously reads and concatenates the contents of all files with the specified extensions in a given directory and its subdirectories into a single string
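The recursive walk behind a helper like this can be sketched with Node's fs/promises (a sketch of the idea, not the project's implementation):

```javascript
import { promises as fs } from 'fs';
import path from 'path';

// Sketch: recursively concatenate all files whose extension matches.
// Hypothetical re-implementation for illustration only.
async function concatFiles(dir, extensions) {
  let result = '';
  const entries = await fs.readdir(dir, { withFileTypes: true });
  for (const entry of entries) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      result += await concatFiles(full, extensions);
    } else if (extensions.includes(path.extname(entry.name))) {
      result += (await fs.readFile(full, 'utf8')) + '\n';
    }
  }
  return result;
}
```

The resulting string can then be fed straight into a prompt, e.g. to ask a model about an entire codebase at once.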