
gen.nvim

Generate text using LLMs with customizable prompts

Video: Local LLMs in Neovim: gen.nvim

Requires

Ollama with an appropriate model, e.g. mistral:instruct or zephyr (see below).

Usage

Use the command :Gen to generate text based on predefined and customizable prompts.

Example key maps:

vim.keymap.set('v', '<leader>]', ':Gen<CR>')
vim.keymap.set('n', '<leader>]', ':Gen<CR>')

You can also directly invoke it with one of the predefined prompts:

vim.keymap.set('v', '<leader>]', ':Gen Enhance_Grammar_Spelling<CR>')

Options

All prompts are defined in require('gen').prompts; you can enhance or modify them.

Example:

require('gen').prompts['Elaborate_Text'] = {
  prompt = "Elaborate the following text:\n$text",
  replace = true
}
require('gen').prompts['Fix_Code'] = {
  prompt = "Fix the following code. Only output the result in the format ```$filetype\n...\n```:\n```$filetype\n$text\n```",
  replace = true,
  extract = "```$filetype\n(.-)```"
}

You can use the following properties per prompt:

  • prompt: (string | function) Prompt either as a string or a function which should return a string. The result can use the following placeholders:
    • $text: Visually selected text
    • $filetype: Filetype of the buffer (e.g. javascript)
    • $input: Additional user input
    • $register: Value of the unnamed register (yanked text)
  • replace: true if the generated output should replace the selected text
  • extract: Lua pattern used to extract the generated result
  • model: The model to use, e.g. zephyr; default: mistral:instruct
  • container: Name of the ollama Docker container, if you use Docker to host the ollama service
  • debugCommand: If true, redirects stderr of the command execution to the output window
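Putting these properties together, a single prompt can combine a function-valued prompt with per-prompt overrides. A minimal sketch, assuming a hypothetical prompt name 'Review_Code' (the name and prompt wording are illustrative, not part of the plugin):

```lua
-- Hypothetical example combining the properties listed above.
require('gen').prompts['Review_Code'] = {
  -- prompt may be a function returning a string; $text, $filetype
  -- and $input are substituted by gen.nvim as described above.
  prompt = function()
    return "Review the following $filetype code and address this request: $input\n```$filetype\n$text\n```"
  end,
  model = 'zephyr',         -- override the default mistral:instruct for this prompt
  replace = false,          -- show the result instead of replacing the selection
  extract = "```$filetype\n(.-)```",
  debugCommand = true,      -- send stderr of the ollama call to the output window
}
```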

You can change the default model by setting require('gen').model = 'your_model', e.g.

require('gen').model = 'zephyr' -- default 'mistral:instruct'

A list of all available models can be found in the Ollama model library.

You can also change the complete command with

require('gen').command = 'your command' -- default 'ollama run $model $prompt'

You can use the placeholders $model, $prompt and $container.

You can specify the Docker container that hosts ollama:

require('gen').container = 'container name' -- default nil

The default command then becomes 'docker exec $container ollama run $model $prompt'.
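As a sketch, the Docker-related options above can be combined in your config; the container name 'ollama' here is illustrative:

```lua
-- Illustrative setup: ollama runs inside a Docker container
-- (hypothetical name 'ollama'), with zephyr as the default model.
local gen = require('gen')
gen.model = 'zephyr'       -- default: 'mistral:instruct'
gen.container = 'ollama'   -- default: nil
-- With a container set, the effective default command is:
-- 'docker exec $container ollama run $model $prompt'
```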
