
ollama

Badges: Lifecycle: experimental · CRAN status · R-CMD-check

The goal of ollama is to wrap the Ollama API and provide infrastructure for use within {gptstudio}.

Installation

You can install the development version of ollama like so:

pak::pak("calderonsamuel/ollama")

Prerequisites

You are responsible for installing Ollama and configuring networking. We recommend the official Docker image, which greatly simplifies this process.

The following command pulls the default ollama image and runs a container named "ollama", exposing port 11434:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

By default, this package uses http://localhost:11434 as the API host URL (Ollama serves plain HTTP locally). Methods to change this are provided, but only do so if you are sure of the implications.
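As a quick sanity check (not part of this package), you can query the server directly to confirm it is reachable at the default host:

```shell
# The Ollama server's root endpoint responds with a short status message
# ("Ollama is running") when the container is up.
curl http://localhost:11434

# List the models available locally, returned as JSON.
curl http://localhost:11434/api/tags
```

If these commands fail, check that the container is running and that port 11434 is not blocked or already in use.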

Example

This is a basic example that shows you how to solve a common problem:

library(ollama)
## basic example code
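Until the package's exported functions are documented here, a direct call to the Ollama REST API illustrates what the wrapper does under the hood. This is a sketch using {httr2} (an assumption; the package's internals may differ), and it assumes a server running at the default host with a model named "llama2" already pulled (`ollama pull llama2`):

```r
library(httr2)

# Send a completion request to the Ollama REST API.
# "llama2" is an example model name; substitute any model you have pulled.
resp <- request("http://localhost:11434/api/generate") |>
  req_body_json(list(
    model = "llama2",
    prompt = "Why is the sky blue?",
    stream = FALSE  # return one JSON response instead of a stream
  )) |>
  req_perform()

# The generated text is in the "response" field of the JSON body.
resp_body_json(resp)$response
```
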

License

MIT (see LICENSE.md)
