This repository has been archived by the owner on Jun 24, 2024. It is now read-only.

Build and execute our own computation graph #137

Open
philpax opened this issue Apr 13, 2023 · 5 comments
Labels
issue:enhancement New feature or request meta:maintenance Changes that will make it easier for us to maintain code topic:backend-support Support for alternate non-GGML backends, or for particular GGML backend features

Comments

@philpax
Collaborator

philpax commented Apr 13, 2023

At present, we are using GGML's computation graph. This works well, but it has a few flaws:

  1. We're reliant on whatever support GGML has for threading; the Rust threading ecosystem is more versatile/OS-agnostic
  2. Adding new operations requires patching GGML
  3. We're coupled pretty tightly to GGML, so switching to an alternate backend would be quite difficult; this will only get worse as we support more models
  4. Abstraction of shared pieces of functionality gets a little finicky with the exposed API

After reading ggerganov/llama.cpp#915, I had a flash of inspiration and realised we could address these problems by using our own computation graph.

The code would be fairly similar to what it is now - but instead of building up a GGML computation graph, we build up our own in Rust code with all of the usual strong-typing guarantees.

To begin with, this computation graph would then be "compiled" to a GGML computation graph, so that it works identically.
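
For illustration, a minimal sketch of what such a strongly-typed graph could look like; every name here (Op, Graph, NodeId) is hypothetical, not existing llama-rs API:

```rust
/// Hypothetical sketch only - none of this is existing llama-rs API.
/// Identifies a node within its graph.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct NodeId(usize);

/// Strongly-typed operations; each variant names its inputs by id,
/// so malformed graphs are hard to construct by accident.
enum Op {
    /// A leaf tensor (weights or input), described only by shape here.
    Tensor { shape: Vec<usize> },
    MatMul(NodeId, NodeId),
    Add(NodeId, NodeId),
    SoftMax(NodeId),
}

#[derive(Default)]
struct Graph {
    nodes: Vec<Op>,
}

impl Graph {
    fn push(&mut self, op: Op) -> NodeId {
        self.nodes.push(op);
        NodeId(self.nodes.len() - 1)
    }
}

fn main() {
    // softmax(w·x + b), built in plain Rust, backend-agnostic.
    let mut g = Graph::default();
    let w = g.push(Op::Tensor { shape: vec![768, 768] });
    let x = g.push(Op::Tensor { shape: vec![768] });
    let b = g.push(Op::Tensor { shape: vec![768] });
    let wx = g.push(Op::MatMul(w, x));
    let wx_b = g.push(Op::Add(wx, b));
    let _out = g.push(Op::SoftMax(wx_b));
    // The "compile to GGML" step would walk `g.nodes` in order and emit
    // the corresponding ggml_mul_mat / ggml_add / ggml_soft_max calls.
}
```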

Once that's done, we would look at reimplementing the actual execution of the graph in Rust, using GGML's operations to do so (e.g. its vec_dot_q4_0, etc.).

This would allow us to decouple from GGML in the future (#3), and gives us freedom to implement new operations that aren't supported by GGML without having to maintain our own patched version.

Ideally, we would just use burn or something similar directly, but none of the existing libraries are in a position to serve our needs (GGML-like performance with quantization support). This lets us side-step that issue for now, and focus on describing models that could be executed by anything once support is available.


Constructing our own computation graph and compiling it to GGML should be fairly simple; this could be done with petgraph or our own graph implementation, and it's not that difficult.
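
For instance, a rough sketch with petgraph (node payloads are just strings here for brevity); its toposort gives us an evaluation order for free:

```rust
use petgraph::algo::toposort;
use petgraph::graph::DiGraph;

fn main() {
    // Edges point from an input to the op that consumes it.
    let mut g: DiGraph<&str, ()> = DiGraph::new();
    let x = g.add_node("input");
    let w = g.add_node("weights");
    let mm = g.add_node("matmul");
    g.add_edge(x, mm, ());
    g.add_edge(w, mm, ());

    // Dependency order; a GGML "compiler" pass would visit each node
    // and emit one ggml tensor for it.
    let order = toposort(&g, None).expect("computation graphs are acyclic");
    for idx in order {
        println!("{}", g[idx]);
    }
}
```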

The main problem comes with the executor reimplementation: a lot of GGML's more complex operations are coupled to the executor, so we'd have to reimplement them (e.g. all the ggml_compute_forward_... functions). Additionally, many of the base operations are static void functions that aren't exposed to the outside world, so it's likely we'd have to patch GGML anyway.

An alternate approach to full graph reimplementation might be to add support for custom elementwise operations once (as @KerfuffleV2 has done in their fork), so that we can polyfill custom operations from our computation graph.
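
A rough sketch of that dispatch, with everything here invented for illustration: ops GGML already has would lower to its builtins, and anything else becomes a scalar kernel run through the custom elementwise hook:

```rust
/// Hypothetical lowering result - not a real llama-rs or ggml type.
enum Lowered {
    /// Name of the GGML builtin the compiler would call.
    Native(&'static str),
    /// Scalar kernel to run via a custom elementwise (map) operation.
    MapUnary(fn(f32) -> f32),
}

fn lower(op: &str) -> Lowered {
    match op {
        "add" => Lowered::Native("ggml_add"),
        "mul_mat" => Lowered::Native("ggml_mul_mat"),
        // Example of an op GGML doesn't expose: squared ReLU.
        "relu_squared" => Lowered::MapUnary(|x| {
            let r = x.max(0.0);
            r * r
        }),
        other => panic!("unsupported op: {other}"),
    }
}

fn main() {
    match lower("relu_squared") {
        Lowered::Native(name) => println!("emit builtin {name}"),
        Lowered::MapUnary(f) => {
            println!("polyfilled: f(-2.0) = {}, f(3.0) = {}", f(-2.0), f(3.0))
        }
    }
}
```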

@philpax philpax added issue:enhancement New feature or request meta:maintenance Changes that will make it easier for us to maintain code labels Apr 13, 2023
@KerfuffleV2
Contributor

KerfuffleV2 commented Apr 14, 2023

I think this is a great idea. It's probably even more of a reason to decouple llama-rs from the GGML crates, and I'd think what you're talking about should also be its own crate. (Using "crate" pretty much interchangeably with "repo" here.)

You'd also be able to do something like I mentioned in #130.

> This would allow us to decouple from GGML in the future (#3), and gives us freedom to implement new operations that aren't supported by GGML without having to maintain our own patched version.

It looks like my mapping operations stuff is likely to get merged (ggerganov/llama.cpp#874), so at least for operations that work with unary/binary mapping it won't be necessary to do that. Maybe the only other things missing would be fold or 3D operations (not sure what would even need the latter). You could emulate a fold (albeit inefficiently) using map + something like statics.
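
To make the map + statics idea concrete, here's a toy version in plain Rust (no actual ggml bindings; the kernel just mimics the shape of a unary map callback, and the real ggml hooks use raw pointers):

```rust
use std::sync::Mutex;

// The "static" carrying the fold state across the elementwise pass.
static ACC: Mutex<f32> = Mutex::new(0.0);

// Shaped like a unary map kernel: it writes dst[i] from src[i] only,
// but smuggles a running sum out through the static accumulator.
fn sum_kernel(n: usize, dst: &mut [f32], src: &[f32]) {
    let mut acc = ACC.lock().unwrap();
    for i in 0..n {
        *acc += src[i];
        dst[i] = src[i]; // pass-through; the real output is the accumulator
    }
}

fn main() {
    let src = [1.0_f32, 2.0, 3.0, 4.0];
    let mut dst = [0.0_f32; 4];
    *ACC.lock().unwrap() = 0.0; // reset before each "fold"
    sum_kernel(src.len(), &mut dst, &src);
    assert_eq!(*ACC.lock().unwrap(), 10.0);
}
```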

@KerfuffleV2
Contributor

I found this crate which looks pretty interesting: https://crates.io/crates/dagga

It's for scheduling directed acyclic graphs (like GGML's graph, and I assume other ML type graphs would be similar). You can do stuff like give the nodes semantics reflecting uses of resources, borrowing, dependencies, etc.

If nothing else, it might be useful for stealing ideas.

@9876691

9876691 commented May 19, 2023

Is using ONNX Runtime an option here?

There's a Rust binding here: https://github.com/microsoft/onnxruntime/tree/main/rust

The compute graph is basically formed from a protobuf definition, so using a Rust protoc compiler you'd get a bunch of auto-generated Rust structs. Then at runtime you put the structs together into the compute graph and pass it to the runtime.
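
As a hedged sketch of that idea, using simplified stand-ins for the generated types (the field names match onnx.proto, but real code would use the prost/protoc output and then serialize to protobuf bytes):

```rust
// Simplified stand-ins for the protoc-generated ONNX structs, so this
// compiles without codegen; real field names, heavily trimmed fields.
#[derive(Debug, Default)]
struct NodeProto { op_type: String, input: Vec<String>, output: Vec<String> }
#[derive(Debug, Default)]
struct GraphProto { name: String, node: Vec<NodeProto> }
#[derive(Debug, Default)]
struct ModelProto { ir_version: i64, graph: Option<GraphProto> }

fn main() {
    // A one-node graph computing sum = a + b.
    let add = NodeProto {
        op_type: "Add".into(),
        input: vec!["a".into(), "b".into()],
        output: vec!["sum".into()],
    };
    let model = ModelProto {
        ir_version: 8,
        graph: Some(GraphProto { name: "tiny".into(), node: vec![add] }),
    };
    println!("{model:?}");
    // With the real generated types, encoding `model` to protobuf bytes
    // would give something onnxruntime can load and execute.
}
```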

As far as I can see onnx runtime supports

It would perhaps be possible in the future to swap in the Wonnx Rust version: https://github.com/webonnx/wonnx

@philpax
Collaborator Author

philpax commented May 20, 2023

We're already in talks with wonnx to see if we can use them as a computation backend: webonnx/wonnx#169

As for using onnxruntime directly... I don't know. Maybe, but we'd like to avoid having to synthesize an entire ONNX graph at runtime, especially as ONNX is quite an intricate format and has lots of details we don't care about.

@9876691
Copy link

9876691 commented May 26, 2023

For reference, there's some ongoing work in ggml for graph support: ggerganov/ggml#108

> These are initial steps towards GPU support via computation graph export.
> Still figuring out the basics needed. Playing with the mnist example

@philpax philpax added the topic:backend-support Support for alternate non-GGML backends, or for particular GGML backend features label Jun 15, 2023