
Backend #11

Open
alecandido opened this issue Apr 11, 2024 · 0 comments

NumpyBackend as a case study

During some discussions, I ended up saying that the NumPy backend would become part of qibo-core.
Well, I'm not sure this idea will survive the move to Rust...

For now, I'm trying to keep qibo-core as dependency-free as possible, and the Rust core will of course also be NumPy-free.
However, at some point I will have to return some results to Python, wherever they come from. These results will of course contain arrays, and the only sensible choice is to return them as NumPy arrays (well, I could try to implement my own Python class in Rust, holding a buffer and following the buffer protocol, or even NumPy's own array protocol directly; but NumPy is such a light and ubiquitous dependency for Python, and PyO3 has an excellent crate for the purpose, that I really don't believe it would be worth it...).
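For reference, the "own class following NumPy's array protocol" alternative mentioned above would look roughly like this minimal Python sketch (the `RustBuffer` class is hypothetical; in the real scenario the memory would be owned by Rust):

```python
import numpy as np

class RustBuffer:
    """Hypothetical stand-in for a Rust-owned buffer exposed to Python.

    Instead of depending on NumPy from the core, the buffer advertises
    itself through NumPy's array interface protocol, and NumPy on the
    Python side wraps it.
    """

    def __init__(self, data):
        # Fake the Rust-owned memory with a bytes object.
        self._data = np.asarray(data, dtype=np.complex128).tobytes()
        self._shape = (len(data),)

    @property
    def __array_interface__(self):
        # NumPy's array interface protocol: describe the buffer so that
        # np.asarray() can wrap it without copying.
        return {
            "shape": self._shape,
            "typestr": "<c16",  # little-endian complex128
            "data": self._data,
            "version": 3,
        }

state = np.asarray(RustBuffer([1 + 0j, 0j]))  # a regular NumPy array
```

This is exactly the kind of boilerplate that depending on NumPy (via PyO3's crate) makes unnecessary.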

However, even though the dependency is not a real argument, I still tend to think that the NumpyBackend should belong to the Qibo package itself (unless we decide to move all backends somewhere else). But I mostly have in mind bare execution (the execute_circuit* methods), and I believe keeping this in qibo should be fairly uncontroversial.

The rest of the backend is the complex matter...

Backend rich API

I would consider stripping out part of the backend functions and turning them into a result-manipulation library.

However, this will require careful scrutiny: some functions are essentially never overridden, so they are perfect candidates for librarization. But other functions might require, or benefit from, being overridden, e.g. to run on a discrete hardware accelerator, or any other kind of separate device.

These last functions are what the backend mechanism was designed for, but they do not survive serialization, so they cannot be used, e.g., on a cloud backend.
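To make the proposed split concrete, here is a minimal Python sketch (all names hypothetical, not the actual Qibo API): a function that is never overridden becomes a free function in a results library, while device-specific execution stays on the backend.

```python
import numpy as np

# --- candidate for librarization: never overridden, pure array math ---
def calculate_norm(state) -> float:
    """Norm of a state vector; works on any NumPy-compatible array."""
    return float(np.sqrt(np.sum(np.abs(state) ** 2)))

# --- stays on the backend: execution may be overridden per device ---
class NumpyBackend:
    def execute_circuit(self, circuit):
        # Placeholder: a real backend would simulate `circuit`; here we
        # just return a fixed two-qubit state for illustration.
        return np.array([1.0, 0.0, 0.0, 0.0], dtype=np.complex128)

backend = NumpyBackend()
state = backend.execute_circuit(circuit=None)
norm = calculate_norm(state)  # library call, independent of the backend
```

The library function no longer needs any reference to the backend that produced the result.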

So, the current Backend is doing many things:

  • executing the circuit, obtaining some kind of result (shots, state, TN)
  • efficiently manipulating the result

At this point, we should make a decision: functions like calculate_norm() should not be required to execute a circuit, whatever the result type. But they would benefit from being executed in the same place as the circuit (e.g. a GPU, or a cloud node), since that could avoid fetching large amounts of data.

One option is to just deny them, and force the user to fetch the result. From there on, there is little need for these operations to be part of the backend (or, at least, part of the same backend structure that executes the circuit). They could easily be provided as a separate library (or multiple ones, if you want to execute on GPU or wherever else).

The other is to allow a rather wide set of operations to be executed together with the circuit.
To make this possible, we would need a language to encode them as well, and one or more runtimes matching the various backends (directly using the same memory buffers on the computing device, or being auto-diff compatible).
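As a toy illustration of such an encoding (everything here is hypothetical, not a proposed design): the post-processing operations are serialized as plain data, so they survive a trip to a cloud node, and a runtime replays them next to the circuit result.

```python
import json
import numpy as np

# Hypothetical mini-language: each operation is a plain, serializable
# name, so the whole post-processing program can travel with the circuit.
program = json.dumps(["abs", "square", "sum", "sqrt"])  # i.e. the norm

# A runtime matching one backend: here it dispatches to NumPy, but
# another runtime could map the same names to GPU kernels, or to
# auto-diff-compatible primitives.
NUMPY_RUNTIME = {
    "abs": np.abs,
    "square": np.square,
    "sum": np.sum,
    "sqrt": np.sqrt,
}

def run(program_json: str, result):
    value = result
    for op in json.loads(program_json):
        value = NUMPY_RUNTIME[op](value)
    return value

state = np.array([3.0, 4.0])
norm = run(program, state)  # executes "where the result lives"
```

A real design would of course need a richer language (arguments, multiple inputs, ...), but the serialization constraint is the essential point.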

Currently, a lot of the magic happens thanks to the self.np assignment (assuming it is actually used consistently, which I believe is not always the case), but this only works for backends implemented on top of NumPy-compatible libraries.
With multi-language support, this cannot work.
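For readers unfamiliar with it, the self.np trick amounts to something like the following simplified sketch: the backend stores a NumPy-compatible module, and shared methods are written against it.

```python
import numpy as np

class Backend:
    def __init__(self, np_module=np):
        # The "magic": any NumPy-compatible module (numpy, cupy, ...)
        # can be plugged in, and every method below follows it.
        self.np = np_module

    def calculate_norm(self, state):
        return self.np.sqrt(self.np.sum(self.np.abs(state) ** 2))

backend = Backend()  # NumPy; Backend(cupy) would run the same code on GPU
norm = backend.calculate_norm(np.array([1.0, 2.0, 2.0]))
```

This only dispatches at the level of a Python module, which is exactly why it cannot cross a language boundary.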

I'm trying to think of many possible solutions, but so far nothing has truly clicked as the optimal one.
