
Add feature "async" #68

Draft · wants to merge 23 commits into main
Conversation

RamType0
Contributor

@RamType0 RamType0 commented Nov 6, 2021

Use RingBuffer::<T>::new_async(usize) to create a ring buffer with async read/write support.
Use write_chunk_uninit_async, write_chunk_async, or read_chunk_async to asynchronously wait until the requested number of slots becomes available.
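The core idea of these methods — suspend the task until at least N slots can be claimed — can be sketched as a self-contained, std-only toy. Everything here (the SlotsAvailable future, the busy-polling block_on) is hypothetical illustration, not the PR's actual code:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical future that resolves once `available` reaches `wanted`.
struct SlotsAvailable {
    available: Arc<AtomicUsize>,
    wanted: usize,
}

impl Future for SlotsAvailable {
    type Output = usize;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<usize> {
        let n = self.available.load(Ordering::Acquire);
        if n >= self.wanted {
            Poll::Ready(n)
        } else {
            // A real implementation would store cx.waker() so the
            // committing side can wake this task; this toy just
            // relies on the busy-polling executor below.
            Poll::Pending
        }
    }
}

// Minimal busy-polling executor with a no-op waker, for demonstration only.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
        std::thread::yield_now();
    }
}

fn main() {
    let available = Arc::new(AtomicUsize::new(0));
    let producer = Arc::clone(&available);
    // "Producer" commits four slots; the "consumer" awaits all four.
    let handle = std::thread::spawn(move || {
        for _ in 0..4 {
            producer.fetch_add(1, Ordering::Release);
        }
    });
    let n = block_on(SlotsAvailable { available, wanted: 4 });
    handle.join().unwrap();
    assert_eq!(n, 4);
    println!("got {} slots", n);
}
```

The real PR replaces the busy-polling with waker registration, so the waiting task sleeps until the other side commits.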

@mgeier
Owner

mgeier commented Nov 7, 2021

Thanks for this PR!

This is really interesting ... sadly I don't really have any experience with async code, so it will take some time for me to digest this.

However, there might already be a problem: I'd like to not make the existing functionality any slower. But the two-threads/large benchmark shows a regression for me:

[benchmark plot: regression]

Red is the main branch, blue is this PR.

I wouldn't actually expect a difference, maybe #[inline] has to be added somewhere?

Apart from that, this is introducing locking (to the async part), and the whole idea of rtrb is to be lock-free ...

I don't really understand why to bother with atomic operations if a mutex is used anyway. But that might as well be my limited understanding of the topic, I'd like to learn more about it!

Also, there seems to be really little code-reuse between sync and async parts, so is it a good idea to have both in the same codebase? Or would it be just as feasible to have separate sync and async crates?

BTW, have you seen https://github.com/irrustible/async-spsc?

@RamType0
Contributor Author

RamType0 commented Nov 7, 2021

BTW, have you seen https://github.com/irrustible/async-spsc?

I love this crate because its get-chunk-with-capacity & commit pattern is really useful for some scenarios.

I am working on an audio processing app.

The audio processing:

  1. Waits until the streamed samples reach the FFT window size
  2. Waits until the streamed samples reach double the wave period (which was computed by FFT)
  3. Pops out two wave periods and processes them

Many crates only offer

  • Popping and pushing a single element
  • A get-chunk that always returns all pushed elements and immediately pops them from the buffer

So I didn't choose async-mpsc.

I wouldn't actually expect a difference, maybe #[inline] has to be added somewhere?

I will add #[inline] to methods in Reactor.
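For context on why #[inline] can matter here: non-generic methods in a library crate are normally not inlined into downstream crates unless LTO is enabled, so a small method on the hot push/pop path can cost a real function call. A minimal illustration (the Reactor type and pushed method here are placeholders, not the PR's actual code):

```rust
// Hypothetical Reactor, only to illustrate the #[inline] hint;
// the PR's actual Reactor carries waker state.
pub struct Reactor;

impl Reactor {
    // Without #[inline], this non-generic method would usually not be
    // inlined across the crate boundary, which can show up as a
    // regression in hot-path benchmarks like two-threads/large.
    #[inline]
    pub fn pushed(&self, count: usize) -> usize {
        count // placeholder body; the real method would notify the reader
    }
}

fn main() {
    let reactor = Reactor;
    assert_eq!(reactor.pushed(3), 3);
    println!("ok");
}
```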

Apart from that, this is introducing locking (to the async part), and the whole idea of rtrb is to be lock-free ...

I also want it to be lock-free.
But I have no idea how to do that at the moment, and I really want this feature for my apps.

I will mark this PR as a draft and try to make it lock-free later.

I don't really understand why to bother with atomic operations if a mutex is used anyway. But that might as well be my limited understanding of the topic, I'd like to learn more about it!

Are you talking about Ordering::Acquire and Ordering::Release while taking the mutex lock?
I just copied that.
I will replace those with Ordering::Relaxed.

Also, there seems to be really little code-reuse between sync and async parts, so is it a good idea to have both in the same codebase? Or would it be just as feasible to have separate sync and async crates?

I'd prefer to keep them in the same codebase if possible.

@RamType0 RamType0 marked this pull request as draft November 8, 2021 15:33
@mgeier
Owner

mgeier commented Nov 15, 2021

I love this crate because its get-chunk-with-capacity & commit pattern is really useful for some scenarios.

Thanks, I'm glad you like it. This pattern doesn't seem to be very common, but I also like it (if I may say so myself).

I will make this PR as draft, and try to make it lock-free later.

OK, I'm looking forward to this. As I mentioned before, I'm not really familiar with async Rust and I don't have any idea how this could/should work.

I'm a bit concerned though that you are suggesting to add nearly as much code as there is already.
I have the feeling that there should be more code-reuse (but this of course should not affect performance!).

Are you talking about Ordering::Acquire and Ordering::Release while taking the mutex lock?

Not exactly. I was just wondering if you need atomic variables at all if you use a mutex.

@RamType0
Contributor Author

Thanks, I'm glad you like it. This pattern doesn't seem to be very common, but I also like it (if I may say so myself).

I first saw this pattern in System.IO.Pipelines (.NET).

@RamType0 RamType0 marked this pull request as ready for review November 19, 2021 17:53
@RamType0
Contributor Author

It is now almost wait-free.
Waiting only occurs when one side starts to asynchronously wait for available chunks while the other side is committing.
Even in this case, the committer is wait-free.

@mgeier
Owner

mgeier commented Dec 1, 2021

It is now almost wait-free.

That's great!

I'm still struggling to understand how this whole thing works, but I'm glad to see that at least the Mutex is gone.

Are there any guarantees about the wake() function?
I guess this could theoretically block, right?
Our methods are only wait-free if the wake() function is wait-free (and therefore pushed() and popped()), right?

And another thing I don't understand: the two poll() functions use a registered variable, but this never seems to be set to true, which means that the else block is never executed. Or am I missing something?

@RamType0
Contributor Author

RamType0 commented Dec 3, 2021

And another thing I don't understand: the two poll() functions use a registered variable, but this never seems to be set to true, which means that the else block is never executed. Or am I missing something?

No, I was missing something 🙃
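For reference, the usual shape of a poll() with such a registration flag looks like the sketch below: the first poll hands its waker to the other side and must then set registered = true, otherwise the else branch (re-checking without re-registering) can never run. The ReadChunkFuture type, its fields, and the CountingWaker test helper are hypothetical, not the PR's code:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Counts wake-ups so the demonstration below can observe registration.
struct CountingWaker(AtomicUsize);
impl Wake for CountingWaker {
    fn wake(self: Arc<Self>) {
        self.0.fetch_add(1, Ordering::SeqCst);
    }
}

// Generic sketch of a future with a registration flag.
struct ReadChunkFuture {
    registered: bool,
    ready: bool,
    stored_waker: Option<Waker>, // stands in for the reactor's waker slot
}

impl ReadChunkFuture {
    fn poll(&mut self, cx: &mut Context<'_>) -> Poll<()> {
        if self.ready {
            return Poll::Ready(());
        }
        if !self.registered {
            self.stored_waker = Some(cx.waker().clone());
            self.registered = true; // without this, the else branch is dead code
        } else {
            // Already registered: report Pending without re-registering.
        }
        Poll::Pending
    }
}

fn main() {
    let counter = Arc::new(CountingWaker(AtomicUsize::new(0)));
    let waker = Waker::from(Arc::clone(&counter));
    let mut cx = Context::from_waker(&waker);

    let mut fut = ReadChunkFuture { registered: false, ready: false, stored_waker: None };
    assert_eq!(fut.poll(&mut cx), Poll::Pending);
    assert!(fut.registered); // first poll registered the waker

    // The "producer" commits and wakes the stored waker:
    fut.stored_waker.take().unwrap().wake();
    assert_eq!(counter.0.load(Ordering::SeqCst), 1);

    fut.ready = true;
    assert_eq!(fut.poll(&mut cx), Poll::Ready(()));
    println!("ok");
}
```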

@RamType0
Contributor Author

RamType0 commented Dec 3, 2021

Are there any guarantees about the wake() function?
I guess this could theoretically block, right?
Our methods are only wait-free if the wake() function is wait-free (and therefore pushed() and popped()), right?

Are you talking about Waker::wake itself?
It depends on the implementation of the async runtime you choose (e.g. tokio, async-std).
But AFAIK, major runtimes are really well optimized and rarely block.
You can see what happens inside Waker::wake by using any async runtime and stepping into Waker::wake with a debugger.

If you are talking about my code: you can see that there is no loop in popped and pushed, so they are wait-free.
You can also see that there is a loop in registering, so that is not wait-free.
But it still has a solution for the unfair-locking problem, so frequent committing never blocks the other side when there are enough slots.
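The wait-free notify side can be pictured as a one-slot waker mailbox: the committer does a single atomic swap (no loop, hence wait-free), while registering is the side that may have to retry. The WakerSlot below is a simplified hypothetical sketch, not the PR's implementation, and it omits the retry loop that handles the register/commit race (and a Drop impl, so a waker left in the slot would leak):

```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, AtomicUsize, Ordering};
use std::sync::Arc;
use std::task::{Wake, Waker};

// Hypothetical one-slot waker mailbox.
struct WakerSlot {
    slot: AtomicPtr<Waker>,
}

impl WakerSlot {
    fn new() -> Self {
        Self { slot: AtomicPtr::new(ptr::null_mut()) }
    }

    // Called from poll(): publish the task's waker.
    fn register(&self, waker: Waker) {
        let new = Box::into_raw(Box::new(waker));
        let old = self.slot.swap(new, Ordering::AcqRel);
        if !old.is_null() {
            // Drop a previously stored waker.
            drop(unsafe { Box::from_raw(old) });
        }
    }

    // Called from pushed()/popped(): take and wake.
    // A single swap, no loop, so this side is wait-free.
    fn notify(&self) {
        let old = self.slot.swap(ptr::null_mut(), Ordering::AcqRel);
        if !old.is_null() {
            unsafe { Box::from_raw(old) }.wake();
        }
    }
}

// Counts wake-ups so the demonstration can observe notify().
struct CountingWaker(AtomicUsize);
impl Wake for CountingWaker {
    fn wake(self: Arc<Self>) {
        self.0.fetch_add(1, Ordering::SeqCst);
    }
}

fn main() {
    let counter = Arc::new(CountingWaker(AtomicUsize::new(0)));
    let slot = WakerSlot::new();

    slot.notify(); // empty slot: nothing to wake
    assert_eq!(counter.0.load(Ordering::SeqCst), 0);

    slot.register(Waker::from(Arc::clone(&counter)));
    slot.notify(); // takes the waker and wakes it
    slot.notify(); // slot is empty again
    assert_eq!(counter.0.load(Ordering::SeqCst), 1);
    println!("ok");
}
```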

@mgeier
Owner

mgeier commented Dec 3, 2021

Are you talking about Waker::wake itself?

Yes.

It depends on the implementation of the async runtime you choose (e.g. tokio, async-std).

OK, I thought so. I was just wondering if the Rust language imposes any requirements on the wake() function.
But probably not.

But AFAIK, major runtimes are really well optimized and rarely block.

Maybe. It would still be good to know for sure.

You can see what happens inside Waker::wake by using any async runtime and stepping into Waker::wake with a debugger.

Yes, that's a good idea, I should do that ...

If you are talking about my code:

No, I'm not.

you can see that there is no loop in popped and pushed, so they are wait-free.

Yes, that's good.

But it is only wait-free if wake() is also wait-free, right?

You can also see that there is a loop in registering, so that is not wait-free.

Yes, I think that's fine.
This is only called in poll(), right?
So it only affects the runtime and not other threads that might use the non-async API.

But it still has a solution for the unfair-locking problem, so frequent committing never blocks the other side when there are enough slots.

TBH, I still don't really understand this, but it sounds good!

@mgeier
Owner

mgeier commented Jan 16, 2022

Just for the record, I've found another async SPSC implementation: https://github.com/Liyixin95/spsc-rs

@RamType0
Contributor Author

Just for the record, I've found another async SPSC implementation: https://github.com/Liyixin95/spsc-rs

CAS-free!? That sounds like magic.

@mgeier
Owner

mgeier commented May 18, 2022

The creator of the ringbuf crate also seems to be working on an async SPSC ring buffer: https://github.com/agerasev/async-ringbuf

@mgeier mgeier marked this pull request as draft April 3, 2024 14:38