
Data race / use after free reported by Miri #76

Open
Nilstrieb opened this issue Jun 12, 2022 · 14 comments

Nilstrieb commented Jun 12, 2022

I have seen #71, but Miri reports the same issue. Under Miri it manifests as a use-after-free (UAF), though I assume the cause is the same. Sometimes Miri reports it as a data race, but I was only able to reproduce the UAF.

Log of the data race: https://miri.saethlin.dev/no-sb/ub?crate=arc-swap&version=1.5.0

To reproduce the UAF: MIRIFLAGS='-Zmiri-disable-stacked-borrows' cargo miri test load_parallel
Stacked Borrows has to be disabled because this crate contains Stacked Borrows violations.

Miri backtrace

test default::load_parallel_small ... error: Undefined Behavior: pointer to alloc50951 was dereferenced after this allocation got freed
   --> /home/nilsh/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/ptr/non_null.rs:381:18
    |
381 |         unsafe { &*self.as_ptr() }
    |                  ^^^^^^^^^^^^^^^ pointer to alloc50951 was dereferenced after this allocation got freed
    |
    = help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
    = help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information

    = note: inside `std::ptr::NonNull::<alloc::sync::ArcInner<usize>>::as_ref` at /home/nilsh/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/ptr/non_null.rs:381:18
    = note: inside `std::sync::Arc::<usize>::inner` at /home/nilsh/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/sync.rs:1098:18
    = note: inside `<std::sync::Arc<usize> as std::ops::Deref>::deref` at /home/nilsh/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/sync.rs:1381:10
note: inside `<std::sync::Arc<usize> as arc_swap::RefCnt>::as_ptr` at /home/nilsh/os-contrib/arc-swap/src/ref_cnt.rs:95:9
   --> /home/nilsh/os-contrib/arc-swap/src/ref_cnt.rs:95:9
    |
95  |         me as &T as *const T as *mut T
    |         ^^
note: inside `<arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>> as std::ops::Drop>::drop` at /home/nilsh/os-contrib/arc-swap/src/strategy/hybrid.rs:109:27
   --> /home/nilsh/os-contrib/arc-swap/src/strategy/hybrid.rs:109:27
    |
109 |                 let ptr = T::as_ptr(&self.ptr);
    |                           ^^^^^^^^^^^^^^^^^^^^
    = note: inside `std::ptr::drop_in_place::<arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>> - shim(Some(arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>))` at /home/nilsh/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:487:1
    = note: inside `std::ptr::drop_in_place::<arc_swap::Guard<std::sync::Arc<usize>>> - shim(Some(arc_swap::Guard<std::sync::Arc<usize>>))` at /home/nilsh/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:487:1
    = note: inside `std::ptr::drop_in_place::<[arc_swap::Guard<std::sync::Arc<usize>>]> - shim(Some([arc_swap::Guard<std::sync::Arc<usize>>]))` at /home/nilsh/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:487:1
    = note: inside `<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::ops::Drop>::drop` at /home/nilsh/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/mod.rs:2888:13
    = note: inside `std::ptr::drop_in_place::<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>>> - shim(Some(std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>>))` at /home/nilsh/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:487:1
note: inside closure at tests/stress.rs:241:17
   --> tests/stress.rs:241:17
    |
241 |                 }
    |                 ^

note: some details are omitted, run with `MIRIFLAGS=-Zmiri-backtrace=full` for a verbose backtrace

error: aborting due to previous error

If this is a false positive, you should open an issue on rust-lang/miri.

vorner (Owner) commented Jun 12, 2022

Alright. Seems like it's time to go over the proof again, trying to find the missing edge :-|.

vorner (Owner) commented Jun 16, 2022

Just letting everyone know: this one is probably going to take a while, and I'm getting to it in a spare half hour now and then. I hope to find out what's happening eventually.

vorner (Owner) commented Jun 19, 2022

No results yet, but here are some preliminary findings, in case someone else wants to have a look too, or for me to have something to come back to if I forget.

  • It is quite reliably reproducible in Miri (it doesn't happen every time, but it happens very often, and in two similar but slightly different scenarios ‒ accessing the deallocated data during the dereference or during drop).
  • It is never reproducible in normal tests, even when explicitly checking the reference counts. This could be because Miri simulates a „weakly ordered“ memory model while x86 is „strongly ordered“, but I did previously run the tests on some ARM machines (even multi-core ones). I don't have one at hand right now :-(.
  • Switching all relevant memory orderings to SeqCst doesn't make it go away, which suggests it's not about strong/weak ordering.
  • The dropped guard was (always) acquired through the primary „fast“ track; it has a debt assigned to it.
  • Going back in time to when the fallback strategy was different didn't make it go away.

The feeling is that it is related to:

  • One thread acquiring a debt (the fast/primary way).
  • Another thread swapping the content, which should „pay“ the debt and increment the ref count (incrementing it first).

Miri seems to suggest there is a way for the increment either not to happen at all, or to get reordered past taking the debt out, or something like that. That part needs further investigation.
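The two steps above can be sketched with plain atomics. This is a deliberately simplified, hypothetical model (a single debt slot, made-up names like `Slot` and `fast_load`), not arc-swap's actual implementation; it only illustrates the load/register/confirm dance of the fast path that the later backtraces point at.

```rust
use std::sync::atomic::{
    AtomicPtr, AtomicUsize,
    Ordering::{Acquire, Relaxed, SeqCst},
};

// 0 marks an empty debt slot.
const NONE: usize = 0;

// One debt slot; the real crate keeps these in per-thread nodes on a global list.
struct Slot(AtomicUsize);

// Hypothetical single-slot model of the fast path.
fn fast_load(storage: &AtomicPtr<usize>, slot: &Slot) -> Option<*mut usize> {
    // Step 1: peek at the current pointer (may be outdated under weak memory).
    let ptr = storage.load(Relaxed);
    // Step 2: publish a "debt" for it, so a concurrent swap must pay it
    // (bump the refcount) before the old value may be dropped.
    if slot
        .0
        .compare_exchange(NONE, ptr as usize, SeqCst, Relaxed)
        .is_err()
    {
        return None; // slot busy; real code falls back to the slow path
    }
    // Step 3: confirm the pointer did not change between steps 1 and 2.
    // The suspected failure mode is that this confirmation load also
    // returns an outdated value, so a swap that already ran pays nothing.
    let confirm = storage.load(Acquire);
    if ptr == confirm {
        Some(ptr)
    } else {
        slot.0.store(NONE, SeqCst); // retract the debt
        None
    }
}

fn main() {
    let raw = Box::into_raw(Box::new(42usize));
    let storage = AtomicPtr::new(raw);
    let slot = Slot(AtomicUsize::new(NONE));
    // Uncontended case: the load succeeds and the debt stays registered.
    assert_eq!(fast_load(&storage, &slot), Some(raw));
    assert_eq!(slot.0.load(SeqCst), raw as usize);
    unsafe { drop(Box::from_raw(raw)) };
}
```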

RalfJung commented Jul 2, 2022

It might be worth checking whether the issue reproduces with -Zmiri-preemption-rate=0, -Zmiri-disable-weak-memory-emulation, and/or -Zmiri-compare-exchange-weak-failure-rate=0. The minimal set of flags that makes the problem go away (if any) is a good hint for what kind of problem this is.

The key is to run Miri with many different seeds, via something like

for seed in $({ echo obase=16; seq 0 255; } | bc); do
  MIRIFLAGS=-Zmiri-seed=$seed cargo miri test || { echo "Last seed: $seed"; break; };
done

RalfJung commented Jul 2, 2022

  • I could still reproduce the error with -Zmiri-preemption-rate=0 and even with -Zmiri-preemption-rate=0 -Zmiri-compare-exchange-weak-failure-rate=0, but then it becomes a UAF rather than a race.
  • I could not reproduce the error with -Zmiri-disable-weak-memory-emulation. So probably an outdated atomic load is needed to trigger the issue.
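For context, "outdated atomic load" here means a load legitimately returning a value older than the latest store, which the C++11/Rust memory model allows without a happens-before edge. A minimal generic litmus test (the classic store-buffering shape, not arc-swap code) shows the kind of outcome Miri's weak-memory emulation can produce and -Zmiri-disable-weak-memory-emulation rules out:

```rust
use std::sync::atomic::{AtomicU8, Ordering::Relaxed};
use std::thread;

static X: AtomicU8 = AtomicU8::new(0);
static Y: AtomicU8 = AtomicU8::new(0);

fn main() {
    // Each thread stores its own flag, then loads the other one.
    let a = thread::spawn(|| {
        X.store(1, Relaxed);
        Y.load(Relaxed)
    });
    let b = thread::spawn(|| {
        Y.store(1, Relaxed);
        X.load(Relaxed)
    });
    let (r1, r2) = (a.join().unwrap(), b.join().unwrap());
    // (0, 0) means both loads returned an outdated value: allowed by the
    // model, rarely seen on x86, but produced by Miri's weak-memory
    // emulation. All four outcomes are legal.
    assert!(matches!((r1, r2), (0, 0) | (0, 1) | (1, 0) | (1, 1)));
}
```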

cbeuw commented Jul 2, 2022

Here are the outdated atomic loads leading up to the UAF. There are only three.

(I'm unconditionally cutting off the top two stack frames to show the actual load() location in user code. Ignore all backtraces other than the ones starting with "read outdated atomic value")

MIRIFLAGS='-Zmiri-disable-stacked-borrows -Zmiri-preemption-rate=0' cargo miri test default::load_parallel_small

    Finished test [unoptimized + debuginfo] target(s) in 0.05s
     Running unittests src/lib.rs (target/miri/aarch64-apple-darwin/debug/deps/arc_swap-5e3d7c173e323133)
     Running tests/random.rs (target/miri/aarch64-apple-darwin/debug/deps/random-0fd3e2fd7634713d)
     Running tests/stress.rs (target/miri/aarch64-apple-darwin/debug/deps/stress-2313bb3f164d2162)
warning: integer-to-pointer cast
   --> /Users/andy/.cargo/registry/src/github.com-1ecc6299db9ec823/once_cell-1.12.0/src/imp_std.rs:217:13
    |
217 |             }
    |             ^ integer-to-pointer cast
    |
    = help: This program is using integer-to-pointer casts or (equivalently) `ptr::from_exposed_addr`,
    = help: which means that Miri might miss pointer bugs in this program.
    = help: See https://doc.rust-lang.org/nightly/std/ptr/fn.from_exposed_addr.html for more details on that operation.
    = help: To ensure that Miri does not miss bugs in your program, use Strict Provenance APIs (https://doc.rust-lang.org/nightly/std/ptr/index.html#strict-provenance, https://crates.io/crates/sptr) instead.
    = help: You can then pass the `-Zmiri-strict-provenance` flag to Miri, to ensure you are not relying on `from_exposed_addr` semantics.
    = help: Alternatively, the `-Zmiri-permissive-provenance` flag disables this warning.
    = note: backtrace:
    = note: inside `once_cell::imp::initialize_or_wait` at /Users/andy/.cargo/registry/src/github.com-1ecc6299db9ec823/once_cell-1.12.0/src/imp_std.rs:217:13
    = note: inside `once_cell::imp::OnceCell::>::initialize::<[closure@once_cell::sync::OnceCell>::get_or_init<[closure@once_cell::sync::Lazy>::force::{closure#0}]>::{closure#0}], once_cell::sync::OnceCell::get_or_init::Void>` at /Users/andy/.cargo/registry/src/github.com-1ecc6299db9ec823/once_cell-1.12.0/src/imp_std.rs:81:9
    = note: inside `once_cell::sync::OnceCell::>::get_or_try_init::<[closure@once_cell::sync::OnceCell>::get_or_init<[closure@once_cell::sync::Lazy>::force::{closure#0}]>::{closure#0}], once_cell::sync::OnceCell::get_or_init::Void>` at /Users/andy/.cargo/registry/src/github.com-1ecc6299db9ec823/once_cell-1.12.0/src/lib.rs:1063:13
    = note: inside `once_cell::sync::OnceCell::>::get_or_init::<[closure@once_cell::sync::Lazy>::force::{closure#0}]>` at /Users/andy/.cargo/registry/src/github.com-1ecc6299db9ec823/once_cell-1.12.0/src/lib.rs:1023:19
    = note: inside `once_cell::sync::Lazy::>::force` at /Users/andy/.cargo/registry/src/github.com-1ecc6299db9ec823/once_cell-1.12.0/src/lib.rs:1211:13
    = note: inside `> as std::ops::Deref>::deref` at /Users/andy/.cargo/registry/src/github.com-1ecc6299db9ec823/once_cell-1.12.0/src/lib.rs:1221:13
note: inside `lock` at tests/stress.rs:20:5
   --> tests/stress.rs:20:5
    |
20  |     LOCK.lock().unwrap_or_else(PoisonError::into_inner)
    |     ^^^^^^^^^^^
note: inside `load_parallel::>` at tests/stress.rs:222:17
   --> tests/stress.rs:222:17
    |
222 |     let _lock = lock();
    |                 ^^^^^^
note: inside `default::load_parallel_small` at tests/stress.rs:292:17
   --> tests/stress.rs:292:17
    |
292 |                 load_parallel::(ITER_MID);
    |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...
304 | t!(default, DefaultStrategy);
    | ---------------------------- in this macro invocation
note: inside closure at tests/stress.rs:291:13
   --> tests/stress.rs:291:13
    |
291 | /             fn load_parallel_small() {
292 | |                 load_parallel::(ITER_MID);
293 | |             }
    | |_____________^
...
304 |   t!(default, DefaultStrategy);
    |   ---------------------------- in this macro invocation
    = note: this warning originates in the macro `t` (in Nightly builds, run with -Z macro-backtrace for more info)

note: tracking was triggered
--> /Users/andy/Code/arc-swap/src/debt/list.rs:88:36
|
88 | let mut current = unsafe { LIST_HEAD.load(Acquire).as_ref() };
| ^^^^^^^^^^^^^^^^^^^^^^^ read outdated atomic value 0x0000000000000000
|
= note: inside arc_swap::debt::list::Node::traverse::<&arc_swap::debt::list::Node, [closure@arc_swap::debt::list::Node::get::{closure#0}]> at /Users/andy/Code/arc-swap/src/debt/list.rs:88:36
note: inside arc_swap::debt::list::Node::get at /Users/andy/Code/arc-swap/src/debt/list.rs:140:9
--> /Users/andy/Code/arc-swap/src/debt/list.rs:140:9
|
140 | / Self::traverse(|node| {
141 | | node.check_cooldown();
142 | | if node
143 | | .in_use
... |
152 | | }
153 | | })
| |______^
note: inside closure at /Users/andy/Code/arc-swap/src/debt/list.rs:211:40
--> /Users/andy/Code/arc-swap/src/debt/list.rs:211:40
|
211 | head.node.set(Some(Node::get()));
| ^^^^^^^^^^^
= note: inside std::thread::LocalKey::<arc_swap::debt::list::LocalNode>::try_with::<[closure@arc_swap::debt::list::LocalNode::with<arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>, [closure@<arc_swap::strategy::hybrid::HybridStrategy<arc_swap::strategy::hybrid::DefaultConfig> as arc_swap::strategy::sealed::InnerStrategy<std::sync::Arc<usize>>>::load::{closure#0}]>::{closure#0}], arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/thread/local.rs:445:16
note: inside arc_swap::debt::list::LocalNode::with::<arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>, [closure@<arc_swap::strategy::hybrid::HybridStrategy<arc_swap::strategy::hybrid::DefaultConfig> as arc_swap::strategy::sealed::InnerStrategy<std::sync::Arc<usize>>>::load::{closure#0}]> at /Users/andy/Code/arc-swap/src/debt/list.rs:208:9
--> /Users/andy/Code/arc-swap/src/debt/list.rs:208:9
|
208 | / THREAD_HEAD
209 | | .try_with(|head| {
210 | | if head.node.get().is_none() {
211 | | head.node.set(Some(Node::get()));
... |
214 | | f(head)
215 | | })
| |______________^
note: inside <arc_swap::strategy::hybrid::HybridStrategy<arc_swap::strategy::hybrid::DefaultConfig> as arc_swap::strategy::sealed::InnerStrategy<std::sync::Arc<usize>>>::load at /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:184:9
--> /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:184:9
|
184 | / LocalNode::with(|node| {
185 | | let fast = if Cfg::USE_FAST {
186 | | HybridProtection::attempt(node, storage)
187 | | } else {
... |
190 | | fast.unwrap_or_else(|| HybridProtection::fallback(node, storage))
191 | | })
| |______^
note: inside arc_swap::ArcSwapAny::<std::sync::Arc<usize>>::load at /Users/andy/Code/arc-swap/src/lib.rs:443:34
--> /Users/andy/Code/arc-swap/src/lib.rs:443:34
|
443 | let protected = unsafe { self.strategy.load(&self.ptr) };
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside closure at tests/stress.rs:237:51
--> tests/stress.rs:237:51
|
237 | let guards = (0..256).map(|_| shared.load()).collect::<Vec<_>>();
| ^^^^^^^^^^^^^
= note: inside closure at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/adapters/map.rs:84:28
= note: inside <std::ops::Range<i32> as std::iter::Iterator>::fold::<(), [closure@std::iter::adapters::map::map_fold<i32, arc_swap::Guard<std::sync::Arc<usize>>, (), [closure@tests/stress.rs:237:47: 237:64], [closure@std::iter::Iterator::for_each::call<arc_swap::Guard<std::sync::Arc<usize>>, [closure@<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend::{closure#0}]>::{closure#0}]>::{closure#0}]> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:2414:21
= note: inside <std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]> as std::iter::Iterator>::fold::<(), [closure@std::iter::Iterator::for_each::call<arc_swap::Guard<std::sync::Arc<usize>>, [closure@<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend::{closure#0}]>::{closure#0}]> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/adapters/map.rs:124:9
= note: inside <std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]> as std::iter::Iterator>::for_each::<[closure@<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend::{closure#0}]> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:831:9
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/spec_extend.rs:40:17
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_from_iter_nested::SpecFromIterNested<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::from_iter at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/spec_from_iter_nested.rs:62:9
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_from_iter::SpecFromIter<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::from_iter at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/spec_from_iter.rs:33:9
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::iter::FromIterator<arc_swap::Guard<std::sync::Arc<usize>>>>::from_iter::<std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/mod.rs:2645:9
= note: inside <std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]> as std::iter::Iterator>::collect::<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>>> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:1836:9
note: inside closure at tests/stress.rs:237:34
--> tests/stress.rs:237:34
|
237 | let guards = (0..256).map(|_| shared.load()).collect::<Vec<_>>();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

note: tracking was triggered
--> /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:44:19
|
44 | let ptr = storage.load(Relaxed);
| ^^^^^^^^^^^^^^^^^^^^^ read outdated atomic value 0x14dfc0[alloc51026]<1>
|
= note: inside arc_swap::strategy::hybrid::HybridProtection::<std::sync::Arc<usize>>::attempt at /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:44:19
note: inside closure at /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:186:17
--> /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:186:17
|
186 | HybridProtection::attempt(node, storage)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside closure at /Users/andy/Code/arc-swap/src/debt/list.rs:214:17
--> /Users/andy/Code/arc-swap/src/debt/list.rs:214:17
|
214 | f(head)
| ^^^^^^^
= note: inside std::thread::LocalKey::<arc_swap::debt::list::LocalNode>::try_with::<[closure@arc_swap::debt::list::LocalNode::with<arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>, [closure@<arc_swap::strategy::hybrid::HybridStrategy<arc_swap::strategy::hybrid::DefaultConfig> as arc_swap::strategy::sealed::InnerStrategy<std::sync::Arc<usize>>>::load::{closure#0}]>::{closure#0}], arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/thread/local.rs:445:16
note: inside arc_swap::debt::list::LocalNode::with::<arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>, [closure@<arc_swap::strategy::hybrid::HybridStrategy<arc_swap::strategy::hybrid::DefaultConfig> as arc_swap::strategy::sealed::InnerStrategy<std::sync::Arc<usize>>>::load::{closure#0}]> at /Users/andy/Code/arc-swap/src/debt/list.rs:208:9
--> /Users/andy/Code/arc-swap/src/debt/list.rs:208:9
|
208 | / THREAD_HEAD
209 | | .try_with(|head| {
210 | | if head.node.get().is_none() {
211 | | head.node.set(Some(Node::get()));
... |
214 | | f(head)
215 | | })
| |______________^
note: inside <arc_swap::strategy::hybrid::HybridStrategy<arc_swap::strategy::hybrid::DefaultConfig> as arc_swap::strategy::sealed::InnerStrategy<std::sync::Arc<usize>>>::load at /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:184:9
--> /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:184:9
|
184 | / LocalNode::with(|node| {
185 | | let fast = if Cfg::USE_FAST {
186 | | HybridProtection::attempt(node, storage)
187 | | } else {
... |
190 | | fast.unwrap_or_else(|| HybridProtection::fallback(node, storage))
191 | | })
| |______^
note: inside arc_swap::ArcSwapAny::<std::sync::Arc<usize>>::load at /Users/andy/Code/arc-swap/src/lib.rs:443:34
--> /Users/andy/Code/arc-swap/src/lib.rs:443:34
|
443 | let protected = unsafe { self.strategy.load(&self.ptr) };
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside closure at tests/stress.rs:237:51
--> tests/stress.rs:237:51
|
237 | let guards = (0..256).map(|_| shared.load()).collect::<Vec<_>>();
| ^^^^^^^^^^^^^
= note: inside closure at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/adapters/map.rs:84:28
= note: inside <std::ops::Range<i32> as std::iter::Iterator>::fold::<(), [closure@std::iter::adapters::map::map_fold<i32, arc_swap::Guard<std::sync::Arc<usize>>, (), [closure@tests/stress.rs:237:47: 237:64], [closure@std::iter::Iterator::for_each::call<arc_swap::Guard<std::sync::Arc<usize>>, [closure@<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend::{closure#0}]>::{closure#0}]>::{closure#0}]> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:2414:21
= note: inside <std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]> as std::iter::Iterator>::fold::<(), [closure@std::iter::Iterator::for_each::call<arc_swap::Guard<std::sync::Arc<usize>>, [closure@<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend::{closure#0}]>::{closure#0}]> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/adapters/map.rs:124:9
= note: inside <std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]> as std::iter::Iterator>::for_each::<[closure@<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend::{closure#0}]> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:831:9
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/spec_extend.rs:40:17
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_from_iter_nested::SpecFromIterNested<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::from_iter at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/spec_from_iter_nested.rs:62:9
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_from_iter::SpecFromIter<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::from_iter at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/spec_from_iter.rs:33:9
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::iter::FromIterator<arc_swap::Guard<std::sync::Arc<usize>>>>::from_iter::<std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/mod.rs:2645:9
= note: inside <std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]> as std::iter::Iterator>::collect::<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>>> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:1836:9
note: inside closure at tests/stress.rs:237:34
--> tests/stress.rs:237:34
|
237 | let guards = (0..256).map(|_| shared.load()).collect::<Vec<_>>();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

note: tracking was triggered
--> /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:48:23
|
48 | let confirm = storage.load(Acquire);
| ^^^^^^^^^^^^^^^^^^^^^ read outdated atomic value 0x14dfc0[alloc51026]<1>
|
= note: inside arc_swap::strategy::hybrid::HybridProtection::<std::sync::Arc<usize>>::attempt at /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:48:23
note: inside closure at /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:186:17
--> /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:186:17
|
186 | HybridProtection::attempt(node, storage)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside closure at /Users/andy/Code/arc-swap/src/debt/list.rs:214:17
--> /Users/andy/Code/arc-swap/src/debt/list.rs:214:17
|
214 | f(head)
| ^^^^^^^
= note: inside std::thread::LocalKey::<arc_swap::debt::list::LocalNode>::try_with::<[closure@arc_swap::debt::list::LocalNode::with<arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>, [closure@<arc_swap::strategy::hybrid::HybridStrategy<arc_swap::strategy::hybrid::DefaultConfig> as arc_swap::strategy::sealed::InnerStrategy<std::sync::Arc<usize>>>::load::{closure#0}]>::{closure#0}], arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/thread/local.rs:445:16
note: inside arc_swap::debt::list::LocalNode::with::<arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>, [closure@<arc_swap::strategy::hybrid::HybridStrategy<arc_swap::strategy::hybrid::DefaultConfig> as arc_swap::strategy::sealed::InnerStrategy<std::sync::Arc<usize>>>::load::{closure#0}]> at /Users/andy/Code/arc-swap/src/debt/list.rs:208:9
--> /Users/andy/Code/arc-swap/src/debt/list.rs:208:9
|
208 | / THREAD_HEAD
209 | | .try_with(|head| {
210 | | if head.node.get().is_none() {
211 | | head.node.set(Some(Node::get()));
... |
214 | | f(head)
215 | | })
| |______________^
note: inside <arc_swap::strategy::hybrid::HybridStrategy<arc_swap::strategy::hybrid::DefaultConfig> as arc_swap::strategy::sealed::InnerStrategy<std::sync::Arc<usize>>>::load at /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:184:9
--> /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:184:9
|
184 | / LocalNode::with(|node| {
185 | | let fast = if Cfg::USE_FAST {
186 | | HybridProtection::attempt(node, storage)
187 | | } else {
... |
190 | | fast.unwrap_or_else(|| HybridProtection::fallback(node, storage))
191 | | })
| |______^
note: inside arc_swap::ArcSwapAny::<std::sync::Arc<usize>>::load at /Users/andy/Code/arc-swap/src/lib.rs:443:34
--> /Users/andy/Code/arc-swap/src/lib.rs:443:34
|
443 | let protected = unsafe { self.strategy.load(&self.ptr) };
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside closure at tests/stress.rs:237:51
--> tests/stress.rs:237:51
|
237 | let guards = (0..256).map(|_| shared.load()).collect::<Vec<_>>();
| ^^^^^^^^^^^^^
= note: inside closure at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/adapters/map.rs:84:28
= note: inside <std::ops::Range<i32> as std::iter::Iterator>::fold::<(), [closure@std::iter::adapters::map::map_fold<i32, arc_swap::Guard<std::sync::Arc<usize>>, (), [closure@tests/stress.rs:237:47: 237:64], [closure@std::iter::Iterator::for_each::call<arc_swap::Guard<std::sync::Arc<usize>>, [closure@<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend::{closure#0}]>::{closure#0}]>::{closure#0}]> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:2414:21
= note: inside <std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]> as std::iter::Iterator>::fold::<(), [closure@std::iter::Iterator::for_each::call<arc_swap::Guard<std::sync::Arc<usize>>, [closure@<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend::{closure#0}]>::{closure#0}]> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/adapters/map.rs:124:9
= note: inside <std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]> as std::iter::Iterator>::for_each::<[closure@<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend::{closure#0}]> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:831:9
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/spec_extend.rs:40:17
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_from_iter_nested::SpecFromIterNested<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::from_iter at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/spec_from_iter_nested.rs:62:9
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_from_iter::SpecFromIter<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::from_iter at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/spec_from_iter.rs:33:9
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::iter::FromIterator<arc_swap::Guard<std::sync::Arc<usize>>>>::from_iter::<std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/mod.rs:2645:9
= note: inside <std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]> as std::iter::Iterator>::collect::<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>>> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:1836:9
note: inside closure at tests/stress.rs:237:34
--> tests/stress.rs:237:34
|
237 | let guards = (0..256).map(|_| shared.load()).collect::<Vec<_>>();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

error: Undefined Behavior: pointer to alloc51026 was dereferenced after this allocation got freed
--> /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/ptr/mut_ptr.rs:469:18
|
469 | unsafe { intrinsics::offset(self, count) as *mut T }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ pointer to alloc51026 was dereferenced after this allocation got freed
|
= help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
= help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
= note: backtrace:
= note: inside std::ptr::mut_ptr::<impl *mut u8>::offset at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/ptr/mut_ptr.rs:469:18
= note: inside std::sync::Arc::<usize>::from_raw at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/sync.rs:912:17
note: inside <std::sync::Arc<usize> as arc_swap::RefCnt>::from_ptr at /Users/andy/Code/arc-swap/src/ref_cnt.rs:98:9
--> /Users/andy/Code/arc-swap/src/ref_cnt.rs:98:9
|
98 |         Arc::from_raw(ptr)
   |         ^^^^^^^^^^^^^^^^^^
note: inside arc_swap::strategy::hybrid::HybridProtection::<std::sync::Arc<usize>>::new at /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:36:36
--> /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:36:36
|
36 |             ptr: ManuallyDrop::new(T::from_ptr(ptr)),
   |                                    ^^^^^^^^^^^^^^^^
note: inside arc_swap::strategy::hybrid::HybridProtection::<std::sync::Arc<usize>>::attempt at /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:51:27
--> /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:51:27
|
51 |             Some(unsafe { Self::new(ptr, Some(debt)) })
   |                           ^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside closure at /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:186:17
--> /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:186:17
|
186 |                 HybridProtection::attempt(node, storage)
    |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside closure at /Users/andy/Code/arc-swap/src/debt/list.rs:214:17
--> /Users/andy/Code/arc-swap/src/debt/list.rs:214:17
|
214 |                 f(head)
    |                 ^^^^^^^
= note: inside std::thread::LocalKey::<arc_swap::debt::list::LocalNode>::try_with::<[closure@arc_swap::debt::list::LocalNode::with<arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>, [closure@<arc_swap::strategy::hybrid::HybridStrategy<arc_swap::strategy::hybrid::DefaultConfig> as arc_swap::strategy::sealed::InnerStrategy<std::sync::Arc<usize>>>::load::{closure#0}]>::{closure#0}], arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/thread/local.rs:445:16
note: inside arc_swap::debt::list::LocalNode::with::<arc_swap::strategy::hybrid::HybridProtection<std::sync::Arc<usize>>, [closure@<arc_swap::strategy::hybrid::HybridStrategy<arc_swap::strategy::hybrid::DefaultConfig> as arc_swap::strategy::sealed::InnerStrategy<std::sync::Arc<usize>>>::load::{closure#0}]> at /Users/andy/Code/arc-swap/src/debt/list.rs:208:9
--> /Users/andy/Code/arc-swap/src/debt/list.rs:208:9
|
208 | /         THREAD_HEAD
209 | |             .try_with(|head| {
210 | |                 if head.node.get().is_none() {
211 | |                     head.node.set(Some(Node::get()));
... |
214 | |                 f(head)
215 | |             })
    | |______________^
note: inside <arc_swap::strategy::hybrid::HybridStrategy<arc_swap::strategy::hybrid::DefaultConfig> as arc_swap::strategy::sealed::InnerStrategy<std::sync::Arc<usize>>>::load at /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:184:9
--> /Users/andy/Code/arc-swap/src/strategy/hybrid.rs:184:9
|
184 | /         LocalNode::with(|node| {
185 | |             let fast = if Cfg::USE_FAST {
186 | |                 HybridProtection::attempt(node, storage)
187 | |             } else {
... |
190 | |             fast.unwrap_or_else(|| HybridProtection::fallback(node, storage))
191 | |         })
    | |______^
note: inside arc_swap::ArcSwapAny::<std::sync::Arc<usize>>::load at /Users/andy/Code/arc-swap/src/lib.rs:443:34
--> /Users/andy/Code/arc-swap/src/lib.rs:443:34
|
443 |         let protected = unsafe { self.strategy.load(&self.ptr) };
    |                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: inside closure at tests/stress.rs:237:51
--> tests/stress.rs:237:51
|
237 |         let guards = (0..256).map(|_| shared.load()).collect::<Vec<_>>();
    |                                       ^^^^^^^^^^^^^
= note: inside closure at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/adapters/map.rs:84:28
= note: inside <std::ops::Range<i32> as std::iter::Iterator>::fold::<(), [closure@std::iter::adapters::map::map_fold<i32, arc_swap::Guard<std::sync::Arc<usize>>, (), [closure@tests/stress.rs:237:47: 237:64], [closure@std::iter::Iterator::for_each::call<arc_swap::Guard<std::sync::Arc<usize>>, [closure@<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend::{closure#0}]>::{closure#0}]>::{closure#0}]> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:2414:21
= note: inside <std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]> as std::iter::Iterator>::fold::<(), [closure@std::iter::Iterator::for_each::call<arc_swap::Guard<std::sync::Arc<usize>>, [closure@<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend::{closure#0}]>::{closure#0}]> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/adapters/map.rs:124:9
= note: inside <std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]> as std::iter::Iterator>::for_each::<[closure@<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend::{closure#0}]> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:831:9
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_extend::SpecExtend<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::spec_extend at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/spec_extend.rs:40:17
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_from_iter_nested::SpecFromIterNested<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::from_iter at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/spec_from_iter_nested.rs:62:9
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::vec::spec_from_iter::SpecFromIter<arc_swap::Guard<std::sync::Arc<usize>>, std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>>>::from_iter at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/spec_from_iter.rs:33:9
= note: inside <std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>> as std::iter::FromIterator<arc_swap::Guard<std::sync::Arc<usize>>>>::from_iter::<std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]>> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc/src/vec/mod.rs:2645:9
= note: inside <std::iter::Map<std::ops::Range<i32>, [closure@tests/stress.rs:237:47: 237:64]> as std::iter::Iterator>::collect::<std::vec::Vec<arc_swap::Guard<std::sync::Arc<usize>>>> at /Users/andy/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:1836:9
note: inside closure at tests/stress.rs:237:34
--> tests/stress.rs:237:34
|
237 |         let guards = (0..256).map(|_| shared.load()).collect::<Vec<_>>();
    |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

note: some details are omitted, run with MIRIFLAGS=-Zmiri-backtrace=full for a verbose backtrace

error: aborting due to previous error; 1 warning emitted

error: test failed, to rerun pass '--test stress'

@cbeuw

cbeuw commented Jul 2, 2022

In that diagnostic trace, two outdated reads obtained the same value in this function:

fn attempt(node: &LocalNode, storage: &AtomicPtr<T::Base>) -> Option<Self> {
    // Relaxed is good enough here, see the Acquire below
    let ptr = storage.load(Relaxed);
    // Try to get a debt slot. If not possible, fail.
    let debt = node.new_fast(ptr as usize)?;
    let confirm = storage.load(Acquire);
    if ptr == confirm {
        // Successfully got a debt
        Some(unsafe { Self::new(ptr, Some(debt)) })
    } else if debt.pay::<T>(ptr) {
        // It changed in the meantime, we return the debt (that is on the outdated pointer,
        // possibly destroyed) and fail.
        None
    } else {
        // It changed in the meantime, but the debt for the previous pointer was already paid
        // for by someone else, so we are fine using it.
        Some(unsafe { Self::new(ptr, None) })
    }
}

I'm completely unfamiliar with arc-swap internals, so please correct me if I'm wrong: the comments imply that if ptr == confirm then "it didn't change in the meantime", which, as we saw here, doesn't hold. storage could have changed between these two reads, yet both can still see the same outdated value.
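
This can be made concrete with a deterministic, single-threaded sketch (illustrative values only, nothing here is arc-swap code): the ptr == confirm test compares values only, so it also passes when the storage was changed and changed back in between, and, as the Miri trace shows, when both loads simply return the same outdated value.

```rust
use std::sync::atomic::{AtomicUsize, Ordering::{Acquire, Relaxed, SeqCst}};

// Single-threaded illustration: the confirmation load compares values only,
// so it cannot distinguish "unchanged" from "changed and changed back".
fn double_read_check() -> (usize, usize) {
    let storage = AtomicUsize::new(0xA);

    let ptr = storage.load(Relaxed);     // first read, sees 0xA
    storage.store(0xB, SeqCst);          // "another thread" swaps the pointer...
    storage.store(0xA, SeqCst);          // ...and swaps the old one back
    let confirm = storage.load(Acquire); // second read, sees 0xA again

    (ptr, confirm)
}

fn main() {
    let (ptr, confirm) = double_read_check();
    // The check in `attempt` would take the success path here,
    // even though the storage was modified in between.
    assert_eq!(ptr, confirm);
}
```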

@vorner
Owner

vorner commented Jul 2, 2022

Hello. Sorry for setting this one aside for a while, no free time lately :-|. Just came here to answer; I'll try to prioritize this as much as possible.

I'll have to go over the internals myself, over all the proofs there and such, and find out if there's an error. Nevertheless, big thanks for the help; this really narrows down where to look and what to verify/fix.

Maybe I did something wrong in my previous experiments, but I tried replacing all orderings with SeqCst, which didn't seem to help. I'll need to find out why :-O.

@vorner
Owner

vorner commented Jul 4, 2022

I think I'm starting to understand the problem (not the solution yet; thoroughly going over all the orderings in the library and changing them to SeqCst does make the problem go away in Miri, but that would also make the library slow to the point where there would be no benefit in using it…). I'd appreciate a confirmation of my understanding.

The library is inspired by hazard pointers. If there's a pointer that must not be tampered with (e.g. freed), that pointer is stored somewhere (in a slot) and other threads can check whether their pointer is there. I call it a debt (because the reference count on that Arc is smaller than it should be).

This is what happens in the usual case, when there are no collisions, corner cases, etc.:

A writer thread does these operations:

  • Swaps the pointer in the storage (this is SeqCst)
  • Goes over all the debt slots and verifies the pointer is not there.
  • Returns the old pointer to the caller, which can then decrement the ref count, possibly leading to freeing it.

A reader thread:

  • Reads the storage.
  • Stores the value into a slot (using a swap, to make it a read-write operation) with SeqCst.
  • Reads the storage again, to see that it wasn't replaced and that the stored debt is indeed the correct one.
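
The two sequences above can be sketched as a toy model (hypothetical names, not arc-swap's; pointers reduced to integers, one debt slot, 0 meaning "empty"). It shows only the shape of the protocol, not the ordering bug:

```rust
use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};

// Hypothetical model of the protocol described above.
struct Model {
    storage: AtomicUsize, // the shared pointer
    slot: AtomicUsize,    // one debt/hazard slot
}

impl Model {
    fn new(ptr: usize) -> Self {
        Model { storage: AtomicUsize::new(ptr), slot: AtomicUsize::new(0) }
    }

    // Reader: announce the pointer we intend to use, then confirm it.
    fn load(&self) -> Option<usize> {
        let ptr = self.storage.load(SeqCst);
        self.slot.swap(ptr, SeqCst); // read-write op, announces the debt
        if self.storage.load(SeqCst) == ptr {
            Some(ptr) // the writer must now notice our slot
        } else {
            self.slot.swap(0, SeqCst); // changed meanwhile, back off
            None
        }
    }

    // Writer: swap in the new pointer; the old one may be freed
    // only if no slot still holds it.
    fn swap(&self, new: usize) -> (usize, bool) {
        let old = self.storage.swap(new, SeqCst);
        let may_free = self.slot.load(SeqCst) != old;
        (old, may_free)
    }
}

fn main() {
    let m = Model::new(1);
    assert_eq!(m.load(), Some(1)); // uncontended load succeeds
    let (old, may_free) = m.swap(2);
    assert_eq!(old, 1);
    assert!(!may_free); // pointer 1 is still protected by the slot
}
```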

Because both the swap and the store of the debt are SeqCst read-write operations, the threads agree on which one happened first. But I've also assumed that any such pair forms a happens-before synchronization edge (a release->acquire pair). Now, rereading the specs, this seems not to be true. I've thought of SeqCst as pretending all such operations happen on the same „virtual variable“, but that's probably the wrong model.

I suspect that neither today's hardware nor codegen can make the distinction between release->acquire on the same variable and on two different variables; it simply makes sure everything is written out to main memory on any release and read from memory on acquire ‒ tracking the individual variables would be impractical. That would explain why I have never seen the problem manifest even on weak platforms like multi-core ARM. Not that it would be an excuse to have UB 😇.

So, can someone confirm that my understanding of the problem makes sense and this could be the cause? Does Miri do this kind of tracking of which variable the Release & Acquire happen on?

If so, my plan would be:

  • Figure out how to introduce the missing edge without changing everything to SeqCst.
  • Go over the proofs and code and see if I've made the same mistake in other places too.
  • Do a release.
  • Decide if this calls for a CVE (or whatever the right process is these days); if the above explanation is confirmed, it is UB, but one that was unlikely to ever have had an opportunity to trigger anything bad.

@cbeuw

cbeuw commented Jul 4, 2022

Thanks for the explanation. The summaries of the writer and reader actions are very helpful for understanding the core things at play here, although I'm still not entirely certain where the bug is.

But I've also assumed that any such pair forms a happens-before synchronization edge (a release->acquire pair). Now, rereading the specs, this seems not to be true.

If (and only if) the acquiring load reads the value the releasing store has written, then yes it creates a synchronisation edge between them.

I've thought of SeqCst as pretending all such operations happen on the same „virtual variable“, but that's probably the wrong model.

It does, although SeqCst does not provide any extra synchronisation opportunities beyond what Acquire/Release does. There are two things at play: synchronisation ("happens-before" edges) and modification order ("reads-from" edges). A SeqCst load guarantees that you will always see the latest SeqCst store (should one exist) and nothing earlier. Since SeqCst also provides Acquire/Release semantics, the load creates a synchronisation edge with that earlier store, but not with any other thread, even if the same (or a different) variable has been modified in that other thread before the last SeqCst store.

   T1          T2          T3
x <- 1
            x <- 2
                          a <- x

(sorry for the terrible alignment)
Suppose all accesses above are SeqCst, and their vertical positions represent the execution order. T3's read of x is guaranteed to see T2's write of 2 and synchronises with it. But it produces no relationship with T1.

Whereas if T3's read of x is Acquire (and the two writes remain SeqCst), then it could read 1 or 2, and creates a synchronisation edge with T1 or T2 respectively, but again, not both.

There is no way to create a synchronisation edge with two threads simultaneously (edit: with accesses. Fences can create multiple edges). In the formalisations of the memory model, all relationships are binary.

Does miri make this kind of tracking of which variable the Release & Acquire happens on?

Yes, although a synchronisation edge affects everything before the releasing action in the releasing thread, and everything after the acquiring action in the acquiring thread. There is no synchronisation edge that only provides guarantees on a single location.

distinction between release->acquire on the same variable and on two different variables

You cannot create a release->acquire relationship across two different locations (from whom would the load acquire a value?). But as I mentioned above, if you do have a synchronisation edge from a paired-up release->acquire on one memory location, all actions before the release in one thread happen before all actions after the acquire in the other thread.

@svr8450

svr8450 commented Jul 4, 2022

I suspect that neither today's hardware nor codegen can make the distinction between release->acquire on the same variable and on two different variables; it simply makes sure everything is written out to main memory on any release and read from memory on acquire ‒ tracking the individual variables would be impractical.

This isn't directly relevant to the issue so I won't be long-winded about it, but this assumption is not correct. Microarchitectures in extremely common use today absolutely make this distinction. Atomic acquire and release are not typically implemented by flushing caches all the way to main memory; that would make atomic operations outrageously slow. Actual implementations are beyond the scope of this comment, but suffice it to say that they rely on communication and coordination between the caches associated with each core, with granularity much smaller than "all of memory". (Typically, a single cache line.)

With this in mind, to reemphasize a point from the previous comment: Release/Acquire only does something when the Acquire reads the value written by the Release. That is, it's not even enough that they happen to be on the same variable; they have to agree on a particular write of that variable. SeqCst often doesn't help because it is subject to the same limitation.
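
For what it's worth, the race discussed in this thread has the shape of the classic store-buffering litmus test: each thread stores to its own variable and then loads the other one (the writer swaps the storage then scans the slots; the reader writes its slot then re-reads the storage). Here's a standalone sketch (my own code, not arc-swap's). With all four accesses SeqCst, the outcome where both loads return 0 is forbidden; with Release stores paired with Acquire loads it would be allowed, since neither load is required to read from the other thread's store.

```rust
use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};
use std::sync::Arc;
use std::thread;

// Store-buffering litmus test: each thread stores to its own flag,
// then loads the other thread's flag.
fn sb_once() -> (usize, usize) {
    let x = Arc::new(AtomicUsize::new(0));
    let y = Arc::new(AtomicUsize::new(0));
    let (x2, y2) = (Arc::clone(&x), Arc::clone(&y));

    let t1 = thread::spawn(move || {
        x2.store(1, SeqCst);
        y2.load(SeqCst)
    });
    // The main thread plays the second role concurrently.
    y.store(1, SeqCst);
    let r2 = x.load(SeqCst);
    let r1 = t1.join().unwrap();
    (r1, r2)
}

fn main() {
    for _ in 0..1000 {
        let (r1, r2) = sb_once();
        // With SeqCst, at least one thread must observe the other's store.
        assert!(r1 == 1 || r2 == 1, "SC forbids both loads seeing 0");
    }
}
```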

@chorman0773

chorman0773 commented Jul 5, 2022

Atomic acquire and release are not typically implemented by flushing caches all the way to main memory; that would make atomic operations outrageously slow

And on x86 (or other strongly ordered architectures) that would mean all memory operations would be outrageously slow. You'd probably need 112 more registers to make that usable to the extent it's used today.

@vorner
Owner

vorner commented Jul 5, 2022

Thank you, you seem to have confirmed my suspicion, even though I apparently didn't explain very well what I was mistaken about. I'll try again just for the record, but I think I now understand what goes on in there.

  • I know about the fact that one needs to observe the written value on release-acquire, and even that the previous writes don't count (that's why many read-write operations in the code use AcqRel, to „pass on“ the synchronization from previous writes transitively; I've fallen into that particular trap before). Because there's no such thing as a „global clock“ between threads, having a witnessed value is the only sane way to reason about something that happened previously.
  • Nevertheless, SeqCst operations provide a kind of „global clock“. With two SeqCst operations, one can reasonably talk about which one happened first, even though they happened on different values or even different atomic variables. That led me to think that, because the need for the witness value was now gone, an a.store(Release) followed by a b.load(Acquire) would form a synchronization edge.

Yesterday's reading through the spec and your answers confirmed that this understanding was wrong. I had wrongly counted on a synchronization edge between storage.swap(ptr, SeqCst) in the writer thread and slot.swap(debt, SeqCst) in the reader thread, and that edge doesn't exist ‒ which leads to the possibility of a stale read.

Sorry for the oversimplification with the flushing to memory. I know it's much more complex; I meant it as a kind of conceptual flush of all changes so they become visible, plus a conceptual check for changes from others. That is, I was trying to figure out why the bug didn't manifest when stress-testing on ARM, even though ARM is a weak platform. What I suspect is that ARM is still a bit stronger than the memory model requires and in fact does provide the above edge. I think so because otherwise the hardware would somehow have to distinguish between two swap(SeqCst)s on the same variable and on two different variables, while still synchronizing everything to the point where their order is agreed on by all threads ‒ which would likely be harder than just dealing with both cases in the same way, by a full sync.

So, now I'll get back to the paper and figure out how to get the missing edge in there.
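
For the record, one standard way to introduce such an edge without promoting every access to SeqCst is a pair of SeqCst fences: unlike SeqCst accesses, fences participate in the single total order regardless of which variables the surrounding accesses touch. This is a generic sketch of that technique, not necessarily what the eventual fix does:

```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering::{Relaxed, SeqCst}};
use std::sync::Arc;
use std::thread;

// Store-buffering shape again, but with Relaxed accesses separated by
// SeqCst fences. The fences restore the "both loads see 0" prohibition
// without promoting every access to SeqCst.
fn fenced_sb() -> (usize, usize) {
    let x = Arc::new(AtomicUsize::new(0));
    let y = Arc::new(AtomicUsize::new(0));
    let (x2, y2) = (Arc::clone(&x), Arc::clone(&y));

    let t = thread::spawn(move || {
        x2.store(1, Relaxed);
        fence(SeqCst);
        y2.load(Relaxed)
    });
    // The main thread plays the second role concurrently.
    y.store(1, Relaxed);
    fence(SeqCst);
    let r2 = x.load(Relaxed);
    (t.join().unwrap(), r2)
}

fn main() {
    for _ in 0..1000 {
        let (r1, r2) = fenced_sb();
        assert!(r1 == 1 || r2 == 1, "fences forbid both loads seeing 0");
    }
}
```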

vorner added a commit that referenced this issue Jul 11, 2022
... TODO: Describe rationale ...
... TODO: Is that all that's needed? ...

Maybe a solution for #76.
vorner added a commit that referenced this issue Jul 11, 2022
... TODO: Describe rationale ...
... TODO: Is that all that's needed? ...

Maybe a solution for #76.
vorner added a commit that referenced this issue Jul 21, 2022
... TODO: Describe rationale ...
... TODO: Is that all that's needed? ...

Maybe a solution for #76.
@vorner
Owner

vorner commented Jul 30, 2022

I've released a fix. That is, it definitely fixes the issues reported by Miri. I still want to go over the proofs a few more times and check whether I made the wrong assumption in some more places, so I'll keep this open for now.

bors added a commit to rust-lang/miri that referenced this issue Jun 27, 2023
Add missed trophy

vorner/arc-swap#76 appears to have been missed
RalfJung pushed a commit to RalfJung/rust that referenced this issue Jun 29, 2023
Add missed trophy

vorner/arc-swap#76 appears to have been missed