Commit
Use the 64b inner::monotonize() implementation, not the 128b one, for aarch64

aarch64 prior to v8.4 (FEAT_LSE2) has no instruction that guarantees an
untorn 128b read other than successfully completing a 128b load/store
exclusive pair (ldxp/stxp) or compare-and-swap (casp), so even a plain 128b
atomic load becomes a read+write. That 128b read+write atomic is more
expensive and less fair than the previous monotonize() implementation, which
used a Mutex on aarch64, especially at large core counts. Switch aarch64 to
the 64b atomic implementation, which is about 13x faster in a benchmark that
makes many calls to Instant::now().
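
The 64b path enforces monotonicity with a CAS loop over a single AtomicU64,
which aarch64 handles with ordinary 64b atomics. A minimal standalone sketch
of that idea follows; the nanosecond packing and the names (LAST_NOW,
monotonize_nanos) are illustrative assumptions, not the exact code in
library/std/src/time/monotonic.rs.

use std::sync::atomic::{AtomicU64, Ordering};

// Illustrative only: std packs the instant differently; here the timestamp
// is simply nanoseconds since an arbitrary reference point.
static LAST_NOW: AtomicU64 = AtomicU64::new(0);

// Clamp `raw` so that returned values never decrease, using only 64b atomics.
fn monotonize_nanos(raw: u64) -> u64 {
    let mut prev = LAST_NOW.load(Ordering::Relaxed);
    loop {
        if raw <= prev {
            // The raw reading is not ahead of the last value handed out;
            // return the previous value so callers never see time go
            // backwards. This path is a plain 64b load with no store.
            return prev;
        }
        match LAST_NOW.compare_exchange_weak(prev, raw, Ordering::Relaxed, Ordering::Relaxed) {
            Ok(_) => return raw,
            Err(observed) => prev = observed,
        }
    }
}

fn main() {
    // Raw readings that jitter backwards still yield a non-decreasing result.
    for raw in [10, 12, 11, 15, 14] {
        println!("raw = {raw:>2} ns -> monotonic = {} ns", monotonize_nanos(raw));
    }
}

The relevant contrast is that the fast path above is a plain 64b load, whereas
an untorn 128b read on pre-LSE2 aarch64 always costs a read+write pair.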
AGSaidi committed Sep 4, 2021
1 parent 72a51c3 commit ce450f8
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions library/std/src/time/monotonic.rs
@@ -5,7 +5,7 @@ pub(super) fn monotonize(raw: time::Instant) -> time::Instant {
     inner::monotonize(raw)
 }

-#[cfg(all(target_has_atomic = "64", not(target_has_atomic = "128")))]
+#[cfg(any(all(target_has_atomic = "64", not(target_has_atomic = "128")), target_arch = "aarch64"))]
 pub mod inner {
     use crate::sync::atomic::AtomicU64;
     use crate::sync::atomic::Ordering::*;
@@ -70,7 +70,7 @@ pub mod inner {
     }
 }

-#[cfg(target_has_atomic = "128")]
+#[cfg(all(target_has_atomic = "128", not(target_arch = "aarch64")))]
 pub mod inner {
     use crate::sync::atomic::AtomicU128;
     use crate::sync::atomic::Ordering::*;
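
As a side note on the cfg predicates themselves: aarch64 advertises both 64b
and 128b atomics, so the explicit target_arch clauses are what steer aarch64
onto the 64b module. A hypothetical standalone demo of that gating (the
module bodies are placeholders, not std's):

// Placeholder modules to demonstrate the gating above; the bodies are not
// std's, and std also keeps a Mutex-based fallback (omitted here) for
// targets that have neither atomic width.
#[cfg(any(all(target_has_atomic = "64", not(target_has_atomic = "128")), target_arch = "aarch64"))]
mod inner {
    pub fn which() -> &'static str {
        "64b atomic implementation"
    }
}

#[cfg(all(target_has_atomic = "128", not(target_arch = "aarch64")))]
mod inner {
    pub fn which() -> &'static str {
        "128b atomic implementation"
    }
}

fn main() {
    // On aarch64 the first module wins; on a non-aarch64 target with native
    // 128b atomics the second one does.
    println!("selected: {}", inner::which());
}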
