Auto merge of #105840 - saethlin:ord-cmp, r=<try>
Micro-optimize Ord::cmp for primitives

I originally started looking into this because in MIR, `PartialOrd::cmp` is _huge_, and even for trivial types like `u32`, which are theoretically a single statement to compare, the `PartialOrd::cmp` impl doesn't inline. A significant contributor to the size of the implementation is that it contains two comparisons. And this actually follows through to the final x86_64 codegen too, which is... strange. We don't need two `cmp` instructions to do a single Rust-level comparison. So I started tweaking the implementation and came up with the same thing as #64082 (which I didn't know about at the time). I ran `llvm-mca` on it per the issue linked in the code to establish that it looked better, and submitted it for a benchmark run.
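For reference, a minimal sketch of the two shapes in question (a paraphrase, not the exact library source; the real change to `library/core/src/cmp.rs` is in the diff below):

```rust
use std::cmp::Ordering;

// Current shape: two data-dependent comparisons, which survive into the
// final x86_64 codegen as two `cmp` instructions.
fn cmp_current(a: u32, b: u32) -> Ordering {
    if a < b {
        Ordering::Less
    } else if a == b {
        Ordering::Equal
    } else {
        Ordering::Greater
    }
}

// Branchless shape: compute the -1/0/1 value directly from the two boolean
// tests, which LLVM can lower without branches.
fn cmp_branchless(a: u32, b: u32) -> Ordering {
    match (a > b) as i8 - (a < b) as i8 {
        -1 => Ordering::Less,
        0 => Ordering::Equal,
        _ => Ordering::Greater,
    }
}
```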

The initial benchmark run regresses basically everything. By looking through the cachegrind diffs in the perf report, then the `perf annotate` output for regressed functions, I was able to identify one source of the regression: `Ord::min` and `Ord::max` no longer optimize well. Tweaking them to bypass `Ord::cmp` removed some regressions, but not many.
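The shape of that tweak, hedged and simplified (the actual edit to the default `Ord::min`/`Ord::max` for primitives is in the `library/core` diff below):

```rust
use std::cmp::Ordering;

// Default-style max: routes through `cmp`, so it inherits whatever codegen
// the `Ord::cmp` implementation produces.
fn max_via_cmp<T: Ord>(a: T, b: T) -> T {
    match a.cmp(&b) {
        Ordering::Greater => a,
        _ => b, // like the standard library, prefer `b` on equality
    }
}

// The bypass: a direct comparison that never materializes an `Ordering`.
fn max_direct<T: Ord>(a: T, b: T) -> T {
    if a > b { a } else { b }
}
```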

Diving back into the cachegrind diffs and disassembly, I found one huge widespread issue: the codegen for `Span`'s `hash_stable` regressed because `span_data_to_lines_and_cols` no longer inlined into it, because that function does a lot of `Range<BytePos>::contains`. The implementation of `Range::contains` uses `PartialOrd` multiple times, and we had massively regressed the codegen of `Range::contains`. The root problem here seems to be that `PartialOrd` is derived on `BytePos`, which is a simple wrapper around a `u32`. So for `BytePos`, `PartialOrd::{le, lt, ge, gt}` use the default impls, which go through `PartialOrd::partial_cmp`, and LLVM fails to optimize these combinations of methods with the new `Ord::cmp` implementation. At a guess, the new implementation makes LLVM totally lose track of the fact that `<Ord for u32>::cmp` is an elaborate way to compare two integers.
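A sketch of the chain described above, using a hypothetical `Pos` newtype standing in for `BytePos` (the body paraphrases the defaults rather than quoting the library source):

```rust
// Hypothetical `Pos`, derived exactly like `BytePos`.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct Pos(u32);

// Roughly what `range.contains(&p)` boils down to for `Range<Pos>`:
fn contains(start: Pos, end: Pos, p: Pos) -> bool {
    // Each operator is the default `PartialOrd` method, which calls
    // `partial_cmp`, which for the derived impl calls `Ord::cmp`; LLVM has
    // to see through both layers to turn this back into two plain integer
    // comparisons.
    start <= p && p < end
}
```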

So I have low hopes for this overall, because my strategy for recovering the regressions (which is working) is to avoid the "faster" implementation that this PR is based around. If we have to settle for an implementation of `Ord::cmp` which is sub-optimal on its own but optimizes better in combination with functions that use its return value in specific ways, so be it. However, one of the runs had an improvement in `coercions`. I don't know if that is jitter or a real effect. But I'm still finding threads to pull here, so I'm going to keep at it.

For the moment I am hacking up the implementations on `BytePos` instead of modifying the code that `derive(PartialOrd, Ord)` expands to, because changing the derive would be hard, and it would also mean expanding to more code, perhaps regressing compile time for that reason even if the generated assembly is more efficient.

---

Hacking up the remainder of the `PartialOrd`/`Ord` methods on `BytePos` took us down to 3 regressions and 6 improvements, which is interesting. All the improvements are in `coercions`, so I'm sure this improved _something_, but whether it matters... hard to say. Based on the findings of `@joboet`, I'm going to cherry-pick #106065 onto this branch, because that strategy seems to restore `PartialOrd::lt` and `PartialOrd::ge` to the original codegen, even when they are using our new `Ord::cmp` impl. If the remaining perf regressions are due to a de-optimized `PartialOrd::lt` on a type other than `BytePos`, this might be a further improvement.
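In brief, the cherry-picked strategy rewrites the default operator methods like so (the exact change is in the `library/core` diff below); `is_lt` and friends compile down to a signed comparison of the `Ordering` discriminant against zero:

```rust
// Default `lt` rewritten to extract the `Ordering` and test its i8
// discriminant (`is_lt` becomes `(self as i8) < 0` in this PR), instead of
// pattern-matching `Some(Less)`.
fn lt<T: PartialOrd>(a: &T, b: &T) -> bool {
    if let Some(ordering) = a.partial_cmp(b) { ordering.is_lt() } else { false }
}
```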

---

Okay, that cherry-pick brought us down to 2 regressions, but that might be noise. We still have the same 6 improvements, all on `coercions`.

I think the next thing to try here is modifying the implementation of `derive(PartialOrd)` to automatically emit the modifications that I made to `BytePos` (directly implementing all the methods for newtypes). But even if that works, I think the effect of this change is so mixed that it's probably not worth merging with the current LLVM. What I'm afraid of is that this change currently pessimizes matching on `Ordering`, which is the most natural thing to do with an enum. So I'm not closing this yet, but without a change from LLVM, I have other priorities at the moment.
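For example, the pattern I'm worried about is a plain three-way `match`, the most natural consumer of `cmp`'s result:

```rust
use std::cmp::Ordering;

// With the branchless `cmp`, LLVM may no longer know that the transmuted
// i8 is only ever -1, 0, or 1, so a three-way match like this can codegen
// worse than it does with the current branchy implementation.
fn describe(a: u32, b: u32) -> &'static str {
    match a.cmp(&b) {
        Ordering::Less => "less",
        Ordering::Equal => "equal",
        Ordering::Greater => "greater",
    }
}
```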

r? `@ghost`
bors committed Feb 14, 2024
2 parents ee9c7c9 + dcea8b1 commit b373f97
Showing 3 changed files with 142 additions and 24 deletions.
50 changes: 47 additions & 3 deletions compiler/rustc_span/src/lib.rs
@@ -2191,6 +2191,50 @@ macro_rules! impl_pos {
$(#[$attr])*
$vis struct $ident($inner_vis $inner_ty);

impl ::std::cmp::Ord for $ident {
#[inline(always)]
fn cmp(&self, other: &Self) -> ::std::cmp::Ordering {
self.0.cmp(&other.0)
}

#[inline(always)]
fn min(self, other: Self) -> Self {
Self(self.0.min(other.0))
}

#[inline(always)]
fn max(self, other: Self) -> Self {
Self(self.0.max(other.0))
}
}

impl ::std::cmp::PartialOrd for $ident {
#[inline(always)]
fn partial_cmp(&self, other: &Self) -> Option<::std::cmp::Ordering> {
self.0.partial_cmp(&other.0)
}

#[inline(always)]
fn lt(&self, other: &Self) -> bool {
self.0.lt(&other.0)
}

#[inline(always)]
fn le(&self, other: &Self) -> bool {
self.0.le(&other.0)
}

#[inline(always)]
fn gt(&self, other: &Self) -> bool {
self.0.gt(&other.0)
}

#[inline(always)]
fn ge(&self, other: &Self) -> bool {
self.0.ge(&other.0)
}
}

impl Pos for $ident {
#[inline(always)]
fn from_usize(n: usize) -> $ident {
@@ -2238,19 +2282,19 @@ impl_pos! {
/// A byte offset.
///
/// Keep this small (currently 32-bits), as AST contains a lot of them.
-#[derive(Clone, Copy, PartialEq, Eq, Hash, PartialOrd, Ord, Debug)]
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct BytePos(pub u32);

/// A byte offset relative to file beginning.
-#[derive(Clone, Copy, PartialEq, Eq, Hash, PartialOrd, Ord, Debug)]
+#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct RelativeBytePos(pub u32);

/// A character offset.
///
/// Because of multibyte UTF-8 characters, a byte offset
/// is not equivalent to a character offset. The [`SourceMap`] will convert [`BytePos`]
/// values to `CharPos` values as necessary.
-#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct CharPos(pub usize);
}

67 changes: 46 additions & 21 deletions library/core/src/cmp.rs
@@ -401,12 +401,18 @@ impl Ordering {
/// assert_eq!(Ordering::Equal.is_eq(), true);
/// assert_eq!(Ordering::Greater.is_eq(), false);
/// ```
-#[inline]
+#[inline(always)]
#[must_use]
#[rustc_const_stable(feature = "ordering_helpers", since = "1.53.0")]
#[stable(feature = "ordering_helpers", since = "1.53.0")]
pub const fn is_eq(self) -> bool {
-matches!(self, Equal)
+// Implementation note: It appears (as of 2022-12) that LLVM has an
+// easier time with a comparison against zero like this, as opposed
+// to looking for the `Less` value (-1) specifically, maybe because
+// it's not always obvious to it that -2 isn't possible.
+// Thus this and its siblings below are written this way, rather
+// than the potentially-more-obvious `matches!` version.
+(self as i8) == 0
}

/// Returns `true` if the ordering is not the `Equal` variant.
@@ -420,12 +426,12 @@ impl Ordering {
/// assert_eq!(Ordering::Equal.is_ne(), false);
/// assert_eq!(Ordering::Greater.is_ne(), true);
/// ```
-#[inline]
+#[inline(always)]
#[must_use]
#[rustc_const_stable(feature = "ordering_helpers", since = "1.53.0")]
#[stable(feature = "ordering_helpers", since = "1.53.0")]
pub const fn is_ne(self) -> bool {
-!matches!(self, Equal)
+(self as i8) != 0
}

/// Returns `true` if the ordering is the `Less` variant.
@@ -439,12 +445,12 @@ impl Ordering {
/// assert_eq!(Ordering::Equal.is_lt(), false);
/// assert_eq!(Ordering::Greater.is_lt(), false);
/// ```
-#[inline]
+#[inline(always)]
#[must_use]
#[rustc_const_stable(feature = "ordering_helpers", since = "1.53.0")]
#[stable(feature = "ordering_helpers", since = "1.53.0")]
pub const fn is_lt(self) -> bool {
-matches!(self, Less)
+(self as i8) < 0
}

/// Returns `true` if the ordering is the `Greater` variant.
@@ -458,12 +464,12 @@ impl Ordering {
/// assert_eq!(Ordering::Equal.is_gt(), false);
/// assert_eq!(Ordering::Greater.is_gt(), true);
/// ```
-#[inline]
+#[inline(always)]
#[must_use]
#[rustc_const_stable(feature = "ordering_helpers", since = "1.53.0")]
#[stable(feature = "ordering_helpers", since = "1.53.0")]
pub const fn is_gt(self) -> bool {
-matches!(self, Greater)
+(self as i8) > 0
}

/// Returns `true` if the ordering is either the `Less` or `Equal` variant.
@@ -477,12 +483,12 @@ impl Ordering {
/// assert_eq!(Ordering::Equal.is_le(), true);
/// assert_eq!(Ordering::Greater.is_le(), false);
/// ```
-#[inline]
+#[inline(always)]
#[must_use]
#[rustc_const_stable(feature = "ordering_helpers", since = "1.53.0")]
#[stable(feature = "ordering_helpers", since = "1.53.0")]
pub const fn is_le(self) -> bool {
-!matches!(self, Greater)
+(self as i8) <= 0
}

/// Returns `true` if the ordering is either the `Greater` or `Equal` variant.
@@ -496,12 +502,12 @@ impl Ordering {
/// assert_eq!(Ordering::Equal.is_ge(), true);
/// assert_eq!(Ordering::Greater.is_ge(), true);
/// ```
-#[inline]
+#[inline(always)]
#[must_use]
#[rustc_const_stable(feature = "ordering_helpers", since = "1.53.0")]
#[stable(feature = "ordering_helpers", since = "1.53.0")]
pub const fn is_ge(self) -> bool {
-!matches!(self, Less)
+(self as i8) >= 0
}

/// Reverses the `Ordering`.
@@ -1169,7 +1175,7 @@ pub trait PartialOrd<Rhs: ?Sized = Self>: PartialEq<Rhs> {
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
fn lt(&self, other: &Rhs) -> bool {
-matches!(self.partial_cmp(other), Some(Less))
+if let Some(ordering) = self.partial_cmp(other) { ordering.is_lt() } else { false }
}

/// This method tests less than or equal to (for `self` and `other`) and is used by the `<=`
@@ -1186,7 +1192,7 @@ pub trait PartialOrd<Rhs: ?Sized = Self>: PartialEq<Rhs> {
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
fn le(&self, other: &Rhs) -> bool {
-matches!(self.partial_cmp(other), Some(Less | Equal))
+if let Some(ordering) = self.partial_cmp(other) { ordering.is_le() } else { false }
}

/// This method tests greater than (for `self` and `other`) and is used by the `>` operator.
@@ -1202,7 +1208,7 @@ pub trait PartialOrd<Rhs: ?Sized = Self>: PartialEq<Rhs> {
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
fn gt(&self, other: &Rhs) -> bool {
-matches!(self.partial_cmp(other), Some(Greater))
+if let Some(ordering) = self.partial_cmp(other) { ordering.is_gt() } else { false }
}

/// This method tests greater than or equal to (for `self` and `other`) and is used by the `>=`
@@ -1219,7 +1225,7 @@ pub trait PartialOrd<Rhs: ?Sized = Self>: PartialEq<Rhs> {
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
fn ge(&self, other: &Rhs) -> bool {
-matches!(self.partial_cmp(other), Some(Greater | Equal))
+if let Some(ordering) = self.partial_cmp(other) { ordering.is_ge() } else { false }
}
}

@@ -1563,12 +1569,31 @@ mod impls {
impl Ord for $t {
#[inline]
fn cmp(&self, other: &$t) -> Ordering {
-// The order here is important to generate more optimal assembly.
-// See <https://github.com/rust-lang/rust/issues/63758> for more info.
-if *self < *other { Less }
-else if *self == *other { Equal }
-else { Greater }
+let mut res = 0i8;
+res -= (*self < *other) as i8;
+res += (*self > *other) as i8;
+// SAFETY: The discriminants of `Ordering` were chosen to permit this
+unsafe { crate::mem::transmute(res) }
}

#[inline]
fn max(self, other: Self) -> Self {
if self > other {
self
} else {
other
}
}

#[inline]
fn min(self, other: Self) -> Self {
if self > other {
other
} else {
self
}
}

}
)*)
}
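As a sanity check on the arithmetic in the new `cmp` body above: the two boolean tests combine to exactly the -1/0/1 values used by `Ordering`'s discriminants (a standalone sketch, not part of the commit):

```rust
fn three_way(a: u32, b: u32) -> i8 {
    // Mirrors `res -= (lhs < rhs) as i8; res += (lhs > rhs) as i8;` above.
    (a > b) as i8 - (a < b) as i8
}

fn main() {
    assert_eq!(three_way(1, 2), -1); // Less
    assert_eq!(three_way(2, 2), 0); // Equal
    assert_eq!(three_way(3, 2), 1); // Greater
}
```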
49 changes: 49 additions & 0 deletions tests/codegen/newtype-relational-operators.rs
@@ -0,0 +1,49 @@
// The `derive(PartialOrd)` for a newtype doesn't override `lt`/`le`/`gt`/`ge`.
// This double-checks that the `Option<Ordering>` intermediate values used
// in the operators for such a type all optimize away.

// compile-flags: -C opt-level=1
// min-llvm-version: 15.0

#![crate_type = "lib"]

use std::cmp::Ordering;

#[derive(PartialOrd, PartialEq)]
pub struct Foo(u16);

// CHECK-LABEL: @check_lt
// CHECK-SAME: (i16 %[[A:.+]], i16 %[[B:.+]])
#[no_mangle]
pub fn check_lt(a: Foo, b: Foo) -> bool {
// CHECK: %[[R:.+]] = icmp ult i16 %[[A]], %[[B]]
// CHECK-NEXT: ret i1 %[[R]]
a < b
}

// CHECK-LABEL: @check_le
// CHECK-SAME: (i16 %[[A:.+]], i16 %[[B:.+]])
#[no_mangle]
pub fn check_le(a: Foo, b: Foo) -> bool {
// CHECK: %[[R:.+]] = icmp ule i16 %[[A]], %[[B]]
// CHECK-NEXT: ret i1 %[[R]]
a <= b
}

// CHECK-LABEL: @check_gt
// CHECK-SAME: (i16 %[[A:.+]], i16 %[[B:.+]])
#[no_mangle]
pub fn check_gt(a: Foo, b: Foo) -> bool {
// CHECK: %[[R:.+]] = icmp ugt i16 %[[A]], %[[B]]
// CHECK-NEXT: ret i1 %[[R]]
a > b
}

// CHECK-LABEL: @check_ge
// CHECK-SAME: (i16 %[[A:.+]], i16 %[[B:.+]])
#[no_mangle]
pub fn check_ge(a: Foo, b: Foo) -> bool {
// CHECK: %[[R:.+]] = icmp uge i16 %[[A]], %[[B]]
// CHECK-NEXT: ret i1 %[[R]]
a >= b
}
