
Long-running const-eval (loops) unusable in practice. #93481

Closed
oli-obk opened this issue Jan 30, 2022 · 28 comments · Fixed by #103877
Labels
A-const-eval Area: Constant evaluation, covers all const contexts (static, const fn, ...) P-medium Medium priority T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. T-lang Relevant to the language team, which will review and decide on the PR/issue.

Comments

@oli-obk (Contributor)

oli-obk commented Jan 30, 2022

I've recently hit the problem of the hard error in a case where there is no infinite loop: my code generates PEXT bitboards for a chess engine, and it requires a few seconds to finish, but it's certainly not an infinite loop. The problem is that it forces me into either computing at build time and going the code-generation route (which is not pleasant) or only allowing Rust nightly, which is maybe even less pleasant.

Originally posted by @kirillbobyrev in #71800 (comment)

@oli-obk (Contributor, Author)

oli-obk commented Jan 30, 2022

I moved your comment from the other issue, as it is unrelated. Fixing the other issue will not make your problem worse.

Do you have more information about your failure? E.g. the entire error? Your code should definitely work, and we're gonna make it work.

What makes nightly different for your case?

@kirillbobyrev (Contributor)

kirillbobyrev commented Jan 30, 2022

Thanks a lot! This was the issue linked by the compiler, so I thought my comment belonged there.

What I'm trying to do is fill some arrays with pre-computed values at compile time. The computation shouldn't take more than a few seconds and is mostly bearable when I move it elsewhere, e.g. into a build.rs file that is independent of the project code. However, having it in the build script slows down cargo build and other commands, which creates a huge delay for the tooling (e.g. rust-analyzer) and is quite frustrating because it slows down the feedback cycle by a lot.

The computation on its own is not too "heavy": it fills an array of up to ~1e6 elements, performing ~1e2 operations (rough estimate) for each of them, so it takes seconds rather than minutes or hours.
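A minimal sketch of the pattern being described (filling an array at compile time with a const fn); the size and the per-element operation here are illustrative stand-ins, not the reporter's actual chess code:

```rust
// Illustrative sketch: fill an array at compile time with a const fn.
// N and the hash-like per-element operation are made up for this
// example; the real code fills ~1e6 elements with ~1e2 ops each.
const N: usize = 1024;

const fn generate_table() -> [u32; N] {
    let mut table = [0u32; N];
    let mut i = 0;
    // `while` instead of `for`, since iterators aren't usable in const fn.
    while i < N {
        table[i] = (i as u32).wrapping_mul(2654435761); // Knuth multiplicative hash
        i += 1;
    }
    table
}

const TABLE: [u32; N] = generate_table();

fn main() {
    assert_eq!(TABLE[1], 2654435761);
    println!("table ready: {} entries", TABLE.len());
}
```

Every step of that `while` loop counts against the interpreter's step limit, which is how a table of this shape runs into the error below.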

Error message

When compiling the compile-time version of the computation, I get two errors:

error: any use of this value will cause an error
  --> src/chess/attacks.rs:52:35
   |
14 | / const BISHOP_ATTACK_OFFSETS: [usize; BOARD_SIZE as usize] =
15 | |     generate_table::<BISHOP_ATTACKS_COUNT>(&BISHOP_ATTACK_DIRECTIONS).2;
   | |________________________________________________________________________-
...
52 |               let attacked_square = to_square(column, row);
   |                                     ^^^^^^^^^^^^^^^^^^^^^^
   |                                     |
   |                                     exceeded interpreter step limit (see `#[const_eval_limit]`)
   |                                     inside `generate_attacks` at src/chess/attacks.rs:52:35
   |                                     inside `generate_table::<5248_usize>` at src/chess/attacks.rs:129:61
   |                                     inside `BISHOP_ATTACK_OFFSETS` at src/chess/attacks.rs:15:5
   |
   = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
   = note: for more information, see issue #71800 <https://github.com/rust-lang/rust/issues/71800>

and

error: any use of this value will cause an error
  --> src/chess/attacks.rs:70:5
   |
21 | / const ROOK_ATTACK_OFFSETS: [usize; BOARD_SIZE as usize] =
22 | |     generate_table::<ROOK_ATTACKS_COUNT>(&ROOK_ATTACK_DIRECTIONS).2;
   | |____________________________________________________________________-
...
70 |       while scanning_bit < 64 {
   |  _____^
71 | |         if mask == 0 {
72 | |             break;
73 | |         }
...  |
79 | |         scanning_bit += 1;
80 | |     }
   | |_____^
   |       |
   |       exceeded interpreter step limit (see `#[const_eval_limit]`)
   |       inside `pdep` at src/chess/attacks.rs:70:5
   |       inside `generate_table::<102400_usize>` at src/chess/attacks.rs:128:35
   |       inside `ROOK_ATTACK_OFFSETS` at src/chess/attacks.rs:22:5
   |
   = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
   = note: for more information, see issue #71800 <https://github.com/rust-lang/rust/issues/71800>

I don't know exactly why these two are pointed out by the compiler, but it seems that in both cases the variables in question are really large (they are bitmasks, so I'm moving 1s close to the most significant bit of a u64). The variables themselves are just stored in an array afterwards; they are not used as loop bounds. The only bitmasks that are used as bounds are at most 2^12, which does not look very large.
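For context, the `pdep` in the second error is presumably a software parallel-bit-deposit over a u64 mask, something along these lines (a hypothetical reconstruction matching the loop shape in the error output, not the reporter's actual code):

```rust
// Hypothetical software `pdep` (parallel bit deposit) as a const fn:
// deposit the low bits of `value` into the set positions of `mask`,
// lowest mask bit first. The loop shape mirrors the error output above.
const fn pdep(value: u64, mut mask: u64) -> u64 {
    let mut result = 0u64;
    let mut src_bit = 0u32;
    let mut scanning_bit = 0u32;
    while scanning_bit < 64 {
        if mask == 0 {
            break;
        }
        if mask & 1 == 1 {
            if (value >> src_bit) & 1 == 1 {
                result |= 1 << scanning_bit;
            }
            src_bit += 1;
        }
        mask >>= 1;
        scanning_bit += 1;
    }
    result
}

fn main() {
    // Deposit the low bits of 0b101 into mask positions 1, 3, 5.
    assert_eq!(pdep(0b101, 0b101010), 0b100010);
    println!("ok");
}
```

Calling a loop like this once per table entry multiplies the per-call step count by the table size, which is how the interpreter step limit gets exceeded.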

Unstable Rust

So, the thing that forces me onto nightly Rust is that there I can write:

#![feature(const_eval_limit)]
#![const_eval_limit = "0"]

After I do that and switch to the nightly compiler, the code works as expected and compiles without a noticeable delay (maybe 2-3 extra seconds at most for a single array computation; more if I do 6 computations instead of 2, because tuple destructuring isn't available). But the point is, even at compile time the code appears to be correct and doesn't loop infinitely.

Context

Also, apologies for the code being super dense and maybe hard to understand. Please let me know if you want me to extract (and possibly minimize) the entire snippet to make it independent of the project I'm building (it's quite easy to do so) or just make it cleaner & easier to follow. If you need more information, please let me know.

@oli-obk (Contributor, Author)

oli-obk commented Jan 31, 2022

Thanks for the details, yea, this is an unfortunate situation. The only recommendation I have at this time for stable code is to split the computation into multiple constants and merge them in another constant.

A minimal repro is

const _: () = {
    let mut i = 333_333;
    while i > 0 {
        i -= 1;
    }
};

If the initial value of i is 333_332, then this code compiles.

It is very suboptimal that stable code mentions #[const_eval_limit]. We should probably just stabilize this attribute.

Opinions, @rust-lang/wg-const-eval?
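The split-and-merge workaround can be sketched like this; the step limit is per constant evaluation, so each part gets a fresh budget. The computation and the 150_000-iteration chunk size are illustrative assumptions, not a measured bound:

```rust
// Sketch of the stable-Rust workaround: split one long-running const
// computation into several constants that each stay under the step
// limit, then merge them in a final constant. The chunk size here is
// an assumption, not a measured bound.
const fn sum_range(start: u64, end: u64) -> u64 {
    let mut i = start;
    let mut acc = 0;
    while i < end {
        acc += i;
        i += 1;
    }
    acc
}

// Each part is evaluated as its own constant, resetting the step count.
const PART_A: u64 = sum_range(0, 150_000);
const PART_B: u64 = sum_range(150_000, 300_000);

// Merging is cheap and stays well under any limit.
const TOTAL: u64 = PART_A + PART_B;

fn main() {
    assert_eq!(TOTAL, 299_999 * 300_000 / 2);
    println!("{TOTAL}");
}
```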

@kirillbobyrev (Contributor)

> Thanks for the details, yea, this is an unfortunate situation. The only recommendation I have at this time for stable code is to split the computation into multiple constants and merge them in another constant. [...] It is very suboptimal that stable code mentions #[const_eval_limit]. We should probably just stabilize this attribute.

Thanks! Splitting is... I would say very hard: the code is really tightly coupled, and I don't think I can decompose it.

Stabilizing #![feature(const_eval_limit)] sounds great to me, but better defaults also make sense. I'm not very well versed in the Rust compiler internals, so I wouldn't know where to look, but having some sensible limit that allows a few seconds of compile-time evaluation would probably be preferable. Maybe emit a warning for it, but certainly don't deny such attempts to move parts of a computation to compile time.

@oli-obk (Contributor, Author)

oli-obk commented Jan 31, 2022

> I'm not very well versed in the Rust compiler internals, so I wouldn't know where to look, but having some sensible limit that allows a few seconds of compile-time evaluation would probably be preferable.

Yea... it's hard to come up with a good limit. The problem is that we also want a quick response when you end up in an infinite loop. Can you check what limit allows your example to run, so we have some numbers?

@Mark-Simulacrum (Member)

> The problem is that we also want a quick response when you end up in an infinite loop.

@oli-obk Is there somewhere to read more detail on this? It seems to me that const eval shouldn't necessarily be special compared to just running a program -- i.e., an infinite loop leads to hanging, which we probably 'always' want to diagnose (unlikely to be intended unless written literally as loop {} or so) but otherwise 'seems ok'.

In general a limit seems pretty annoying, since presumably then users will constantly need to be bumping it or setting it "infinitely" high if they're e.g. expecting to deserialize configuration data at compile-time which can be somewhat arbitrarily large. That seems like a pretty terrible experience, and comparatively the value of "detecting stalls" feels lower.

Ideally we'd have a nice way to debug those -- maybe we can hook things up such that rustc will provide and/or dump a const backtrace in some way when stopped via gdb/lldb or something like that? Those are typically the tools I at least reach for when trying to debug a stalled loop in a normal program, and it's not clear that it's terrible for const to be similar.

@RalfJung (Member)

@Mark-Simulacrum I would liken this more to the recursion limit that we have for trait resolution. Both are really just different forms of user-defined computation happening at compile time. The compiler cannot reliably tell apart infinite loops from long computations, so any heuristic that diagnoses infinite loops will be wrong sometimes.

@Mark-Simulacrum (Member)

Sure, they're similar/equivalent at a theoretical level, but I suspect that many users will not view them as such. Notably, it seems easy to write long-running const-eval code (it's just regular Rust code!) whereas the number of folks writing traits and clauses that cause 'sufficiently deep' recursion seems pretty likely to be low and/or at least more indicative of unintentional behavior.

Maybe it just means we should bump the default limit up really high (e.g., so that it takes minutes to get there on an average computer), or make an assumption that most users aren't using const-eval to compute anything serious, but it feels like a materially different case to me compared to trait resolution based on how users typically interact with these things.

My suggestion of basically removing the limit entirely and working on good interfaces to debugging const eval (e.g., making gdb/lldb or similar tooling work well with const eval) sort of goes towards that direction -- it leaves the user to determine whether they've created an infinite loop or not, and then provides tools to fix it, rather than heuristically guessing at an arbitrary point that a loop has been hit.

@kirillbobyrev (Contributor)

kirillbobyrev commented Jan 31, 2022

> I'm not very well versed in the Rust compiler internals, so I wouldn't know where to look, but having some sensible limit that allows a few seconds of compile-time evaluation would probably be preferable.
>
> Yea... it's hard to come up with a good limit. The problem is that we also want a quick response when you end up in an infinite loop. Can you check what limit allows your example to run, so we have some numbers?

A quick binary search over #[const_eval_limit] shows that 40000000 works and 30000000 doesn't, so it's on the order of 3e7..4e7 ops (meaning my original estimate was about right; I honestly didn't expect that).

@RalfJung (Member)

RalfJung commented Jan 31, 2022

> A quick binary search over #[const_eval_limit] shows that 40000000 works and 30000000 doesn't, so it's on the order of 3e7..4e7 ops (meaning my original estimate was about right; I honestly didn't expect that).

So that's 30-40 times the current default limit.
How long does this computation take, roughly?

@kirillbobyrev (Contributor)

> A quick binary search over #[const_eval_limit] shows that 40000000 works and 30000000 doesn't, so it's on the order of 3e7..4e7 ops (meaning my original estimate was about right; I honestly didn't expect that).
>
> So that's 30-40 times the current default limit. How long does this computation take, roughly?

time cargo +nightly build gives ~3.5 secs when I change the file (so, this should include other operations on top of the actual compilation), so not much.

@gavofyork

FWIW I'm also coming up against this. Thankfully I can work around by splitting the expression, but I'd definitely be grateful for stabilising const_eval_limit.

@scottmcm scottmcm added T-lang Relevant to the language team, which will review and decide on the PR/issue. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. labels Jun 29, 2022
@Mythra

Mythra commented Jul 26, 2022

Sorry I'm coming in late. It's not clear to me exactly if there are any blockers with regards to stabilizing const_eval_limit. I have a codebase running up against this (very similarly filling out constant arrays for validation rules that I get from a third-party, that are guaranteed to not change), and would like to use it on stable. Splitting up is not easily doable in this case unfortunately, though in theory possible. Are there any blockers with stabilizing const_eval_limit? If not, would we be okay with a stabilization pr?

@oli-obk (Contributor, Author)

oli-obk commented Jul 27, 2022

I'm on board with just scrapping the limit entirely and adding some way for users to output note or warning messages, so they can println-debug their own long-running const-eval code.

Having a print mechanism doesn't need to be a blocker for just removing the limit.

@oli-obk (Contributor, Author)

oli-obk commented Jul 27, 2022

Wrt automatically detecting loops: we've had that before and it was messy logic. We can detect a plain loop, but anything more complex gets annoying fast.

@Mythra

Mythra commented Jul 27, 2022

I think this all sounds good, sounds like the next steps are (posting explicitly here to make sure everyone's okay with it, and I'm reading @oli-obk 's responses correctly):

  1. Open a PR to remove the limit.
  2. Add on this usecase to: Tracking issue for making dbg!(x) work in const fn #58278, or perhaps open a new issue?
  3. Work on that issue separately as the PR opened in step 1 eventually makes its way to stable.

@RalfJung (Member)

RalfJung commented Jul 30, 2022 via email

@eddyb (Member)

eddyb commented Jul 31, 2022

> if we have a way of emitting diagnostics during CTFE without halting evaluation

You should be able to emit arbitrary diagnostics from at least any part of the compiler that has access to Session, since diagnostics are just methods on that (you can do it with even less but since you have a TyCtxt that should be enough).

Though I am a bit confused since I thought miri already emitted lint warnings and whatnot?
Are they always propagated up using ? then emitted at a central point?
If so, you can just not do that for some warnings.

However, I suppose that extremely speculative interpretation could be an issue.
Even then, you can just buffer them in the interpreter instead of returning Err, and only emit them at the very end if they're not invalidated by e.g. speculation bail-out.
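The buffer-then-emit idea can be modeled minimally as follows; `buffered_lints` echoes the rustc naming mentioned above, but this is a toy model, not the actual interpreter API:

```rust
// Toy model of "buffer instead of emit": during speculative
// interpretation, would-be lints are buffered and only emitted if the
// evaluation is committed; a speculation bail-out discards them.
struct Interp {
    buffered_lints: Vec<String>,
}

impl Interp {
    fn new() -> Self {
        Interp { buffered_lints: Vec::new() }
    }

    // Instead of emitting a diagnostic (or returning Err), buffer it.
    fn warn(&mut self, msg: &str) {
        self.buffered_lints.push(msg.to_string());
    }

    // At the very end, emit the buffer only if the evaluation stands.
    fn finish(self, committed: bool) -> Vec<String> {
        if committed { self.buffered_lints } else { Vec::new() }
    }
}

fn main() {
    let mut speculative = Interp::new();
    speculative.warn("long-running const eval");
    assert!(speculative.finish(false).is_empty()); // bail-out: dropped

    let mut committed = Interp::new();
    committed.warn("long-running const eval");
    assert_eq!(committed.finish(true).len(), 1); // emitted at the end
    println!("ok");
}
```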

@RalfJung (Member)

> Though I am a bit confused since I thought miri already emitted lint warnings and whatnot?
> Are they always propagated up using ? then emitted at a central point?

Do you mean the core interpreter, or Miri-the-tool?

Miri-the-tool has some support for non-halting diagnostics. But the core interpreter never emits a diagnostic, it just raises errors and then it is up to whoever created the InterpCx to handle the resulting InterpResult and do suitable error reporting.

The interpreter runs inside a query, so I have no idea what happens when we start emitting diagnostics in it. If you have lots of constants you might also get lots of diagnostics... and it's unclear how allow should work: is just the allow at the root constant relevant, or the one at the span where the error is triggered, or any span up the call stack? So many questions.

@eddyb (Member)

eddyb commented Jul 31, 2022

> The interpreter runs inside a query so I have no idea what happens when we start emitting diagnostics in it.

All diagnostics involving HIR/MIR/type-system/trait-system/etc. come from inside queries, so that's all good.

This is an example of emitting a MIR lint (with significant metadata from SourceInfo).
Once .build(...) is done on that lint-specific thing, you just get a normal DiagnosticBuilder, and instead of the .emit() at the end you could do .buffer(&mut self.buffered_lints) or something.

(rg '\.buffer\(&mut ' for some examples, that's how I found some stuff)


(just realized that the second part of this is quite long, so I'm posting this smaller comment first)

@eddyb (Member)

eddyb commented Jul 31, 2022

> and it's unclear how allow should work or would work; is just the allow at the root constant relevant or the one at the span where the error is triggered or any span up the call stack?

For "running too long", we can at least narrow the "loopy frames" if we:

  1. have, at the InterpCx level (miri-the-tool's warning sounds a bit like this?):
    • now, a deterministic "timestamp" - counting BBs/statements(?)/"steps"(??)
    • last_notify for when we last warned (or "notified") the user
    • notify on now - last_notify > notify_threshold
      (updating last_notify = now afterwards)
  2. track per-frame start (now when frame was entered, immutable)
  3. (when notifying) skip "fresh" frames (now - frame.start < notify_threshold)
    (they're too "short-lived" to be relevant - though this could use a threshold less than
    notify_threshold just in case there's some for loop layering going on?)
  4. (when notifying) skip "stuck" frames (frame.start < last_notify),
    with the exception of the innermost one, which can be rendered as "still executing ..."
    or "haven't left ... since last warning" etc. - its ancestors are effectively implied
  5. maybe consider exponential backoff? on every notify, notify_threshold *= FACTOR, with
    FACTOR being 1.5, 2.0 etc. - either configurable or just some constant that works well
    (this could make warning spew logarithmic instead of linear in compile time)
    • e.g. if it takes ~2s to hit the first warning, and ~60s to finish:
      FACTOR = 1.0: ~30 warnings in total (T+2, T+4, ..., T+58, T+60)
      FACTOR = 1.5: 6-7 warnings (T+2, T+5, T+9.5, T+16, T+26, T+41.5, T+64)
      FACTOR = 2.0: 4-5 warnings (T+2, T+6, T+14, T+30, T+62)
      FACTOR = 3.0: 3-4 warnings (T+2, T+8, T+26, T+80)

So my thought process here is that for something like this:

const fn root() {
    leaf_1();
    loopy();
    leaf_2();
}
const fn loopy() {
    leaf_3();
    for _ in ... {
        leaf_4();
    }
    leaf_5();
}
const fn leaf_{1,2,...}() {
    // Does some work but relatively quickly.
}

(with potentially more frames above/below loopy that don't themselves introduce extra loops)
we should be able to limit all diagnostics to talk about loopy even if it might not have obvious loops in it
(could instead be doing e.g. a dozen calls to functions that individually are all below the threshold)
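The timestamp/backoff bookkeeping from the numbered list above can be sketched as a toy model (all names, the initial threshold, and the FACTOR = 2 choice are illustrative):

```rust
// Toy model of the notification scheme: a deterministic step counter
// (`now`), the step count at the last notification (`last_notify`),
// and a threshold that doubles after each notification (backoff).
struct NotifyState {
    now: u64,
    last_notify: u64,
    notify_threshold: u64,
}

impl NotifyState {
    // Called once per interpreter step; returns true when a
    // "still running" notification should be emitted.
    fn step(&mut self) -> bool {
        self.now += 1;
        if self.now - self.last_notify > self.notify_threshold {
            self.last_notify = self.now;
            self.notify_threshold *= 2; // FACTOR = 2.0
            return true;
        }
        false
    }
}

// Run `total_steps` interpreter steps and count notifications emitted.
fn simulate(total_steps: u64, initial_threshold: u64) -> u32 {
    let mut state = NotifyState { now: 0, last_notify: 0, notify_threshold: initial_threshold };
    let mut notifications = 0;
    for _ in 0..total_steps {
        if state.step() {
            notifications += 1;
        }
    }
    notifications
}

fn main() {
    // With backoff, the warning count grows logarithmically in run time.
    println!("{}", simulate(1_000_000, 1_000));
}
```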


An even cooler thing (but more expensive so probably not a good idea?) is to have in each frame
history: IndexVec<BasicBlock, Option<Range<_>>>, where:

  • history[B].start: now when B was entered for the first time in this frame
  • history[B].end: now when B was exited most recently in this frame
    (alternatively, history[B].end can be constantly updated, while executing B)

In a sense, this is an approximation of the (BB-level) execution trace for that frame.

When entering a block B more than once (i.e. history[B] is Some), we're looping:

  • let L be the loop-body entry-block of the innermost loop B is in
  • L being an entry-block, dominates its entire loop-body, so L dominates B
    • requires history[L].start < history[B].start once B is reached
      (i.e. B could only have been reached by going through L first)
    • if L was enumerated as one of the dominators of B (see below) using the
      MIR body's static CFG, this can be asserted instead of being a filter, as
      no possible execution trace on that MIR body can produce any other result
    • this definition is specific to SEME/"reducible" loops and will break in
      the event of MEME/"irreducible" loops, but MIR tends not to have those?
  • innermost loop means B can't repeat without going through L first
    • requires history[L].end > history[B].end on every B re-entry
      (i.e. B has not been reached since last going through L)
  • so dominates + innermost require history[L].contains(&history[B]) (on B re-entry)
    • IOW, all past B executions are "sandwiched" between L executions, making
      the execution trace like this (when excluding all other BBs): LBLLBLLLBLBLLBL
      (though from the viewpoint of a single B re-entry, we cannot determine whether
      LBBL may have happened in the past, only if LB->LBB is about to happen)
  • while a necessary condition, this is not sufficient to find a unique L,
      as any "smaller" dominators of B also nested in L will also satisfy it,
      but we can pick L as the "largest" dominator of B that satisfies it
      (with "larger" dominators that don't pass the check being e.g. outer loops)
    • (a "domtree" lets you enumerate dom[B], dom[dom[B]], ... as dominators of B, since "domtrees" are "parenting-only trees", i.e. dom[child] == parent)

Even ignoring loops, conditions like these can be used to categorize a block B wrt last notify:

  • history[B].start < last_notify: B had been reached last time
    • history[B].end < last_notify: haven't reached B since, not relevant
    • history[B].end > last_notify: B was reached again since (relevant - looping)
  • history[B].start > last_notify: B has only been reached more recently (maybe relevant?)

Though you also need to track "total elapsed while in B", not just history[B], to be able to e.g. sort blocks by how much they contributed to the notify threshold being reached (and perhaps combine as many "longest-running" blocks together as needed, until e.g. 50% of notify_threshold is reached, and merge their Spans to describe the relevant region of code to the user).

That only helps you with the start > last_notify case, since for everything else you also don't have a way to know what the old values were at last_notify, but OTOH for start < last_notify the only relevant subset is end > last_notify, which can be described as "this very same loop is still here and still looping".
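The per-frame history table can be modeled minimally like this (a plain Vec stands in for IndexVec, and re-entering a block whose history is already Some is the "looping" signal described above):

```rust
// Toy model of per-frame block history: for each basic block, record
// when it was first entered and most recently visited in this frame.
// Re-entering a block (history already Some) means a loop is running.
use std::ops::Range;

type Timestamp = u64;
type BlockId = usize;

struct FrameHistory {
    history: Vec<Option<Range<Timestamp>>>, // indexed by BlockId
}

impl FrameHistory {
    fn new(num_blocks: usize) -> Self {
        FrameHistory { history: vec![None; num_blocks] }
    }

    // Returns true if this entry is a re-entry, i.e. looping.
    fn enter_block(&mut self, b: BlockId, now: Timestamp) -> bool {
        let slot = &mut self.history[b];
        match slot {
            Some(range) => {
                range.end = now; // most recent visit to B
                true
            }
            None => {
                *slot = Some(now..now); // first visit to B
                false
            }
        }
    }
}

fn main() {
    let mut frame = FrameHistory::new(4);
    // Simulated trace: blocks 0, 1, 2, then back to 1 (a loop).
    assert!(!frame.enter_block(0, 0));
    assert!(!frame.enter_block(1, 1));
    assert!(!frame.enter_block(2, 2));
    assert!(frame.enter_block(1, 3)); // re-entry detected
    println!("loop detected");
}
```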

Well that's enough ranting, I doubt most of this is implementable anyway.

However, if done isolated enough to not leak implementation details into the rest of miri (esp. if this can be done through machine hooks, per-frame machine-specific data, etc.), it might be a decent compromise of "well-bounded" O(Frames * BlocksPerFrame) storage, compared to exact execution traces, which can be many orders of magnitude larger (both are useful in their own right, I suppose).

(it would be fun to end up with a miri-based coverage/heatmap system that merges the per-frame data into something more global/per-instance every time a frame is popped - for the existing instrumentation-based coverage system, miri could even output a file compatible with existing tooling, and all of the above shenanigans aren't even relevant, since miri-the-tool could literally be interpreting the coverage counter increment instructions into a global set of coverage counters, heh)

@eddyb (Member)

eddyb commented Jul 31, 2022

Oh another last-minute idea (should maybe get its own issue?): so right now Ctrl+C just kills rustc, right?
What if we had a way to trigger some slow path in the query system without slowing down the normal fast path?

I guess it would have to go on the "cache miss, time to run provider" codepath... oh yeah we could make the provider fn pointers AtomicPtrs instead, pray and hope relaxed loads are free, and rewrite all of them from another thread when Ctrl+C is observed, to force the custom bail-out code to eventually run.

But for miri specifically, we can have it check some AtomicBool explicitly once in a while, and bail out from within CTFE - potentially it could actually be a whole enum encoded in an atomic with the kind of states that would help coordinate priorities (e.g. if miri has been stuck for a while, we wouldn't want Ctrl+C to be handled through a query it calls, but through miri itself, since that has greater value).
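The miri-specific part of this could look roughly like the following toy interpreter loop polling an AtomicBool; the flag name, the polling interval, and the error message are assumptions, not the real rustc/miri machinery:

```rust
// Toy sketch: a Ctrl+C flag the interpreter polls once in a while,
// bailing out of CTFE cooperatively instead of killing the process.
use std::sync::atomic::{AtomicBool, Ordering};

static CTRL_C_PRESSED: AtomicBool = AtomicBool::new(false);

// A signal handler (or dedicated thread) would set this flag.
fn on_ctrl_c() {
    CTRL_C_PRESSED.store(true, Ordering::Relaxed);
}

// Inside the interpreter's step loop, check the flag periodically.
fn interpret(max_steps: u64) -> Result<u64, &'static str> {
    let mut steps = 0;
    while steps < max_steps {
        // Poll only every 4096 steps so the relaxed load stays cheap.
        if steps % 4096 == 0 && CTRL_C_PRESSED.load(Ordering::Relaxed) {
            return Err("interrupted: const-eval backtrace would go here");
        }
        steps += 1; // stand-in for executing one MIR statement
    }
    Ok(steps)
}

fn main() {
    assert_eq!(interpret(10_000), Ok(10_000));
    on_ctrl_c();
    assert!(interpret(10_000).is_err());
    println!("ok");
}
```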

And after the Ctrl+C thread has tried to get the message across to miri and/or the query system, it can put itself to sleep for long enough (100ms?) to give everything a chance to react, but when it wakes up it either:

  • aborts the process right away, as if no special handling had been done
  • or maybe we can hack something up that hijacks the non-responsive thread?
    (pthread_ APIs don't seem sufficient, but presumably we can attach a debugger)
rustfilt --help > /dev/null || cargo install --force rustfilt
echo 'fn main() {}' | rustc - -C linker="$(
  bash -c '\
    PROXY="$(mktemp --tmpdir proxy.XXXXXXXX)"; \
    echo "#!/usr/bin/env bash" > "$PROXY" && \
    echo "$0" >> "$PROXY" && \
    chmod +x "$PROXY" && \
    echo "$PROXY" \
  ' \
  'gdb -q --pid=$PPID -ex bt | grep "^#" | rustfilt; false'
)"
error: linking with `/tmp/proxy.PM2QqZrH` failed: exit status: 1
  |
  = note: "/tmp/proxy.PM2QqZrH" "-m64" "/tmp/rustcKqEWTn/symbols.o" "rust_out.rust_out.b68f70e0-cgu.0.rcgu.o" "rust_out.rust_out.b68f70e0-cgu.1.rcgu.o" "rust_out.rust_out.b68f70e0-cgu.2.rcgu.o" "rust_out.rust_out.b68f70e0-cgu.3.rcgu.o" "rust_out.rust_out.b68f70e0-cgu.4.rcgu.o" "rust_out.rust_out.b68f70e0-cgu.5.rcgu.o" "rust_out.rust_out.b68f70e0-cgu.6.rcgu.o" "rust_out.fa8jhqqi3in8ke7.rcgu.o" "-Wl,--as-needed" "-L" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib" "-Wl,--start-group" "-Wl,-Bstatic" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-91db243dd05c003b.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libpanic_unwind-72269a4525d4f5cf.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libobject-28d8f1c01a28b12d.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libmemchr-5b78018a9f8ae4bc.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libaddr2line-f4160de9657f17b2.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libgimli-1cd8b958acdf2395.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/librustc_demangle-a4c4a7e7edfa8aea.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd_detect-061c02acc74ada37.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libhashbrown-2aed706f056a5dfb.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libminiz_oxide-1e1f90ff4bfdca6f.rlib" 
"/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libadler-2d16c932daf0ad41.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/librustc_std_workspace_alloc-8f15fae89f489a33.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libunwind-81f3d85dace75e64.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcfg_if-e071db8735f10456.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liblibc-6db7e05a8de4df10.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/liballoc-7c03f666869e802a.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/librustc_std_workspace_core-2a6a2797f7a73818.rlib" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcore-0e3656b1fda5fd7b.rlib" "-Wl,--end-group" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcompiler_builtins-b09abe545ed38eb1.rlib" "-Wl,-Bdynamic" "-lgcc_s" "-lutil" "-lrt" "-lpthread" "-lm" "-ldl" "-lc" "-Wl,--eh-frame-hdr" "-Wl,-znoexecstack" "-L" "/home/eddy/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib" "-o" "rust_out" "-Wl,--gc-sections" "-pie" "-Wl,-zrelro,-znow" "-nodefaultlibs"
  = note: #0  0x00007feeedc4dcb5 in __futex_abstimed_wait_common ()
          #1  0x00007feeedc529e3 in __pthread_clockjoin_ex ()
          #2  0x00007feeede853dd in std::sys::unix::thread::Thread::join ()
          #3  0x00007feef0874f94 in <std::thread::JoinHandle<core::result::Result<(), rustc_errors::ErrorGuaranteed>>>::join ()
          #4  0x00007feef085c3d1 in rustc_interface::util::run_in_thread_pool_with_globals::<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_errors::ErrorGuaranteed>, rustc_driver::run_compiler::{closure#1}>::{closure#0}, core::result::Result<(), rustc_errors::ErrorGuaranteed>> ()
          #5  0x00007feef08847bd in <rustc_driver::RunCompiler>::run ()
          #6  0x00007feef085e5a0 in <core::panic::unwind_safe::AssertUnwindSafe<rustc_driver::main::{closure#0}> as core::ops::function::FnOnce<()>>::call_once ()
          #7  0x00007feef0888d2d in rustc_driver::main ()
          #8  0x0000559ca9014e1d in rustc_main::main ()
          #9  0x0000559ca9014e03 in std::sys_common::backtrace::__rust_begin_short_backtrace::<fn(), ()> ()
          #10 0x0000559ca9014e59 in std::rt::lang_start::<()>::{closure#0} ()
          #11 0x00007feeede5a9be in core::ops::function::impls::{impl#2}::call_once<(), (dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> () at library/core/src/ops/function.rs:280
          #12 std::panicking::try::do_call<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> ()
          #13 std::panicking::try<i32, &(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> ()
          #14 std::panic::catch_unwind<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> ()
          #15 std::rt::lang_start_internal::{closure#2} () at library/std/src/rt.rs:128
          #16 std::panicking::try::do_call<std::rt::lang_start_internal::{closure_env#2}, isize> () at library/std/src/panicking.rs:492
          #17 std::panicking::try<isize, std::rt::lang_start_internal::{closure_env#2}>
          #18 std::panic::catch_unwind<std::rt::lang_start_internal::{closure_env#2}, isize> () at library/std/src/panic.rs:137
          #19 std::rt::lang_start_internal () at library/std/src/rt.rs:128
          #20 0x0000559ca9014e41 in main ()


error: aborting due to previous error

Oh yeah that works, cool!

cc @wesleywiser

@eddyb eddyb changed the title long running const eval unusable in practice Long-running const-eval (loops) unusable in practice. Oct 13, 2022
@tarcieri (Contributor)

We are encountering this limit in practice evaluating constants in a bignum arithmetic library:

RustCrypto/crypto-bigint#130 (comment)

Namely we're trying to add support for modular arithmetic where the moduli are represented as (associated) constants, and encountering this limit while trying to compute the constant values using const fn.

This is a fairly trivial computation compared to what we would like to eventually do with const fn, e.g. compute basepoint tables by evaluating elliptic curve equations at compile time, so hitting the limit already is rather concerning.

@oli-obk oli-obk added P-medium Medium priority A-const-eval Area: Constant evaluation, covers all const contexts (static, const fn, ...) labels Oct 16, 2022
@apiraino (Contributor)

apiraino commented Nov 9, 2022

Bumping up priority so this issue will get more eyeballs during T-compiler meeting

@rustbot -P-medium +P-high

@pnkfelix (Member)

pnkfelix commented Apr 14, 2023

Discussed at 2023 Q1 P-high review

Is this resolved by PR #106227? Or is there more work we need to do here beyond that?

@oli-obk (Contributor, Author)

oli-obk commented Apr 14, 2023

No, that just made us not regress it. We would still need to get rid of the limit entirely and report some sort of progress messages.

Ideally we'd also catch Ctrl+C and print a const eval backtrace before exiting rustc.

@RalfJung (Member)

Something like #103877 would have helped though, right?

@pnkfelix (Member)

After further discussion in P-high review meeting linked above, decided to downgrade this to a P-medium feature request

@rustbot label: -P-high +P-medium

@rustbot rustbot added P-medium Medium priority and removed P-high High priority labels Apr 14, 2023
@bors bors closed this as completed in 23f93a1 Jun 1, 2023