Fault-tolerant GHC compilation pipeline #63


Conversation

@michaelpj commented Jan 5, 2024

This proposal aims to let GHC make further progress in the presence of errors. This has two benefits:

  1. Giving users more diagnostics
  2. More reliably producing (partial) output from compiler passes for use by tools

This is a HF proposal: it requires a substantial amount of work that will need funding.

I'm keen to get feedback on how much work this would be. I have an intuition in the abstract, but I don't know how difficult it would be in GHC.

rendered proposal

@simonpj (Contributor) commented Jan 8, 2024

I'm all for this in principle. Some comments.

  • The parser is particularly bad at the moment. It just fails right away. A decent attempt to recover would be good.

  • We need to be careful that an attempt to recover in one phase does not produce misleading errors in a subsequent phase. Examples:

    data T x == K1 | K2
    f :: T a -> Int
    f K1 = 3
    

    The data type declaration is syntactically bogus. But we don't really want to report out-of-scope T and K1 arising from f's definition.

    I'm sure there are lots of other examples. To me this is the biggest potential problem with attempting to recover. Sadly, the devil is in the details; one needs to think about this in every individual case.

  • The renamer and typechecker already make pretty significant attempts to recover. For example:

    • If the renamer finds an unbound identifier, it doesn't complain; it just uses a HsUnboundVar. The typechecker doesn't fail immediately either; it adds an unbound-var constraint to be reported later.
    • If the typechecker can't reasonably continue at some point it fails in the monad; but it then tries to
      recover (search for recoverM). Notably, at a binding f = e, if typechecking e fails, the typechecker
      recovers by adding f :: forall a. a to the type environment and carries on with the body of the binding.
      That is usually good enough, but it can give rise to ambiguity errors if, say, f's real type is Int -> Int.
      See previous comment.
  • It's not clear that proceeding beyond typechecking makes sense. -fdefer-type-errors can help, but doesn't defer all type errors.

TL;DR: It's not an all-or-nothing thing. GHC already follows the plan outlined here to some extent. The biggest win would (IMHO) be in the parser.
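
The recovery behaviour Simon describes can be sketched in miniature. This is a toy model with invented names (TcM, checkBinding, Wildcard), not GHC's actual code:

```haskell
-- Toy model of typechecker recovery; all names are illustrative.
data Ty = TyInt | Wildcard   -- Wildcard stands in for "f :: forall a. a"
  deriving (Eq, Show)

type TcM a = Maybe a         -- Nothing models a failed pass

-- If the main action fails, fall back instead of aborting the whole pass.
recoverM :: TcM a -> TcM a -> TcM a
recoverM _        (Just x) = Just x
recoverM fallback Nothing  = fallback

-- Check one binding: on failure, bind the name to a wildcard type so that
-- the rest of the module can still be checked.
checkBinding :: String -> TcM Ty -> [(String, Ty)] -> [(String, Ty)]
checkBinding name rhs env =
  case recoverM (Just Wildcard) rhs of
    Just ty -> (name, ty) : env
    Nothing -> env

main :: IO ()
main = do
  print (checkBinding "f" Nothing [])       -- rhs failed: f gets Wildcard
  print (checkBinding "g" (Just TyInt) [])  -- rhs succeeded: g gets TyInt
```

As the last bullet notes, the Wildcard fallback is what later produces ambiguity errors when the real type was more specific.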

@goldfirere (Contributor)

With my Haskeller / GHC contributor hat: I'm in support of trying this.

  • Like Simon, I think the parser is the best place to score a win here.
  • I agree we'll have to worry about bogus downstream errors. Maybe in a stretch of unparseable code, we accumulate a list of identifier names that appear and mark them as potentially in scope? Then later errors around those identifiers can be treated specially (perhaps suppressed).
  • A very cheap-and-cheerful approach to the parser might be simply to break the input file into stretches where the first line of each stretch begins in column 0 and subsequent lines of each stretch do not begin in column 0. (Of course, the entire module can be indented after the module M where, or maybe the file doesn't use layout at all, or other exceptions. But this is an excellent heuristic that will work 99% of the time.) One small challenge is that some work will have to be done to stitch function equations together, but that shouldn't be too bad.
  • Why would the parser have the most benefit? Both because it's the least fault-tolerant phase and the one whose improved behavior will affect the most other tools.
  • One thing that I think would not be that hard in later phases is to continue to the next phase after one fails. That is, if the renamer fails unrecoverably in one definition, we can perhaps still type check other definitions.

With my HF hat on: I'm not sure about the large scope of the proposal. Instead, I think I'd rather see a much smaller proposal (say, focused just on the parser, perhaps with a chosen cheap approach that doesn't require a big rewrite of happy), with the goal of witnessing actual improvement in downstream tooling before repeating the exercise. That is, the proposal as written sounds like we would do a large overhaul of fault-handling in GHC, and then subsequently update tooling to take advantage of the more robust GHC. Instead, I think it would be better to make a small improvement to GHC and then make sure that all the tools around GHC can consume the improvement. That would seem to deliver the improvement to users sooner and mitigate the risk of working in the wrong direction.

@michaelpj (Author)

Sadly, the devil is in the details; one needs to think about this in every individual case.

I'm not sure this is entirely true. I think a general approach of conservatism can work. That is "if you can't get it right, don't say anything". For example, we might say "if in a scope S there is an error in a form that can introduce names, then do not emit out-of-scope errors for names in S". This is still somewhat case-by-case, but I think there are general principles to be had.
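
As an illustrative sketch of that conservatism principle (all names here are hypothetical, not anything in GHC):

```haskell
-- "If you can't get it right, don't say anything": when a scope contained a
-- syntax error in a name-introducing form, drop the out-of-scope diagnostics
-- for that scope rather than report misleading errors.
data Diag = OutOfScope String | SyntaxError String
  deriving (Eq, Show)

suppressDubious :: Bool -> [Diag] -> [Diag]
suppressDubious binderFormErrored = filter keep
  where
    keep (OutOfScope _) = not binderFormErrored
    keep _              = True

main :: IO ()
main = do
  let ds = [SyntaxError "data T x == K1 | K2", OutOfScope "T", OutOfScope "K1"]
  print (suppressDubious True ds)   -- only the syntax error survives
  print (suppressDubious False ds)  -- nothing suppressed
```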

Obviously there are a variety of heuristics of varying levels of conservatism (I do like some of Richard's suggestions too!). Also, we should look in more detail at what other people do.

Aside on the specific example: I would actually hope that a good enough parser could recover in the given program and still realise that there were two constructors called K1 and K2!

The renamer and typechecker already make pretty signficant attempts to recover. For example...

This is just the sort of thing I think we'd want to work out as a first step in the design. Even if the passes do recover to some degree, I don't know whether we then allow future stages to continue. Does the typechecker do the right thing with HsUnboundVar? (This is a rhetorical question: I don't think we should try and answer all such questions here)


With my HF hat on: I'm not sure about the large scope of the proposal.

Yeah, there's a lot in here. I started writing and I realised that there was in principle a lot to do. However, as @simonpj says, it's quite possible that many of these are partially (or completely!) done already - I am writing from a place of moderate ignorance about the facts of GHC.

That is, the proposal as written sounds like we would do a large overhaul of fault-handling in GHC, and then subsequently update tooling to take advantage of the more robust GHC

The idea with the phases was to exactly ensure that we get something useful as quickly as possible. The proposal could very much be done phase-by-phase.

Would it help if we added to each phase "Step N: update a key usecase (e.g. HLS) to benefit from the increased robustness" (which might be a no-op if there are no interface changes)?


I do generally agree that if we do any part of this we should do Phase 1 (the parser). However, given the discussion of GHC's existing fault-tolerance it also sounds like later Phases may be less work. So perhaps it would be a shame to stop when there is more low-hanging fruit to pick.

One approach would be to specifically ask to fund this in stages: i.e. fund Phase 1, assess from that point how much work it would be to progress, fund the next Phase (or not).

@Ericson2314 (Contributor)

The lowest-hanging fruit I know of is errors hiding warnings when the errors are things that could be warnings.

I.e. we have diagnostics that are, as a matter of implementation, "non-fatal". However, we have a policy that they should be fatal, and then we choose not to compute (or display those which we have already computed?) further non-fatal errors/warnings.

I think it would be pretty easy to make sure such "make it fatal" policies do not prevent further errors/warnings from appearing.
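
One way to sketch that policy change: keep severity as data, compute every diagnostic, and apply the "make it fatal" promotion only at reporting time. A hypothetical model, not GHC's actual diagnostic types:

```haskell
-- Severity is data; fatality is a policy applied after collection.
data Severity = Warning | Error deriving (Eq, Show)

data Diagnostic = Diagnostic { sev :: Severity, msg :: String }
  deriving (Eq, Show)

-- Promote warnings to errors (a -Werror-like policy) without discarding any
-- diagnostics already computed or preventing later ones from being computed.
applyWerror :: Bool -> [Diagnostic] -> [Diagnostic]
applyWerror True  = map (\d -> d { sev = Error })
applyWerror False = id

-- Only the exit decision looks at fatality.
exitFatal :: [Diagnostic] -> Bool
exitFatal = any ((== Error) . sev)

main :: IO ()
main = do
  let ds = [ Diagnostic Warning "unused import"
           , Diagnostic Warning "name shadowing" ]
  print (length (applyWerror True ds))   -- still 2: nothing was swallowed
  print (exitFatal (applyWerror True ds))
  print (exitFatal ds)
```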

@michaelpj (Author)

(I was gardening the HLS issue tracker and lo, I found an issue that would be solved by this: haskell/haskell-language-server#273)

@michaelpj (Author)

Another one! haskell/haskell-language-server#853

This one is quite interesting. If the user has specified -Werror, we have a problem, in that a single warning stops us from typechecking downstream modules, and so all IDE features stop working. But the user did specify -Werror, and it would be nicer to honour what they said. At the moment we override it.

But errors from -Werror are manifestly non-fatal, and so if we did Phase 4 of this proposal, then we would be able to still typecheck downstream modules even if there was a -Werror error in upstream. So we would be able to stop messing with the user's flags, because GHC would give us good information in any case!

@sgraf812 commented Jan 11, 2024

The parser is particularly bad at the moment. It just fails right away. A decent attempt to recover would be good.

Agreed. Some progress in !4711, but I'm a bit sceptical; I spent some time looking around at other tools and am convinced that we first need to equip happy with a decent mechanism for classic panic-mode error recovery. The basic idea is quite simple: define anchor/synchronisation tokens such as ITsemi (which are helpfully inserted by layout) to which the parser skips on a parse error. Then Simon's example

data T x == K1 | K2
f :: T a -> Int
f K1 = 3

lays out to (perhaps missing opening and closing braces, I forgot)

data T x == K1 | K2;
f :: T a -> Int;
f K1 = 3;

and parsing after the error may continue in f. Is this prone to skipping many potential errors after the first syntax error? Yes, definitely! But so is every approach that people came up with in the last 50 years and this is good enough to get us started.
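
The skipping step of panic-mode recovery can be sketched in a few lines; a toy model, with ITother standing in for every non-semicolon token:

```haskell
-- Panic-mode recovery: on a parse error, discard tokens up to and including
-- the next synchronisation token (here a layout-inserted semicolon,
-- mirroring ITsemi), then resume parsing from there.
data Token = ITsemi | ITother String
  deriving (Eq, Show)

skipToSync :: [Token] -> [Token]
skipToSync = drop 1 . dropWhile (/= ITsemi)

main :: IO ()
main =
  -- after the error in "data T x == K1 | K2;", skip to the ';' and resume
  print (skipToSync [ITother "==", ITother "K1", ITsemi, ITother "f"])
```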

What we need is something like menhir's %on_error_reduce or yacc's full error-token machinery.

The hacky error token that happy implements is unfortunately insufficient; at least that's the impression I get from looking at !4711 and from having played around with it a little. It is simply there to insert one missing token on error, nothing more, nothing less.

Of course, this needs someone dedicated to reading related approaches (this is especially important!) and implementing a good one in happy. (Do note that menhir is 3 steps ahead of us in that regard, they implement vastly better error recovery mechanisms based on their incremental parsing mode.) I don't think we'll find this someone in time without discussing payment, given how long https://gitlab.haskell.org/ghc/ghc/-/issues/16955 has been lingering. I'd be very happy to advise, having a decent grasp on LALR parsing and happy. Not enough time to take on this project, though.

@goldfirere (Contributor)

I'm not convinced we need to touch happy at all here. If instead we just break the input file into stretches where the first line of each stretch begins in column 0 and subsequent lines of each stretch do not begin in column 0, I think we'll be most of the way there. Each stretch would be fed into the happy-generated parser independently, and then the results would be appropriately combined. A happy-based solution is likely better (recovery in the middle of a parse), but significantly harder.
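
This stretch-splitting heuristic is easy to prototype. A sketch under the stated assumptions (blank lines attach to the preceding stretch; the whole-module-indented case is not handled):

```haskell
import Data.Char (isSpace)

-- A new stretch starts at every line whose first character sits in
-- column 0; indented (or blank) lines attach to the preceding stretch.
-- Each stretch could then be fed to the parser independently.
stretches :: [String] -> [[String]]
stretches []     = []
stretches (l:ls) = (l : cont) : stretches rest
  where
    (cont, rest) = span continues ls
    continues x  = null x || isSpace (head x)

main :: IO ()
main = print (stretches ["f x =", "  x + 1", "", "g = 2"])
```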

@michaelpj (Author)

I don't think we need to decide how we get better error recovery in this proposal. But this discussion does suggest that it's quite a bit of work, since it sounds like what happy provides is insufficient, and so we need to either a) use a different tool, b) enhance happy, c) work around it in some way like Richard suggests.

Perhaps that makes sense as an independent chunk: we might want a different skillset (someone who knows a lot about parsers, rather than someone who is good at wrangling GHC).

@deemp commented Jan 15, 2024

For a), there's possibly relevant info in usethesource/rascal#1909

@gbaz gbaz changed the title Fault-tolerant GHC compilation pipeline RFC: Fault-tolerant GHC compilation pipeline Jan 19, 2024
@gbaz gbaz changed the title RFC: Fault-tolerant GHC compilation pipeline Fault-tolerant GHC compilation pipeline Jan 19, 2024
@gbaz (Collaborator) commented Jan 19, 2024

At our tech track meeting today we discussed this briefly and feel like we're not in a position to comment or decide on this yet -- it's very promising, but we will need the timeline, plan, etc. worked out in more detail, as well as more discussion with GHCHQ regarding the right architecture.

@michaelpj (Author)

Okay. I don't know if it's feasible for me to do that. In particular, I'm not enough of a parser/GHC expert to actually have a good take on what the right thing to do is or how long it would take.

I guess it's not reasonable to say "coming up with a more detailed plan is part of the work". Can we say "the HF would fund this in principle" and then invite a commercial partner to help make it into a more specific plan?

@gbaz (Collaborator) commented Jan 20, 2024

The discussion has been pretty rich so far, with helpful and supportive comments from ghchq. Personally, I would tend to hope that it could develop further into something resembling more a full plan of attack and estimate.

We don't really have a "would fund this in principle" vote as an actionable item, though the overall disposition was pretty favorable. Perhaps at next month's meeting we can have some further discussion and think through what we can do to help this along, and how to handle things that may arise in similar "in between" states.

@goldfirere (Contributor)

In the interest of moving this forward, might I suggest two possibilities:

  1. I conjecture my approach would take ~2 weeks to implement, especially for someone who knows their way around GHC a bit. (Not a lot. A bit.) So we could imagine this proposal becoming a request for funding someone (e.g. a contractor) to undertake this work.
  2. Fixing happy to do the better thing is likely harder. Maybe a good (Google) Summer of Code project? It would need a mentor. But it's somewhat well scoped, is roughly the right size, and might connect with an undergrad's desire for a parser adventure after taking a compilers class. The challenge: finding an appropriate mentor. But maybe that mentor is reading this ticket and will be inspired to volunteer.

Do others have thoughts here? Have I wildly misestimated the challenges somehow?

@sgraf812

For the past week, I've been hacking on a patch for happy to add a new catch terminal to implement what I called a "resumptive" parser. The idea is to specify points where parsing may resume after having encountered an error, for example

Stmts : {- empty -}           { [] }
      | Stmt                  { [$1] }
      | Stmts ';' Stmt        { $1 ++ [$3] }
      | catch ';' Stmt %shift { [$3] } -- Could insert error AST token here in place of $1

Stmt : Exp { ExpStmt $1 }

Exp : '1'                { One }
    | Exp '+' Exp %shift { Plus $1 $3 }

Note the use of catch. For an input such as 1++1;11, the error handler will be called on the second +, which then may call a resumption continuation if it thinks the error was not fatal. In this case, the resumption would pop items off the state stack until it finds the item . catch ';' Stmt, then shifts the artificial catch terminal to enter item catch . ';' Stmt, then discards input tokens until it finds a ;, meaning it will throw away +1 and resume at ;11. Then the error handler will be called again for the second parse error at the last token, and so on.
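
The pop-then-discard walk can be modelled roughly as follows; the state names and the canCatch predicate are invented for illustration, since the real implementation consults happy's LALR tables:

```haskell
-- Toy model of the resumption step: pop parser states until one has an
-- item expecting the catch terminal, then discard input up to the next ';'.
data Tok = Semi | Tok String
  deriving (Eq, Show)

resumeAfterError :: [String] -> [Tok] -> Maybe ([String], [Tok])
resumeAfterError stack input =
  case dropWhile (not . canCatch) stack of
    []     -> Nothing   -- no enclosing catch item: the error is fatal
    stack' -> Just (stack', dropWhile (/= Semi) input)
  where
    -- pretend only the Stmts nonterminal has an item ". catch ';' Stmt"
    canCatch s = s == "Stmts"

main :: IO ()
main =
  -- error at the second '+' of "1++1;11": pop to Stmts, discard "+1",
  -- resume at the ';'
  print (resumeAfterError ["Exp", "Stmts", "Start"]
                          [Tok "+", Tok "1", Semi, Tok "1", Tok "1"])
```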

Incidentally, that is how I think the error token works in bison/yacc, but that name is taken by the mechanism in happy that inserts virtual closing curly braces. Fortunately, the above extension is orthogonal to that and we can accommodate both.

I hope that this will work on GHC as well as on this test case: https://gist.github.com/sgraf812/a55f68b8ede8230ff2fa644d090e726c

@michaelpj (Author)

@sgraf812 is this something you think you are likely to want to take over the finish line into happy and ghc if it seems to work well? Or is it a POC you'd prefer to hand off to someone else?

@jmct (Contributor) commented Jan 24, 2024

I'm also curious about where on the proof-of-concept <--> working solution spectrum this is.

As far as I can tell there are a few paths forward, depending on the answer:

  • nearer to POC: it might still be worth trying the approach suggested by @goldfirere, perhaps with some HF funding for someone to take a time-boxed stab at it

  • if it's somewhere in the middle, perhaps it can be converted into a Summer of Haskell project, we can probably find a few good mentors for this one

  • if it's closer to 'working solution', I have the same questions as @michaelpj, but if it's not something you have the time to take over the finish line, perhaps the HF can fund someone doing so (ideally we'd have a clear path to that finish line).

@sgraf812 commented Jan 25, 2024

My PoC passes happy's testsuite (well, except for an annoying 16-bit issue, see below) but needs a couple of bold changes to happy to work reliably, which could be accommodated in the pending 2.0 release:

  1. Stop replacing the default error action by the most common reduction. Otherwise resumption hands over to an item on the stack that says "I can reduce on this token", when in fact it should say "I would error on this token". That in turn leads to an error after performing a few more reductions, causing an infinite loop if no input was discarded.
  2. Because that leads to larger action tables, we run into a similar situation as in Code generated is unpredictably incorrect haskell/happy#93, where some table entries can't be encoded in 16 bits any longer. Therefore I suggest encoding the relevant table in 32 bits instead: Encode 32 bit integers in arrays haskell/happy#266
  3. This is not a must-have, but the happy LR backend has 24 different modes, making it quite insane to maintain and extend. See Bold change: Continue supporting LR's --array --ghc --coerce mode, abandon the rest haskell/happy#268

IME, bringing these changes through review always takes more time than one would think.
So my plan for now is to bring the patch into a form that allows me to apply it to GHC and see if I can make error resumption work in one particular production. After that, the remaining work should be well-defined enough for a GSoC student. Depending on when that starts, I would be happy to mentor. I have a few other work items on my wish list that I simply lack the time to try out:

  1. Bring the PoC into a mergeable form
  2. Apply it to GHC
  3. See whether we can use happy's GLR mode to replace the DisambECP mechanism documented in Note [Ambiguous syntactic categories]. After all, that is what GLR was made for! Although looking at https://github.com/sgraf812/happy/blob/230979c2e35af03ec15bc93541ad0ec4811383de/packages/backend-glr/data/GLR_Lib.hs#L128 I have my doubts that it is fast enough.
  4. Stretch goal: Pick up work on one of the loose ends

Perhaps some of the current happy maintainers would have ideas here, too? @int-index @Ericson2314, @andreasabel.

@simonpj (Contributor) commented Jan 25, 2024

I conjecture my approach would take ~2 weeks to implement

But what is "my approach"? Scrolling back I think you may mean this:

A very cheap-and-cheerful approach to the parser might be simply to break the input file into stretches where the first line of each stretch begins in column 0 and subsequent lines of each stretch do not begin in column 0.

The proposal is quite high level. I think @rae is suggesting one particular approach to making the parser more fault tolerant. (Meanwhile @sgraf812 suggests another.) Have I understood correctly?

@michaelpj (Author)

GSoC begins at the end of May, but I believe the haskell.org organizers would appreciate ideas within the next week, as it affects how likely we are to be included by Google. So it would be good to submit an idea soon if you want to go that route @sgraf812 , I think it's fine to tweak it later.

@sgraf812

I'll write down a proposal next week.

Meanwhile, I opened https://gitlab.haskell.org/ghc/ghc/-/merge_requests/11990 to report my current progress; bootstrapping GHC with the PoC seems to be working.

@sgraf812 commented Feb 2, 2024

I wrote up a GSoC proposal: haskell-org/summer-of-haskell#182

Feel free to suggest improvements or sharpen the goals.

@michaelpj (Author)

That looks great, thank you for writing that up!

@sgraf812 commented Feb 2, 2024

Huh, while browsing through the excellent references that were suggested in usethesource/rascal#1909, I stumbled over https://arxiv.org/abs/2303.08044, describing a novel GLL backend for happy. Let me invite @ltbinsbe to this discussion, who seems enthusiastic about GLL parsing. The measurements in Fig. 5 of that paper suggest that GLL still has to do some catch up, but perhaps he has opinions on how best to improve GHC's pipeline wrt. error recovery? Maybe he has ideas to improve the resumption mechanism for happy described in #63 (comment)?

@Tritlo commented Feb 16, 2024

I like this proposal, though it is indeed missing some details.

One that I particularly missed is: how will this interact with source plugins? I think we would need to have some flag to indicate "this is authentic from the source" and "this was a guess by GHC".

@michaelpj (Author)

One that I particularly missed is: how will this interact with source plugins? I think we would need to have some flag to indicate "this is authentic from the source" and "this was a guess by GHC".

I guess the answer will be "just the same as it interacts with later pipeline stages". If we conjure up nodes through doing error recovery, then downstream stages might need to know about that, and presumably source plugins will get the same thing.

It might be helpful to have an example of a source plugin and how you would like it to work on a program with parse errors?

@Tritlo commented Feb 18, 2024

I don't have a specific example in mind, but one use case might be a plugin that does different recovery: i.e. you have a HsParsedModule with marked "guess nodes", and the user could then either a) replace those with their own, or b) write a plugin that "accepts" the conjured nodes to allow further progress. It certainly opens up an interesting design space!

@kd1729 commented Mar 3, 2024

Hi @sgraf812, I went through this issue and https://summer.haskell.org/ideas.html#parse-error-recovery
I am interested in this and want to take it up for GSoC '24. I am going to write a draft proposal and would like to get it reviewed by you. Any guidelines or suggestions are appreciated.

@sgraf812 commented Mar 6, 2024

Hi Kaustubh, great! I'll continue conversation in haskell/happy#272 and before long in private.

@xevor11 commented Mar 19, 2024

I'm a potential GSoC contributor, and I am interested in this project! I am currently taking a compiler course at my university along with a functional programming course in Haskell. My thesis project revolves around extending the Cool compiler (a subset of Scala, utilized by Stanford) with LLVM as its backend, using ANTLR. I have recently worked with yacc to build a lexer, and with scala-bison (a version of bison) to build a parser for the same language. Last year, I was a GSoC contributor for the GNU organization, working on adding support for the Hurd OS to the Rust compiler. Since I am a beginner in Haskell but have some experience working with parser generators and defining grammars, I think this project would be a good starting point for me. I'd be happy to have a short private chat, @sgraf812, to discuss my background and the projects I have worked on, and to see if this could be a good fit. This is one project I attempted using the E-Graph library (which might be interesting):
https://github.com/xevor11/E-Graph-Optimizer
also: haskell/happy#272 (comment)
rust-lang/rust#115230
rust-lang/libc#3325
Thanks!
Vedant

@jmct (Contributor) commented Apr 23, 2024

At the TWG meeting we discussed this again and were wondering how the proposal might change with the coming GSOC projects. Overall there's still enthusiasm about the ideas in this proposal, but we aren't fully certain of where it stands in the eyes of the proposers. Of course, part of that is the reality of not knowing how GSOC is going to pan out.

Speaking for myself: I'm wondering if this is something we should explicitly put on the back-burner until after GSOC? Right now it seems like there are a lot of open question marks and it might be easier to revisit once some of the proposed 'side-quests' have been explored/worked out.

@michaelpj (Author)

Yes, I think we should definitely park this until the GSoC project completes; I think that will shed a lot of light.

@dylant-da

Wanted to chime in here to mention an application for a fault-tolerant parser within Daml.

Daml's syntax is essentially Haskell's syntax with a few new varieties of declarations. Currently, we achieve this by forking the compiler and making modifications to Parser.y. It would be great if instead we could call the unmodified parser, which with fault-tolerant parsing would leave those parts of the source that aren't Haskell untouched; we could then pick up those pieces and fix them up afterwards.

Another possible more flexible approach (and possible stretch goal) would be for Happy to take a plugin that it will call when it encounters a parse error, which can then do its own parsing and hands back control to Happy when it's done. This is very flexible, but I'm not sure how difficult that will be to implement with Happy depending on how it's designed and how difficult it would be to use in practice.

Obviously I don't want to change the scope of the project this close to the date - this is mostly my two cents about other potential applications where fault tolerance will be useful. If it turns out enabling the latter plugin approach is easy, that's great too.

@michaelpj (Author)

cc @sgraf812 I wonder if extending the brief of the GSoC project from "recoverable parsing" to "resumable parsing" with a user-supplied continuation would be reasonable? I think that would cover @dylant-da 's use case too?

@dylant-da

Yes, that'd cover our use case

@sgraf812

I think my fork in https://github.com/sgraf812/happy/blob/resumptive/tests/monaderror-resume.y that @Kariiem is working on is somewhat compatible with that use case. You would still need to insert an explicit catch as in the linked example, to define where you actually want to hook into the parser. Without such a hook, it's quite hard to know where exactly you want to resume; for example when the extended declaration syntax shares a prefix with the Haskell one. The parser must then backtrack and be instructed to "try this new kind of declaration"; that's what catch productions do. So if in this line https://github.com/sgraf812/happy/blob/resumptive/tests/monaderror-resume.y#L16 you do not panic in reportError and instead call your own parser, that should hopefully work well.

Unless GHC explicitly provides a hook production of the form catch { return-custom-decl-from-IORef }, you would still need to recompile the parser, though.

I'm not sure if we can change the requirements of the GSoC project after the fact though. At any rate, I hope that once we have catch it should not be too hard to support the desired extension hook as well.
