HasField #343
That's a useful way to overload arithmetic; I've done that as well. These kinds of segfaults can definitely happen by design, similar to … I don't think that it's a loop in … I can add a feature to trace the constraints it's trying to resolve; then we should see in this case that there's a cycle (it tries to solve constraint A by solving constraint B, which tries to solve A ...). I don't see offhand where that's happening here, but I will come back when I have a bit more time. For what it's worth, there's also a shorthand for the "HasField" constraint that's a little bit more convenient:
Ah, that shorthand is much nicer. On a potentially related note, would you mind expanding on what … is up to?
Sure, that's another syntax for the same constraint (so these are all different ways to write the … constraint). A simple example is where we want to "lift" all fields up through "maybe" types (you can do the same for array types). A less abstract example is where we have some large universe of data (like a structured log file) and then different types to give a "biased view" into that data (e.g. we just want to be able to see information relevant to one specific order). (One small detail: the "HasField" constraint is one level of abstraction up, because we have other kinds of constraints that want to hook into it besides type classes, but the type class hook is called "SLookup" for "static lookup", since the field to look up is known statically.) So an example might look like:
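The Hobbes example that followed here didn't survive the transcript. As a rough illustration of the idea only (a Python sketch with hypothetical names, not the original code), field access can be "lifted" through a maybe value, so a lookup on a missing value propagates the missing-ness:

```python
# Hypothetical sketch (Python, not Hobbes): lifting field access
# through "maybe" values, so a lookup on None stays None.
from typing import Any, Optional

def get_field(value: Optional[Any], field: str) -> Optional[Any]:
    # The "maybe" overload: a missing value propagates instead of failing.
    if value is None:
        return None
    # The base overload: an ordinary record (here, a dict) lookup.
    return value[field]

order = {"id": 42, "price": 9.99}
assert get_field(order, "id") == 42
assert get_field(None, "id") is None
```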
This last expression "looks" pretty straightforward, but it's calculated first by going through the first field overload that's generic for "maybe" types, and then within that going through the next field overload that takes a field access on an "Order" to be a lookup in this global table. I hope that explains the feature well enough -- I've lost several people at that explanation because (I think) it confuses them, but it's actually a very useful feature that we do use in a lot of different ways.
And maybe a simpler example just with arrays:
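The array example itself was also lost here. A hedged Python sketch of the same idea (hypothetical helper name, not Hobbes) would lift field access pointwise over an array of records:

```python
# Hypothetical sketch: field access lifted over arrays, so selecting
# a field from a list of records yields the list of field values.
def get_field_each(records, field):
    return [r[field] for r in records]

people = [{"name": "a", "age": 1}, {"name": "b", "age": 2}]
assert get_field_each(people, "age") == [1, 2]
```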
By the way, out of curiosity, why not do your arithmetic lift like this?
Hi Adam, I created a pull request to add an option to the shell to allow you to see constraints as they're resolved (#344). This was enough to see what was going on in this case; basically your … Hopefully this feature makes it easier to understand how your type class resolution happens (or why it diverges if it does). With that generic feature aside (hopefully it helps reduce frustration in the future at least), it looks like the problem is in the bit that you added at the end (like you said):
Actually, this definition should have been rejected straight away, because it purports to introduce a scheme for making … But suppose that you did intend for this to generate new … I think that what you actually want at the tail of this script is an … If I replace that bit of your script with this instead:
Then I'm able to have what I think are your expected interactions:
I hope that this has helped. :)
I certainly started like you did above, but then I had to duplicate the logic for another type, and that's a lot of boilerplate to rewrite. I also wasn't really happy with the mixing of the broadcasting and arithmetic logic. By using the slightly odd Op2 class, we avoid both those issues. To lift arithmetic to a new type we just need to define two new instances, op2 and op1 (its unary cousin), rather than the 12 for +, -, *, /, plus a handful of others.
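The Op2 idea described above can be sketched outside of Hobbes. In this Python analogue (hypothetical names, and dynamic dispatch standing in for type class resolution), every binary operator funnels through one `op2` function, so supporting a new shape only requires one new `op2` case rather than one case per operator:

```python
import operator

# Hypothetical sketch of the Op2 idea: all binary operators route
# through a single "op2" dispatcher, so lifting arithmetic to a new
# type needs one new case here, not one per operator.
def op2(f, x, y):
    if isinstance(x, list) and isinstance(y, list):
        return [op2(f, a, b) for a, b in zip(x, y)]
    if isinstance(x, list):  # broadcast scalar on the right
        return [op2(f, a, y) for a in x]
    if isinstance(y, list):  # broadcast scalar on the left
        return [op2(f, x, b) for b in y]
    return f(x, y)

# +, -, *, / are each defined once, in terms of op2.
add = lambda x, y: op2(operator.add, x, y)
mul = lambda x, y: op2(operator.mul, x, y)

assert add([1, 2], [3, 4]) == [4, 6]
assert mul([1, 2], 10) == [10, 20]
```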
Thanks for the code in the PR; that looks very helpful. It looks like I can do what I want with …
Actually, that's not exactly right: it works for the left variant as above, but not in the other case, with getList on the right. Strange. If I define
then 1 + [1] and [1] + 1 work. You mentioned above that as this was unguarded it would necessarily loop, but I don't see the difference between the record thing and the int to [int] promotion, or why the left variant should work and the right not. OK, so this seems to work:
We don't need to assert a pre-existing instance of Op2, just show we can create something which unification can sort out.
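The int-to-[int] promotion being discussed needs to work on both sides of the operator. As a rough Python sketch of the symmetric promotion (hypothetical helpers, not the Hobbes instances):

```python
# Hypothetical sketch: promote a scalar to a singleton list on either
# side of +, so both 1 + [1] and [1] + 1 are accepted.
def promote(x):
    return x if isinstance(x, list) else [x]

def add(x, y):
    xs, ys = promote(x), promote(y)
    return [a + b for a, b in zip(xs, ys)]

assert add(1, [1]) == [2]
assert add([1], 1) == [2]
```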
One more question on deconstructing records in instances. Suppose I write
foo({a=10}) then returns {a=1}, but foo({a=10, b=2}) complains that "Cannot unify types: { .pa:int, b:int } != { b:int }". Why does recordTail seem to have a '.pa' in its return type? This isn't what one sees just typing
in the REPL.
I see. On the … Do you think that … If you think that it should allocate and construct, then reasonable uses of … If you think that it shouldn't allocate and construct, then you have to come up with a description of the "tail" type that is consistent with requirements for memory layout and alignment (that fields are placed at offsets aligned to their type). For example, what's the tail type of … So I went the route of not allocating in … Maybe there's another good way to do it efficiently; I'd be happy to consider it. HOWEVER, all of that aside, I bet you're asking this because you want to also lift arithmetic operations into tuples/records. I have done this, working around the funny business with record deconstruction, in two passes -- first (at compile time) calculating the result type, and then second (at run time) allocating and modifying the result structure. Here's what I mean:
Granted, the action behind the scenes is not the prettiest, but it does get a very nice result (being able to add together records whose shapes line up and whose types can be added pointwise across the record). And it composes with all of the other ways you have of overloading arithmetic (so e.g. lifting record addition into arrays, or addition of records with arrays for fields). If you read this closely, you'll find that the most important bit is in calculating the type, where we have this special "backward mode" use of the "record deconstruction" constraint (written …). Hope this helps.
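The two-pass Hobbes script referenced above is missing from this transcript. As a loose Python sketch of the end result only (Python has no static stage, so the compile-time type calculation collapses into a runtime shape check; names are hypothetical):

```python
# Hypothetical sketch of pointwise record addition: records (dicts
# here) whose shapes line up are added field by field.
def radd(x: dict, y: dict) -> dict:
    assert x.keys() == y.keys(), "record shapes must line up"
    return {k: x[k] + y[k] for k in x}

assert radd({"a": 1, "b": 2.5}, {"a": 10, "b": 0.5}) == {"a": 11, "b": 3.0}
```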
Don't worry, I'm not sitting here writing instances for each prim type. In this case, I had a timeseries type where I wanted to replicate the logic on streams. As for records, I was actually trying to make a copy of a record with a single field modified. Obviously I'm not about to write each individual other name out by hand, there's six of them! So I was having a look at doing this in a generic way. I can probably do this in a very similar way to what you set out above, and in the other comment on fieldvalue. A lens library for hobbes would be nice.
A lens library is a good idea; we've got algebraic data types, so it does little harm to introduce calculus (by way of infinitesimal types). We already have structural types by default, so some of the awkwardness of Haskell lenses (IMHO) can be avoided. I agree with you that it's probably the right idea to differentiate (no pun intended) a record type and a focus into it. We just need to do it in a way that's minimal for space/time (if there's an unnecessary cost, I've found that people will hack around it). Generic deconstruction of sum types has a similar problem, but the try/fail approach with prisms (IIRC) is a disaster, for example when you've got a billion variants and you want to …
FWIW, the path syntax is a pretty useful shorthand for consecutive field selection, for example:
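The path example itself was lost from the transcript. As a rough Python analogue (a hypothetical helper, not Hobbes syntax), a path is just consecutive field selections:

```python
# Hypothetical sketch: a "path" as shorthand for consecutive field
# selection, e.g. get_path(r, "a.b.c") stands in for r.a.b.c.
def get_path(record: dict, path: str):
    for field in path.split("."):
        record = record[field]
    return record

r = {"a": {"b": {"c": 7}}}
assert get_path(r, "a.b.c") == 7
```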
For the kind of updating it sounds like you have in mind, I made a test script based on this same two-stage approach (calculating the result type, then calculating the merge):
So that I get an interaction that I think is similar to what it sounds like you're looking for:
To get that script to work, I had to fix a small bug (in this PR: #345). Just FYI in case you were going down that path, you might want this fix. |
Very nice! Actually, I should be able to modify the rmerge logic to write that in the form …
Yep, we can add …
and then …
Doing that without type annotations is an interesting problem. It's come up before in other ways. The problem is that we've got to communicate to the 'Add' constraint that the first argument to 'RModifyTy' is fixing its second argument. This is basically function overloading (with a record of functions acting like a function). Solving this would also probably solve the currying problem, where we want to calculate closure types from partial applications with a type class like this.
Yes, having looked through the code a bit, the currying problem does seem a bit of a poser.
Putting these previous bits together, I guess you could make something kind of like what you describe this way (using a phantom type to carry the intended modification field):
Then with that set of definitions in place, you can have an interaction like this:
Hope this helps! :)
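The phantom-type definitions referenced in this comment didn't survive. As a loose Python analogue of the underlying idea (a first-class "focus" on one field that can both read and update; class and method names are hypothetical, and Python's dynamic typing stands in for the phantom type parameter):

```python
# Hypothetical sketch of a field "focus" (a poor man's lens): one
# value carrying both how to read and how to update a single field.
class Focus:
    def __init__(self, field: str):
        self.field = field

    def get(self, record: dict):
        return record[self.field]

    def modify(self, record: dict, f):
        # Build a fresh record with just this field transformed.
        return {**record, self.field: f(record[self.field])}

at_a = Focus("a")
r = {"a": 1, "b": 2}
assert at_a.get(r) == 1
assert at_a.modify(r, lambda x: x + 10) == {"a": 11, "b": 2}
assert r == {"a": 1, "b": 2}  # the original record is untouched
```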
Also, if you don't like that syntax for updating and you prefer your …
HTH!
The wife thanks you for removing my excuse to play with this this weekend... but thanks for this. I have no evidence for this at the moment, but it seems using the same object to get and set values would be useful (the fact we no longer needed explicit type annotations on the update function is an indication this is more the right way to do this). If I get some time next week, I'll look at getting paths to work in this. This already starts to give a lot of the utility of a proper lens library in operating with records. If only we had a WW based parser to allow us to define custom operators. [There's something very pleasing about such type-level hackery where over half the lines of actual code involve unsafeCast or newPrim]
Oops, sorry, you'd said that your goal was doomed so I took it as a challenge. If you're still looking for weekend plans, extending it to deeper paths is a good idea, or applying the idea to variants and recursive types could be useful. Maybe better to spend time with your wife though, if she's that annoyed with you. :) I agree with you that it's useful to be able to use the same path to read and update a field. I never did get really into lenses in Haskell, maybe I should look at that more closely. I was very interested in Conor McBride's derivative types though (a related idea). It's an interesting idea to turn the parser generator back on the compiler itself. I wanted to do that 6 years ago or so, but had a lot of momentum against that -- plus several people are afraid of the idea (they already have a hard time with the fixed syntax of hobbes). If the term and type grammars are reflected as regular data structures, then a parser could just be a function producing that type. Maybe if we have a use-case that demands it. I'd like to hide things like unsafeCast and newPrim, but at least where they're used here it's a good sign that we'll see minimal runtime overhead.
I seem to have run into an internal bug on this. If we take the easy case of non-polymorphic path-based modification (if we allow the modifier to change the type, I don't yet see how to construct the final type, which I seem to need), and modify your uapply function above to take closures instead, we get overall:
(yes, the naming sucks). Now if we try
If we instead have … Altering M so we can go up and down:
Then … What's going on here?
Hmm, I'll have to look at that closely to see where the LLVM error gets through. It's definitely a bug if that happens. Taking a step back, the …
So we're already very close to something that works for paths deeper than 1 step and for other function types. This is kind of like the "zero" case (do a 1-step update in a record), and we just need a "successor" case to do an update in a field and then patch it back into the current level (which can then be applied recursively). An easy step to make this work is to allow this "UpdateAt" to nest, so it becomes a "suspension telescope" down to the field to modify, rooted at the total record type. For example, in the record … First we'll want to generalize field lookup through these suspensions:
Then we can make these suspensions to any depth by adding one case to the …
And if we add a replacement within suspensions:
Then we can add a case to …
So with this final script (adding your …):
Then I can have this interaction (which does change the type of the update field, for what it's worth):
I hope that this helps! I will have to come back to this to see what happened with that backend LLVM error ...
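The full scripts for this telescoping update were lost from the transcript. As a rough Python sketch of just the zero/successor structure (recursion over a list of field names instead of a typed suspension telescope; names hypothetical), including a modifier that changes the field's type:

```python
# Hypothetical sketch: the "zero" case does a 1-step update at the
# current level; the "successor" case descends one field, updates
# deeper, then patches the result back into this level.
def update_at(record: dict, path: list, f):
    field, rest = path[0], path[1:]
    if not rest:  # zero case: update the field right here
        return {**record, field: f(record[field])}
    # successor case: recurse down, then patch back in
    return {**record, field: update_at(record[field], rest, f)}

r = {"a": {"b": 2}, "c": 3}
# The modifier may change the field's type (int -> str here).
assert update_at(r, ["a", "b"], str) == {"a": {"b": "2"}, "c": 3}
```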
That does help a lot, and thanks for the step-by-step working. I'll wrap my head around this, then maybe I'll be able to write something myself without a new github issue :)
I love your GitHub issues; the discussion has brought up interesting ideas and improvements.
I'd like to write instances for records which have a specific field with a specific type. HasField seems to be what I want. If I write
then this seems to work: thing1 + 1 evaluates to 4. If I try something more complicated, though...
Suppose I define
The idea is that defining how to do a binary operation then defines + and *, so
then lets us write [1,2] + [3,4] * [10]
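The instance definitions themselves are missing from this transcript. As a Python sketch of the intent only (hypothetical names; in particular, how the original handles mismatched lengths like [3,4] * [10] isn't shown in the thread, so `zip()` here simply truncating to the shorter array is an assumption):

```python
import operator

# Hypothetical sketch: + and * lifted elementwise over arrays via one
# binary-op combinator, in the spirit of defining a binary operation
# once and getting every operator from it.
def lift2(f):
    return lambda xs, ys: [f(a, b) for a, b in zip(xs, ys)]

vadd = lift2(operator.add)
vmul = lift2(operator.mul)

assert vadd([1, 2], [3, 4]) == [4, 6]
assert vmul([3, 4], [10]) == [30]  # truncation to the shorter list is an assumption
```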
But if I now try to do this
and say thing2 + [1], then the interpreter segfaults, in some infinite loop in findBidiMatches.
Interestingly, just having the above definition now also makes thing1 + 1 segfault in the same way. So I'm doing something horribly wrong, but not too sure exactly what.