definition of shadowing is inconsistent between variables and functions #19167
That shouldn't be true. But that's a side issue. At a high level, the reason variable and function resolution differ is that functions don't necessarily conflict with each other unless they can be called in the same manner. This means that you can write a function at one scope that takes one set of arguments and a function at a different scope with a different set of arguments, and be able to call both of them from a third scope, no matter their relative distances - where your …
We're talking about disambiguation, not finding candidates, so we're not discarding any overloads; just choosing among them. But anyway, I'm not proposing here that we change where scoping/shadowing matters in function disambiguation -- only what the definition of shadowing is. (It is true one could argue that the definition of shadowing should ignore the argument types, but I'm not arguing for that.)
Okay, cool. As long as that's clear in whatever documentation we adjust as a result, I'm okay with A or B, then.
In: import A; should this be a …
So here is what I do in Felix. My solution is not completely good. The first principle is that "most specialised" has a single coherent definition. There are no complex rules. The algorithm determines if a polymorphic type A is a subtype of a polymorphic type P, and, the reverse. The algorithm is a stock-standard unification algorithm supporting subtyping, in which the two sides of the inequality being tested are kept separated. We are seeking a set of equations which fixes the values of all the type variables in P, in terms of type variables in A, such that P, after substitution, is a supertype of A. This is all you need for overload resolution unless you also have additional constraints. Given a set of candidates, we try to find the most specialised. If we find one, we're done. If we find TWO we have an ambiguity, report an error and terminate. Note: there are no "hops" or other rubbish. There are no conversions or other crap to think about. The algorithm asks only if a type is a subtype of another. If you have incoherent implicit conversions in your language, you have to fix them. The algorithm doesn't care about implicit conversions, but it needs to know subtyping rules, and subtyping is transitive (full stop, no arguments). C++ screws this up. It only allows one implicit conversion when overloading. Note: the algorithm is fixed at being correct. Now fix the design faults in your language so it works. In this case …
==========
Now there is one more case: there is no most specialised function. In this case, we search the parent scope for candidates and try again. Note that there can be two candidates that match, and we discard them both and proceed to look for a solution in the parent scope. This algorithm is deliberately chosen because the C++ algorithm is really bad: in C++, you just give up. Here is an example I resolve which C++ would not:
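(The original code snippet did not survive formatting here. As a stand-in, this is a Chapel-flavoured sketch with hypothetical overloads; the comments describe Felix's resolution rule, not Chapel's.)
proc f(x, y) { writeln("outer f"); }            // fully generic
{
  proc f(x: int, y) { writeln("inner f #1"); }  // more specific in x only
  proc f(x, y: real) { writeln("inner f #2"); } // more specific in y only
  f(1, 2.0); // all three candidates match; neither inner f is more
             // specialised than the other, so Felix discards both and
             // resolves the call to the outer, generic f
}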
Here all the f's match the call. However neither of the inner f's is more specialised than the other. So we drop them both and choose the outer one. The equivalent C++ always halts when it finds a symbol in a scope, and if resolution fails there, that's the end. This is really bad. In C++ … In Felix, the name of a function is NOT just the identifier used; the type of the domain is part of the function name.
============
So now for the second issue. When you import modules into another exclusively for the implementation of that module and not for re-exportation, you need to be able to resolve conflicts, but only inside that module. In Felix, what I do is open a shadow scope to put these things in. So for every scope, there is a shadow scope hiding behind it. If you cannot find a symbol in a scope, you look in the shadow scope first, before looking in the parent scope. Now the important thing is this: if you have a problem in your shadow scope due to duplicate symbols or ambiguity, you can resolve it easily with a shadowing definition in the main scope. You can do this because the main scope is searched before the shadow scope. Note the client of the module never has an issue because the shadow scope is private to the module. No symbols in the shadow scope are ever exported. In Felix, I have another directive for that. In the new C++ module system, you have … Exporting modules is harder because you cannot easily resolve the conflicts. In Felix I use …
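(For illustration, a toy Chapel model of that lookup order, with hypothetical names and structure: the scope itself is searched first, then its shadow scope, then the parent chain.)
use Map;

class Scope {
  var defined: map(string, string); // symbols defined directly in this scope
  var shadow: map(string, string);  // symbols brought in by use/import
  var parent: unmanaged Scope?;

  proc lookup(name: string): string {
    if defined.contains(name) then return defined[name]; // the main scope wins...
    if shadow.contains(name) then return shadow[name];   // ...over its shadow scope...
    if parent != nil then return parent!.lookup(name);   // ...before the parent chain
    return "<undefined>";
  }
}

proc main() {
  var outer = new unmanaged Scope();
  outer.defined.add("x", "outer x");
  var inner = new unmanaged Scope(parent=outer);
  inner.shadow.add("x", "use'd x");
  writeln(inner.lookup("x")); // "use'd x": the shadow scope hides the parent scope
  delete inner;
  delete outer;
}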
It does not matter for the example - the behavior is the same (and x vs f differ). The point of the example is to show that the logic in function disambiguation ignores the details of what kind of use/import is involved and just traverses everything assuming it is a use.
@skaller - it is nice to know that at least one other language has the "shadow scope" idea. We do that today but have two shadow scopes for use statements: one for the symbols brought in by the use, and one for the name of the use'd module itself.
Can you say more about what you mean by "incoherent implicit conversions" here? Later you say: "The algorithm doesn't care about implicit conversions, but it needs to know subtyping rules, and subtyping is transitive (full stop, no arguments). C++ screws this up. It only allows one implicit conversion when overloading."
Is this exactly the implicit conversions you were calling "incoherent"? I don't think we'll be willing to remove all non-subtyping implicit conversions from the language, and in any case that would be a massive change and is definitely off-topic for this issue.
Two shadow scopes? Interesting! So: removing some implicit conversions is a trivial change. It simply forces the user to write them explicitly. Since the compiler knows when a conversion that will be removed is used, it can simply tell the user to write it explicitly or their code will soon stop working. In fact, with some work, a tool can be written to automatically fix user code. So actually it is not a massive change, although it is a breakage. I would argue breaking something is just fine, if it is needed to repair an even worse breakage. So, there is a correspondence between implicit coercions and subtyping, as explained in the discourse post. Subtyping is used to select between candidates in overload resolution, to select the most specific function. I will note there are the usual subtyping rules, but also we include a concrete type as a subtype of a type variable. One of these is called subsumption but I can never figure out which. Subtyping relations are transitive. Now, if you have say a coercion from real to int and a coercion from int to real, and they're subtyping relations as well, then we can prove int is the same type as real. Note: I said PROVE. And then, we have just proven the type system is unsound, which is equivalent to being able to prove every statement in a logic system, including both A and not A for every A. So we simply cannot allow this. Well, the way it works in Felix, and I guess will also work in Chapel, is that if you have an argument of type A and a parameter of type P, and they're not equal types, the compiler fixes the discrepancy by magically inserting a conversion. This conversion is implicit by definition: the compiler put it in there, not the user. So ALL subtyping relations imply the existence of an implicit coercion. [In our kinds of languages] So now, can we have an implicit conversion anywhere else? Remember, implicit conversions are inserted by the compiler. Where else could it insert one? The answer is NOWHERE. There is ONLY one way for the compiler to insert the coercion implicitly, and that is on a function call or any context where you have a target and source type. For example, in assignment: same deal. The RHS has to be a subtype of the LHS. Initialisation: same deal. You see? You can have an implicit coercion that isn't a subtyping coercion, but it is certain to be entirely useless. It will never get inserted by the compiler, so you might as well take it out of the documentation. So now, let's suppose real and int are equivalent and show where both conversions can be inserted:
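(The snippet being referred to was lost above; a minimal sketch of the kind of call meant, using a hypothetical procedure g, is:)
proc g(x: real, y: int) { }
g(1, 2.0); // the int argument needs int->real, the real argument needs
           // real->int: the call can only match if both directions are
           // subtyping coercions -- which would make the type system unsound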
The problem is, with the specialisation algorithm, unless both int->real and real->int are subtypes, this call will fail. There's no match. And if they are both subtypes, your type system is unsound. You simply cannot allow this, and you really DO have no choice but to overload based on the standard algorithm, which depends entirely on a strict subtyping rule which must be transitive. Otherwise you're polluting your overload resolution with a heap of special cases, and that NEVER works, because such special cases almost universally fail to be recursive, e.g. if you have nested tuples. The reason this comes up is that when you import modules you're importing function sets. Even if you put all the imports in a shadow scope, you are still merging function sets from multiple imports. The original issue is how to handle the conflicts which arise here, and that is directly related to overload resolution (so it is not really off topic :-)
@skaller - I will continue the conversation about subtyping and implicit conversions over on https://chapel.discourse.group/t/overload-resolution/10252 .
I've opened #19198 to describe a related problem with where isMoreVisible starts its search.
Here is an even simpler program showing an inconsistency between scope resolution and function resolution:
module M {
var X: int;
proc f() { writeln("M.f"); }
}
module N {
var X: real;
proc f() { writeln("N.f"); }
proc main() {
use M;
writeln(X); // prints out 0 so refers to M.X
f(); // ambiguity error
}
}
I've created #19219 about whether or not …
I think some of my previous comments might represent a misunderstanding of the current rules. From #11262 (comment):
So if I were to turn this into a numeric distance, a regular enclosing scope would add 2, so that a use statement can create a distance between the symbols contained and the outer scope. E.g.
module M {
var x = ...;
}
module N {
var y = ...;
{
// y is available here at distance 2
use M; // x is available here at distance 1
var z = ...; // z is available here at distance 0
}
}
I think that's a simplification. A use can bring in an arbitrary number of scopes, depending on whether the module being used itself contains public use statements. I tend to think of scopes brought in by use and import statements as following a different dimension from the scopes defined with { } blocks.
Right, but the compiler has this …
Hang on. If the rules are so complicated that the compiler cannot figure it out in a simple way (meaning the algorithm is quite complex), then the rules are bad, because the user has no hope of figuring it out. So perhaps the wrong question is being asked. In my language, the rules for variables and functions are indeed different. It is an error to define the same non-function symbol in a scope. A function, on the other hand, is identified by its name and domain type, and duplicate definitions are allowed. I use a single shadow scope; however there are two name maps, one for private use inside a module only, and one for public use from outside the module. Explicit qualification looks only in the specified module. It does not look in shadow scopes or in parent scopes. Each name map has two kinds of entry, a non-function entry and a function entry (a name map is a hash table). A function entry in the lookup table contains a set of functions. When I import two modules, the functions of the same name are merged into a single set. I had to do an experiment to see what happens to variables. To my dismay it's not exactly what I might have expected:
Output:
I don't know if this is random or it just takes the first definition, but it is a design fault either way I think. The problem is, you cannot ban the importation or use of two modules just because they happen to contain duplicate variables. But it should be an error to use one (not silently picking one of them). The problem is, how can the compiler represent this? If it cannot bug out on constructing the scope, it would need to put an entry for the duplicate symbol which, if found, would generate a diagnostic. The only other option is to not put either symbol in the scope. However that is also bad, because then a symbol from the parent scope might be found instead of being hidden. I would note also you need to think about Interfaces. In Felix, a module and an Interface are the same thing; a module is simply a non-generic Interface (i.e. with no type parameters). I used to have them separate, but it was too hard to figure out the differences and impossible to maintain two sets of complicated lookup rules. So when you're trying to figure out rules for modules, don't forget you will have to do it for interfaces as well.
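(For comparison, a sketch of the analogous experiment in Chapel, with hypothetical module names: the conflict is an error only when the ambiguous name is actually referenced, which is the behavior argued for above.)
module A { var v = 1; }
module B { var v = 2; }
module C {
  proc main() {
    use A, B;
    writeln(v); // error: 'v' is ambiguous -- neither definition is silently picked
  }
}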
In an off-issue discussion with @lydia-duncan I learned that one problem with Option B is that it makes the rules for shadowing different within a module and when the module is use'd from elsewhere.
Another thing we talked about is that the current rules could lead to "hijacking" in this scenario. We start with Library, which publicly uses Dependency:
// this is Library v1.1
module Library {
public use Dependency;
}
Let's suppose that when Library v1.1 was written, Dependency did not define foo:
// this is Dependency v1.1
module Dependency {
// other stuff unrelated to example
}
And the next thing that happens is that Dependency adds a foo:
// this is Dependency v1.2
module Dependency {
proc foo();
}
Then, somebody makes an Application that uses Library and calls foo:
module Application {
use Library;
foo(); // intended to call Dependency.foo
}
Now, suppose that the author of Library adds their own foo:
// this is Library v1.2
module Library {
public use Dependency;
proc foo() { }
}
They might test Library on its own and never notice that, under the current rules, Library.foo now shadows Dependency.foo, so Application's call is silently "hijacked" into calling Library.foo (vs. in both Option B and Option C there would be a compilation error).
Edit: Adding some thoughts about how this relates to semantic versioning and mason. One approach would be to point at the …
BTW: if the module rules are complicated (and they usually are in any language), then the user is going to need a psychiatrist to cope with POI lookup, because that has to look in at least THREE distinct places: the original point of definition, the module in which the instantiated type is defined, and the point of instantiation, all of which can involve modules and imports and stuff.
I've edited #19167 (comment) a bit with some thoughts about how the "hijacking" scenario relates to semantic versioning and mason.
dyno: function disambiguation
This PR adds a port of the function disambiguation logic to the new compiler rework effort, including some new tests. None of these changes impact the production compiler at this point. The new code is primarily porting the old logic and rules to work with uAST and the dyno type system.
Reviewed by @vasslitvinov - thanks!
- [x] full local testing
Future Work:
* This implementation inherits the complexity of the language and the old implementation. The following issue asks if we can simplify the language in this area:
  * #19195
* This implementation uses a different approach for isMoreVisible than the production compiler, and this issue discusses the specifics of the language design in that area:
  * #19167
Returning late to this issue and jumping into Michael's simpler example in the middle:
module M {
var X: int;
proc f() { writeln("M.f"); }
}
module N {
var X: real;
proc f() { writeln("N.f"); }
proc main() {
use M;
writeln(X); // prints out 0 so refers to M.X
f(); // ambiguity error
}
}
I think the behavior of X here is defensible, and arguably f() should resolve the same way rather than being an ambiguity. I also think that, given:
proc foo() {
writeln("In outer foo");
}
{
proc foo() {
writeln("In inner foo");
}
foo();
}
we get a call to inner foo(). I think the trick here is how far that shadowing goes. For example, what if inner foo() had been declared to require arguments, as in the sketch below?
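(A sketch of the variant being asked about, with a hypothetical argument added to the inner foo:)
proc foo() { writeln("In outer foo"); }
{
  proc foo(x: int) { writeln("In inner foo"); }
  foo(); // does inner foo(x: int) shadow outer foo() so that this call is
         // an error, or does the zero-argument outer foo() still win?
}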
A next step for me is to check to see if we have cases in the test system where more aggressive shadowing here would be a problem for non-method non-operator cases.
Catching up a bit more, I am generally in favor of something along the lines of approach B. Specifically, when I think about what a module logically does in Chapel, I think of it as making a set of symbols available to another piece of code via use and import statements. Where I might pause with option B (if I'm understanding it correctly) is with a concern that I think Lydia raised about having the resolution of a call …
Right, if …
Ah, I'd missed option C' earlier. Yeah, I think option C' makes the most sense to me: it keeps … An example that came up in talking with Michael seems worth capturing here. Say I had:
module M {
proc foo() { ... }
}
module N {
public use M;
proc foo() { ... }
// foo(); // maybe I call foo here?
}
module O {
use N;
// foo(); // and/or maybe I call foo here?
}
Because we don't resolve overloads until a call is made—and because we generally can't very easily determine whether or not two functions have overlapping signatures because of type inference, generics, and the like—while the code above effectively defines two foo()s, we can't flag any conflict until one of the indicated calls is resolved. We discussed whether the compiler could try to consider certain function signatures to shadow others, but apart from some easy cases like the 0-argument routines above, this felt pretty intractable to do in any way that felt like it wasn't a band-aid solution, covering some cases but letting others through.
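(For instance, in this sketch with hypothetical modules, whether the two foo's overlap depends on the argument type at each call site, which the declarations alone do not reveal:)
module M {
  proc foo(x: int) { }
}
module N {
  public use M;
  proc foo(x) { } // generic: overlaps M.foo only for some argument types
  proc test() {
    foo(1);    // an int argument could match either foo
    foo("hi"); // a string argument can only match N's generic foo
  }
}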
@bradcray wrote:
Yes it does. When I faced this issue in Felix, I knew that in C++, if a name shadows an outer name, that's the end: if overload resolution in the inner scope fails there is no recourse to the outer scope. [In C++ you can put a using declaration in the inner scope to pull the outer overloads back in.] It is important to understand the algebraic reason for this: in C++ a function set is identified by a name. In Felix, I changed the notion of identity: a function in Felix is identified by its name and the type of the domain. So if you are looking for a function with a given name, and arguments of some type T, then a function in the inner scope with a type X, where T is not a subtype of X, does not hide functions in the outer scope. I'm not saying Chapel should do it my way or the C++ way or any other way; I'm saying that you have to step back and ask the right question: what identifies a function? Do you identify individual functions, or do you identify sets of functions? The idea of a shadow scope is that you have the usual lookup rules for looking for a symbol by name in the current scope, and if you fail, you go to the parent scope, etc., all the way to the global scope. So with a shadow scope you would use the same rules, but just add an extra scope into the environment. The reason of course is simplicity: you can decouple the problem into the already-implemented parent lookup rules and the construction of the shadow scope. So in Felix, I first search for an identifier, and find either a non-function or a function set. If I'm looking for a function, the lookup routine must also accept the argument type being sought, not just the name. Which means there are two distinct lookup routines, one for functions and one for everything else. It gets complicated because sometimes you don't know, and sometimes in Felix a type can be a function, for example the name of a … In fact in the Felix lookup system, the shadow scope does not contain symbols; it contains the directives (use, import etc.), which modify the lookup. So my system actually breaks the principle I cited above, so to re-establish it I have to encode the interpretation of lookup in spaces defined by directives, to work as if I had actually eagerly built the environment, instead of lazily waiting until I need to (performance is part of the reason but not the only one). If you identify a function only by its identifier name, then an inner function of that name hides all functions in all enclosing scopes, including shadow scopes. This is the case in C++. It is not in Felix, precisely because of the effect @bradcray noted above. The lookup algorithm should be derived from your concept of identity and environment.
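(A Chapel-flavoured sketch, with hypothetical code, of what that name-plus-domain identity means for lookup:)
proc f(x: string) { writeln("outer f(string)"); }
{
  proc f(x: int) { writeln("inner f(int)"); }
  f("hi"); // under name-only identity (as in C++), inner f(int) hides the
           // outer f and this call fails; under Felix's name+domain
           // identity it resolves to the outer f(string)
}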
@bradcray wrote:
I don't follow. The way this is done is absolutely stock standard, using a well known algorithm, namely (a modification of) unification. Generics do not cause a problem, nor do implicit conversions provided they're transitive. Furthermore you have to use precisely this algorithm to resolve overloads anyhow right now, and you will have to use it again in a different way to resolve overloads mediated by … Whether or not a function domain type hides another is trivial to compute and is not an issue at all. The actual issue is what to do when you get either an ambiguity, or no matches at all. Felix continues if there are no matches. C++ will not continue, unless there is no function with the given name, in which case unification is not done in the first place.
@bradcray wrote:
I strongly support @bradcray's concept here (never thought of it as a bill of sale but it's a nice analogy). The client of a module shouldn't care how the author of a module got a public symbol into the module. It's either in there or not. Unfortunately, my own lookup rules do not support any way of doing this; I consider that a design fault in my system. In Felix, an … @brad is saying the client of a module shouldn't care how a function got in there, and that there should be a way to bulk synthesise a module from other modules (module composition, in other words). Indeed, with his view, it is still possible to pick the right function with a name qualified by its original location in case of ambiguity. The only real issue here is what to do about clashes: detecting them lazily seems the best way, but it has a downside: you don't find out about the problem until you try to use an ambiguous symbol, and generally, late error detection is contrary to the spirit of static typing: it means that to be sure your module correctly implements a specification you have to run tests which cover all cases, the very problem static typing tries to solve. But you can have your cake and eat it too: issue a warning if there is a possible clash, and an error only when it causes a problem.
From earlier comments:
It is interesting to consider how shadowing is related to overriding for child classes. There, we have the design that, basically, you can think of the combination of method name + argument names + types as the thing that determines what a method is, in terms of overriding. If we were to make the language design have overloads shadow more aggressively, would we need to also change what we consider to be an override? In particular, if you are overriding something that has multiple overloads, you would have to override all of the overloads. In practice this is easy to do simply by copying the signatures from the parent class(es). But it is not required today. Current checking is described here: https://chapel-lang.org/docs/language/spec/classes.html#overriding-base-class-methods
Here is an example:
class Parent {
proc f(arg: int) { writeln("Parent f(arg: int)"); }
proc f(arg: string) { writeln("Parent f(arg: string)"); }
}
class Child : Parent {
override proc f(arg: int) { writeln("Child f(arg: int)"); }
}
var x: owned Parent = new owned Child();
x.f(1); // Child f(arg: int)
x.f("hi"); // Parent f(arg: string) In this case I think it would actually improve the clarity of the situation to require that the author of Child also write Edit - Taking a completely different tack, what if we had two different record types that both wanted to define a method? If we were to consider defining anything named module M1 {
record R1 {
proc f(arg: int) { writeln("M1 R1.f(arg: int)"); }
}
}
module M2 {
use M1;
record R2 {
proc f(arg: int) { writeln("M2 R2.f(arg: int)"); }
}
proc main() {
var r1 = new R1();
var r2 = new R2();
r1.f(1); // is this a legal call? Is it shadowed by R2.f?
r2.f(1);
}
}
So, I think that if we were to have more shadow-y functions, we would have to also start to treat method receivers in more special ways than we do today. (It is already the case that methods can come from more scopes - the method-ness just does not impact the shadowing rules, and it would need to if we made shadowing of functions just based on the name, say).
I have been investigating option C': Do not consider public use or public import to create additional scopes, but private use/import can do so. This design has two implications that were not obvious to me at the start.
One implication is that, if there is both a public use and a private use bringing in the same symbol name, the symbol from the public use will shadow the one from the private use. E.g.
module M1 { var x: int; }
module M2 { var x: int; }
module Program {
public use M1;
private use M2;
x; // this is M1.x because the public use did not introduce a shadow scope
// but the private use did
}
I think that this is defensible on the argument that it makes the behavior of the code in the module better match what happens when you use that module from elsewhere.
The second implication is that, without adjustment, it's no longer possible to create a module that one can use in order to shadow a symbol from the automatically used standard library:
module ReplaceE { param e = 10; }
module Program {
// use StandardLibrary; is effectively here but implicit
use ReplaceE;
e; // now ambiguous, because ReplaceE.e and StandardLibrary.e are
// both in the same shadow scope
}
(But this pattern would work if it were an import, as sketched below.) I think this pattern is something that we probably want to support (e.g. if we want to have a module you can use to replace a standard symbol like Math.e).
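(A sketch of the import-based variant, assuming the option C' treatment in which import introduces no shadow scope:)
module ReplaceE { param e = 10; }
module Program {
  // use StandardLibrary; is effectively here but implicit
  import ReplaceE.e;
  e; // refers to ReplaceE.e: the imported symbol lands in Program's own scope,
     // shadowing the implicitly use'd StandardLibrary.e in its shadow scope
}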
I agree that this is defensible, but does it generate a new opportunity for hijacking? Say you wrote your code intending to refer to the x from the private use, and then a later version of the publicly used module added its own x: the reference would silently switch to the public one. All that said, I still like the new interpretation of public use.
This seems acceptable to me since there is actually an ambiguity (i.e., why should one of the two definitions win?). I agree we'll want a way to override such symbols, but I continue to imagine doing that through some way of opting out of auto-used modules/symbols (altogether, or maybe on a case-by-case basis). I'm also cautiously optimistic that as we continue to reduce the sizes of auto-used modules (like Math, ChapelEnv, etc.) this will be less of an issue, though your point about overriding built-in operators is well-taken—though also something that probably shouldn't be done lightly. I was going to say something about how, of course, fully qualifying the reference would resolve the ambiguity. Maybe turning the question around: Can we rationalize why it's a good thing for the user module to "win" today? (given that the user module may not be a user module but some package module that they know nothing about, which could be malicious). It seems to me that this ambiguity is making things better (?).
That's true; however, for … In any case, I think it is reasonable to encourage library authors to prefer import.
👍
I've written about this a bit in this comment:
That comment talks about why I think we need shadowing for symbols from the automatically used standard modules. For the standard library, there I was arguing that … For a user library, we won't be able to handle it in that way. For example, if we have typical use cases of programs that need … Now, what is different in the current discussion is that the conflicting symbol is defined in something that is also use'd, rather than directly in the program:
module DefinesFoo { var foo = 10; }
module Program {
// use StandardLibrary; is effectively here but implicit
use DefinesFoo;
foo; // OK, at least until StandardLibrary adds a foo
}
Now suppose that in a later version StandardLibrary adds a foo. Here is the similar case for a user module:
module DefinesFoo { var foo = 10; }
module Program {
use LinearAlgebra;
use DefinesFoo;
foo; // OK, at least until LinearAlgebra defines a foo
}
I don't have a solution for this case, and it would get the ambiguity error if LinearAlgebra added a foo. I think another viewpoint on this issue is - suppose everybody were using import instead of use:
module DefinesFoo { var foo = 10; }
module Program {
import LinearAlgebra;
import DefinesFoo.foo;
foo; // always refers to DefinesFoo.foo even if LinearAlgebra adds a foo
}
Similarly, if …
Is there precedent for this? My initial reaction is that it sounds like a pain for implementers of Child classes.
That seems pretty obviously wrong to me.
This seems pretty subtle to me. Suddenly, if you decide to make a use public that was private before, you may find you have conflicts that weren't present before. Which is very relevant because the default use is private - a user could very easily write …
I agree it is subtle but I'm not sure what to do about it. It is something we already have today (at least in the behavior in the spec) for import. Also I think it is less bad than some of the alternative designs. I suppose we could make this specific case of …
Responding to a comment from here -- #19167 (comment)
Well, today we cannot do that at all, because the …
I've made a new issue to spin off the discussion about shadowing an automatically used symbol (like Math.e) that started here: #19167 (comment)
I've made a new issue to spin off the idea described here: #19167 (comment)
simplify use/import shadowing
This PR describes and implements some candidate language adjustments to shadowing behavior for use and import statements. We need to do something in this area because the definition of shadowing is currently inconsistent between variables and functions (#19167). This PR attempts to simplify the language design in this area.
The adjustments to the language design in this PR are as follows:
* isMoreVisible in function disambiguation as well as scope resolution use the same rules for determining shadowing with use/import statements
* isMoreVisible starts its search from the POI where the candidates were discovered (see issue #19198 -- not discussed further here)
* private use statements still have two shadow scopes
* public and private import statements now do not introduce a shadow scope
* public use statements now do not introduce a shadow scope
* `public use M` does not bring in `M` (but `public use M as M` does)
* methods are no longer subject to shadowing
Note that the above design leads to less shadowing of symbols from the automatic library (see the section "Less Shadowing of Symbols from the Automatic Standard Library" in #19306 (comment)).
## Discussion
Elements of the language design direction are discussed in these issues:
* #19167
* #19160
* #19219 and #13925
* #19312
* #19352
Please see also #19306 (comment) which discusses pros and cons of these language design choices.
### Testing and Review
Reviewed by @lydia-duncan - thanks!
This PR passes full local futures testing. Resolves the future `test/modules/vass/use-depth.chpl`. Also fixes #19198 and adds a test based on the case in that issue.
- [x] full local futures testing
- [x] gasnet testing
### Future Work
There are two areas of future work that we would like to address soon:
* #19313 which relates to changes in this PR to the test `modules/bradc/userInsteadOfStandard/foo2`. The behavior for trying to replace a standard module has changed (presumably due to minor changes in how the usual standard modules are now available). I think that changing the compiler to separately list each standard module would bring the behavior back the way it was. (Or we could look for other implementation changes to support it).
* #19780 to add warnings for cases where the difference between `public use M` and `private use M` might be surprising
I'm closing since the PR added tests from this issue (test/modules/shadowing/issue-19167) and these are now working as I would expect.
I have been looking at this code in function resolution as part of porting it over to the new resolver. I have been scratching my head a bit. I am pretty sure that the current behavior is not reasonable. But, that might be a bug in function resolution or maybe it is a sign that we need to adjust the language design.
One thing to note about the current implementation is that it's pretty complicated and it does a traversal of all scopes visible from the call (including use/import), twice, in order to decide if one candidate is more specific than another candidate. I am worried that this contributes to performance problems. It would be much less worrisome if the consideration only needed to go up through the parent scopes (not counting use/import).
However a larger issue is that the behavior seems unnecessarily inconsistent between variables and functions. More details about that are in the next section.
Program to Explore the Current Behavior
Here is a program to explore the current behavior.
To summarize the current situation:
* with `UseA_UseUseB` in the program below: …
* with `CUseA_ImportA`:
  * `import A` does not impact what could be defining `x`, so it has no effect on what `x` could refer to
  * `import A` is considered the same as `use A`, and so is considered as creating a path to `A.f` at the same number of hops as the path to `C.f`
  * `use only` with an unrelated name will have the same problem for functions as the `import` here
History and Related Issues
The current behavior predates `import` and probably predates `use only` and private uses. There are related questions about whether `import` should be subject to shadowing at all and whether `use` statements should have two shadow scopes.
What does the spec say on the matter?
https://chapel-lang.org/docs/language/spec/procedures.html#determining-more-specific-functions
For functions X and Y, we have a rule about which is more specific that is phrased in terms of whether X shadows Y (and that is considered after things like formal argument types). However this section does not define shadows in any way.
https://chapel-lang.org/docs/language/spec/modules.html#conflicts
describes shadowing in terms of a distance idea.
What should we do about it?
I think that at the very least, the language should have one definition of shadowing that is used for both resolving variables and for resolving functions.
Here are some ideas:
A. Consider the behavior with variables today to be correct and formalize it by describing a distance in number of hops. A symbol shadows another symbol if, in a given scope, it has a smaller distance. Have the new compiler code literally compute this distance and compare distances.
* a symbol brought in by `use` and any symbol brought in by `import` adds 1 to the number of hops
* a module name brought in by `use` adds 2 to the number of hops
* a regular scope (`{ }`) adds 3 to the number of hops
B. Simplify the rules about number of hops:
* If a module defines a symbol and also `use`s/`import`s something defining that symbol, then we have shadowing within that module. This can continue to consider 3 scopes: things defined directly in the module; modules `use`d; and contents of modules brought in by `use`. This is the situation within `CUseA` in the example program.
* A module `use`/`import`ing that module views the symbols defined in it as completely flat. So if you wanted `use CUseA` to find `C.x` and `C.f` (and not find them ambiguous with `A.x` and `A.f`) you would have to adjust `CUseA` to not publicly export `A.x` and `A.f`.
* This is simpler to implement (the `isMoreVisible` operation can reuse other scope resolution ideas) and it's probably easier for users to predict what will happen with their programs.
C. Do not consider `use`/`import` to create additional scopes.
* In that case, the conflict within `CUseA` would become a multiply-defined symbol error. A user would need to use renaming or `import` to address it.
C'. Do not consider `public use` or `public import` to create additional scopes, but private use/import can do so.
* `public import` is already documented as not creating an additional scope.