This repository has been archived by the owner on Sep 2, 2023. It is now read-only.

Schemes versus Namespaces #347

Closed
SMotaal opened this issue Jul 4, 2019 · 24 comments

Comments

@SMotaal commented Jul 4, 2019

A while back in #222 and #169 we talked about the idea of having some prefix.

Recent discussions about prefixes have led to a polarizing debate between "prefix" and "namespace" notations — for things that are not necessarily the same thing.

  • You do file://… only if the implementation supports the scheme with the respective resource location and access protocols of that spec (or people file bugs)
  • You do https://… only if the implementation supports the scheme with the respective resource location and access protocols of that spec (or people file bugs, and potentially lawsuits)
  • You do mailto: only if the implementation supports the scheme — I don’t think a good implementation wants this to lead to a network request by the browser context itself

If you have a scheme, it can have zero network-transport and yet an implementation can make it very very meaningful to one loader.

You see this with blobs today — they are in-memory things — they cannot outlive their browsing context, and they are even subject to CSP so that one context does not see another’s blobs.

The Point

import process from 'process'; // internally node:process
import pkg from 'pkg'; // internally node:… something that makes this resolve

Can we not say the above is implicit? If in Node the default scheme is node:, then all specifiers are of that scheme unless stated otherwise.

What about this:

import extendedProcess from '@nodejs/extended-process';

Is it not just like:

import extendedProcess from 'node:@nodejs/extended-process';

Would that not strike a balance between all the things we care about ecosystem-wise, while still leaving room for the newer notions others are exploring a decade later?
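Roughly, in code — a sketch of the implicit default-scheme idea (the rule and names here are hypothetical, not a description of Node's actual resolver):

```javascript
// Hypothetical: treat `node:` as the default scheme, so bare specifiers
// (including scoped ones) are rewritten into it before resolution proper.
const DEFAULT_SCHEME = 'node:';

function toSchemeQualified(specifier) {
  // Already scheme-qualified (file:, https:, node:, …) — leave as-is.
  if (/^[a-z][a-z0-9+.-]*:/i.test(specifier)) return specifier;
  // Relative and absolute paths keep their usual meaning.
  if (/^(\.\.?\/|\/)/.test(specifier)) return specifier;
  // Bare specifiers fall into the default scheme.
  return DEFAULT_SCHEME + specifier;
}

console.log(toSchemeQualified('process'));                  // 'node:process'
console.log(toSchemeQualified('@nodejs/extended-process')); // 'node:@nodejs/extended-process'
console.log(toSchemeQualified('./lib/util.js'));            // './lib/util.js'
```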

Thoughts?

@jdalton (Member) commented Jul 5, 2019

I can dig it. It makes sense to me :)

@devsnek (Member) commented Jul 5, 2019

Is this issue saying that all builtin modules should become node: internally? If so, that's already something we do.

@SMotaal (Author) commented Jul 5, 2019

@devsnek — this largely aligns with that (I noticed this a while back but have not checked lately)

@SMotaal (Author) commented Jul 5, 2019

I think it takes a little more than the internal reality, though, moving forward — to align with ecosystem-wide shifts and proposals.

So this is more of a call to explore and more concretely define such aspects.

@SMotaal (Author) commented Jul 6, 2019

@devsnek What do you think about a more complete spec for this scheme/protocol as a way to move forward on open issues elsewhere — maybe even vendor modules?

Such a protocol would concretely specify possible behaviours like the example above:

  • User code would import 'process' // in ~/app.js
  • Spec would define resolve('process', 'node:[no idea yet]/app.js'),
    extending the usual resolution behaviours for a specifier and its referrer.
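In loader-hook terms that might look roughly like this (the builtin table and the referrer URL are placeholders of mine, not anything specified anywhere):

```javascript
// Hypothetical shape of a scheme-aware resolve hook.
const builtins = new Set(['process', 'fs', 'path']); // illustrative subset

function resolve(specifier, referrerURL) {
  // A bare builtin name resolves into the `node:` scheme regardless of referrer.
  if (builtins.has(specifier)) return { url: 'node:' + specifier };
  // Everything else falls through to ordinary URL-relative resolution.
  return { url: new URL(specifier, referrerURL).href };
}

console.log(resolve('process', 'file:///home/me/app.js').url);   // 'node:process'
console.log(resolve('./util.js', 'file:///home/me/app.js').url); // 'file:///home/me/util.js'
```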

There are no firm opinions on what that spec needs to say at this point, but having thought about this on many occasions over the span of two years now, imho how platforms handle this critical but often punted detail will set the stage for module interoperability stories for years to come.

The goal here cannot exclude portability — otherwise we end up actually excluding portability altogether!

@ljharb (Member) commented Jul 6, 2019

I’m still really confused by this issue. All protocols are in the same bucket - the owner of the protocol decides if it makes a network request or hits the filesystem or both or neither. You can use a protocol as a namespace since namespacing is just a concept, and I’m not sure I’ve seen the term “scheme” used in this context except also as a generic concept.

Can you restate the purpose of the OP?

@SMotaal (Author) commented Jul 7, 2019

@ljharb I certainly agree with you — I decide node: behaves like a namespace, and I want that to still resolve relative paths like const internalProcess = await import(new URL('./internal/process', import.meta.url)) — and for that simplicity to translate, you want the native URL to recognize this scheme as a standard scheme, ie one that follows mechanisms similar to file: or http:.


The purpose of this OP is that I think the time is right, now, to say that we need to align across venues, where such scheme-as-a-namespace or @scope-as-a-namespace decisions will forever affect the portability of ECMAScript modules.

I am not here to say I have the answers — more that, along with everyone else (all being far more qualified than me on many fronts), I'd like us to write a cohesive story with a little less emoting and more dialogue.

@bmeck (Member) commented Jul 7, 2019

Maybe we should look at loader requirements/patterns for ESM, and that might sway things. While CJS does not parse URLs, ESM does. The current ESM hooks require loaders to return valid URLs. As pointed out above, we already have a node: scheme in place internally, so we can expand builtins to valid URLs. If we used a @node/ prefix it would still be converted to a valid URL. A custom scheme would be used because it does not align with any other (not a file, not http, not email, etc.).

I don't think this issue is really about just specifiers inside import/require: a loader wishing to return fs would still need to use the custom URL if we want loaders to have an API with a single return type for the resulting specifier. Making loaders expose their already-internal representation for userland usage seems sane, and avoids "node:@node/fs" becoming a double encoding of sorts to get to the real fs from a loader.

@SMotaal (Author) commented Jul 7, 2019

A custom scheme would be used because it does not align with any other (not a file, not http, not email, etc.)

@bmeck can we take this a little slower (for my benefit and maybe others') — when we say custom scheme here, we are talking from the perspective of the environment's URL constructor, right?

Browsers do not know node: because we make no effort for that to happen, it being internal — so if you do new URL('./a', 'file:a') in the console it works, but new URL('./a', 'node:a') will throw… a workaround could be polyfilling for the time being.
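To be precise about where the throw comes from (this is the WHATWG URL parser's behavior, which Node's URL implements; node:/process below is just a made-up base, not a real Node URL):

```javascript
// `node:a` has an opaque path, so relative resolution against it fails…
let threw = false;
try {
  new URL('./a', 'node:a');
} catch (err) {
  threw = err instanceof TypeError;
}
console.log(threw); // true

// …but a path-absolute base works, even for a scheme the parser doesn't know.
console.log(new URL('./internal/process', 'node:/process').href);
// 'node:/internal/process'
```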


specifiers inside import/require as a loader wishing to return fs would need to still use the custom URL

I am not sure I follow exactly — maybe a bit — so I think we need to work through a few examples (maybe gists) to understand the possibilities more closely.


Making loaders expose their already internal representation

I don't think that is necessarily the outcome here — they could be separate. I'm thinking about compartmentalized module keys having potentially more than one mapped identifier in nested realms.


Resolution protocols can define convenience omission forms (like the behaviours of / vs // in standard schemes).

import nodeProcess from '@nodejs/process'; // -> node:@nodejs/process

// just to make a point, assuming `~/node_modules/process/` is in the module's lookup table
import fallbackProcess from 'process'; // -> node:process then node:@nodejs/process
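The fallback chain in that example could be sketched as follows (the candidate order and the scope name are purely illustrative):

```javascript
// Hypothetical candidate list for a bare specifier: try the plain `node:`
// form first, then the scoped `node:@nodejs/` form.
function candidates(specifier) {
  if (specifier.startsWith('@')) return ['node:' + specifier];
  return ['node:' + specifier, 'node:@nodejs/' + specifier];
}

console.log(candidates('process'));         // ['node:process', 'node:@nodejs/process']
console.log(candidates('@nodejs/process')); // ['node:@nodejs/process']
```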

@SMotaal (Author) commented Jul 7, 2019

@bmeck… Can you propose a way to structure some efforts around what you've stated, please? I am sure I'd want to do some work here (others too).

@bmeck (Member) commented Jul 7, 2019

@SMotaal

A variety of things happen for the non-special schemes, including that same behavior with relative URLs, e.g. with data:, blob:, std:, etc.

See

// Both of these throw: the base URLs have opaque paths, so relative
// resolution fails — just as it does for `node:a`.
console.log(new URL('./a', 'data:text/html;')); // TypeError: Invalid URL

blob_url = URL.createObjectURL(new Blob([]))
console.log(new URL('./a', blob_url)); // TypeError: Invalid URL

The behavior does not mean that browsers cannot handle the scheme, just that it isn't special-cased like http and file.
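A quick way to see that: absolute parsing of unknown schemes succeeds; only relative resolution against their opaque paths fails.

```javascript
// Unknown schemes still parse fine as absolute URLs.
const url = new URL('node:process');
console.log(url.protocol); // 'node:'
console.log(url.pathname); // 'process'

console.log(new URL('data:text/html;').protocol); // 'data:'
```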


I'm not sure I understand the request for structuring efforts. I'm mostly just looking at how the current loaders already use a scheme, like @devsnek points out, and how, if we do something else, we will likely still be using a custom scheme internally — so why not just expose it, rather than having an internal vs user-facing representation? There isn't really effort here to be had.

An example of why loaders care: to ensure user-provided loaders can always point to the builtins. Doing so requires a well-known string for the builtin, and loaders as they currently exist work on URLs, so they want a valid URL.

@SMotaal (Author) commented Jul 7, 2019

@bmeck I am thinking a little less emphasis on what Node needs to do so that Node-specific code works for the builtins-only case… there is more to explore in considering this direction.

We have package exports; we also have the legacy resolution protocols and experimental ones (regardless of whether they were meant to be considered in this more formal capacity) — all of which may or may not be addressed here.

Buy-in from all the players comes in the form of equal opportunity relative to unique complexities and requirements — because if we have a really ideal node:, js:, or std: in isolation, any JS developer will be at the mercy of rewriting, unless import maps are supported (and that is not true innate isomorphism). All those novel ways are cool, but not being able to write decently portable code without knowing which one applies makes them all potentially useless.

@ljharb (Member) commented Jul 7, 2019

Few things are truly portable though, because every environment provides different privileges around things like fs/network/timing/threading/etc.

@SMotaal (Author) commented Jul 7, 2019

@ljharb… true — so the example I think of, if I get this right: I use an importmap (or similar) in a contrived future where I map fs to node:fs|std:fs|undefined

What I am trying to get at is to distinguish specifier portability from portable code — a portable specifier is one that does not unintentionally point to the wrong module.

In the first two scenarios, we assume only that a platform-specific module key identifier must only ever lead to the outcome we expect — ie if a different platform, not a browser and not bound to the browsers' std: specs, decides to make std:fs mean something with completely different semantics, then ECMAScript imho is failing us here — not the browsers, nor the implementer who obviously pushed things just to make a point.

So the point here is that while each prefix and platform can design their own schemes and protocols, ECMAScript specifier behaviours that dictate a node: or std: prefix must either:

  1. conform to X if supported
  2. otherwise throw or fall back in exactly specified ways for any conforming implementation

And with this guarantee, you know that if a platform does not support node: or std: it will need to support undefined. There are many views on what that could be — imho leaving it actually undefined is the only really bad idea. My thinking is that the undefined module is a specifier-less module that, in very specific cases, is the fallback for which every imported name binds to undefined and * binds to a single empty namespace instance.

@MylesBorins (Contributor)

I'm a bit confused by this thread, to be completely honest. Something like node:@nodejs/fs is extremely verbose and seems like the worst of both worlds. I don't see why our decision to use node: internally should have any effect on the discussion. This is not an external API and we can freely change it internally without any issues... if we choose to expose it externally that is great; if we choose to have a different mechanism externally, great... it seems like considering it in the discussion is the cart driving the horse.

I don't see this as schemes vs namespaces but rather schemes as namespaces. This is the direction the ecosystem is moving... which means that at some point in the future we are going to need to support js:builtin... and if node has a different mechanism I think that will be quite odd.

I'm strongly -1 for @nodejs/ at this point.

What is the desired outcome of this thread?

@SMotaal (Author) commented Jul 9, 2019

I am not doing a good job addressing the same questions immediately here… I need to step away and come back to this, most likely for the next meeting.

@jimmywarting commented Nov 3, 2019

Do what the web is doing... use std: for native core Node modules, so that if/whenever you implement similar stuff like kv-storage, it will be seamless to use without any platform-specific code.

Support importing from URLs too; that's how the web & Deno work.

For npm stuff... I think it should start serving as a CDN; Node should cache the file locally (like Deno) and use it from there on.

Importing npm modules from the web does not work...

So I would not want to see any @nodejs or node:@nodejs, or anything starting with npm either, for that matter. Stick to the web specification.

std:x would be nice to see

@ljharb (Member) commented Nov 4, 2019

The web is no longer doing that as part of import maps. Separately, “std” is not a good name from a bikeshedding perspective, and it’s not what TC39 would be going with either.

@SMotaal (Author) commented Nov 4, 2019

@jimmywarting I wanted to ask if you've rolled out code with std:kv-storage — not necessarily in production, but at least with fallback behaviours for SF/FF (evergreen).

I'm just curious to find something of prospect for compat, for brainstorming. Anything that manages to close the double-edged offering behind importmaps+std: would imho be the missing agent for this to potentially pan out. Still, given std: itself and the position of SF/FF and TC39 (and honestly my own personal taste, as an aside), this feels momentary or likely to be revised. We've seen this with HTMLImports.

@jimmywarting commented Nov 4, 2019

@jimmywarting I wanted to ask if you've rolled out code with std:kv-storage — not necessarily in production, but at least with fallback behaviours for SF/FF (evergreen).

No, I haven't — I have not even tried kv-storage.

I didn't know TC39 stopped using it either; I just want stuff to be backed by some specification, instead of inventing something that later becomes Node-specific and not cross Deno/web/Node compatible.

@MylesBorins (Contributor) commented Nov 4, 2019 via email

@devsnek (Member) commented Nov 4, 2019

I'm not 100% sure, but if this thread is talking about node: namespacing imports, discussion should probably go here: nodejs/node#21551

@SMotaal (Author) commented Nov 4, 2019

Just to clarify, those two threads are related, this one was revived from older issues opened in the Modules repo.

@MylesBorins (Contributor)

Closing as we have shipped the node: scheme
