
Standardize JavaScript Functions API #44

Open
Ethan-Arrowood opened this issue Jan 17, 2023 · 15 comments

Comments

@Ethan-Arrowood
Contributor

Problem

Today, every competing JavaScript functions API is different enough that we end up with lower developer productivity and vendor lock-in: lower developer productivity because developers have to learn multiple ways to write the JavaScript function code, and may have to write more complex code to avoid lock-in. Organizations wanting to leverage multiple functions providers or move from one provider to another incur significant additional cost.

Goal

The goal is to define a JavaScript functions API that avoids vendor lock-in and facilitates developer productivity, while being general enough to allow different implementations and optimizations behind the scenes.

Things we should try to standardize

  • function signature (including parameters and what's available on them)
    • how functions get events (CloudEvents)?
  • key supporting apis like
    • log method signature
    • exporting the function
    • how to report an error

Things we should not try to standardize

  • JS framework for invoking the functions
    • HTTP framework that is used (Fastify, Express, etc.)
  • underlying buildpack/image that ends up running
  • accessing platform-specific APIs offered by the vendor platforms (e.g., Google APIs)
  • output format/how you view logs
  • how you monitor functions

Scenarios

Scenario 1: Writing a HelloWorld function on one vendor's platform and moving it to other vendors' platforms
You shouldn't have to change the code.
Build steps and/or configuration may have to change.

Scenario 2: Writing for one vendor and moving to another (e.g., using Google Cloud Functions and moving to Cloudflare Workers)
If the code does not use vendor-specific APIs, you should not have to change it.
If the code uses vendor-specific APIs, you may need to change it if you also need to use a different vendor for those calls.
Build steps and/or configuration may have to change.

Originally authored by @lholmquist

@jasnell
Contributor

jasnell commented Jan 17, 2023

To start filling this out, we should get a comparison of how a simple hello world looks across various platforms.

For workerd/cloudflare workers, we have two API models (one legacy and one preferred that we're actively moving to):

Legacy (service worker style):

// global addEventListener
addEventListener('fetch', (event) => {
  // request is a property on event (e.g. event.request), as is `waitUntil`
  // bindings (additional resources/capabilities configured for the worker) are injected as globals
  event.respondWith(new Response("Hello World"));
});

New (ESM worker style):

export default {
  async fetch(request, env, context) {
    // bindings are available through `env`, `waitUntil` is available on `context`
    return new Response("Hello World");
  }
}

We use console.log(...) for all logging. We do have a more structured logging API configurable through bindings, but it serves a more specific purpose (metrics).

Error reporting is pretty straightforward. There's really nothing fancy. We throw synchronous errors and report unhandled promise rejections to the 'unhandledrejection' event handler using the global addEventListener with either worker model.
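
For illustration, a minimal sketch of such a handler; the event shape here is an assumption following the browser's PromiseRejectionEvent:

addEventListener('unhandledrejection', (event) => {
  // `event.reason` carries the rejection value (assumed shape,
  // mirroring the browser's PromiseRejectionEvent)
  console.error('Unhandled rejection:', event.reason);
});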

Overall, our strong preference is to stick with the ESM worker style as experience has shown us time and again that the service worker model is pretty limited.

The Request and Response objects here are the standard fetch APIs. In the service worker model, we use the standard FetchEvent. For the ESM worker model, env and context are platform-defined, non-standard APIs, although context mimics the FetchEvent API.

Where things are going to get complicated here is that in any function more complicated than a simple "Hello World", there are most likely a number of vendor/platform specific additional APIs in use (e.g. KV, S3, etc) that practically end up making portability difficult to impossible, so I'm concerned about whether the "moving to other vendors platforms" without changing the code is an achievable goal.

/cc'ing @kentonv and @harrishancock for visibility.

@styfle

styfle commented Jan 18, 2023

I'm concerned about whether the "moving to other vendors platforms" without changing the code is an achievable goal.

I think you can get really far with only Request and Response. Perhaps the standard should focus on that part first.

Using your ESM example:

export default {
  async fetch(request) {
    return new Response("Hello World");
  }
}

Using an example from Vercel:

export const config = { runtime: 'edge' };

export default (request) => {
  return new Response(`Hello, from ${request.url}`);
};

These look really similar with a slightly different export.
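
As a sketch of just how close they are, a thin adapter could bridge the two shapes; the handler here is illustrative, not an API from either platform:

// Hypothetical adapter: wrap a Vercel-style default function in a
// Workers-style object export, so one handler body serves both shapes.
const handler = (request) => new Response(`Hello, from ${request.url}`);

export default {
  async fetch(request) {
    return handler(request);
  }
};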

@QuiiBz

QuiiBz commented Jan 18, 2023

To add another example, the syntax for Lagon is similar to Vercel, except that the function is a named export:

export function handler(request) {
  return new Response("Hello World");
}

Logging is the same (using console.log / console.debug / console.warn / console.error). I believe this syntax is already used by most (if not all) of the runtimes out there.

@ascorbic

The syntax for Netlify Edge Functions is very similar to Vercel and Lagon, and both of those examples would work unchanged on Netlify.

This is the standard signature for Netlify:

export default async function handler(request: Request, context: Context) {
    return new Response("Hello world")
}

The Request and Response are standard Deno objects. The Context object is optional, and provides things like geo and ip, as well as a next() function (which itself returns a Response). We've tried to keep everything as standard as possible, putting any non-standard fields on the context object instead of adding anything to the request or response. console works as expected.
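
A minimal sketch of using those context fields, assuming the ip and next() members described above (their exact shapes are an assumption):

export default async function handler(request, context) {
    // Non-standard data stays on context, never on the request/response
    console.log(`Client IP: ${context.ip}`)
    // Hand the request to the next handler in the chain
    return context.next()
}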

Netlify also supports an optional config export similar to Vercel but in our case it's currently just used for mapping the function to paths.

We're open to adding more fields to Request if they are standardised but would suggest that we avoid using these objects for non-standard extensions if possible.

@lholmquist

For completeness, here is the current syntax we use at Red Hat for a "normal" function

/**
 * Your HTTP handling function, invoked with each request. This is an example
 * function that echoes its input to the caller, and returns an error if
 * the incoming request is something other than an HTTP POST or GET.
 *
 * It can be invoked with 'func invoke'
 * It can be tested with 'npm test'
 *
 * @param {Context} context a context object.
 * @param {object} context.body the request body if any
 * @param {object} context.query the query string deserialized as an object, if any
 * @param {object} context.log logging object with methods for 'info', 'warn', 'error', etc.
 * @param {object} context.headers the HTTP request headers
 * @param {string} context.method the HTTP request method
 * @param {string} context.httpVersion the HTTP protocol version
 * See: https://github.com/knative-sandbox/kn-plugin-func/blob/main/docs/guides/nodejs.md#the-context-object
 */
const handle = async (context) => {
  // YOUR CODE HERE
  context.log.info(JSON.stringify(context, null, 2));

  // If the request is an HTTP POST, the context will contain the request body
  if (context.method === 'POST') {
    return {
      body: context.body,
    }
  // If the request is an HTTP GET, the context will include a query string, if it exists
  } else if (context.method === 'GET') {
    return {
      query: context.query,
    }
  } else {
    return { statusCode: 405, statusMessage: 'Method not allowed' };
  }
}

// Export the function
module.exports = { handle };

The only parameter here is the context object, which provides a few pieces of information.

If you need a function that can also handle CloudEvents, then an extra event param is used:

const { CloudEvent, HTTP } = require('cloudevents');

/**
 * Your CloudEvent handling function, invoked with each request.
 * This example function logs its input, and responds with a CloudEvent
 * which echoes the incoming event data
 *
 * It can be invoked with 'func invoke'
 * It can be tested with 'npm test'
 *
 * @param {Context} context a context object.
 * @param {object} context.body the request body if any
 * @param {object} context.query the query string deserialized as an object, if any
 * @param {object} context.log logging object with methods for 'info', 'warn', 'error', etc.
 * @param {object} context.headers the HTTP request headers
 * @param {string} context.method the HTTP request method
 * @param {string} context.httpVersion the HTTP protocol version
 * See: https://github.com/knative-sandbox/kn-plugin-func/blob/main/docs/guides/nodejs.md#the-context-object
 * @param {CloudEvent} event the CloudEvent
 */
const handle = async (context, event) => {
  // YOUR CODE HERE
  context.log.info("context");
  context.log.info(JSON.stringify(context, null, 2));

  context.log.info("event");
  context.log.info(JSON.stringify(event, null, 2));

  return HTTP.binary(new CloudEvent({
    source: 'event.handler',
    type: 'echo',
    data: event
  }));
};

module.exports = { handle };

@kentonv

kentonv commented Jan 18, 2023

I guess there are a few main ways Cloudflare's interface is unusual here. Let me try to explain our reasoning.

env

Cloudflare's env contains "environment variables", which we also often call "bindings". But our design here is quite different from "environment variables" in most systems. Cloudflare's bindings implement a capability-based security model for configuring Workers. This is a central design feature of the whole Workers platform.

Importantly, unlike most systems, environment variables are not just strings; they may be arbitrary objects representing external resources. For example, if the worker is configured to use a Workers KV namespace for storage, then the binding's type will be KvNamespace, which has methods get(key), put(key, value), and delete(key). Another example of a complex binding is a "service binding", which points to another Worker. A service binding has a method fetch(), which behaves like the global fetch, except all requests passed to it are delivered to the target worker. (In the future, service bindings could have other methods representing other event types that Workers can listen for.)

So for example, if I want to load the key "foo" from my KV namespace, I might write:

let value = await env.MY_KV.get("foo");

We expect to add more kinds of bindings over time, which could have arbitrary APIs. Obviously those APIs aren't going to be standardized here. But, I think the concept of env could be.

Why not have env be just strings?

Most systems that have environment variables only support strings, and if the environment variable is meant to refer to an external resource, it must contain some sort of URL or other identifier for that resource. For example, you could imagine KV namespaces being accessed like:

let ns = KvNamespace.get(env.KV_NAME);
let value = await ns.get("foo");

However, this opens a can of worms.

First, there is security: What limitations exist on KvNamespace.get()? Can the Worker pass in any old namespace name it wants? Does every Worker then have access to every KV namespace on the account? Do we need to implement a permissions model, whereby people can restrict which namespaces each Worker can access? Will any users actually configure these permissions or will they mostly just leave it unrestricted?

Second, there is the problem that this model seems to allow people to hard-code namespace names, without an environment variable at all. But when people do that, it creates a lot of problems. What if you want to have staging vs. production versions of your Worker which use different namespaces? How do developers test the Worker against a test namespace? You really want to force people to use environment variables for this because it sets them up for success.

Third, this model allows the system to know which Workers are attached to which resources. You can answer the question, "What Workers are using this KV namespace?" If the user tries to delete a namespace that is in use, we can stop them. Relatedly, we can make sure that the user cannot typo a namespace name – when they configure the binding, we only let them select from valid namespaces.

By having the environment variable actually be the object and not just an identifier for it, we nicely solve all these problems. Plus, the application code ends up shorter.

Why not make env globally available?

A more common way to expose environment variables is via a global API, e.g. process.env in Node.

The problem with this approach is that it is not composable. Composability means: I should be able to take two Workers and combine them into a single Worker, without changing either worker's code, just by placing a new wrapper around them. For example, say I have one worker that serves static assets and another that serves my API, and for whatever reason I decide I'd rather combine them into a single worker that covers both. I should be able to write something like:

import assets from "static-assets.js";
import api from "api.js";

export default {
  async fetch(req, env, ctx) {
    let url = new URL(req.url);
    if (url.pathname.startsWith("/api/")) {
      return api.fetch(req, env, ctx);
    } else {
      return assets.fetch(req, env, ctx);
    }
  }
}

Simple enough! But what if the two workers were designed to expect different bindings, and the names conflict between them? For example, say that each sub-worker requires a KV namespace with the binding name KV, but these refer to different KV namespaces. No problem! I can just remap them when calling the sub-workers:

    if (url.pathname.startsWith("/api/")) {
      return api.fetch(req, {KV: env.API_KV}, ctx);
    } else {
      return assets.fetch(req, {KV: env.ASSETS_KV}, ctx);
    }

But if env were some sort of global, this would be impossible!

Arguably, an alternative way to enable composability would be to wrap the entire module in a function that takes the environment as a parameter. But, I felt that passing it as a parameter to the event handler was "less weird".
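
That alternative might have looked something like this (purely hypothetical; this is not the Workers API):

// Hypothetical module-as-function design: the module exports a factory
// taking env, and composition calls the factory with a rewritten env.
export default (env) => ({
  async fetch(req, ctx) {
    let value = await env.MY_KV.get("foo");
    return new Response(value);
  }
});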

Exporting functions vs. objects

Many other designs work by exporting a top-level function, like Vercel's:

export default (request) => { ... }

But Workers prefers to export an object with a fetch method:

export default {
  async fetch(req, env, ctx) { ... }
}

Why?

In Workers, we support event types other than HTTP. For example, Cron Triggers deliver events on a schedule. Scheduled events use a different function name:

export default {
  async scheduled(controller, env) { ... }
}

We also support queue and pubsub events, and imagine adding support in the future for things like raw TCP sockets, email, and so on. A single worker can potentially support multiple event types.

By wrapping the exports in an object, it becomes much easier to programmatically discover what event types a worker supports. A function is just a function, there's not much you can say about it. But an object has named properties which can be enumerated, telling us exactly what the worker supports. When you upload a Worker to Cloudflare, we actually execute the Worker's global scope once in order to discover what handler it exports, so that our UI can guide the user in configuring it correctly for those events.
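
A sketch of what that discovery could look like from the platform side (the module path and variable name are illustrative):

import worker from "./my-worker.js";

// The exported object's property names enumerate the event types the
// worker handles, e.g. ["fetch", "scheduled"].
const handlerNames = Object.keys(worker);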

Why not use the export name for that? Why wrap in an object?

You could argue we should just use export names instead:

export async function fetch(req, env, ctx) {...};

The function is exported with the name fetch, therefore it is an HTTP handler.

The problem with this is that it necessarily means a Worker can only have one HTTP handler. We actually foresee the need for multiple "named entrypoints". That is, in the future, we plan to support this:

export default {
  async fetch (req, env, ctx) { … }
}

export let adminInterface = {
  async fetch (req, env, ctx) { … }
}

Here, we have a worker that exports two different HTTP handlers. The default one probably serves an application's main web interface. The adminInterface export is an alternate entrypoint which serves the admin interface. This alternate entrypoint could be configured to sit behind an authorization layer like Cloudflare Access. This way, the application itself need not worry about authorizing requests and can just focus on its business logic.

What is context?

It looks like several designs feature a "context" object, but the purpose of the object differs.

In Workers' case, the purpose of the context object is to provide control over the execution environment of the specific event. The most important method it provides is waitUntil(), which has similar meaning to the Service Workers standard ExtendableEvent.waitUntil(): it allows execution to continue for some time after the event is "done", in order to perform asynchronous tasks like submitting logs to a logging service.

All event types feature the same context type. This makes it not a great place to put metadata about the request, since metadata probably differs for different event types. For example, the client IP address (for HTTP requests) is not placed in context. Instead, we have a non-standard field request.cf which contains such metadata. (I am not extremely happy about request.cf, but repeated attempts to design something better have always led to dead ends so far.)
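
A minimal sketch of the waitUntil() pattern described above (the logging endpoint URL is a placeholder):

export default {
  async fetch(request, env, ctx) {
    // Let the log submission finish after the response is returned;
    // https://logs.example/ingest is a placeholder endpoint.
    ctx.waitUntil(fetch("https://logs.example/ingest", {
      method: "POST",
      body: JSON.stringify({ url: request.url }),
    }));
    return new Response("Hello World");
  }
}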

Why not put env inside context?

We could, but this poses some challenges to composability. In order for an application to pass a rewritten environment to a sub-worker, it would need to build a new context object. Applications cannot construct the ExecutionContext type directly. They could create an alternate type that emulates its API and forwards calls to the original context, but that seems tedious. I suppose we could provide an API to construct ExecutionContext based on an original context with an alternate env. But it seemed cleaner to just keep these separate: env contains things defined by the application, context contains things that come from the platform.

@mhdawson

mhdawson commented Jan 18, 2023

Some notes on the AWS model that I took a little while back.

AWS Function input/output model:

From: https://docs.aws.amazon.com/lambda/latest/dg/nodejs-handler.html function signatures are:

  • async function(event, context) or
  • function (event, context, callback)

Parameters are:

  • event - The invoker passes this information as a JSON-formatted string when it calls Invoke, and the runtime converts it to an object.

  • The second argument is the context object, which contains information about the invocation, function, and execution environment. In the docs' example, the function gets the name of the log stream from the context object and returns it to the invoker.

  • The third argument, callback, is a function that you can call in non-async handlers to send a response. The callback function takes two arguments: an Error and a response. When you call it, Lambda waits for the event loop to be empty and then returns the response or error to the invoker. The response object must be compatible with JSON.stringify.

For asynchronous function handlers, you return a response, error, or promise to the runtime instead of using callback.

The response object must be compatible with JSON.stringify, as JSON is returned.
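
Concretely, the two signatures look like this (a minimal sketch following the linked docs; context.logStreamName is the log stream name mentioned above):

// Async style: return a JSON.stringify-compatible value (or a promise).
exports.handler = async (event, context) => {
  return { logStream: context.logStreamName, received: event };
};

Or, callback style:

// Callback style: pass an Error or a JSON-compatible response.
exports.handler = (event, context, callback) => {
  callback(null, { received: event });
};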

Logging

Logging uses console.log and console.error.

Error handling

https://docs.aws.amazon.com/lambda/latest/dg/nodejs-exceptions.html
Errors are still returned as JSON.

@mhdawson

Reading through some of the descriptions, they seem to assume HTTP request/response. As mentioned for Cloudflare Workers, I think we need an API which supports other types of events for functions as well (not that the APIs described can't do that, just not sure if that was considered).

Ideally the function itself would not need to know if an event was from HTTP, pub/sub, or something else; all of that would be up to the serverless framework to handle. The function would get an event in an expected format and generate a result in an expected format. The plumbing and configuration that gets the event to the function and the result to the right place (next function, back to the user, etc.) would be part of the specific setup for the platform on which the function is running.

I think the AWS model is effectively JSON in, JSON out, which supports that approach.

The data received could have fields which indicate what kind of request it is, if that is necessary, and that could be used to return the result in a specific way; but at the highest level the API might not need to be tied to that. We could then define some specific in/out formats as a second level if that is helpful (for example, HTTP requests/responses).
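
As a purely hypothetical sketch of that idea, where the envelope's field names (type, data) are invented for illustration:

// Hypothetical protocol-agnostic handler: JSON in, JSON out. The
// platform decides how events arrive and where results are routed.
exports.handle = async (event) => {
  if (event.type === 'http') {
    return { type: 'http', data: { status: 200, body: 'Hello World' } };
  }
  return { type: 'echo', data: event.data };
};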

@kentonv

kentonv commented Jan 18, 2023

@mhdawson HTTP isn't necessarily just request/response, though. The request and response bodies are streams, and the streaming nature is extremely important for a lot of use cases, such as proxying, SSR, etc. And then there's WebSockets. I don't see how these could be represented using plain JSON -- you really need an API designed around the specific shape of the protocol.

@mhdawson

@kentonv I'm sure you understand the use cases/current implementations much better than me. I'll have to read up more/learn more about those use cases in the functions context to understand better.

I can understand that streaming might need a different API, but I still wonder if it needs to be specific to the protocol versus a class of protocols (for example, an API for request/response, one for streaming, etc.).
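
For instance, such a class-based API might look like this (entirely hypothetical handler names, not any platform's API):

export default {
  // Request/response class of protocols (HTTP, RPC, ...)
  async respond(request, env, ctx) {
    return new Response("Hello World");
  },
  // Streaming class of protocols (WebSockets, raw TCP, ...)
  async stream(connection, env, ctx) {
    // read from / write to the connection here
  },
}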

@jasnell
Contributor

jasnell commented Jan 19, 2023

@mhdawson

Ideally the function itself would not need to know if an event was from HTTP,

Unfortunately I don't think that's practical. Each of our exported events shares the env and context constructs but varies significantly in other aspects, necessarily so. Abstraction in this case is not ideal. I think we should focus on standardizing the HTTP handler case first, keeping in mind that we have other cases that also need to be supported, but stopping short of trying to define a function signature that works for all possible cases.

@mhdawson

I agree on focusing on the http case first as I think that is the most common, but keeping in mind that other types need to be supported as well.

@mhdawson

From discussion in the meeting today, the next steps are to:

  • Create repo - in personal space, may later be transferred to the WinterCG org
    1. README.md - problem statement
    2. Subdirectory with docs per existing implementation
    3. PR starting the API definition
    4. Issue: where would the standard live?
      • Daniel: ECMA might want to host it, but that would restrict participation to ECMA members and invited experts; Cloudflare/Fastly are not members of ECMA
      • Linux Foundation SDF
      • OpenJS Foundation
      • W3C technical report or something like that?

@lholmquist

I've created the repo here: https://github.com/nodeshift/js-functions-standardization

Nothing added yet

@lholmquist

Added an implementations section to the repo: https://github.com/nodeshift/js-functions-standardization/tree/main/docs/implementations

These were basically just a copy/paste of the above comments into a doc for each platform.
