-
In this morning's Moonwalk meeting, @darrelmiller asked if the functional areas would be sections in the table of contents, or if the relationship to the spec structure is less direct. I replied that in my ideal view, at least some of them would be entirely separate documents that could, if necessary, be released on their own schedule. There are a few ideas wrapped up in that answer:
This does not mean we have to literally have separate specs. There are appealing things about that model, and it has worked very well for CSS. But most of the benefits can be achieved whether they are different specs or different sections or whatever. It's just that the lines are very clear if we have separate documents: it's much harder to give in to the temptation to take shortcuts. If we were to have separate documents, it would not necessarily be one per area. I could see:
After that it gets less clear. It would be easy to make an Area 2 (Data Modeling) spec, and we would need to do that if we want to split up the rest, as they all depend to some degree on data modeling. We've already talked about splitting shape and deployment, including possibly as separate specs (one of the inspirations for this whole idea - see #66). And as noted above, I think security has a strong case for being separate. But with some of these the interfaces are less clear, as are the correlations to tools. Areas 0 and 1 have obvious target tools or libraries analogous to things that exist today. Before we decide to break up the spec beyond that (if we even do that much), we should be clear on the benefit to tool developers and users.
-
Looks good, @handrews. I did a small edit to fix a typo.
-
OK, I am going to take a stab at a tough one: securing an API. I'll open a new discussion to capture some of the work, ideas, etc. I'll stick to using YAML.
-
Is there an issue/discussion on replacing JSON Schema? |
-
In the June 18th meeting (#134) I presented several functional areas as a way to organize the spec and align it with tooling and end-user use cases. The seven areas were sufficiently well-received to warrant further development, so I'm starting a discussion for them.
Functional Areas
Here is the list, slightly modified based on discussions in the call:
0. Resolving import/reference IRIs to URLs and retrieving the target documents
This is "0" because it's a prerequisite, but is in many ways the easiest to split off into a 3rd-party tool. This replaces the current notion of reference "resolvers" (which are often reference removers, which is not always possible and AFAICT usually implemented incorrectly for 3.1) by drawing the boundary in the right place: between the abstract identity (IRI) and a concrete and secure location.
Input:

Extensibility:
- Handling of different IRI schemes (file: and https: being the most obvious / common). file: support will likely be built-in for most implementations, unless they run in an environment without a filesystem.

Output:
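To make the identity/location boundary and the scheme-handler extensibility point concrete, here is a minimal TypeScript sketch, assuming a plugin-style design. Every name in it (IriResolver, SchemeHandler, and so on) is hypothetical illustration, not proposed spec text.

```typescript
// Hypothetical sketch: resolving an abstract IRI to a concrete URL and
// retrieving the target document. Names are illustrative only.

// A handler knows how to retrieve documents for one IRI scheme.
interface SchemeHandler {
  scheme: string; // e.g. "file" or "https"
  retrieve(url: URL): Promise<string>; // raw document text
}

// Maps abstract identity (IRI) to a concrete, vetted location (URL).
// This is the boundary described above: identity on one side,
// location and retrieval policy on the other.
class IriResolver {
  private handlers = new Map<string, SchemeHandler>();
  // Explicit identity-to-location mappings, e.g. from configuration.
  private locations = new Map<string, string>();

  registerHandler(h: SchemeHandler): void {
    this.handlers.set(h.scheme, h);
  }

  mapLocation(iri: string, url: string): void {
    this.locations.set(iri, url);
  }

  async resolve(iri: string): Promise<string> {
    // Fall back to treating the IRI itself as a locator if no mapping exists.
    const url = new URL(this.locations.get(iri) ?? iri);
    const handler = this.handlers.get(url.protocol.replace(":", ""));
    if (!handler) throw new Error(`No handler for scheme: ${url.protocol}`);
    return handler.retrieve(url);
  }
}
```

The key property is that resolution never rewrites or removes references; it only answers "where does this identity live, and what is its content?"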
Getting this area and its boundaries correct is essential to avoid the pathological non-interoperable mess we currently have with referencing in 3.x. For organizations that have to create descriptions that work across multiple toolchains, this lack of interoperability is one of their most severe challenges.
1. Parsing, manipulating, and serializing OpenAPI Descriptions (OADs)
The critical aspect here is defining a language/environment-neutral representation of a parsed and resolved OAD. This should include both a round-trip safe form that preserves file and order (e.g. line number) mappings, and an efficient form that does not track such information because the OAD will either not be serialized after being parsed, or because there is no need for round-trip safety.
This representation is not any sort of attempt at a generic abstraction of HTTP APIs that would be independent of the OAS. It's just a way to allow testing parsers for compliance with the specification and to support interoperable ways of working with parsed OADs. This allows different tools to share the same parser, even one made by a different tooling vendor.
Input:
Interfaces:
Output:
All use cases require at least the non-round-trip-safe parsing aspect of this area.
Round-trip safety is essential for use cases like building an editor / IDE for OADs. Depending on the exact use case, OAD pipelines may or may not need the round-trip-safe form, but definitely need the manipulation and serialization aspects.
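As a rough illustration of the two forms, here is a hypothetical TypeScript shape for a parsed node; the optional source-mapping fields are what separate the round-trip-safe form from the efficient form. None of these names come from any spec text.

```typescript
// Hypothetical sketch of a parsed-OAD node. Illustrative names only.

// Present only in the round-trip-safe form: where the node came from.
interface SourceInfo {
  documentUri: string; // which file/document the node was parsed from
  line: number;        // 1-based line number in that document
  column: number;
  keyOrder?: string[]; // original key order, for faithful re-serialization
}

type Scalar = string | number | boolean | null;

interface OadNode {
  value: Scalar | OadNode[] | { [key: string]: OadNode };
  source?: SourceInfo; // omitted entirely in the efficient form
}

// The efficient form is the same structure with `source` stripped,
// for when the OAD will not be serialized again after parsing.
function toEfficientForm(node: OadNode): OadNode {
  if (Array.isArray(node.value)) {
    return { value: node.value.map(toEfficientForm) };
  }
  if (node.value !== null && typeof node.value === "object") {
    const out: { [key: string]: OadNode } = {};
    for (const [k, v] of Object.entries(node.value)) out[k] = toEfficientForm(v);
    return { value: out };
  }
  return { value: node.value };
}
```

An editor or IDE would keep the source info populated to map diagnostics back to files and lines; a pipeline that never re-serializes could strip it.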
2. Modeling data
All data is modeled in the same way, regardless of whether it is:
This likely involves an extensible registry defining how to map non-JSON-compatible media types and grammars to Schema Objects or whatever other Objects are involved (e.g. Encoding Objects). We do not want to attempt to define anything like a generic mapping between ABNF and Schema Objects, just to make that perfectly clear.
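A registry along those lines might look like the following sketch; the names and the mapToSchema signature are placeholders for whatever the spec would actually define, not a proposal.

```typescript
// Hypothetical sketch of an extensible media-type mapping registry.
// "SchemaObject" is a stand-in for the real Object(s) involved.

type SchemaObject = { [keyword: string]: unknown };

interface MediaTypeMapping {
  mediaType: string; // e.g. "application/xml"; may end in "/*"
  // How content of this media type corresponds to Schema Objects.
  // The real interface would be defined by the spec; this is a placeholder.
  mapToSchema(content: unknown): SchemaObject;
}

class MediaTypeRegistry {
  private mappings: MediaTypeMapping[] = [];

  register(mapping: MediaTypeMapping): void {
    this.mappings.push(mapping);
  }

  lookup(mediaType: string): MediaTypeMapping | undefined {
    // Exact match first, then wildcard (e.g. "text/*") as a fallback.
    return (
      this.mappings.find((m) => m.mediaType === mediaType) ??
      this.mappings.find(
        (m) =>
          m.mediaType.endsWith("/*") &&
          mediaType.startsWith(m.mediaType.slice(0, -1))
      )
    );
  }
}
```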
This is where a lot of the most difficult design decisions need to be made. The Objects currently involved in this take a variety of different strategies, and it is not clear how to reduce the duplication (or, more confusingly, near-duplication) in the current arrangement without getting too generic to be useful in the concrete usages that we need. Those Objects include:
- the Media Type Object (schema, encoding, and a bit of metadata)
- the Parameter Object (schema plus serialization keywords such as explode)
- the Encoding Object (which applies to all multipart types, despite what the text in versions to date has said)
- the XML Object
- the Discriminator Object

It's notable that three different ways of augmenting Schema Objects appear here: wrapping the schema from the outside (Media Type Object, Parameter Object), paralleling the schema structure (Encoding Object), and embedding information in subschemas (XML Object, Discriminator Object).
What is clear from the recent 3.x patch release work is that data model mapping and encoding / escaping need to be orthogonal, because (as much as I love RFC 6570 on its own) tangling these up creates incredibly complex, subtle, and error-prone interactions.
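One way to picture that orthogonality (a sketch under assumed names, not a proposal): keep "what layout does the data model call for" and "how are the data atoms escaped for their destination" as two independent steps.

```typescript
// Sketch only: keeping "what the data model says" separate from
// "how it is escaped for its destination". Names are illustrative.

// Step 1 (data model): the logical layout of a parameter, no escaping yet.
type LaidOutParam = { name: string; values: string[]; explode: boolean };

// Step 2 (encoding): escape only the data atoms, never the syntax
// (the "&", "=", and non-explode "," separators) for a query string.
function encodeQueryParam(p: LaidOutParam): string {
  const enc = encodeURIComponent;
  return p.explode
    ? p.values.map((v) => `${enc(p.name)}=${enc(v)}`).join("&")
    : `${enc(p.name)}=${p.values.map((v) => enc(v)).join(",")}`;
}

// encodeQueryParam({ name: "tag", values: ["a,b", "c"], explode: true })
//   -> "tag=a%2Cb&tag=c"
// encodeQueryParam({ name: "tag", values: ["a,b", "c"], explode: false })
//   -> "tag=a%2Cb,c"  (commas in data escaped; the separator comma is syntax)
```

Because the layout step keeps data atoms separate from syntax, the encoding step can escape a comma inside a value without touching the comma that acts as a separator, which is exactly the distinction that RFC 6570-style expansion blurs.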
What is not clear is how to reduce all of these different ways of augmenting schemas to something more consistent and coherent. Nor is it clear that we should continue to (ab)use a constraint validation and annotation system by treating it as a data definition system.
We also need to be sure we can express data semantics sufficiently to meet our declared ambitions around semantics.
3. API Shape: Modeling HTTP interactions
This is mostly where the concept of operation "signatures" lives, although it will depend on data modeling and possibly other things as well.
We need to:
Output:
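As a purely illustrative sketch of where this could head (none of these names come from any proposal), an operation "signature" might bundle an interaction's inputs and outputs independently of where they appear on the wire:

```typescript
// Hypothetical sketch of an operation "signature". Illustrative only;
// SchemaObject stands in for whatever data-modeling construct Area 2
// ends up defining.

type SchemaObject = { [keyword: string]: unknown };

interface OperationSignature {
  method: string;       // e.g. "GET"
  pathTemplate: string; // e.g. "/pets/{petId}"
  inputs: {
    name: string;
    location: "path" | "query" | "header" | "cookie" | "body";
    schema: SchemaObject;
    required: boolean;
  }[];
  outputs: {
    status: string;        // e.g. "200" or "4XX"
    schema?: SchemaObject; // absent for bodiless responses
  }[];
}
```

Comparing two such signatures (same method, template, and input set) is one possible building block for "same shape" reasoning across OADs.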
4. Securing an API
There are many people who can define this area better than I can.
Output:
5. Deploying an API
6. Organizational and presentation metadata
Tooling and User Use Cases
The point of this is to align areas of the spec with how tools are constructed and used. The current misalignment of ref "resolvers"/removers shows the problems that are caused when we fail to do this up front. Tools that work with the OADs (e.g. editors), generate human-oriented documentation, generate code from data models, or manage the API at runtime use different sets of areas, and should be able to ignore areas that are irrelevant.
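As a strawman for that discussion, here is one hedged guess at how the tool categories named above might correlate with the areas; the numbers refer to the list in this post, and the exact mapping is precisely what needs to be worked out.

```typescript
// Speculative mapping from tool categories to the functional areas they
// would need. Area numbers refer to the list above; this is a starting
// point for discussion, not a settled assignment.

const areasByTool: Record<string, number[]> = {
  "OAD editor / IDE":             [0, 1, 2, 3, 4, 5, 6], // needs round-trip safety
  "human-oriented documentation": [0, 1, 2, 3, 4, 6],
  "data-model code generation":   [0, 1, 2],
  "runtime API management":       [0, 1, 3, 4, 5],
};
```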
Discussion here should, among other things, focus on making those tool and user use cases, and their correlation (or not) with the areas, more clear.