[Perf] SharedWorker Cache #95
Comments
I think locking a data-fetching library to a specific kind of cache is not a good idea. IMO a better approach would be to use configurable cache backends.
@elderapo Maybe for generic clients such as Apollo, where caching is optional. But with gqless, caching is required for the API to work (otherwise re-rendering your app would refetch everything), hence why it should be built in. I guess I could implement configurable backends, but this issue is mostly addressing the cache format.
Hmm, what about browser support? From what I see, ArrayBuffer has even worse support than Proxy, but at least it can be polyfilled. SharedWorker support is bad. I like making things faster, but I just hope that basic cache management won't require any brand-new features that can't be polyfilled, so that someone who doesn't care about sharing the cache between tabs can opt out. WASM is probably not the best option, for many reasons.
@elderapo @samdenty Also, from what I understand, Apollo has its own specific cache structure, and every library like this will have one; you can't really change the Apollo cache when you're using it. What you're talking about, @elderapo, is a persisted cache, which you could store in IndexedDB, or, if you really wanted to, convert to JSON and store in localStorage or any other storage. In the app at my work, we used to store the Apollo cache in localStorage, but it quickly became so big that it exceeded the available localStorage space, so we ended up disabling it. It's not a big deal to store cache data in memory. I know the Apollo team is working on some cache improvements in v3.0, but it's mostly about garbage collection. @samdenty are you also planning to do some garbage collection?
@lukejagodzinski Browser support isn't something I'm prioritising. My target is recent browsers / nodejs.
This is the reason for using an ArrayBuffer with a custom cache representation. It'll be smaller & faster, so you won't need to worry about garbage collection. Persisting the cache will just clone the ArrayBuffer into IndexedDB, which is super fast.
@samdenty ok, that sounds reasonable :) And I agree about not prioritising browser support, as supporting IE just stifles progress. So if it doesn't work in IE, I think most people will be ok with that. We should have killed IE a long time ago :)
With the new version just published, the focus for now will be on getting it completely stable. Performance has improved overall thanks to the new design.
SharedWorker
Motivation
The new architecture for gqless's cache will be SharedWorker-backed.
By utilizing a SharedWorker, cache updates can happen cross-tab.
Although SharedWorkers are limited to the same origin, this could be worked around using an iframe. Imagine cross-domain caches for the same API: TravisCI loading data already cached on github.com.
We don't want to have to clone the cache to/from the main thread on each change, so we will utilize a SharedArrayBuffer.
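A minimal sketch of how a tab might talk to such a worker. The worker filename and message shapes are hypothetical, and sharing a SharedArrayBuffer with a SharedWorker assumes the browser permits it (e.g. under cross-origin isolation):

```ts
// Main thread: connect to the shared cache worker (the filename is hypothetical).
const worker = new SharedWorker('/gqless-cache-worker.js');
worker.port.start();

// Ask the worker for the shared memory backing the cache. Whether a
// SharedArrayBuffer can actually be shared with a SharedWorker depends on the
// browser and on cross-origin isolation; this shows the optimistic path.
worker.port.postMessage({ type: 'REQUEST_CACHE_BUFFER' });

worker.port.onmessage = (event: MessageEvent) => {
  if (event.data.type === 'CACHE_BUFFER') {
    const cache: SharedArrayBuffer = event.data.buffer;
    // Reads happen directly against shared memory - no per-update cloning.
    const view = new DataView(cache);
    console.log('cache size (bytes):', view.byteLength);
  }
};
```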
ArrayBuffer cache representation
Using an ArrayBuffer means we have to create a structured memory representation for the GraphQL data. As GraphQL is a strongly typed language, we can utilize the schema to create a highly efficient representation of the data (without JSON keys).
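To illustrate the idea (this is not the actual gqless wire format), a schema-driven encoder for a hypothetical `User { id, age, name }` type could lay fields out at fixed offsets with a length-prefixed string, storing no keys at all:

```ts
// Hypothetical illustration of a schema-driven layout, not the real gqless format.
// Because the schema says a User is (id: Int, age: Int, name: String),
// both sides agree on offsets and no JSON keys need to be stored.
interface User {
  id: number;
  age: number;
  name: string;
}

function encodeUser(user: User): ArrayBuffer {
  const nameBytes = new TextEncoder().encode(user.name);
  // 4 bytes id + 4 bytes age + 4 bytes name length + name bytes
  const buffer = new ArrayBuffer(12 + nameBytes.byteLength);
  const view = new DataView(buffer);
  view.setInt32(0, user.id);
  view.setInt32(4, user.age);
  view.setUint32(8, nameBytes.byteLength);
  new Uint8Array(buffer, 12).set(nameBytes);
  return buffer;
}

function decodeUser(buffer: ArrayBuffer): User {
  const view = new DataView(buffer);
  const nameLength = view.getUint32(8);
  const name = new TextDecoder().decode(new Uint8Array(buffer, 12, nameLength));
  return { id: view.getInt32(0), age: view.getInt32(4), name };
}
```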
Benefits
This ArrayBuffer can be written directly into IndexedDB without a serialization/deserialization step, which will be blazing fast (see the persistence sketch after this list).
This binary format can be returned directly by the server instead of JSON, allowing for super-small payloads and super-fast cache merges.
SSR hydration will be super fast
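A rough sketch of the IndexedDB persistence step mentioned above. The database name, store name, and snapshot key are assumptions; the point is that the ArrayBuffer is handed to IndexedDB as-is, with no JSON round trip:

```ts
// Persist the raw cache bytes; IndexedDB can store ArrayBuffers directly
// via structured cloning, so no (de)serialization step is needed.
// Database, store, and key names here are hypothetical.
function persistCache(cacheBuffer: ArrayBuffer): Promise<void> {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open('gqless', 1);

    open.onupgradeneeded = () => {
      open.result.createObjectStore('cache');
    };

    open.onsuccess = () => {
      const db = open.result;
      const tx = db.transaction('cache', 'readwrite');
      tx.objectStore('cache').put(cacheBuffer, 'snapshot');
      tx.oncomplete = () => {
        db.close();
        resolve();
      };
      tx.onerror = () => reject(tx.error);
    };

    open.onerror = () => reject(open.error);
  });
}
```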
Why not use X
We could use existing tech like protobuf, but we can get better performance and smaller bundle sizes from a purpose-built format.
Additionally, it carries concepts like RPC, which aren't required here.
WASM
The SharedWorker will be responsible for fetching/merging cache updates. We could implement this logic in either JS or Rust.
WASM would likely be much faster, but comes with large payload sizes. TBD.
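For reference, loading a Rust-compiled merge routine inside the worker could look roughly like this; the module path and export name are assumptions:

```ts
// Inside the worker: load a (hypothetical) Rust-compiled merge module.
// WebAssembly.instantiateStreaming compiles the module while it downloads.
async function loadMergeModule() {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch('/gqless-merge.wasm')
  );
  // "merge_response" is an assumed export name, purely for illustration.
  return instance.exports.merge_response as (ptr: number, len: number) => void;
}
```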
Optimized server responses
As we now have a binary representation for GraphQL data, servers can return raw bytes - instead of JSON responses.
This will need to be baked into servers such as Apollo Server, as the representation requires the schema.
The client adds the
Accept-Encoding: application/gqless
header, which the server can either respect or ignore. For servers that don't implement the binary format, gqless will still support JSON responses.
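A sketch of the client side of that negotiation, under the assumption that a binary response is signalled via its Content-Type:

```ts
// Request the binary format, falling back to JSON when the server ignores
// the header. The endpoint and Content-Type check are assumptions.
async function executeQuery(endpoint: string, body: string) {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Header as proposed above. Note: browsers treat Accept-Encoding as a
      // forbidden header in fetch(), so in practice a plain Accept header
      // may be needed.
      'Accept-Encoding': 'application/gqless',
    },
    body,
  });

  const contentType = response.headers.get('Content-Type') ?? '';
  return contentType.includes('application/gqless')
    ? response.arrayBuffer() // binary cache representation
    : response.json();       // JSON fallback
}
```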
Compatibility
As SharedWorkers aren't available inside Node, we need a fallback that works without them.
This fallback will have to run on the main thread, which means we don't technically need SharedArrayBuffer.
It would make sense to use this fallback whenever SharedWorker/SharedArrayBuffer isn't available, so all that should be required is a plain ArrayBuffer.
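A sketch of how that feature detection could look; the backend names are purely illustrative:

```ts
// Choose the cache backend based on what the environment supports.
// The backend names here are hypothetical; only a plain ArrayBuffer is
// required for the main-thread fallback (Node, older browsers).
const canUseSharedWorkerCache =
  typeof SharedWorker !== 'undefined' &&
  typeof SharedArrayBuffer !== 'undefined';

const cacheBackend = canUseSharedWorkerCache
  ? 'shared-worker' // cross-tab cache backed by SharedArrayBuffer
  : 'main-thread';  // same binary format, plain ArrayBuffer on the main thread
```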