Grouping requests by datatype #336
I don't have much time to go deep into your post right now, but I hope I got the whole idea. Generally, with normalisation all data updates will be automatic, apart from array mutations (and only top-level ones; I will show you what I mean). The issue is: you can have the queries ALL_BOOKS and FAVOURITE_BOOKS, and if you have an "add book" mutation, how can we know whether it should mutate one query or both? In Apollo this is also not possible, because there is probably no way to predict that. Maybe we could think about some helpers, but full automation is not possible like in the case of object updates. However, if you have an object like … Speaking about optimistic updates, you have …
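To make the two cases concrete, here is a minimal sketch of the difference, assuming the library's meta.normalize and meta.mutations options with an axios-style driver; the UPDATE_BOOK / ADD_BOOK action types and URLs are made up for illustration only:

```js
// Renaming a book: with normalization, every query holding an object with
// the same id picks up the new title automatically - no per-query config.
const updateBook = (id, title) => ({
  type: 'UPDATE_BOOK',
  request: { url: `/books/${id}`, method: 'patch', data: { title } },
  meta: { normalize: true },
});

// Adding a book: the library cannot know whether the new object belongs in
// ALL_BOOKS, FAVOURITE_BOOKS, or both, so each affected query must be listed.
const addBook = (book) => ({
  type: 'ADD_BOOK',
  request: { url: '/books', method: 'post', data: book },
  meta: {
    mutations: {
      ALL_BOOKS: (data, newBook) => [...data, newBook],
      FAVOURITE_BOOKS: (data, newBook) =>
        newBook.favourite ? [...data, newBook] : data,
    },
  },
});
```

The first action needs no per-query configuration because the changed object is matched by id; the second has to spell out which lists it belongs in, which is exactly the part that cannot be automated.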
I also thought about this, but I have other important todos. Very curious indeed where this conversation will lead us :) I am also very interested in the fragment idea from your other issue, to fetch objects by id; that will be added for sure.
Well, I'll throw in some thoughts. The case you mentioned … The bigger use case is if you have … So if we have a grouping key, perhaps even with a function that tells us what data fits there...

```js
const booksByAuthor = (author) => ({
  type: BOOKS_BY_AUTHOR,
  meta: {
    requestKey: author,
    groupKey: "book",
    grouper: (book) => book.author === author,
  },
});

const favoriteBooks = {
  type: FAVORITE_BOOKS,
  meta: {
    groupKey: "book",
    grouper: (book) => book.favorite === true,
  },
};
```

Not sure if it's a good idea or not, but in theory it should allow … Eh, I'm not sure.
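For what it's worth, here is a rough sketch of how such a groupKey/grouper pair could be consumed. Nothing below exists in the library: the applyGroupedUpdate helper and the assumed shape of the cached queries are purely hypothetical.

```js
// Hypothetical helper: given all cached queries and one changed book,
// decide per query (via its grouper) whether the book belongs in its data.
// `queries` is assumed to look like { [type + requestKey]: { action, data } }.
const applyGroupedUpdate = (queries, changedBook) =>
  Object.fromEntries(
    Object.entries(queries).map(([key, query]) => {
      const { groupKey, grouper } = query.action.meta || {};

      // Only queries that declared themselves part of the "book" group take part.
      if (groupKey !== 'book') return [key, query];

      const belongs = grouper(changedBook);
      const withoutBook = query.data.filter((b) => b.id !== changedBook.id);

      return [
        key,
        {
          ...query,
          // Re-insert the book only where the grouper says it fits, so a
          // FAVORITE_BOOKS query drops a book that is no longer a favourite.
          data: belongs ? [...withoutBook, changedBook] : withoutBook,
        },
      ];
    })
  );
```

A delete would then be the same pass with belongs forced to false, so no query would need to list the others explicitly.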
@zeraien thx for those snippets, this topic is super interesting. I cannot work on it right now though, as there is other important stuff on the TODO list, but I will surely come back to this at some point, and I will definitely let you know about a potential API before implementation!
I might be completely off base here, but I feel like there needs to be a way to group requests by data type.
If you have two requests, LOAD_OBJECT+requestKey and LOAD_OBJECTS, and you decide to delete an OBJECT, you need to write a mutation that mutates both requests, which I suppose is fine, unless you happen to have even more requests like LOAD_OBJECTS_BY_JOB, LOAD_OBJECTS_BY_CLIENT etc., which all need to be added to the delete mutator. But then you also need to make LOAD_OBJECT+requestKey mutate the LOAD_OBJECTS request, and maybe even LOAD_OBJECTS_BY_CLIENT needs to mutate LOAD_OBJECTS, depending on your situation. In theory it is doable, but it might become an issue down the road and turn into a spaghetti of mutations. In theory you could group the requests by some arbitrary key and mutate that key instead?
For example, I have a situation where I LOAD_OBJECTS, but when a single OBJECT is updated, I actually push a WebSockets update to the client with the new OBJECT, which then triggers a fake request (I wrote a simple driver that allows me to make local requests, hehe). Anyway, I digress: this new updated OBJECT triggers LOAD_OBJECT+requestKey, but in order to update the remaining OBJECTS in LOAD_OBJECTS, it needs to mutate that as well. Subsequently, if any OBJECT is deleted, the DELETE_OBJECT request is triggered, which needs to mutate both LOAD_OBJECTS and LOAD_OBJECT+requestKey... And on top of that you have to add all the different update mutators etc. In theory, because of normalization, the data is updated anyway, but if you want to do optimistic updates you need to write mutations for every request that touches those objects...
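To illustrate the repetition being described, a minimal sketch of what such a delete can look like today, assuming redux-requests' meta.mutations with optimistic update handlers, an axios-style driver, and made-up OBJECT action types (the requestKey is assumed to be the object's id):

```js
// Every query that holds OBJECTs has to be enumerated by hand,
// including the keyed LOAD_OBJECT query - the "spaghetti" in question.
const deleteObject = (object) => ({
  type: 'DELETE_OBJECT',
  request: { url: `/objects/${object.id}`, method: 'delete' },
  meta: {
    mutations: {
      LOAD_OBJECTS: {
        updateDataOptimistic: (data) => data.filter((o) => o.id !== object.id),
        revertData: (data) => [object, ...data],
      },
      // Keyed query: the mutation key is the query type plus its requestKey.
      [`LOAD_OBJECT${object.id}`]: {
        updateDataOptimistic: () => null,
        revertData: () => object,
      },
      // ...and the same again for LOAD_OBJECTS_BY_JOB, LOAD_OBJECTS_BY_CLIENT, etc.
    },
  },
});
```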
I don't have a full idea yet because I'm still trying to figure out the best way to handle mutations in my own code... This library is really cool and offers quite a bit of epic functionality so I am trying to make it work for me :-)
Anyway, I'm not sure it's a good idea, but I figure I'll start the conversation and we'll see where it goes.