Record keys are deduplicated, but should duplicates generate a thrown error? #361
Consistency with object literals would imply silent deduplication, leaving it to linters to detect duplicates.
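For reference, a minimal sketch of the object-literal behavior being cited, runnable in any ES2015+ engine (the `user` object and values are purely illustrative): duplicate keys are not a runtime error, the last value wins, and tooling such as ESLint's no-dupe-keys rule is what typically surfaces them.

```js
// Duplicate keys in an object literal are silently deduplicated:
// the last occurrence of the key wins and no error is thrown.
const user = { id: 1, role: "viewer", role: "admin" };
console.log(user.role); // "admin"

// Linters (e.g. ESLint's no-dupe-keys rule) are what usually flag a
// literal duplicate like the one above, not the language itself.
```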
I think the trade-off here can be summarized as follows: if duplicate keys silently override earlier instances of the same key, then intentional overriding (for example, spreading a record and replacing one key) is possible, but unintentional duplicates go undetected at runtime.
The champions believe that it is more important to enable intentional overriding than to protect against the unintentional. Is that a fair summary?
I'd also add a point about consistency with object literals, which have always had silent override behavior.
Sorry for the delay in weighing in on this. Thank you @brad4d for summarising - yes, that is our position: we want to enable updating a single key's value in a copy by spreading and overriding. As @ljharb pointed out, this is consistent with objects, meaning Records and Objects behave interoperably for those operations. Unless we have a strong reason to prevent unintentional overrides, we will likely proceed with the current behaviour. What are your thoughts on this, @brad4d?
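A hedged sketch of the "copy by spreading and overriding" pattern being described, shown with a plain object (runnable today); the Record form in the comment assumes the proposal's `#{}` syntax and is only illustrative:

```js
// Update a single key by copying: spread the original, then override
// the key. The later `status` silently wins over the spread-in value,
// which is the behavior the champions want to preserve.
const original = { id: 42, status: "open", title: "Dedup records" };
const updated = { ...original, status: "closed" };
console.log(updated.status); // "closed"

// The analogous Record form under the proposal would look like:
//   const updatedRecord = #{ ...originalRecord, status: "closed" };
// If duplicate keys threw, this common update pattern would presumably
// fail whenever the overridden key already exists in the source.
```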
@rricard I see the reasoning for not throwing an exception now. It initially seemed very counterintuitive to me. However, I can definitely see the need to leverage the silent overwriting in order to create a modified form of a record. I'm OK with closing this issue if no one has anything further to add.
I'm hoping I've just misread the current version of the spec text.
If so, please point me at what I'm missing and close this issue.
It looks like the current spec text indicates that multiple values for the same key will be silently ignored by the DeDuplicateRecordEntries AO.
I would expect an `Error` of some kind to be thrown at runtime if creation of a record encountered multiple values for a single key. Perhaps there is some good reason to silently ignore duplicates?
I couldn't find an issue in which the decision to ignore or throw was discussed.
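To make the two behaviors under discussion concrete, here is a purely illustrative sketch; the function names `dedupeEntries` and `dedupeEntriesOrThrow` are hypothetical and only mirror the intent described in this thread, not the spec's DeDuplicateRecordEntries algorithm itself:

```js
// `entries` stands in for the [key, value] pairs gathered from a record literal.

// Option A (the behavior this thread describes as current): keep the last
// value for each key and silently drop earlier ones.
function dedupeEntries(entries) {
  const byKey = new Map();
  for (const [key, value] of entries) {
    byKey.set(key, value); // later entries overwrite earlier ones
  }
  return [...byKey.entries()];
}

// Option B (what this issue initially expected): throw when a key
// appears more than once.
function dedupeEntriesOrThrow(entries) {
  const byKey = new Map();
  for (const [key, value] of entries) {
    if (byKey.has(key)) {
      throw new TypeError(`Duplicate record key: ${String(key)}`);
    }
    byKey.set(key, value);
  }
  return [...byKey.entries()];
}

console.log(dedupeEntries([["a", 1], ["a", 2]])); // [["a", 2]]
// dedupeEntriesOrThrow([["a", 1], ["a", 2]]);     // would throw a TypeError
```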