[greasyfork] Add Greasy Fork rating badges #8087
Conversation
Of course I'd be fine with option 1 and leaving it as is, but I think the second option sounds good. It would keep it pretty simple. |
Want to let this one sit for a bit to see if any of the other maintainers want to weigh in |
I know the other maintainers have been quite busy of late, but I don't think they're likely to have strong opinions one way or another. I think the best foot forward is to probably start with the combo badge. As you noted, these aren't mutually exclusive, so if there ends up being demand for the individual count badges we can always circle back to them. Sound good? |
Also, the render function is on the verge of being sufficiently "non-trivial" as to warrant serious consideration for some unit tests. Have you seen/written any such tests for service class members before? If not I can share some examples that may serve as a helpful reference |
I've added tests based on examples I found |
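For context, a minimal sketch of what such a spec-style unit test could look like, assuming the sazerac-style test/given helpers used elsewhere in the repo; the GreasyForkRatingCount class name, file path, and expected values are purely illustrative, not the PR's actual code:

// Illustrative spec sketch (hypothetical names and values)
import { test, given } from 'sazerac'
import { GreasyForkRatingCount } from './greasyfork-rating-count.service.js'

describe('GreasyForkRatingCount', function () {
  test(GreasyForkRatingCount.render, () => {
    // All-positive feedback should map to the strongest color.
    given({ good: 10, ok: 0, bad: 0 }).expect({
      message: '10 good, 0 ok, 0 bad',
      color: 'brightgreen',
    })
  })
})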
Awesome, thank you. I realize there's at least potentially an open question about whether we'll be able to move forward with these badges based on the openuserjs discussion, but I think it would probably be best to continue operating under the assumption that we will move forward with these greasyfork badges anyway (however, let me know if you'd rather wait, which would certainly be understandable). The tests you added actually raise some interesting questions for me relative to the coloring, and I confess I'm ambivalent about it myself. Colors for these combo types of badges can inherently be tricky, and I think the closest thing we'd have to an analog is probably the test result summary badges. However, even those are easier because any non-zero number of test failures is an unequivocal failure/critical type of thing; with ratings/feedback, it's definitely not that clear cut. In this context, the only scenarios that would have obvious color representations in my opinion would be brightgreen if all the ratings were positive, or red if all the ratings were negative. The tricky territory is the obviously common set of scenarios with mixed feedback. I appreciate the balance you've tried to strike with your current implementation, but would you mind elaborating on your thinking and the "why" behind the color scale you're proposing? (Not pushing back on it at this point, just curious to understand your reasoning.) |
Bad ratings aren't really something you can get rid of; if people have an issue while using the userscript and give bad feedback, only the user who gave the bad feedback can change the rating. Because of that, I don't think it would make sense for any amount of bad ratings to make the badge red. If the majority of the ratings are good, I would consider that a good script, so it should be green. If the majority are bad, it should be red. And following the pattern, to keep things simple, if the majority are ok, I make it yellow. In case of a tie, it favors the better color (green if good is involved, yellow if ok is involved).
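To make that rule concrete, here is a rough sketch (the function name and structure are illustrative only, not the PR's actual code):

// Illustrative sketch of the majority rule: the most common rating wins,
// and ties favor the better color (good over ok, ok over bad).
function majorityColor(good, ok, bad) {
  if (good >= ok && good >= bad) return 'green'
  if (ok >= bad) return 'yellow'
  return 'red'
}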
I don't think they're related, as GreasyFork and OpenUserJS are independent platforms. I also didn't get the impression that they had any intent to exclude OpenUserJS; rather, it seems they have a point in general about the handling of limits for all supported sites. As a side note, it seems they don't take well to strict moderation, and continuing to push on that does not seem like an effective strategy, as it could only cause more conflict. I understand that shields.io has the right to moderate comments, but stretching out long arguments won't resolve things. |
Thanks. The code behavior is straightforward but it's helpful to know the thinking behind it. For example, how would you view a script that had received two ratings, one good and one bad? I ask because such a 50/50 split would be assigned a color of yellow (or yellowgreen) in our other badges. This is different though in the sense that on other platforms with ratings what's left by reviewers is a scaled value (e.g. 1-10 number or x stars or whatever) so I think there's potentially room to do something different here |
Possibly it could average the colors together and round to the nearest integer, e.g. if the result is … What do you think of that idea? (This may favor 2's over 1's and 3's) |
I was also thinking about some sort of calculation like this which took all the ratings into effect. I think that sounds reasonable, though I'd also tack on the far ends of the scale too (all positive = brightgreen, all negative = red) |
// from color-formatters.js
function floorCount(value, yellow, yellowgreen, green) {
if (value <= 0) {
return 'red'
} else if (value < yellow) {
return 'yellow'
} else if (value < yellowgreen) {
return 'yellowgreen'
} else if (value < green) {
return 'green'
} else {
return 'brightgreen'
}
}
score = (good*3 + ok*2 + bad*1)/(good+ok+bad) - 1,
color = floorCount(score, 1, 1.5, 2)

How's this? The score will range from 0 to 2 based on the average of the ratings: 0 - red, below 1 - yellow, 1 to 1.5 - yellowgreen, 1.5 to 2 - green, 2 - brightgreen. Only question is what to do when there are no ratings? |
Fair enough, it's just somewhat confusing since there's been a lot of cross talk about the two in their respective implementation threads, especially for someone like me who's never heard of either service apart from a name drop in a feature request.
Thanks for sharing your perspective. I appreciate that you're probably most interested in a peaceful resolution, and would like to see the badges come to fruition (especially since you've spent so much time on them!). I share that goal; however, I am also responsible for ensuring another goal. It's not simply that we have the right to moderate, but that we have the obligation to do so, the obligation to enforce the rules of our community. I've never lost a minute's sleep over something a stranger says to or about me on the internet, but I take the responsibility of maintaining the health and culture of the communities I'm a part of (and at least partially responsible for) very seriously, as do my fellow maintainers. The code of conduct and community management aspects of open source maintenance are often a source of difficult conversations, but they exist for a reason. A lot of studies, and even past anecdotal experience in this project, have shown that the failure to curb negative behavior can have long-term negative impacts on your community. This is why we have to enforce our rules and put a stop to inappropriate behavior, even if the person on the receiving end of that behavior didn't notice or didn't care (for example, you may not have cared at all about how that person was speaking to you in response to some of your questions, but it's still incumbent upon us to stop it regardless). It's not my intent to "stretch out long arguments", and that's part of why I'd previously terminated the debate. However, when it became clear we'd have to re-engage conversations with the person, it became necessary to establish the issue at hand, our authority in this context, and the associated bounds for future discussion. We just cannot allow someone to assert authority or bully their way through our rules, even if that would be the path of least resistance in the moment. |
I think we do something like a grey/inactive color in these scenarios with a message to the same effect, e.g. 'no ratings' but I'd need to double check. The rest of the proposal for the calc looks good as well 👍 |
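Putting the pieces of this thread together, the averaged-score approach might be sketched as follows; floorCount is the helper quoted earlier from color-formatters.js, while the function name and the 'lightgrey' no-ratings fallback are assumptions rather than the PR's exact code:

// Sketch of the averaged-score coloring discussed above (illustrative, not the PR's exact code)
function ratingColor(good, ok, bad) {
  const total = good + ok + bad
  if (total === 0) {
    // No ratings yet: fall back to an inactive/grey color with a 'no ratings' style message.
    return 'lightgrey'
  }
  // Map the counts onto a 0-2 scale: all bad -> 0, all good -> 2.
  const score = (good * 3 + ok * 2 + bad * 1) / total - 1
  // 0 -> red, below 1 -> yellow, below 1.5 -> yellowgreen, below 2 -> green, 2 -> brightgreen
  return floorCount(score, 1, 1.5, 2)
}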
I am probably not going to have time to give this a final pass tonight, so just be aware that today's the day of the week when we apply dependency updates so there may be a number of merges that get in front of this one |
looks good overall, couple minor inline items
color = floorCountColor(score, 1, 1.5, 2)
}
return {
  message: `${metric(good)} good, ${metric(ok)} ok, ${metric(bad)} bad`,
Suppose the only other thing to consider here would be whether or not to include the stats when there's 0 of a type of feedback. For example, with the test results combo badge we won't display "0 failed" if all the tests passed.
I don't have strong opinions, and would be fine proceeding with this as is, just wanted to gauge your thoughts on whether we should do something similar here.
I think having all 3 numbers makes the most sense since there are only 3 types of ratings and it shows how many of each type. It would correspond well with the website, which shows a number for every category even if it is 0.
If some categories were left out, it would be less obvious that the ratings fall into distinct categories, and those unfamiliar with the site wouldn't know which ones are missing.
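For illustration, keeping all three counts means the message template from the diff above renders every category even when one of them is zero; the import path reflects shields' text formatter as I understand it, and the sample values are made up:

// Illustrative only: metric() abbreviates large numbers (e.g. 1200 -> '1.2k')
import { metric } from '../text-formatters.js'

const good = 15, ok = 2, bad = 0
const message = `${metric(good)} good, ${metric(ok)} ok, ${metric(bad)} bad`
// -> '15 good, 2 ok, 0 bad' (the zero category is still shown)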
That sounds reasonable to me. If you could update the branch whenever you get a chance that'd be great. I'll do a final pass over this after work but I think this should be good to go |
The only thing that gave me one last moment of pause was the noun in the route, rating-count, but I think this badge is sufficiently different relative to the others to justify that change. Additionally, it gives us room should the upstream endpoint ever decide to provide their own rating score.
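For reference, the route under discussion would look something like the following sketch; the base and pattern values are assumptions for illustration, not necessarily the PR's exact definitions:

// Hypothetical sketch of a route using the 'rating-count' noun discussed above
class GreasyForkRatingCount {
  static route = {
    base: 'greasyfork/rating-count',
    pattern: ':scriptId',
  }
}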
This is a separate PR from #8080 to discuss the possibility of adding rating badges for Greasy Fork, which has a somewhat unique rating system (see #8080 (comment)).
Alternative layout
This represents how ratings are displayed on their website with counts for Good, OK, and Bad ratings. When rating scripts, you select one of these 3 categories.
OpenUserJS has a positive/negative rating system, which is also different from most currently supported badges, and could possibly be added to this PR after #8081 is merged.