
Transform expose() to a positive value #55

Open
dandaka opened this issue Jan 24, 2023 · 1 comment


dandaka commented Jan 24, 2023

I have a problem with expose() going negative as soon as a new player loses their first game. Is there any simple arithmetic transformation that always keeps all ratings in the positive zone? What is the expected range of TrueSkill ratings?

@bernd-wechner
Contributor

I don't know that that's an easy question. I have over 1,000 active ratings in one system I'm running, and they range from about −3 to 30. You should, however, be aware that there are four configuration parameters:

  • Initial Mean (µ0)
  • Initial Standard Deviation (σ0)
  • Skill Factor (ß)
  • Dynamics Factor (τ)

These together effectively define the likely range of values.
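For concreteness, these are the stock values of those four parameters as used by the popular Python trueskill package (an assumption on my part that this is the implementation you're using):

```python
# Stock values of the four TrueSkill configuration parameters, as defaulted
# by the Python "trueskill" package (assumed implementation).
MU0 = 25.0            # initial mean, µ0
SIGMA0 = MU0 / 3      # initial standard deviation, σ0 ≈ 8.333
BETA = SIGMA0 / 2     # skill factor, ß ≈ 4.167
TAU = SIGMA0 / 100    # dynamics factor, τ ≈ 0.083

print(MU0, SIGMA0, BETA, TAU)
```

Note that σ0, ß and τ are all expressed relative to µ0, so scaling µ0 rescales the whole system.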

You should also be aware that the rating by default in TrueSkill is considered to be:

µ − (µ0 ÷ σ0) × σ (which is what I think expose() is providing you with)

and that this formula is expected to produce negative ratings in some circumstances. These will tend to disappear over time and with more plays, because two things happen every time you update ratings:

  1. µ changes in response to ranking: it tends to go up for winners and down for losers. This models the skill we believe you have. If you lost, we over-assessed your initial skill; if you won, we under-assessed it. Winners tend to steal µ from losers.
  2. σ tends to go down with every new recorded play outcome. This models the confidence we have in µ (or rather the lack of confidence, the uncertainty). If σ were 0, we'd be saying we know your skill is µ, without a shadow of doubt.

Because of 2, and because the rating has a −σ term, the rating tends to go up just by playing ... irrespective of outcome.
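To make that concrete, here is the exposure formula written out in plain Python (a sketch assuming the stock µ0 = 25 and σ0 = 25/3; the post-game numbers below are purely illustrative, not real outputs of an update):

```python
def expose(mu: float, sigma: float, mu0: float = 25.0, sigma0: float = 25 / 3) -> float:
    """The default 'rating': mu - (mu0 / sigma0) * sigma (k = mu0/sigma0 = 3 here)."""
    return mu - (mu0 / sigma0) * sigma

# A brand-new player exposes to (essentially) 0:
print(expose(25.0, 25 / 3))   # ≈ 0

# Lose your first game: µ drops while σ shrinks only a little,
# so the exposed rating goes negative (illustrative numbers):
print(expose(20.6, 7.17))     # negative

# Same µ but smaller σ (i.e. simply having played more): the rating rises.
print(expose(25.0, 5.0))      # clearly positive
```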

If you don't like -ve ratings, then you can simply report them as 0 (while tracking their true value from game to game). Or you can change the rating equation and calculate ratings as, for example:

5 + µ − (µ0 ÷ σ0) × σ
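A minimal sketch of both options, clamping for display and shifting by a constant (the helper names are mine, and 5 is just the example offset above):

```python
def shifted_rating(mu, sigma, mu0=25.0, sigma0=25 / 3, offset=5.0):
    # The shifted variant above: offset + mu - (mu0 / sigma0) * sigma.
    return offset + mu - (mu0 / sigma0) * sigma

def displayed_rating(mu, sigma, mu0=25.0, sigma0=25 / 3):
    # Clamp to zero for display only; keep tracking the true value internally.
    return max(0.0, mu - (mu0 / sigma0) * sigma)
```

Shifting preserves comparisons between all players; clamping hides differences among players who are currently below zero, which is why you'd keep the true value in your database.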

TrueSkill ratings lack any meaning across games, trackers or applications; they are entirely tuneable, and should be tuned! Notably ß, because the default is just lousy. That said, for my thousand-odd ratings I still haven't undertaken a tuning effort ;-).

Documentation on this is poor, but at GameFest 2007 suggested values of ß for Xbox games were tabled, and can be represented as:

  • 3.33 for golf,
  • 5 for car racing,
  • 20.8 for a game of chance (UNO),
  • 4.167 for standard TrueSkill (a fair medium).

Hope that helps. I worry you're pinning your hopes a little high on TrueSkill ;-). Be aware that it has been improved upon:

https://github.com/glandfried/TrueSkillThroughTime
https://github.com/OpenDebates/openskill.py

You can play with it here:

https://trueskill.info/

and find some modest docs there. There are some very interesting thoughts here too:

https://openskill.me/en/stable/advanced.html
