Provide API for Training #329
Conversation
Instead of an additional one we create ourselves.
@@ -29,6 +29,22 @@
class SetupManager:
It's a little vague whether this class is considered part of our public API. I believe a number of the GUI projects currently in progress make some assumptions about the functions that `SetupManager` provides.
Let's drag hallo_doei and IamEld3st into this conversation and make sure they can adapt.
I definitely need to split `run()` into `start_match()` and `infinite_loop()`, otherwise I can't get the control flow necessary for training to work: `start_match()` gets called and then we loop until the test is passed, and I'd like to be able to change the match without needing to re-launch quick-chat. This also gives us the opportunity to make the naming consistent: now everything which deals with subprocesses is named `launch_foo`, whereas previously there was a mix of `init_` and `launch_`.
I'd rather visibly change the API than silently change what `run()` does, which could lead to unexpected behavior.
Maybe we should have a higher-level public API which just does everything needed to run a game given a `config_path`, but I think that should be outside this class.
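For context, a minimal sketch of the shape being proposed here. Only `run()`, `start_match()`, and `infinite_loop()` are named in this thread; the helper name and bodies below are assumptions for illustration, not the actual RLBot code.

```python
# Hypothetical sketch of the proposed split (assumed, not the real implementation).
# run() stays as a thin wrapper so existing callers keep working, while training
# code can call start_match() repeatedly without re-launching subprocesses
# such as quick-chat.

class SetupManager:
    def launch_quick_chat_manager(self):
        ...  # assumed helper: start the quick-chat subprocess once per session

    def start_match(self):
        ...  # load the (possibly updated) match config and start the match

    def infinite_loop(self):
        ...  # block, pumping messages until shutdown is requested

    def run(self):
        # Old entry point, preserved by delegating to the new methods.
        self.launch_quick_chat_manager()
        self.start_match()
        self.infinite_loop()
```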
Your philosophy sounds good, and I agree that it's better this way. @IamEld3st and @hallo-doei, you have a short window of time to complain before we go ahead and do this :)
@@ -242,6 +242,9 @@ def white(self):
    def gray(self):
        return self.create_color(255, 128, 128, 128)

    def grey(self):
But this is America!
I'm pretty sure you're joking, but:
- Both spellings are acceptable in UK and US English.
- CSS has both.
- Wikipedia redirects "Gray" to "Grey": https://en.wikipedia.org/wiki/Gray https://imgur.com/IvWKudS
Yep, I was joking, but well justified.
Might want to add `create_colour` too 😛
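A minimal sketch of what such spelling aliases could look like (assumed, not necessarily how the PR implements them; the class name is hypothetical and the `create_color` parameter order is inferred from the `gray()` call in the diff above):

```python
# British-spelling aliases delegating to the existing American-spelled methods,
# so there is only one real implementation of each.

class RenderingManager:  # hypothetical host class for illustration
    def create_color(self, alpha, red, green, blue):
        ...  # existing implementation elsewhere in the codebase

    def gray(self):
        return self.create_color(255, 128, 128, 128)

    # Aliases for the alternative spelling.
    def grey(self):
        return self.gray()

    def create_colour(self, alpha, red, green, blue):
        return self.create_color(alpha, red, green, blue)
```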
Looks very good, thanks for writing this Dom!
hallo_doei and IamEld3st have approved this on Discord.
This API is to support the standard suite of training exercises found here: https://github.com/RLBot/RLBotTraining
While the above is meant to be the main way of writing Exercises, this API could also be used by other implementations which produce Exercises and gather their results.
There are a few small quality-of-life commits sprinkled in there (e.g. adding type annotations) for which I didn't want to make separate PRs.
I'm happy with the `run_all_exercises` iterator interface, and I think it'll remain stable even as more features are added to the RLBotTraining repo.
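To illustrate the idea, here is a self-contained sketch of how a consumer might drive an iterator-style `run_all_exercises` interface. The types, fields, and signature below are stand-ins based on the description above, not the confirmed RLBotTraining API.

```python
# Hypothetical sketch of an iterator-style training interface.
from dataclasses import dataclass
from typing import Iterable, Iterator, Tuple

@dataclass
class Result:          # stand-in result type
    passed: bool
    reason: str = ""

@dataclass
class Exercise:        # stand-in exercise type
    name: str

def run_all_exercises(exercises: Iterable[Exercise]) -> Iterator[Tuple[str, Result]]:
    # Stand-in: the real implementation would start a match once, reuse it,
    # and yield each exercise's result as soon as that exercise finishes.
    for ex in exercises:
        yield ex.name, Result(passed=True)

# A consumer can stream results (print, log, aggregate) as they arrive.
for name, result in run_all_exercises([Exercise("aerial_shot"), Exercise("easy_save")]):
    print(f"{name}: {'PASS' if result.passed else result.reason}")
```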
Happy New Year!