Nits on NSGA2 #262
Conversation
- remove std::endl
Thanks for opening your first pull request in this repository! Someone will review it when they have a chance. In the meantime, please be sure that you've handled the following things, to make the review process quicker and easier:

Thank you again for your contributions! 👍
Replacing
In hindsight, yes, but I'd rather avoid them if possible. In any case, these changes are minor nitpicks.
Hey @jonpsy, thanks for the comment fix! 👍 mlpack and ensmallen code tend to use std::endl throughout, so we probably ought to stick with that.
Did you have more changes you were planning to make, or is this ready to go? (If you plan on changing more, we can wait until you're done to finish the review process. :))
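For anyone following along, here is a minimal, self-contained illustration of the two styles under discussion (a hypothetical snippet, not code from this PR): `std::endl` writes a newline and flushes the stream, while `"\n"` writes only the newline.

```cpp
#include <iostream>

int main()
{
  // House style referenced above: std::endl, which also flushes the stream.
  std::cout << "Generation complete." << std::endl;

  // The alternative this PR originally touched: a bare newline, no flush.
  std::cout << "Generation complete.\n";

  return 0;
}
```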
Nice catch! Even if it's a small fix, I think it's important to have correct and clear comments 👍
Thanks for the reviews. The primary goal of this PR was to get acquainted with the reviewing process. I was making a first pass at this code, so I caught a few nitpicks, which I've posted. I do realize my changes were not substantial. I have some more changes in mind, but I'd like your opinions before I make them (more than just grammar fixes :) ). Can we do it in a separate PR? Can we merge this one for the time being? I'd like this code to be
There's also the prospect of parallelizing through OpenMP.
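To make that idea concrete, here is a small hypothetical sketch (not ensmallen code; EvaluateObjectives, EvaluatePopulation, and the objective functions are made-up names for illustration) of how independent objective evaluations across the population could be spread over threads with OpenMP:

```cpp
#include <armadillo>
#include <vector>

// Hypothetical stand-in for a multi-objective evaluation; not part of NSGA2.
static arma::vec EvaluateObjectives(const arma::mat& candidate)
{
  arma::vec objectives(2);
  objectives(0) = arma::accu(arma::square(candidate));        // f1: sum of squares.
  objectives(1) = arma::accu(arma::square(candidate - 2.0));  // f2: shifted sum of squares.
  return objectives;
}

// Each candidate is evaluated independently, so the loop iterations can run
// concurrently without synchronization.
void EvaluatePopulation(const std::vector<arma::mat>& population,
                        std::vector<arma::vec>& fitnesses)
{
  fitnesses.resize(population.size());

  #pragma omp parallel for
  for (long i = 0; i < (long) population.size(); ++i)
    fitnesses[i] = EvaluateObjectives(population[i]);
}
```

Compiled with `-fopenmp` the loop runs in parallel; without it the pragma is ignored and the code behaves the same, just serially.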
Co-authored-by: Marcus Edel <[email protected]>
Let's continue the discussion in #263.
Looks good to me.
Second approval provided automatically after 24 hours. 👍
Hello there! Thanks for your contribution. I see that this is your first contribution to mlpack. If you'd like to add your name to the list of contributors in [...]. In addition, if you'd like some stickers to put on your laptop, I'd be happy to help get them in the mail for you. Just send an email with your physical mailing address to [email protected], and then one of the mlpack maintainers will put some stickers in an envelope for you. It may take a few weeks to get them, depending on your location. 👍
```diff
@@ -75,7 +75,7 @@ typename MatType::elem_type NSGA2::Optimize(
   if (lowerBound.n_rows == 1)
     lowerBound = lowerBound(0, 0) * arma::ones(iterate.n_rows, iterate.n_cols);

-  // Check if lower bound is a vector of a single dimension.
+  // Check if upper bound is a vector of a single dimension.
```
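The corrected comment sits above the upper-bound counterpart of the hunk shown here. As a small standalone illustration of what that scalar-to-matrix broadcast does (a hypothetical example, not the NSGA2 source):

```cpp
#include <armadillo>

int main()
{
  // A 3 x 1 coordinate matrix and a single scalar bound supplied by the user.
  arma::mat iterate(3, 1, arma::fill::zeros);
  arma::mat upperBound("10.0");  // 1 x 1: one value meant for every dimension.

  // Check if upper bound is a vector of a single dimension; if so, expand it
  // to the same shape as the coordinates.
  if (upperBound.n_rows == 1)
    upperBound = upperBound(0, 0) * arma::ones(iterate.n_rows, iterate.n_cols);

  upperBound.print("Expanded upper bound:");
  return 0;
}
```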
Oops, thanks, great catch! :)
Hi @say4n, thanks. I'd love your review on the follow-up PR. Cheers! :)
Thanks for the contribution.
Hi @say4n, I've been reading your NSGA2 code and I really like it :). I've made some small corrections.