Completed issue 31: Support Reward Per time-step #35
base: dynet
Conversation
```python
if isinstance(loss, np.ndarray):
    # REINFORCE update: weight each action's log-probability by the
    # baseline-adjusted loss for that time step
    for idx, (a, p_a) in enumerate(self.trajectory):
        total_loss += (loss[idx] - b) * dy.log(p_a)
```
Here we're assuming gamma is always 0; can we change this to support computing returns for a given gamma?
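A minimal sketch of the generalization being asked for here, assuming `loss` arrives as a per-step numpy array; the `discounted_returns` name and signature are illustrative, not code from this PR:

```python
import numpy as np

def discounted_returns(per_step_loss, gamma):
    """Convert per-step losses into discounted tail sums (returns).

    returns[t] = per_step_loss[t] + gamma * returns[t + 1], so gamma=0
    recovers the current behavior (immediate loss only) and gamma=1
    gives plain, undiscounted tail sums.
    """
    returns = np.zeros(len(per_step_loss))
    running = 0.0
    for t in reversed(range(len(per_step_loss))):
        running = per_step_loss[t] + gamma * running
        returns[t] = running
    return returns
```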
LGTM, will probably have to revisit this anyway for the pytorch branch
shouldn't this be a tail sum?
Yes, loss[i] is the tail sum of losses starting at timestep i. We can do this in reinforce.py (take the loss per time step and convert it to tail sums) or assume the environment already gives us the array of tail sums; I think handling this on the environment side is cleaner.
I'm not sure I agree. Why should the env have to know how the RL algorithm works? Also, not all RL algorithms will want tail sums.
Right, then I think the other way to do it is to accept a per-time-step loss vector and do the tail summing in reinforce (gamma should be a parameter to reinforce then). Is there a better way?
Ugh, gamma. I think this should be an argument of the RL algorithm. There's good reason (e.g. Nan Jiang's work) to think you might want to learn with a different gamma than the environment's gamma for regularization reasons anyway.
Always returning tail sums may not be optimal. Sometimes you want to do something with the reward, like rescaling, injecting noise, exponentiating, etc. I think it is better to return per-step rewards and let the algorithm decide what to do with them, as in the sketch below. Correct me if I misunderstood this conversation.
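For what it's worth, a sketch of the design this thread seems to be converging on: the environment returns per-step losses, and the RL algorithm owns gamma plus any reward shaping. This is hypothetical code, not from the PR; it works with plain log-probabilities rather than dynet expressions, and `transform` stands in for the rescaling/noise/exponentiation hooks mentioned above:

```python
import numpy as np

def reinforce_update(trajectory, per_step_loss, gamma, b=0.0, transform=None):
    """Hypothetical REINFORCE step over per-step losses.

    trajectory: list of (action, log_prob_of_action) pairs
    per_step_loss: one loss value per time step (no tail summing by the env)
    gamma: discount factor, an argument of the algorithm, not the environment
    b: baseline subtracted from each return
    transform: optional hook for rescaling, noise injection, etc.
    """
    if transform is not None:
        per_step_loss = transform(per_step_loss)

    # Discounted tail sums, computed inside the algorithm.
    returns = np.zeros(len(per_step_loss))
    running = 0.0
    for t in reversed(range(len(per_step_loss))):
        running = per_step_loss[t] + gamma * running
        returns[t] = running

    total_loss = 0.0
    for t, (a, log_p_a) in enumerate(trajectory):
        total_loss += (returns[t] - b) * log_p_a
    return total_loss
```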