Hi Sarkhan,

I have a question about the implementation w.r.t. what the paper describes. The paper says this in Section 6.2 (page 7):
> Among all components of the model, the corrective step is presumably the most vital one. In this step, the parameters of all weak learners, that are added to the model, are updated by training the whole model on the original inputs without the penultimate layer features.
If I understood correctly, the model shouldn't use the penultimate layer; that is, no concatenation should take place. But during the corrective step in the regression experiment, for instance, `forward_grad` is called, which uses the penultimate layer's output:
```python
def forward_grad(self, x):
    if len(self.models) == 0:
        return None, self.c0  # at least one model
    middle_feat_cum = None
    prediction = None
    for m in self.models:
        if middle_feat_cum is None:
            middle_feat_cum, prediction = m(x, middle_feat_cum)
        else:
            middle_feat_cum, pred = m(x, middle_feat_cum)
            prediction += pred
    return middle_feat_cum, self.c0 + self.boost_rate * prediction
```
Is this correct? If it is, could you kindly point out what I am missing?
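For reference, here is roughly what I expected the corrective step's forward pass to look like based on my reading of the paper (just a sketch to make my question concrete; `corrective_forward` is my own name, not something from the repo):

```python
def corrective_forward(self, x):
    # My reading of Section 6.2: during the corrective step, every weak
    # learner sees only the original input x, i.e. no penultimate-layer
    # features are concatenated (middle_feat is always None).
    if len(self.models) == 0:
        return self.c0
    prediction = None
    for m in self.models:
        _, pred = m(x, None)  # pass None instead of accumulated features
        prediction = pred if prediction is None else prediction + pred
    return self.c0 + self.boost_rate * prediction
```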
Cheers,
Darley