
Corrective step described on paper #8

Open
darleybarreto opened this issue Mar 24, 2022 · 0 comments

@darleybarreto
Hi Sarkhan,

I have a question about the implementation with respect to what the paper describes. The paper says the following in Section 6.2 (page 7):

Among all components of the model, the corrective step is presumably the most vital one. In this step, the parameters of all weak learners, that are added to the model, are updated by training the whole model on the original inputs without the penultimate layer features

If I understood correctly, the model shouldn't use the penultimate layer during the corrective step; that is, no concatenation should take place. But in the regression experiment, for instance, the corrective step calls forward_grad, which does use the penultimate layer's output:

def forward_grad(self, x):
    if len(self.models) == 0:
        # no weak learners yet: return only the constant initial prediction
        return None, self.c0
    # at least one model
    middle_feat_cum = None
    prediction = None
    for m in self.models:
        if middle_feat_cum is None:
            # the first weak learner sees only the raw input
            middle_feat_cum, prediction = m(x, middle_feat_cum)
        else:
            # later learners also receive the accumulated penultimate features
            middle_feat_cum, pred = m(x, middle_feat_cum)
            prediction += pred
    return middle_feat_cum, self.c0 + self.boost_rate * prediction
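For contrast, here is a minimal, self-contained sketch of how I would expect the corrective-step forward to look under my reading of Section 6.2, where every weak learner receives only the raw input x and no accumulated penultimate features. This is hypothetical code, not from the repo; the WeakLearner and Ensemble classes below are simplified stand-ins for the real model classes.

```python
# Hypothetical sketch (not from the repo): each weak learner gets only the
# raw input, so no penultimate-feature concatenation ever happens.

class WeakLearner:
    """Simplified stand-in: a real learner would be a small neural net."""
    def __init__(self, w):
        self.w = w

    def __call__(self, x, middle_feat):
        # A real learner would also return its penultimate features;
        # here we return None since they are deliberately unused.
        return None, self.w * x

class Ensemble:
    """Simplified stand-in for the boosted model."""
    def __init__(self, c0, boost_rate):
        self.models = []
        self.c0 = c0
        self.boost_rate = boost_rate

    def forward_no_feat(self, x):
        if len(self.models) == 0:
            return self.c0
        prediction = None
        for m in self.models:
            # always pass None: no penultimate features are propagated
            _, pred = m(x, None)
            prediction = pred if prediction is None else prediction + pred
        return self.c0 + self.boost_rate * prediction

ens = Ensemble(c0=1.0, boost_rate=0.5)
ens.models = [WeakLearner(2.0), WeakLearner(3.0)]
print(ens.forward_no_feat(4.0))  # 1.0 + 0.5 * (2*4 + 3*4) = 11.0
```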

Is this correct? If so, could you kindly point out what I am missing?

Cheers,
Darley
