When I try to apply sparse_switchnorm to a 2D tensor, it fails at self.var_weight. Is this the same problem as #2?
I modified the code as follows.
```python
import torch
import torch.nn as nn

class SSN(nn.Module):
    def __init__(self, num_features, eps=1e-5, momentum=0.997,
                 using_moving_average=True, last_gamma=False):
        super(SSN, self).__init__()
        self.eps = eps
        self.momentum = momentum
        self.using_moving_average = using_moving_average
        self.last_gamma = last_gamma
        # Affine parameters shaped for a 2D (N, C) input
        self.weight = nn.Parameter(torch.ones(1, num_features))
        self.bias = nn.Parameter(torch.zeros(1, num_features))
```
It seems that the value of self.var_weight in `self.var_weight_ = sparsestmax(self.var_weight, rad)` is NaN.
Is this caused by an error in my modified code?
Or is it that sparse_switchnorm cannot be applied to 2D input?
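For context, here is a minimal sketch (not this repository's code) of how the switchable statistics could be computed for a 2D (N, C) input. It assumes only the batch-norm and layer-norm branches are kept, since the instance-norm branch is degenerate for 2D input (each (n, c) position holds a single value, so its variance is always zero); if I recall correctly, the non-sparse SwitchNorm1d drops the IN branch for this reason. The helper name `switchable_stats_2d` and the two-element `mean_weight`/`var_weight` layout are assumptions for illustration, not the repo's API.

```python
import torch

def switchable_stats_2d(x, mean_weight, var_weight, eps=1e-5):
    """Hypothetical helper: weighted BN/LN statistics for a 2D input x of shape (N, C).

    mean_weight and var_weight are assumed to already be normalized
    (e.g. by softmax or sparsestmax) and to have shape (2,):
    index 0 -> batch-norm branch, index 1 -> layer-norm branch.
    """
    mean_bn = x.mean(dim=0, keepdim=True)                # (1, C): stats over the batch
    var_bn = x.var(dim=0, unbiased=False, keepdim=True)  # (1, C)
    mean_ln = x.mean(dim=1, keepdim=True)                # (N, 1): stats over the channels
    var_ln = x.var(dim=1, unbiased=False, keepdim=True)  # (N, 1)

    mean = mean_weight[0] * mean_bn + mean_weight[1] * mean_ln
    var = var_weight[0] * var_bn + var_weight[1] * var_ln
    return (x - mean) / torch.sqrt(var + eps)


# Example: a batch of 8 feature vectors with 16 channels.
x = torch.randn(8, 16)
w = torch.softmax(torch.ones(2), dim=0)   # stand-in for sparsestmax(weight, rad)
out = switchable_stats_2d(x, w, w)
print(torch.isnan(out).any())             # tensor(False)
```

It may also be worth checking whether self.var_weight already contains NaN before the sparsestmax call (e.g. with `torch.isnan(self.var_weight).any()`), which would indicate the problem comes from the statistics or gradients upstream rather than from sparsestmax itself.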