I have two functional EBM regressors; when I merge them, the merged model always outputs NaNs when I call model.predict.
Investigating, I found that the merged model has attributes containing NaN, for example the intercept_ and some arrays in term_scores_ (in fact these arrays seem to always have a 0 as the first and last entry and all NaNs in between). The two models used to create the merged model do not have any NaNs in these attributes.
For example the two original intercept_ are 0.5642638560976405 and 0.6517173269644225.
I have the full attributes but am reluctant to share them because the models were fitted on sensitive data.
I can share snippets of the attributes if you tell me which parts are important.
Notably this issue does not happen with two EBM classifiers that have been fitted on the same dataset (input features are the same but the targets are different).
This happens with interpret 0.6.1 and 0.5.0. I have not tested other versions.
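For anyone debugging a similar merge, a small helper like the following (a hypothetical sketch, not part of the interpret API) can report which attributes of a merged EBM contain NaNs. It assumes the documented `intercept_` and `term_scores_` attributes; `find_nan_terms` is an illustrative name:

```python
import numpy as np

def find_nan_terms(term_scores, intercept):
    """Return the names of attributes that contain NaN values.

    Pass a merged EBM's attributes, e.g.:
        find_nan_terms(model.term_scores_, model.intercept_)
    """
    bad = []
    if np.any(np.isnan(intercept)):
        bad.append("intercept_")
    for i, scores in enumerate(term_scores):
        if np.any(np.isnan(scores)):
            bad.append(f"term_scores_[{i}]")
    return bad
```

Running this on the merged model versus the two originals would show the NaNs appearing only after the merge.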
The text was updated successfully, but these errors were encountered:
Hi @jfleh -- If you still have access to these models, I would be curious to know whether any of the values in the merged model's bin_weights_ are zero, other than the first and last indexes, which are normally zero. I suspect this is caused when the two models have bin cuts that are almost, but not quite exactly, equal to each other. In that case the bin width, and the weight and score assigned to the very small bin, could be either zero or something very small. Zero divided by zero would then produce the NaNs.
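The 0/0 mechanism described above can be sketched numerically with plain NumPy (this is an illustration of the arithmetic, not interpret's actual merge code; the array values are made up). Merging two models with nearly-equal cuts yields a sliver bin with zero weight in both models, and the weighted average of the scores for that bin is 0/0:

```python
import numpy as np

# Hypothetical per-bin weights for the same term in two models after
# re-binning onto the merged cuts; index 2 is the zero-width sliver bin.
weights_a = np.array([0.0, 5.0, 0.0, 3.0, 0.0])
weights_b = np.array([0.0, 4.0, 0.0, 2.0, 0.0])
scores_a = np.array([0.0, 1.2, 0.0, -0.7, 0.0])
scores_b = np.array([0.0, 0.9, 0.0, -0.5, 0.0])

# Weight-averaged merged score per bin: wherever the total weight is
# zero, the division is 0/0 and produces NaN.
total = weights_a + weights_b
with np.errstate(invalid="ignore"):
    merged = (weights_a * scores_a + weights_b * scores_b) / total
```

Here `merged` is NaN exactly at the bins with zero total weight, which would then propagate into term_scores_ and the intercept_.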
Hi @paulbkoch,
I do not have access to these models anymore, and this is from a system that I can only run sporadically. The behaviour previously occurred consistently on every run. I was now able to run this with interpret 0.6.2 (with your fix included) and the issue did not occur, so the fix does indeed seem to work. Thank you.