Multiple imputation methods: Performance #782
Comments
I wonder whether we can figure out new implementations of this where we don't impute every single value independently, but instead a few closely related ones at a time? Like calculating KNN first and then imputing a group of values? I know that this would be a new imputation method, but oh well. Maybe autoencoder-based imputation is also of interest? It is probably faster to train and use. We would need to look at benchmarks.
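As a rough illustration of the "KNN first, then impute" idea, scikit-learn's `KNNImputer` already fills each missing value from the k nearest rows; a minimal sketch (the toy matrix here is made up for illustration):

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy matrix with missing entries marked as np.nan (illustrative data).
X = np.array([
    [1.0, 2.0, np.nan],
    [3.0, 4.0, 3.0],
    [np.nan, 6.0, 5.0],
    [8.0, 8.0, 7.0],
])

# KNNImputer replaces each missing value with the mean of that feature
# over the k nearest rows, where distances use only observed features.
imputer = KNNImputer(n_neighbors=2)
X_imputed = imputer.fit_transform(X)
assert not np.isnan(X_imputed).any()
```

Imputing whole groups of related values at once (rather than one cell at a time) would need a custom strategy on top of this, but the neighbor search itself is reusable.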
This can absolutely be stretched to coming up with and adding more (well-performing) imputation strategies, yes!
Or even preparing larger synthetic datasets, or ones that are well known in the imputation literature, and comparing different methods (including new ones) for performance, runtime, memory requirements, failure modes... That would make not just an interesting notebook, but also a fast and convenient benchmarking possibility for others: like the imputation part of the bias notebook, but bigger and focused on imputation.
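A minimal sketch of what such a benchmark harness could look like, assuming synthetic MCAR missingness and two baseline imputers from scikit-learn (the data, missingness rate, and metric choices are placeholders):

```python
import time
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(0)
X_true = rng.normal(size=(500, 10))

# Mask 10% of entries completely at random (MCAR) so the ground
# truth is known and imputation error can be measured directly.
mask = rng.random(X_true.shape) < 0.1
X_miss = X_true.copy()
X_miss[mask] = np.nan

results = {}
for name, imputer in [("mean", SimpleImputer()), ("knn", KNNImputer(n_neighbors=5))]:
    t0 = time.perf_counter()
    X_hat = imputer.fit_transform(X_miss)
    runtime = time.perf_counter() - t0
    # RMSE on the masked entries only.
    rmse = float(np.sqrt(np.mean((X_hat[mask] - X_true[mask]) ** 2)))
    results[name] = {"rmse": rmse, "seconds": runtime}
```

Memory profiling and failure-mode checks (e.g. all-missing columns, extreme missingness rates) would slot into the same loop.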
MissForest with Extremely Randomized Trees could maybe be parallelized better.
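One way to sketch this, assuming a MissForest-style setup is approximated with scikit-learn's `IterativeImputer` wrapping an `ExtraTreesRegressor` (not ehrapy's actual implementation), is to pass `n_jobs=-1` so tree fitting runs across all cores:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[rng.random(X.shape) < 0.1] = np.nan

# Extremely Randomized Trees draw split thresholds at random, so each
# tree is cheaper to fit than in a classic random forest, and n_jobs=-1
# fits the trees of the ensemble in parallel.
imputer = IterativeImputer(
    estimator=ExtraTreesRegressor(n_estimators=20, n_jobs=-1, random_state=0),
    max_iter=3,
    random_state=0,
)
X_imputed = imputer.fit_transform(X)
```

The outer round-robin over features in iterative imputation is inherently sequential, so parallelism within each tree ensemble is the main lever here.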
Question
Within ehrapy, we have
as multiple imputation (MI) methods so far. MI methods are typically computationally expensive, but many benchmarks have shown them to have the best imputation performance. However, they are simply too slow for our big datasets on CPU, and we don't want to force users to use a GPU.
We should profile these two methods and check for
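A starting point for the profiling, assuming `cProfile` over a scikit-learn imputer as a stand-in for the actual ehrapy MI methods (the data size and imputer here are placeholders):

```python
import cProfile
import io
import pstats

import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
X[rng.random(X.shape) < 0.1] = np.nan

# Profile a single fit_transform call and report the top functions
# by cumulative time, which points at the dominant hotspots.
profiler = cProfile.Profile()
profiler.enable()
KNNImputer(n_neighbors=5).fit_transform(X)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

Swapping in the real MI imputers and scaling the input up toward realistic dataset sizes would show whether the bottleneck is the per-value model fitting or the surrounding bookkeeping.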