Covariance different from scipy curve_fit with the same data #70
That is indeed a very large standard error for the data given. @iewaij will be working to improve some functionality in this package over the summer as part of Google Summer of Code. If there's a bug in the covariance code, this would be top priority. Thanks for the bug report. We'll trace this 🐛 and squash it :)
Thanks for the bug report! The bug seems related to the weight parameter. Delete the weight parameter and run the fit again. Another related issue is #69. I'll have a look at this.
Thanks for the quick response. The workaround of leaving out the weights parameter works for me too.
This issue is actually two issues:
This can be checked quite readily. One can do multiple fits to a curve with different random noise added each time, with known uncertainty in the data. The standard deviation of the fit parameters across the multiple fits should be approximately equal to the estimated standard error computed by the standard_error function. The attached file has the results showing that the revised formula for covar I suggested above is the correct one.
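The check described above can be sketched as follows. This is a hypothetical reconstruction in Python/SciPy (not the attached file from the comment): fit a line to data with known Gaussian noise many times, then compare the spread of the fitted parameters with the standard errors taken from a single fit's covariance matrix.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def model(x, a, b):
    return a * x + b

x = np.linspace(0.0, 10.0, 50)
sigma = 0.3          # known noise level in the data
true = (2.0, 1.0)

# Repeat the fit with fresh noise each time and record the parameters.
fits = []
for _ in range(2000):
    y = model(x, *true) + rng.normal(0.0, sigma, x.size)
    p, _ = curve_fit(model, x, y, p0=(1.0, 0.0))
    fits.append(p)
fits = np.array(fits)

# Standard errors estimated from one fit, using the stated noise level.
y = model(x, *true) + rng.normal(0.0, sigma, x.size)
p, cov = curve_fit(model, x, y, sigma=np.full(x.size, sigma),
                   absolute_sigma=True, p0=(1.0, 0.0))
stderr = np.sqrt(np.diag(cov))

print(fits.std(axis=0))  # empirical spread of the parameters
print(stderr)            # estimated standard errors; should be close
```

If the covariance formula is correct, the two printed vectors agree to within Monte Carlo error.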
Hi @rsturley, thanks a lot for pointing out the issues and the experiment. I agree that the current code implementation for the weights is wrong. For weighted least squares, the objective function should be sum_i w_i * (y_i - f(x_i; p))^2, with the weights w_i taken as inverse variances. I think the current covariance matrix is still correct under the implementation above. There is documentation explaining the covariance computation using the linear approximation.
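Under the linear approximation mentioned above, the parameter covariance for weighted least squares is inv(J' W J), where J is the Jacobian of the model at the optimum and W = diag(w) with w the inverse variances. A small sketch (the function name `wls_covariance` is illustrative, not LsqFit's internal API):

```python
import numpy as np

def wls_covariance(J, w):
    """Parameter covariance when the weights w are inverse variances."""
    W = np.diag(w)
    return np.linalg.inv(J.T @ W @ J)

# Linear model y = a*x + b: the Jacobian columns are x and 1.
x = np.linspace(0.0, 10.0, 50)
J = np.column_stack([x, np.ones_like(x)])
w = np.full(x.size, 1.0 / 0.3**2)   # inverse variances for sigma = 0.3

cov = wls_covariance(J, w)
print(np.sqrt(np.diag(cov)))        # standard errors of a and b
```

For this linear model the result matches the closed-form simple-regression variance sigma^2 / sum((x - mean(x))^2) for the slope.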
It seems this issue still exists?
Could you try the master branch? Also, if you could post the exact code for SciPy and MATLAB, it would be easier to track down this problem.
Having played around with it a bit today, I'm quite certain that master LsqFit now has the correct behavior. The weights are either not present, present as a vector (in which case they're inverse variances), or a matrix (in which case they're inverse variance-covariance matrices). This is also documented in the README. If you provide "stupid" weights, such as in the thread starter's gist, it will give the correct parameter values, but since the weights are not the inverse variances, you will get "wrong" (but not really, because you said those were the inverse variances) standard errors.
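The same behavior can be illustrated with SciPy, which treats `sigma` with `absolute_sigma=True` analogously: scaling the stated uncertainties changes the reported standard errors but not the fitted parameters, exactly as described above for "stupid" weights.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def model(x, a, b):
    return a * np.exp(-b * x)

x = np.linspace(0.0, 5.0, 40)
y = model(x, 2.0, 0.7) + rng.normal(0.0, 0.02, x.size)

sigma = np.full(x.size, 0.02)   # the true noise level
p1, c1 = curve_fit(model, x, y, sigma=sigma, absolute_sigma=True, p0=(1, 1))
p2, c2 = curve_fit(model, x, y, sigma=10 * sigma, absolute_sigma=True, p0=(1, 1))

print(p1, p2)                   # identical parameter estimates
print(np.sqrt(np.diag(c1)))     # errors consistent with the true noise
print(np.sqrt(np.diag(c2)))     # 10x larger: "wrong" stated uncertainties
```

With uniform `sigma`, the parameters are unchanged and the standard errors scale by exactly the factor applied to the uncertainties.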
I'm trying to fit an exponential decay of an autocorrelation function with LsqFit.jl. The optimized value I obtain is correct and is the same as what I get with scipy. But the reported covariance and associated standard error are unreasonably large with LsqFit.jl. Scipy reports a value of ~1e-5 and LsqFit has a value of ~1. I would tend to believe scipy more, because visually the data I have fits perfectly to a single exponential decay with a very small error. See the gist for the code.
I'm using the latest release version of LsqFit (4ecb0ec). I'm not sure if this is fixed in the current master branch, mostly because I'm new to Julia and I do not know how to best test using a checkout of the master branch.
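As a point of reference, here is a minimal stand-in for the kind of fit described (the actual gist is not reproduced here): fit a single exponential decay with `scipy.optimize.curve_fit` and report the standard errors from the covariance matrix. For near-noiseless data, the errors should come out small, as the reporter observed with scipy.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def decay(t, a, tau):
    return a * np.exp(-t / tau)

t = np.linspace(0.0, 4.0, 100)
y = decay(t, 1.0, 0.8) + rng.normal(0.0, 1e-3, t.size)

p, cov = curve_fit(decay, t, y, p0=(1.0, 1.0))
stderr = np.sqrt(np.diag(cov))
print(p, stderr)   # small standard errors for near-noiseless data
```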