FFT based kinematic simulator #33
Conversation
Included a new diffsims.generators.diffraction_generator.AtomicDiffractionGenerator interface along with the corresponding back-end code. This is mainly in diffsims.sims.kinematic_simulation, which relies on diffsims.utils.discretise_utils to discretise the atomic volume and on diffsims.utils.fourier_transform, which interfaces various FFT utilities. Incomplete tests have been provided for all new modules.
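As a sketch of what "interfacing various FFT utilities" can mean in practice (an illustration of the idea, not the actual contents of diffsims.utils.fourier_transform), one pattern is a thin dispatch layer that prefers pyfftw when it is installed and falls back to numpy otherwise:

```python
import numpy as np

# Illustrative only: a minimal FFT dispatch layer in the spirit of a module
# that "interfaces various fft utilities". Prefer pyfftw when installed,
# otherwise fall back to numpy's FFT.
try:
    import pyfftw.interfaces.numpy_fft as _fft_backend
except ImportError:
    _fft_backend = np.fft

def fftn(x, **kwargs):
    """Forward n-dimensional FFT using whichever backend was found."""
    return _fft_backend.fftn(x, **kwargs)

def ifftn(x, **kwargs):
    """Inverse n-dimensional FFT using whichever backend was found."""
    return _fft_backend.ifftn(x, **kwargs)
```

Callers then only ever import fftn/ifftn and stay agnostic about which library does the work.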
Hopefully this is now in a stable state. I think the only things left are to finish the tests and include a demo notebook; I should get around to these by next week.
@robtovey do you need anything from us to move this forward?
@dnjohnstone No, sorry, I'm just being slow to get around to things on my end.
I think this is basically fully covered now. There are three big holes left:
1) the GPU code is not tested by Travis,
2) there are several blocks of code designed to anticipate memory errors, and
3) numba code does not register for coverage, which makes sense because it compiles the lines differently.
What would you like me to do with the remaining bits? I would suggest excluding them from coverage, with a comment in each instance to say why. Otherwise, the two remaining tasks are to make a demo and add the copyright etc.
So the main idea is that you want a high-resolution discretisation, but the full arrays can't all be stored on the GPU at once, in which case the arrays are iteratively partitioned until they do fit. There's a little more to it, and it will do the same thing if you're running on the CPU and running out of RAM, but the GPU is the main reason and target for it.
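A minimal sketch of that partitioning strategy, kept generic rather than taken from the PR: split the work along one axis, and keep splitting more finely whenever an allocation fails.

```python
import numpy as np

def apply_in_chunks(func, array, axis=0, max_chunks=64):
    """Evaluate func piecewise along `axis`, refining the split on MemoryError.

    Generic illustration of "partition until it fits"; the real code also has
    to handle GPU allocation failures, not just running out of RAM.
    """
    n_chunks = 1
    while n_chunks <= max_chunks:
        try:
            pieces = np.array_split(array, n_chunks, axis=axis)
            return np.concatenate([func(p) for p in pieces], axis=axis)
        except MemoryError:
            n_chunks *= 2  # too big: split more finely and try again
    raise MemoryError("Could not fit even the smallest partition in memory.")
```

Anything elementwise (or separable along the split axis) can be pushed through this way, e.g. apply_in_chunks(np.exp, big_array).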
Exactly: I know the functions are called, but Travis doesn't register any of the lines as having been run. This is technically true; the lines are parsed and compiled by numba but never run by Python itself...
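Not necessarily what was done in this PR, but a common way to deal with that: either exclude the jit-compiled functions from coverage explicitly, or run the test suite with NUMBA_DISABLE_JIT=1 so the pure-Python lines do execute. A sketch of the exclusion route:

```python
from numba import njit

@njit
def norm_sq(x):  # pragma: no cover - numba compiles this body, so coverage never sees it run
    total = 0.0
    for v in x:
        total += v * v
    return total
```

The pragma on the def line excludes the whole function body from the coverage report, which keeps the '100%' figure honest about what the Python interpreter actually executed.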
Great, I'll start filling that in.
Sure, functionality is the same between the two. Both are (hopefully) trivial to install through anaconda and take a bit more effort with pip, but it is still possible.
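On the GPU side specifically, a quick way to check whether the optional accelerated path could even run (assuming, as the coverage discussion above suggests, that the GPU code is driven through numba's CUDA support):

```python
# Sketch: check for a usable CUDA device before expecting the GPU code path.
# Assumes the GPU backend is built on numba.cuda, as the coverage discussion implies.
from numba import cuda

if cuda.is_available():
    print("CUDA GPU detected:", cuda.get_current_device().name)
else:
    print("No usable GPU; the CPU code path will be used instead.")
```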
Excluding GPU code and numba from coverage
@robtovey what's left to do on this? It would be good to get merged up before Christmas!
I think we're pretty close; I now have a demo notebook. I'm not quite sure where to put it and I might need a bit of advice on the wording, but it does the job. The images are not a great resolution, but it all runs in 2 minutes on my CPU, so hopefully that tradeoff is appreciated. Something I thought about: do you want code for computing the volume charge density? I've rather 'hacked' it in, as you will see, but should I give it a quick polish too? Other than that, I think the only remaining thing for me is to test the GPU code, and then we can merge.
Added one line to the coverage
Final coverage amendment?
Hopefully that is finally back to '100%' coverage... On another note, I'm not sure if it was deliberate or slipped through the cracks, but I think there is only one line missing from coverage now in the file: here
I am looking to tidy this up and get it merged soon(ish). @robtovey - do you think this is "done" as far as you are concerned?
I've made some comments mostly just on naming and naming style - otherwise it basically looks good. I will make a diffsims-demos repository now to put the notebook in.
@@ -0,0 +1,1985 @@
{
Please can we remove this notebook from here and create a pyxem/diffsims-demos repository for tutorials like this.
@@ -22,12 +22,16 @@

import numpy as np

-from .generators.diffraction_generator import DiffractionGenerator
+from .generators.diffraction_generator import DiffractionGenerator, AtomicDiffractionGenerator
I think we should be able to come up with better names than this. I think the difference is really that we have a structure-factor based simulation and an FFT based simulation; we also have (a bit hidden in the code) a real-space sum of scattering from many atoms. So I think we should either have one DiffractionGenerator with multiple methods, or name them around the simulation approach. I prefer the latter here, so maybe we could go with:
SFDiffractionGenerator
FFTDiffractionGenerator
RSDiffractionGenerator
Or we could change and have:
StructureFactorSimulator
FFTSimulator
RealSpaceSimulator
What do others think?
I agree with the latter. The main problem I see with trying to merge them into one object is that, as far as I understand it, they just take different inputs (whole crystal vs unit cell) and give different outputs (single diffraction pattern vs complete list of Bragg angles).
I'm not entirely sure what your 'real space' simulator is but I'm not sure about using FFT as part of the naming system. If you add a multi-slice implementation later then I think that will also use FFTs but I guess you'd also want to give it another name? I think these are all fundamentally different models and a choice of backend should be a different parameter/method.
I'm not sure what would be a better name; I guess you will both know much better than me - FirstBorn, Ewald, or something with Kinematic? Other than that, I don't have much preference over the general naming format.
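For what it's worth, a tiny illustration of the "model in the name, backend as a parameter" split being argued for here (all names purely hypothetical):

```python
class KinematicDiffractionGenerator:
    """Hypothetical naming sketch: one class per physical model,
    with the numerical backend chosen by a parameter."""

    def __init__(self, accelerating_voltage, backend="numpy"):
        if backend not in ("numpy", "fftw", "gpu"):
            raise ValueError("Unknown backend: %s" % backend)
        self.accelerating_voltage = accelerating_voltage
        self.backend = backend
```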
@@ -201,7 +202,7 @@ def calculate_profile_data(self, structure,
d_hkl = 1 / g_hkl

# Bragg condition
-#theta = asin(wavelength * g_hkl / 2)
+# theta = asin(wavelength * g_hkl / 2)
If we're not using this, let's remove it.
@@ -228,11 +229,11 @@ def calculate_profile_data(self, structure,
# Intensity for hkl is modulus square of structure factor.
i_hkl = (f_hkl * f_hkl.conjugate()).real

-#two_theta = degrees(2 * theta)
+# two_theta = degrees(2 * theta)
Likewise, remove if unused.
diffsims-demos - now here: https://github.com/pyxem/diffsims-demos/
Improved docstrings, added copyright notices, and included all the scattering parameters from the Peng paper (98 elements).
I think this is basically ready to merge now. Two comments are outstanding: the naming of the diffraction generators and moving the Jupyter notebook to its new home. I think the second is easier for either of you to do, and once you've decided on a naming system I'll be happy to implement it.
Sorry, I'm not sure why appveyor is failing. It's complaining about an import of
The appveyor thing is unrelated to you @robtovey and I'll (attempt to) deal with it, then get this PR sorted and merge it for you.
I've just remembered that I meant to mention installation. The minimum requirements are in the files, but there are a couple of optional packages which will accelerate things, in particular for the GPU and fftw backends.
I have added a new branch called, for lack of a better name, 'kinematic_simulator_rob', which provides an FFT-based simulation of diffraction images following an Ewald sphere model. This aims to address the enhancement request of #16. Coverage is incomplete, but I will continue to add to it as other issues are dealt with. There is a small hole in the functionality in that I have only implemented a test version of a probe function, but I will try to correct that shortly...
Broad structure of the new code:
Some house-keeping questions: if x, y are the two coordinate vectors, then is x == x.reshape(-1) or x == x.reshape(-1, 1), or something else?
Other things I need to do:
p.s. I apologise for my editor's rather religious attitude when it comes to upholding pep8's white-space conventions.
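To make the Ewald-sphere model above concrete, here is a heavily simplified, self-contained numpy sketch of the FFT route from an atomic volume to a kinematic diffraction pattern (point atoms, no scattering factors, and a flat Ewald sphere taken as the zero-excitation-error plane); it illustrates the model, not the actual code in this branch.

```python
import numpy as np

# Toy kinematic simulation: delta-function atoms on a grid, FFT, flat Ewald sphere.
n, width = 128, 50.0                          # grid points per axis, box size in Angstrom
rng = np.random.default_rng(0)
atoms = rng.uniform(5, 45, (100, 3))          # random atom positions (stand-in "crystal")

# Discretise the atomic "density" onto a 3D grid (nearest-voxel delta functions).
density = np.zeros((n, n, n))
idx = np.clip(np.round(atoms / width * n).astype(int), 0, n - 1)
np.add.at(density, tuple(idx.T), 1.0)

# Kinematically, the scattering amplitude is the Fourier transform of the density;
# a flat Ewald sphere just picks out the k_z = 0 plane of that 3D transform.
F = np.fft.fftshift(np.fft.fftn(density))
pattern = np.abs(F[:, :, n // 2]) ** 2        # diffraction intensities on the detector
```

The real implementation replaces the delta functions with proper atomic scattering factors (the Peng parameters mentioned above) and curves the Ewald sphere according to the accelerating voltage.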