Discrete convolution #1230
Conversation
@pep8speaks check this
Review ready?
I'll get the auto-weighting thing out of the way, then yes.
Actually I would prefer that you have a look at #1177 first, since I don't want to add code here just to fix that missing part.
Some comments, except for the auto-weighting stuff.
"""Fully discrete convolution with a given kernel."""

def __init__(self, domain, kernel, range=None, axis=None, impl='fft',
Do we also need some way to specify which fft backend to use?
Does not apply for ``impl='real'``. A sequence is applied per
axis, with padding values corresponding to ``axis`` entries
as provided.
Default: ``min(kernel.shape - 1, 64)``
why this choice?
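Not explained in the hunk itself, but presumably the ``kernel.shape - 1`` part comes from the fact that zero-padding each axis by at least the kernel size minus one makes circular FFT convolution coincide with linear convolution, while the cap at 64 bounds the extra cost for large kernels. A minimal numpy sketch of the first point (my illustration, not code from the PR):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, -1.0, 0.5])

# Unpadded FFT convolution is circular: the tail wraps around.
circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, n=x.size)))

# Padding by (at least) len(k) - 1 removes the wrap-around.
n = x.size + k.size - 1
lin = np.real(np.fft.ifft(np.fft.fft(x, n=n) * np.fft.fft(k, n=n)))

print(np.allclose(lin, np.convolve(x, k)))            # -> True
print(np.allclose(circ, np.convolve(x, k)[:x.size]))  # -> False in general
```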
operator.
Default: ``'forward'``
cache_kernel_ft : bool, optional
    If ``True``, store the Fourier transform of the kernel for
If true, is the non-ft kernel "discarded"?
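For illustration, one way such a cache could behave — all names here are invented, and whether the spatial kernel actually gets freed is exactly the open question:

```python
# Hypothetical caching scheme (invented names, not the PR's design):
# compute the kernel FT on first use, keep it, and optionally release
# the spatial kernel to save memory.
import numpy as np

class _KernelFT:
    def __init__(self, kernel, padded_shape, discard_kernel=False):
        self._kernel = np.asarray(kernel)
        self._padded_shape = padded_shape
        self._discard_kernel = discard_kernel
        self._kernel_ft = None  # computed lazily

    @property
    def kernel_ft(self):
        if self._kernel_ft is None:
            self._kernel_ft = np.fft.fftn(self._kernel,
                                          s=self._padded_shape)
            if self._discard_kernel:
                self._kernel = None  # free the spatial kernel
        return self._kernel_ft
```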
[ -1., -3., -5.]]
)

Convolution in selected axes can be done either with broadcasting
Awesome doctests here, good coverage without being excessive and easy to read.
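As a rough analogue of what "convolution in selected axes with broadcasting" means, here is a hedged scipy-based sketch (not the operator's actual API): a 1D kernel applied along one axis of a 2D array, broadcasting over the other axis.

```python
import numpy as np
from scipy.ndimage import convolve1d

x = np.arange(12, dtype=float).reshape(3, 4)
k = np.array([1.0, 2.0, 1.0])

# Convolve with the same 1D kernel along axis 1 only; rows are
# processed independently, i.e. the kernel broadcasts over axis 0.
y = convolve1d(x, k, axis=1, mode='constant')
print(y.shape)  # (3, 4)
```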
kernel = ker_space.element(kernel)

if ran is None:
    if str(impl).lower() == 'fft':
This is the first time you "check" the impl parameter; perhaps move the self.impl stuff above this, which would improve error messages.
    self.__fft_impl = None
elif self.impl == 'fft':
    self.__real_impl = None
    self.__fft_impl = 'pyfftw' if PYFFTW_AVAILABLE else 'numpy'
I think we should let users pick this somehow
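One possible shape for user-facing backend selection, sketched with an assumed ``fft_impl`` parameter (not part of the PR as quoted):

```python
def resolve_fft_impl(fft_impl=None, pyfftw_available=False):
    """Return the FFT backend to use, validating user input."""
    if fft_impl is None:
        # Auto-select: prefer pyfftw when it is importable.
        return 'pyfftw' if pyfftw_available else 'numpy'
    fft_impl = str(fft_impl).lower()
    if fft_impl == 'pyfftw' and not pyfftw_available:
        raise ValueError('pyfftw backend requested but not available')
    if fft_impl not in ('pyfftw', 'numpy'):
        raise ValueError('unknown fft_impl {!r}'.format(fft_impl))
    return fft_impl
```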
ifft_x = ifft(x_ft, axes=self.axes, s=s)

# Unpad to get the "relevant" part
slc = [slice(l, n - r) for (l, r), n in zip(paddings, x_prep.shape)]
Very hard for me to validate this as is; it looks good, but I'll largely have to go by the tests.
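One way to convince oneself outside the operator: a slice built this way exactly inverts ``np.pad``. A standalone check (my sketch, not from the PR):

```python
import numpy as np

x = np.random.rand(5, 7)
paddings = [(2, 3), (1, 4)]  # (left, right) padding per axis

x_prep = np.pad(x, paddings, mode='constant')
slc = tuple(slice(l, n - r) for (l, r), n in zip(paddings, x_prep.shape))
assert np.array_equal(x_prep[slc], x)  # the slice recovers the original
```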
                    flags=['FFTW_ESTIMATE'],
                    threads=multiprocessing.cpu_count())
plan_x(x_prep, x_ft)
plan_x = None  # can be gc'ed
wouldn't we want to keep the plan?
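If the operator is applied many times to same-shaped data, keeping the plan amortizes the planning cost. Two patterns that could be used, assuming pyfftw is available (neither is what the quoted code does; it drops the plan after one transform):

```python
import multiprocessing
import numpy as np
import pyfftw

x = pyfftw.empty_aligned((256, 256), dtype='complex128')
x[:] = np.random.rand(256, 256)

# Option 1: keep the FFTW object around and call it repeatedly.
plan = pyfftw.builders.fftn(x, threads=multiprocessing.cpu_count())
x_ft = plan()  # executes the planned transform on the current data

# Option 2: the numpy-like interface with pyfftw's internal plan cache.
pyfftw.interfaces.cache.enable()
x_ft2 = pyfftw.interfaces.numpy_fft.fftn(x)
```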
@@ -0,0 +1,57 @@
"""Example demonstrating the usage of the ``auto_weighting`` decorator.""" |
This reads more like a test than an example; is it possible to make it easier on the eyes?
The adjoint convolution is a convolution with the adjoint
kernel, which is the (complex conjugate of the) original kernel,
(roughly) flipped in the convolution axes. See Notes.
notes where?
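The claim itself is easy to check numerically for the circular case; a small numpy sketch (mine, not the PR's tests) verifying ⟨Ax, y⟩ = ⟨x, A*y⟩ with the conjugated, flipped kernel:

```python
import numpy as np

n = 16
k = np.random.rand(n) + 1j * np.random.rand(n)
x = np.random.rand(n) + 1j * np.random.rand(n)
y = np.random.rand(n) + 1j * np.random.rand(n)

def cconv(u, v):
    """Circular convolution via the FFT."""
    return np.fft.ifft(np.fft.fft(u) * np.fft.fft(v))

# Adjoint kernel: complex conjugate, flipped ("roughly", because index 0
# stays fixed in the circular case, hence the roll).
k_adj = np.conj(np.roll(k[::-1], 1))

Ax = cconv(x, k)
Aty = cconv(y, k_adj)
print(np.allclose(np.vdot(Ax, y), np.vdot(x, Aty)))  # -> True
```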
Force-pushed 4584f6d to d7d1abe
@ozanoktem walked by my desk and asked how this is going 😄
😄 Well, there's some stuff in the review that needs to be addressed. I also realized (by using the code) that a functional interface would be hugely useful for prototyping. Currently the functions …
Force-pushed d7d1abe to 0e758df
Force-pushed 0e758df to c88ebff
Well hello! How are we doing today?
I did a rebase, plus some initial work on the more detailed interface for the implementation. WIP
@kohr-h, maybe this motivates you: I am looking forward to having this in ODL :-)
^^ It does!
Force-pushed c88ebff to 7c89f67
Checking updated PR...
Hold your horses, I only rebased!
🏃‍♂️
Closes #209
Note: this does not include the discretized continuous convolution, only the fully discrete one. But the former can be expressed in terms of the latter.
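For readers wondering about that reduction: sampling the continuous convolution integral on a uniform grid and approximating it by a Riemann sum gives the cell volume times the discrete convolution of the samples. A hedged one-dimensional sketch (my illustration, not code from this PR):

```python
import numpy as np

h = 0.01                      # uniform grid cell width
t = np.arange(0.0, 1.0, h)
f = np.exp(-t)
g = np.sin(2 * np.pi * t)

# Riemann sum of (f * g)(t_m) = integral of f(t_m - s) g(s) ds:
# the continuous convolution is approximately h times the discrete one.
cont_conv = h * np.convolve(f, g)[:t.size]
```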
TODOs:
- Convolution in axes (1, 2) of a (1, 5, 5) shaped array with a (2, 3, 3) stack of kernels