Fix atomics #15634
Conversation
* `WORD_SIZE` is in bits, not bytes
* `cmpxchg` can only handle integer types
No, I also think we should just remove the align here, since it cannot be used once we add atomic ops on fields. I hope LLVM can be smart enough if we just add alignment attributes to the pointers.
I think it's time for a C++ function that determines the alignment of an object under certain circumstances (type, mutability, architecture, etc.), and to make it available from Julia. Currently, entirely too many constants are hardcoded in various parts of the code, and the use of
The GC alignment is pretty much an implementation detail and shouldn't really be relied on. I do agree there are cases (SIMD, for example) where larger alignment is needed, and in those cases the alignment should be explicitly asked for, either with a more generic mechanism or by simply special-casing the vector type, as it already has to be. If any other places in the Julia code are using the GC alignment assumptions, those should be fixed. I'm not removing this one only because it's an orthogonal issue.
In the course of looking at SIMD vector types I looked at various alignment declarations. There are obviously many in the C and C++ run-time environment, and I think most of these got cleaned up recently (I looked just before the recent GC alignment changes). However,
All of those are array alignments, and that's different from normal/small objects. I'm fine with relying on and documenting that we guarantee freshly created Julia arrays always have 16-byte alignment.
Also, the only one of those that actually has an alignment constraint is Assuming FFTW relies on the alignment, this should probably be documented if it is not yet (I didn't find anything in the Julia FFT docs). cc @stevengj
lgtm
`WORD_SIZE` is in bits, not bytes. `cmpxchg` can only handle integer types, per the LLVM LangRef. Using a floating-point type seems to work with the x86 backend but is still invalid IR, and it fails on aarch64.

Atomic load on `Int128` seems to have issues on aarch64 too, but I'll see if it can be resolved in LLVM first.