
Fix atomics #15634

Merged
yuyichao merged 1 commit into master from yyc/threads/float on Mar 27, 2016
Conversation

@yuyichao (Contributor)

  • `WORD_SIZE` is in bits, not bytes.

  • `cmpxchg` can only handle integer types. From the LLVM LangRef:

    The type of ‘<cmp>’ must be an integer or pointer type whose bit width is a power of two greater than or equal to eight and less than or equal to a target-specific size limit. ‘<cmp>’ and ‘<new>’ must have the same type, and the type of ‘<pointer>’ must be a pointer to that type.

    Using a floating point type seems to work with the x86 backend but is still invalid IR, and it fails on aarch64.

Atomic load on Int128 seems to have issues on aarch64 too, but I'll see if it can be resolved in LLVM first.

The single commit:

* `WORD_SIZE` is in bits not bytes
* `cmpxchg` can only handle integer types
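To make the `cmpxchg` point concrete, here is a minimal sketch of the bitcast trick the fix relies on, done by hand in Julia with `reinterpret` (illustrative only; this is not the code the PR generates):

```julia
# CAS on a Float64 must go through its UInt64 bit pattern, because
# LLVM's cmpxchg only accepts integer or pointer operands.
expected = 1.0
desired = 2.0

expected_bits = reinterpret(UInt64, expected)  # 0x3ff0000000000000
desired_bits = reinterpret(UInt64, desired)    # 0x4000000000000000

# A cmpxchg would compare and swap the integer bit patterns; the
# loaded value is then converted back to a float.
loaded = reinterpret(Float64, expected_bits)

# Caveat of the bitwise comparison: 0.0 and -0.0 compare equal as
# floats but have different bit patterns, so a bitwise CAS treats
# them as distinct values.
```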
@eschnett (Contributor)

`getindex` and `setindex!` access mutable objects, which are presumably GC-allocated. Why does their alignment depend on the word size? Shouldn't it be 16 at all times?

@yuyichao (Contributor, Author)

No; I also think we should just remove the `align` here, since it cannot be used once we add atomic ops on fields. I hope LLVM can be smart enough if we just add alignment attributes to the pointers.

@eschnett (Contributor)

I think it's time for a C++ function that determines the alignment of an object under the given circumstances (type, mutability, architecture, etc.), made available from Julia. Currently, entirely too many constants are hardcoded in various parts of the code, and the uses of `WORD_SIZE` won't be easy to find when things change again.
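A sketch of what the Julia side of such a query could look like, assuming a hypothetical C entry point named `jl_gc_alignment` (no such export is established here; the name and signature are illustrative):

```julia
# Ask the runtime what alignment the GC gives an allocation of `sz`
# bytes. :jl_gc_alignment is a hypothetical symbol for this sketch,
# not an existing libjulia export.
gc_alignment(sz::Integer) = ccall(:jl_gc_alignment, Csize_t, (Csize_t,), sz)

# Illustrative usage: gc_alignment(16)
```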

@yuyichao (Contributor, Author)

The GC alignment is pretty much an implementation detail and shouldn't really be relied on. I do agree there are cases (SIMD, for example) where larger alignment is needed, and in those cases the alignment should be asked for explicitly, either through a more generic mechanism or by simply special-casing vector types, which already has to happen. If any other places in the Julia code rely on the GC alignment assumptions, they should be fixed. I'm not removing this one only because it's an orthogonal issue.

@eschnett (Contributor)

While working on SIMD vector types I reviewed the various alignment declarations. There are obviously many in the C and C++ run-time environment, and I think most of these got cleaned up recently (I looked just before the recent GC alignment changes).

However, dSFMT.jl has a comment that the underlying C array must be 16-byte aligned. There are also alignment constraints in random.jl, as well as in FFTW.jl. I assume you're aware of these; if not, and if there are potential problems, we should open an issue about them.

@yuyichao (Contributor, Author)

All of those are array alignments, which are a different matter from normal/small objects. I'm fine with relying on, and documenting, the guarantee that freshly created Julia arrays always have 16-byte alignment.
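A quick spot check of that guarantee (the `undef` constructor syntax is from later Julia versions than this thread, and the result is an implementation detail, not a language promise):

```julia
# The data pointer of a freshly allocated array should sit on a
# 16-byte boundary under the guarantee discussed above.
a = Vector{Float64}(undef, 1024)
println(UInt(pointer(a)) % 16 == 0)  # expected: true
```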

@yuyichao (Contributor, Author)

Also, the only one of those that actually has an alignment constraint is FFTW.jl (dSFMT.jl and random.jl only have a branch to handle the alignment), and it might actually be a problem if the user constructs a plan for, or uses one on, resized arrays, since resized arrays (in particular, arrays that have had elements added to the front) may no longer have 16-byte alignment.

Assuming FFTW relies on the alignment, this should probably be documented if it is not yet (I didn't find anything in the Julia FFT docs). cc @stevengj
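A sketch of that failure mode, using the modern `pushfirst!` (called `unshift!` in the Julia of this thread); whether the pointer actually ends up misaligned is implementation-dependent:

```julia
# Growing an array at the front can move its data pointer off the
# allocation's original 16-byte boundary.
a = zeros(Float64, 16)
before = UInt(pointer(a)) % 16  # 0 for a fresh array
pushfirst!(a, 1.0)              # prepend; may shift the data offset
after = UInt(pointer(a)) % 16   # can now be 8, breaking the 16-byte
                                # alignment assumed by e.g. FFTW plans
println((before, after))
```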

@vtjnash (Member) commented Mar 27, 2016

lgtm

yuyichao merged commit c97c3d0 into master on Mar 27, 2016
yuyichao deleted the yyc/threads/float branch on March 27, 2016 at 02:47