Specialization of standard posits #343
@RaulMurillo I totally agree with your request. I started that work a while back, and also began redesigning the implementation on a fast limb-based system. The specialization mechanism would still be the better-performing configuration, as we can make static decisions on the specific parameters of the posit. Thus a reasonable shortcut to high-performance standard posits with es=2 would be to add the specialized implementations. Would you be able to drive this development?
Sure! As far as I know, it should resemble the previous specialized implementations, with adjustments to the parameters and arithmetic operations for those specific formats. Am I right? Also, could you please point me to some regression tests to ensure that the specialized implementations provide the same functionality as the default ones?
I can set up the regression tests for the standard specialized posits so you can focus on the implementation.
@RaulMurillo I have created a branch 'fast_specialized_posit', and enabled the fast posit path in the regressions. I put a skeleton together for posit<16,2> and that yielded this:
That is for REGRESSION_LEVEL_1. Levels 2, 3, and 4 will add logic, randoms, and the math library to the mix. I also enabled it in the CI, so any check-ins against that branch will automatically run the regression suite. I can show you how to work with the REGRESSION levels and the manual test regression infrastructure.
The 32-bit and 64-bit posits are much more difficult to validate, as the native double and long double types are not sufficiently precise to serve as a reference. I have not weighed down the regression suite with a high-precision reference, as it would make the regression suites run too slowly. For 32-bit and 64-bit regression testing we should do one of two things:
1. create a judicious direct testing suite that targets specific corner cases, so that we stay fast and lightweight
A third option is to complete the Priest arithmetic type that I started as an adaptive-precision Oracle; it still has hardware support and thus will run quickly enough not to weigh down the CI regression cycle.
This is what you should see when you build with
fast specialized posit<64,2> is stubbed out, as we need to figure out how to properly regression-test it.
Thank you Theo!
@RaulMurillo https://www.semanticscholar.org/paper/Algorithms-for-arbitrary-precision-floating-point-Priest/93e08bcc5581478bf109c22e68b84cf08e4d7354 I started this work but ran into a problem reproducing the paper's results. I need to find a brother in arms to RCA the problem and get over the hump.
@RaulMurillo if you look at the regression suite for posits in UNIVERSAL_ROOT/include/universal/verification/posit_test_suite.hpp you will find a test called VerifyConversion. This test uses the inductive property that a posit<nbits+1, es> value with the last bit set falls exactly in the middle of two adjacent posit<nbits, es> values. We can therefore test all the rounding cases for an arbitrary posit by using this property and adding/keeping/subtracting a delta at that midpoint to generate the round-up, tie-to-even, and round-down values.
One idea for a quick regression test is to visit all (and only) the encodings where the rounding decision is going to rejigger the regime, and use this property to generate the golden reference. For 32-bit posits we would need to sit on a system that has proper long doubles, but for 64-bit posits we need at least a triple-double reference arithmetic. It would be a lovely addition to Universal to have David Bailey's double-double and quad-double, plus a triple-double arithmetic type, so that the library has a fast Oracle number system for these types of verifications. Douglas Priest's arithmetic would be the adaptive-precision Oracle for arbitrary-precision questions.
@RaulMurillo also don't forget that there is a method: when the rounding doesn't change the field configuration, the rounding algorithm is 'invariant' across field configurations, so there is no need to test all those cases once you have proven that one works. But when the rounding changes the field configuration, the rounding needs to be verified.
@RaulMurillo how is this task going? Do you need any help? |
Hi @Ravenwater, I'm sorry, but I've been very busy these days (and will be for the coming days) with the end of the course. I don't think I will be able to dedicate myself to this for a few weeks, but when I have progress I'll let you know. If you want to make progress on this in the meantime, go ahead; I don't want to hold up this task.
@RaulMurillo thanks for the update. I am currently focused on logarithmic number systems, so I was happy to delegate the task :-) When you get time, the posit community will thank you. I am working on Priest and faithfully rounded number systems to try to solve the validation problem of big posits. But that is adjacent, so it won't get in the way.
I see that in #384 part of this has been merged. What is the status of these specializations? I'm especially interested in the Posit16 case, in case it is incomplete. Thanks!
Hello @Ravenwater @theo-lemurian. I tested the specialized posit<16,2> that was merged into main, and the results of my application are incorrect (and different from the generic posit<16,2>, which does work properly). Could you give me some pointers on where to look at how the posit<16,2> format is specialized, and how to run some tests to check what is failing? I guess I should only modify https://github.com/stillwater-sc/universal/blob/main/include/universal/number/posit/specialized/posit_16_2.hpp?
Can you share the failures that you are seeing in the specialized posit<16,2>? If you want to fix/extend these specialized implementations, the architecture is that of a standard template specialization for the specific template arguments, in this case nbits = 16 and es = 2. The functionality is always in a single include file, in this case https://github.com/stillwater-sc/universal/blob/main/include/universal/number/posit/specialized/posit_16_2.hpp. Then there are the regression tests; for posits, you can find these here: https://github.com/stillwater-sc/universal/tree/main/static/posit/specialized. When you find the incorrect behavior, you can try to fix it and add regression tests to posit_16_2.cpp so that we close the hole.
Unfortunately I cannot share the application I'm running, but it is quite large and I haven't pinned down exactly when/how it fails. In any case, it seems like a generalized/common output error, because I'm getting practically random output results. To fix this, I think it would be better to run and check the regression tests. Could you point me to how I can build and run the tests that you mention in https://github.com/stillwater-sc/universal/tree/main/static/posit/specialized? I can't seem to find any documentation on that. The CMakeLists.txt in that directory doesn't make much sense to me, and I guess there is another top-level one.
The CMake file you want to look at is the top-level one in the root. The build of Universal allows you to enable and disable specific sets of tests. If you just want to run the posit tests, issue the following cmake configuration commands:
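The original message's command line was not preserved here; as a sketch, the configuration looks something like the following. The option names below are assumptions on my part, not verified against the repository; list the options your checkout actually defines with `cmake -LAH` or by reading the top-level CMakeLists.txt:

```shell
# from the Universal repository root; flag names are illustrative,
# verify the real option names with: cmake -LAH
mkdir -p build && cd build
cmake -DBUILD_DEMONSTRATION=OFF -DBUILD_NUMBER_POSITS=ON ..  # hypothetical flags
make -j8
make test
```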
Now you will have all the posit regression tests enabled, and nothing else. A simple make and make test will then build and run the regression tests. The specialized posit_16_2 regression test executable will sit in the build directory under specialized, so when you modify posit_16_2.cpp and rebuild, you can focus your testing on just that executable, which is much quicker than constantly running the full regression suite.
Perfect. I will try this and see if I can fix it. Thanks @theo-lemurian!
@davidmallasen I had a typo in the cmake command line, which I have fixed in the message above. Here is the output you should see when you want to build just the posit regression tests:
When the build is done, the executable you are looking for will be `./static/posit/specialized/fast_posit_16_2`
Hello @Ravenwater ,
I'm in the main branch with commit hash
The regression tests are the same and up to date for general and specialized; we just swap out the arithmetic type. Given that the bit pattern checks are passing for both general and specialized arithmetic, I assume that the arithmetic type is tested. I am wondering if we have a specific problem with the storage format. A posit32 fits nicely in a standard unsigned int, but a posit16 needs to sit in a uint16_t. If we aggregate that in vectors and matrices, we need to be certain that the compiler environment doesn't straddle past the uint16_t. Are there analytical test cases in your application that we can use to test that memory alignment is honored?
@davidmallasen got some new info. I modified the regression test to do an exhaustive enumeration of the state space, and this popped out:
The specialized versions appear not to have the same exception behavior as the general version. Can you think of a mechanism that you can use in your app to catch the different exceptions? Typically the boilerplate I use is this:
@davidmallasen oh, this is so embarrassing! I went through the whole history of this issue that @RaulMurillo started. Raul offered to do the implementation, as posit<16,2> is now the standard, and I had implemented the posit<16,1> of the previous standard. I created the posit<16,2> file and regression test and was depending on Raul to do the implementation. The specialized posit<16,2> is a verbatim copy of the posit<16,1>, except for the template parameters. What we are seeing is the result of the specialized posit<16,2> actually implementing posit<16,1> behavior.
The way the regression tests are written is that I enumerate the bit encodings, derive the double floating-point value, do a reference computation with the double values, use the double-to-posit conversion to create the golden value, and then do the posit operation natively and compare the result to the golden value. So if the type implements any of these consistently, which in our case it does, as a posit<16,1>, the regression test will pass.
Ok, so the solution is now known: we need to implement the specialized posit<16,2>.
Hello @Ravenwater . Thanks! This makes sense now, although I have a couple of comments:
I'll work on trying to change the
Also @Ravenwater, do you have any documentation on the algorithm and notation you are following here? It differs from what I'm used to (I follow the scheme in section II-A of our PERCIVAL paper).
I also started to try to fix the posit<16,2>. I am working in the v3.74 branch. I can go and focus on the bugs you have just highlighted, and you can maybe finish the posit<16,2> implementation.
@davidmallasen the specialized posits follow the softposit algorithms, with the proper C++ skeleton around them to create a plug-in type.
@davidmallasen This commit is a new baseline to work from: cac65dc. I discovered that the fast
Next step for me is to bring in a reference posit we can use to put this system back together.
In this commit: 025e51f I have added a reference posit type; you can bring this type in via:
It has the same interface as the regular
Hello @Ravenwater. Thanks for all the info and the work so far! It looks like a plan to me,
@davidmallasen correct, just the specialized posit<32,2>. I'll look at the commit history on that file to see when it changed and whether there is a quick fix.
@Ravenwater I don't understand the rationale behind changing the decode_regime, except for the different shift parameter (m). I looked at the code in softposit, but there are no comments there to help guide what is going on.
I am postulating here, but the algorithm that John and Cerlane developed accumulates the fraction bits and leaves the MSB set to 0 so as not to trigger any signed/unsigned reinterpretation. As the exponent field is now potentially 2 bits, we need to shift away the exponent field by adding one more to the left shift. One problem that I haven't reverse-engineered is how they deal with the situation when the encoding doesn't have a full exponent field, like 0b0.000'0000'0000'001.0. Maybe it is as simple as the fact that if you don't have enough bits for a full exponent, your fraction bits are zero too, and thus 'over-shifting' does not matter.
P.S. commit fb0d0ed contains an additional test case that brings in both
The exponent field comes after the regime, and in that function we are basically extracting the regime and returning the remaining bits (exponent and fraction), so this shouldn't have to be taken into account here? It is the same as in the
Since the values of the bits to the right of the LSB should be zero, I think this is accomplished when doing the shifts? I will pull and check that new commit, thanks.
I just realized that SoftPosit has positX_2 code that could help us! I'll continue looking at this tomorrow. https://gitlab.com/cerlane/SoftPosit/-/blob/master/source/s_addMagsPX2.c
I got the add for positive values working, but we need the subtraction routine to work as well. I also restructured quite a bit of the regression tests across all the number systems, so quite a few files have changed. If you have time tomorrow, @davidmallasen, take a look at operator-=() and see how that works.
Ah, this makes sense @Ravenwater. I didn't realize that when adding two numbers with different signs, it calls the -= operator, and I didn't change that one. I'll have a look at it. I only tackled += first, to have something small working and then fill in the rest.
Could you have a look at #404 @Ravenwater? I think with this we could try all the exhaustive testing on the specialized posit<16,2> against the oracle posit<16,2>, since the rest of the operations should be correct. I'm not sure about the
@davidmallasen I got the regime and exponent field algorithms figured out, but I haven't been able to understand the rounding algorithm that SoftPosit implements. I checked in the code, as I need a second pair of eyes on this to try to figure out how it is supposed to work. Here are the results of the regression tests:
It appears that the bitNPlusOne calculation is not correct and is causing these failures, but how to fix it is stumping me.
@davidmallasen @RaulMurillo finally got it figured out. We now have a fast posit<16,2>!
Hello @Ravenwater. Great news, thanks for debugging that! I've been out of the office for a couple of weeks, but I'll check this soon to corroborate that it's also working on our end. In the end, did you use the posit oracle to check the bit patterns, or was that too slow?
@davidmallasen the posit oracle was very useful in debugging and generating hypotheses. Fundamentally, I had to reverse-engineer the algorithm that John and Cerlane had created. They created algorithms for es = 0, 1, and 2, and these take different shortcuts to encode/decode and round. The debugging allowed me to rediscover that (it has been 5 years since I wrote any of the posit code). Once I knew what was going on, I realized that the rounding was using the shortcuts/interpretation of the es=1 algorithm, and thus I needed to rip it out and replace it with the es=2 algorithm. Given the amount of time I spent on this, you'd better use this type :-)
Thanks for the info and for all the time you put into this @Ravenwater. We definitely will be using this, since initial tests show it improves performance by around 2.5x. However, with my application the results are not the same as with the non-specialized posit<16,2>, although they are similar. I'll try to check why this is happening, but if the exhaustive tests are succeeding, it might be some bit-level difference between the non-specialized and specialized algorithms.
The verification of both implementations happens with the same test, which follows posit arithmetic rules. If there is something inconsistent, the most likely place to look would be the special cases: NaR, NaN assignment, divide by zero. Now that we have the
Perfect! Using the |
Since the latest standard for posits (https://github.com/posit-standard/Posit-Standard-Community-Feedback) fixes the exponent size to 2 bits, it would be desirable to have fast specializations of posit types with 2 exponent bits (at least 8, 16, 32, and even 64 bits) for performance in software experimentation.
I don't know how difficult it would be to adapt the current specialized/posit_8_0.hpp and specialized/posit_16_1.hpp (specialized/posit_32_2.hpp will be the same, I guess) to the new standard.