
Avoid type promotions due to hard-coded tolerances #438

Merged
merged 2 commits into JuliaMath:master from jmert:patch-1 on Aug 26, 2024

Conversation

jmert
Contributor

@jmert jmert commented Aug 25, 2024

...in the AlefeldPotraShi solver.

I ran into these constants causing a type promotion from Float32 to Float64 in a code I'm writing. Changing these locally avoids the error for me.

Unfortunately, I don't have an easy, isolated reproducer that can be turned into a test case. Until the error arose, I had assumed a non-bracketing method was being used (since I only provide an initial guess), so the implementation must reach this code on its own.
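The promotion can be seen in isolation with a minimal sketch (this is illustrative code, not the Roots.jl source; the function names and the `0.7` constant are stand-ins for the solver's hard-coded tolerances):

```julia
# Illustrative sketch (not Roots.jl source): a bare Float64 literal inside a
# generic function silently promotes Float32 inputs to Float64.
function step_hardcoded(x::T) where {T<:AbstractFloat}
    λ = 0.7        # Float64 literal, so x * λ promotes to Float64
    return x * λ
end

function step_typed(x::T) where {T<:AbstractFloat}
    λ = T(0.7)     # constant converted to the working precision first
    return x * λ
end

println(typeof(step_hardcoded(1.0f0)))  # Float64: the unwanted promotion
println(typeof(step_typed(1.0f0)))      # Float32: precision preserved
```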


codecov bot commented Aug 25, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 86.24%. Comparing base (5794200) to head (4b432e6).
Report is 1 commit behind head on master.

@@           Coverage Diff           @@
##           master     #438   +/-   ##
=======================================
  Coverage   86.24%   86.24%           
=======================================
  Files          38       38           
  Lines        2413     2413           
=======================================
  Hits         2081     2081           
  Misses        332      332           


@jmert
Contributor Author

jmert commented Aug 25, 2024

Before changing anything and (maybe wastefully) re-running CI again, it looks like the pattern

λ = oftype(rtol, 0.7)

(and similar) can work instead of T(0.7) / oneunit(T). Would that be a preferred patch?
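For reference, `oftype(x, y)` is defined as `convert(typeof(x), y)`, so the constant simply takes on whatever type `rtol` already has (a quick sketch with illustrative tolerance values):

```julia
# oftype(x, y) == convert(typeof(x), y): the constant adopts rtol's type.
rtol32 = 1.0f-8                        # e.g. a Float32 relative tolerance
λ32 = oftype(rtol32, 0.7)
println(typeof(λ32))                   # Float32

rtol64 = 1.0e-8                        # a Float64 relative tolerance
println(typeof(oftype(rtol64, 0.7)))   # Float64
```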

@jverzani
Member

If I recall, the units need to be stripped for rtol. It seems like oftype might do this as well.

@jverzani
Member

So sure, that seems like a more direct change.

(Thanks much for this.)

...in the `AlefeldPotraShi` solver.
@jmert
Contributor Author

jmert commented Aug 26, 2024

If I recall, the units need to be stripped for rtol. It seems like oftype might do this as well.

My understanding is that oftype will preserve the full type information, but in this case I may have shortened my explanation too much.

The "outer" update_states call should use the type of rtol = options.xreltol, which I think comes from default_tolerances() in src/convergence.jl, where it is annotated as unitless. The next two changes, to the calculateΔ functions, use the same rtol definition (passed via their arguments), as far as I can tell from a very brief reading of the code.

At any rate, switching to what I've just pushed does still pass the unitful tests locally.

@jmert
Contributor Author

jmert commented Aug 26, 2024

I've failed to find a proper test case for my particular problem, but along the way I've discovered another apparent mishandling of unitful numbers. The outstanding issue, which the following patch does not solve, is that the Order0 case does not work for all test cases in the "Test composability with other packages" > "find zero(s) with Unitful" inner loops over orders:

(Edit: Changed the diff to cover two test case loops rather than just one)

diff --git a/src/convergence.jl b/src/convergence.jl
index c7379b4..4746638 100644
--- a/src/convergence.jl
+++ b/src/convergence.jl
@@ -205,7 +205,7 @@ function is_small_Δx(
     state::AbstractUnivariateZeroState,
     options,
 )
-    δ = abs(state.xn1 - state.xn0)
+    δ = _unitless(abs(state.xn1 - state.xn0))
     δₐ, δᵣ = options.xabstol, options.xreltol
     Δₓ = max(_unitless(δₐ), _unitless(abs(state.xn1)) * δᵣ)
     Δₓ = sqrt(sqrt(sqrt((abs(_unitless(Δₓ)))))) # faster than x^(1/8)
diff --git a/test/test_composable.jl b/test/test_composable.jl
index b0ff7bf..bd6fede 100644
--- a/test/test_composable.jl
+++ b/test/test_composable.jl
@@ -29,12 +29,14 @@ using ForwardDiff
         y0 = 16m
         y(t) = -g * t^2 + v0 * t + y0
 
-        for order in orders
+        @testset "$order" for order in orders
             @test find_zero(y, 1.8s, order) ≈ 1.886053370668014s
+            @test find_zero(y, 1.8f0s, order) isa typeof(1.88f0s)
         end
 
-        for M in [Roots.Bisection(), Roots.A42(), Roots.AlefeldPotraShi()]
+        @testset "$M" for M in [Roots.Bisection(), Roots.A42(), Roots.AlefeldPotraShi()]
             @test find_zero(y, (1.8s, 1.9s), M) ≈ 1.886053370668014s
+            @test find_zero(y, (1.8f0s, 1.9f0s), M) isa typeof(1.88f0s)
         end
 
         xrts = find_zeros(y, 0s, 10s)
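For context on the patch above, a unit-stripping helper in the spirit of `_unitless` can be written by dividing by `oneunit`; this is a sketch of the idea, and Roots.jl's actual internal definition may differ:

```julia
# Sketch of a unit-stripping helper (the real Roots._unitless may differ):
# dividing by oneunit(x) cancels any units while keeping the numeric type,
# so a Float32 quantity stays Float32 after stripping.
my_unitless(x) = x / oneunit(x)

println(my_unitless(3.0f0))  # 3.0f0 (plain Float32 passes through unchanged)
println(my_unitless(2.5))    # 2.5
```

With Unitful.jl loaded, `oneunit(1.5f0u"m")` is `1.0f0u"m"`, so the same one-liner would strip meters while preserving Float32.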

@jverzani
Member

Thanks! Am I waiting for this new diff to be included before merging?

@jverzani
Member

Thanks again!! If this is ready to merge just let me know.

@jmert
Contributor Author

jmert commented Aug 26, 2024

Thanks! Am I waiting for this new diff to be included before merging?

I've added a slightly tweaked version of what I described above. With just the additional cases in the two loops, it became clear that the change in is_small_Δx is necessary; there was also a Float32 -> Float64 promotion happening because the constant g was a unitful Float64, which caused my proposed tests to fail in some cases. After changing g to a unitful Rational instead, the new tests pass even without the change in is_small_Δx, which I assume means the change in types caused a slightly different code path to be taken. Therefore, to directly test the fix in is_small_Δx, I dug into the internals and added an explicit test.

I still haven't discovered a way to directly test my original problem and fixes (to the tolerances in AlefeldPotraShi), but at any rate, the pair of commits should be ready to go.
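The promotion caused by a Float64 constant, and why a Rational avoids it, can be seen even without Unitful (the values below are illustrative, not the test suite's actual g):

```julia
t = 1.8f0                         # Float32 state, like 1.8f0s in the tests

g_float = 9.8                     # Float64 constant: promotes the expression
println(typeof(g_float * t^2))    # Float64

g_rat = 49 // 5                   # Rational constant: Rational * Float32
println(typeof(g_rat * t^2))      # Float32 -- the input precision survives
```

The same promotion rules apply to unitful quantities, which is why switching g from a unitful Float64 to a unitful Rational lets Float32 inputs stay Float32.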

@jmert
Contributor Author

jmert commented Aug 26, 2024

Thanks again!! If this is ready to merge just let me know.

Yes, good to go.

@jverzani jverzani merged commit 53670e8 into JuliaMath:master Aug 26, 2024
14 checks passed
@jmert jmert deleted the patch-1 branch August 26, 2024 17:14