Refactor optimizer/targeter considering ease of use in Python #119
Labels
Kind: Improvement
Priority: normal
Status: Design
Topic: Optimization
High level description
Requirements
Test plans
- **Load targeting information from YAML:** verify that the optimizer can load targeting information from a YAML file, including the targets and targeting methods. Cover valid and invalid YAML files, as well as edge cases such as empty files and missing fields (a sketch of such a test follows this list).
- **Test Gauss-Newton optimization:** verify that the optimizer produces correct results with the Gauss-Newton method across a variety of targets, targeting methods, starting values, and convergence criteria, and that it converges in a reasonable amount of time.
- **Test Levenberg-Marquardt optimization:** the same verification for the Levenberg-Marquardt method: a variety of targets, targeting methods, starting values, and convergence criteria, with convergence in a reasonable amount of time.
- **Test the optimizer on real-world scenarios:** use the optimizer to solve real-world problems, such as satellite rendezvous and orbit determination, and verify that the results match expected outcomes, comparing against ground-truth data where available.
- **Test the optimizer with noisy data:** simulate noisy inputs, e.g. by adding random noise to the inputs or introducing simulated sensor errors, and verify that the optimizer remains robust and converges to reasonable results despite the noise.
- **Test the optimizer with large datasets:** verify that the optimizer handles large inputs, such as long-duration missions or large datasets from real-world scenarios, and still produces reasonable results in a reasonable amount of time.
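To make the first test plan concrete, here is a minimal sketch assuming `serde`/`serde_yaml` for deserialization; `TargetingConfig` and its fields are hypothetical placeholders for whatever schema the design settles on:

```rust
use serde::Deserialize;
use std::collections::HashMap;

/// Hypothetical shape of the targeting setup; the real schema is
/// part of this design work.
#[derive(Debug, Deserialize)]
struct TargetingConfig {
    /// Name of the optimization method, e.g. "levenberg_marquardt".
    method: String,
    /// Targeted values keyed by parameter name.
    targets: HashMap<String, f64>,
    /// Optional iteration cap.
    max_iters: Option<u64>,
}

#[test]
fn load_targeting_yaml() {
    let yaml = "
method: levenberg_marquardt
targets:
  sma_km: 8100.0
  ecc: 0.2
max_iters: 50
";
    let cfg: TargetingConfig = serde_yaml::from_str(yaml).expect("valid file should parse");
    assert_eq!(cfg.targets["sma_km"], 8100.0);
    assert_eq!(cfg.max_iters, Some(50));

    // Edge cases: empty input and missing fields must be rejected,
    // not silently defaulted.
    assert!(serde_yaml::from_str::<TargetingConfig>("").is_err());
    assert!(serde_yaml::from_str::<TargetingConfig>("method: lm").is_err());
}
```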
Design
The current architecture of the LM optimization is the cleanest, and it should serve as the model for both methods. In that vein, the design should switch to implementing a `Problem` as defined by argmin: https://argmin-rs.org/. One thing to consider is that PyO3 does not support generics in the exported interface (which is fair), so any generics need to be made concrete with a wrapping structure.
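As a rough illustration of that wrapping, here is a minimal sketch assuming argmin 0.8 with its `argmin-math`/`ndarray` backend. `TargetingProblem`, its contents, and the stubbed residual/Jacobian bodies are hypothetical placeholders, since they depend on the propagator:

```rust
use argmin::core::{Error, Executor, Jacobian, Operator, State};
use argmin::solver::gaussnewton::GaussNewton;
use ndarray::{Array1, Array2};

/// Concrete (non-generic) wrapper so the problem can cross the PyO3 boundary.
struct TargetingProblem {
    // Propagator handle, targeted values, tolerances, etc.
}

impl Operator for TargetingProblem {
    type Param = Array1<f64>;
    type Output = Array1<f64>;

    /// Residual vector: achieved values minus targeted values.
    fn apply(&self, param: &Self::Param) -> Result<Self::Output, Error> {
        // Propagate with `param` as the control variables, then
        // difference against the targets.
        todo!()
    }
}

impl Jacobian for TargetingProblem {
    type Param = Array1<f64>;
    type Jacobian = Array2<f64>;

    /// Jacobian of the residuals, via finite differencing or hyperduals.
    fn jacobian(&self, param: &Self::Param) -> Result<Self::Jacobian, Error> {
        todo!()
    }
}

/// Drive a solver over the concrete problem.
fn solve(problem: TargetingProblem, x0: Array1<f64>) -> Result<Array1<f64>, Error> {
    let solver = GaussNewton::<f64>::new();
    let res = Executor::new(problem, solver)
        .configure(|state| state.param(x0).max_iters(50))
        .run()?;
    res.state()
        .get_best_param()
        .cloned()
        .ok_or_else(|| Error::msg("optimizer did not produce a solution"))
}
```

The `GaussNewton` solver is used here purely as an example; the same `TargetingProblem` would back whichever Levenberg-Marquardt implementation the design settles on.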
Algorithm demonstration
N/A
API definition
There will now be only one external API,
optimize(...)
where one of the arguments is an enum specifying the method to use (Gauss-Newton with finite differencing or hyperduals, LM, etc.). The implementations won't be exposed; each will live in its own function to keep things maintainable.
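As a hedged sketch of that single entry point (names and signatures are placeholders, not the final API), assuming a recent PyO3 where fieldless enums can be exposed with `#[pyclass]`:

```rust
use pyo3::prelude::*;

/// Method selector exposed to Python; variant names are hypothetical.
#[pyclass]
#[derive(Clone, Copy)]
pub enum OptimizationMethod {
    GaussNewtonFiniteDiff,
    GaussNewtonHyperdual,
    LevenbergMarquardt,
}

/// The single external entry point. Each method's implementation stays
/// in a private function, so the dispatch remains trivial to maintain.
#[pyfunction]
pub fn optimize(method: OptimizationMethod) -> PyResult<Vec<f64>> {
    match method {
        OptimizationMethod::GaussNewtonFiniteDiff => gauss_newton_fd(),
        OptimizationMethod::GaussNewtonHyperdual => gauss_newton_hd(),
        OptimizationMethod::LevenbergMarquardt => levenberg_marquardt(),
    }
}

// Private per-method implementations, stubbed here.
fn gauss_newton_fd() -> PyResult<Vec<f64>> { todo!() }
fn gauss_newton_hd() -> PyResult<Vec<f64>> { todo!() }
fn levenberg_marquardt() -> PyResult<Vec<f64>> { todo!() }
```

From Python this would read as, e.g., `optimize(OptimizationMethod.LevenbergMarquardt)`, keeping the exposed surface to a single function plus one enum.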