
Add OffsetScaling predictor #87

Closed
odow opened this issue Aug 28, 2024 · 5 comments · Fixed by #89

Comments

@odow
Collaborator

odow commented Aug 28, 2024

It probably just needs to be:

```julia
function OffsetScaling(offset, factor)
    # Assumes `import LinearAlgebra` and that `Affine` is the package's
    # existing affine predictor computing `A * x + b`.
    return Affine(LinearAlgebra.Diagonal(1 ./ factor), -offset)
end
```

But @pulsipher thinks this is useful in #82.

@pulsipher

pulsipher commented Aug 28, 2024

Admittedly, this is a simple affine transformation, but it is one that is almost always needed when embedding an ML model into an optimization problem. So, automating this adds a convenience factor.
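For concreteness, here is a small numeric sketch in Python (the function names are illustrative, not MathOptAI's API) of why this is an affine transformation: under the common standardization convention `y = (x - offset) / factor`, the map is exactly `y = A x + b` with `A = diag(1 ./ factor)` and `b = -offset ./ factor`.

```python
def offset_scale(x, offset, factor):
    # Elementwise standardization: y_i = (x_i - offset_i) / factor_i.
    return [(xi - oi) / fi for xi, oi, fi in zip(x, offset, factor)]

def as_affine(offset, factor):
    # Return (A_diag, b) so that y_i = A_diag[i] * x_i + b[i]
    # reproduces offset_scale. A is diagonal, so a vector suffices.
    A_diag = [1.0 / fi for fi in factor]
    b = [-oi / fi for oi, fi in zip(offset, factor)]
    return A_diag, b

x = [2.0, 10.0]
offset = [1.0, 4.0]
factor = [0.5, 2.0]
A_diag, b = as_affine(offset, factor)
affine_y = [a * xi + bi for a, xi, bi in zip(A_diag, x, b)]
assert affine_y == offset_scale(x, offset, factor)  # both give [2.0, 3.0]
```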

Moreover, I find it best practice to ship trained ML models with the preprocessing layers that encode and decode the inputs and outputs, respectively. See https://www.tensorflow.org/guide/keras/preprocessing_layers#benefits_of_doing_preprocessing_inside_the_model_at_inference_time. Supporting these types of layers helps simplify the workflow and reduces the chance of modelling errors. For instance, if I train a Keras or PyTorch NN model and embed the normalization as a layer for inference, it would be ideal to have MathOptAI read in that model directly so that I wouldn't need to worry about normalizing the variables. Otherwise, I would have to manually look up the scaling values in Keras or PyTorch and then input them as transformations in MathOptAI.

@odow
Collaborator Author

odow commented Aug 28, 2024

What PyTorch normalization layers do you want support for?

@odow
Collaborator Author

odow commented Aug 28, 2024

Let's follow (F)lux and call this Scale(scale::Vector{T}, bias::Vector{T}).

@pulsipher

> Let's follow (F)lux and call this Scale(scale::Vector{T}, bias::Vector{T}).

That works. I have mostly used Keras in the past, so I am not sure what the equivalent layer is in PyTorch.

@odow odow closed this as completed in #89 Aug 28, 2024