The Normalize image preprocessor is missing in KotlinDL.
Normalize a tensor image with mean and standard deviation. This transform does not support PIL Image. Given mean: (mean[1], ..., mean[n]) and std: (std[1], ..., std[n]) for n channels, this transform will normalize each channel of the input torch.*Tensor, i.e., output[channel] = (input[channel] - mean[channel]) / std[channel].
The main question is: should this be an image preprocessing or a tensor preprocessing operation, and could it be implemented with the multik library (a Kotlin analogue of NumPy)?
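For clarity, here is a minimal sketch of the per-channel normalization itself in plain Kotlin, independent of the final API choice and of multik. The function name `normalize` and the channels-last `FloatArray` layout are assumptions for illustration only, not the proposed KotlinDL API.

```kotlin
// Minimal sketch of channel-wise normalization, assuming a channels-last (HWC)
// FloatArray layout. The function name and signature are hypothetical.
fun normalize(input: FloatArray, channels: Int, mean: FloatArray, std: FloatArray): FloatArray {
    require(mean.size == channels && std.size == channels) {
        "mean and std must provide exactly one value per channel"
    }
    val output = FloatArray(input.size)
    for (i in input.indices) {
        val c = i % channels // channel index in a channels-last layout
        output[i] = (input[i] - mean[c]) / std[c]
    }
    return output
}

fun main() {
    // Example: ImageNet-style statistics applied to a tiny 1x2 RGB image (values in [0, 1])
    val pixels = floatArrayOf(0.5f, 0.5f, 0.5f, 1.0f, 0.0f, 0.25f)
    val normalized = normalize(
        pixels, channels = 3,
        mean = floatArrayOf(0.485f, 0.456f, 0.406f),
        std = floatArrayOf(0.229f, 0.224f, 0.225f)
    )
    println(normalized.toList())
}
```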
The desired PR addressing this issue should include:
Implementation (you can take the implementation of Cropping as a reference)
Documentation
JUnit tests in the dataset module
An example showing usage of this image preprocessor
P.S. If you want to take this ticket, please leave a comment below.
P.P.S. Read the Contributing Guidelines.
P.P.P.S. The first usage of Multik will be merged in the ONNX PR next week.
The reference implementation could be taken from torchvision.transforms.