CREPE model Tensorflow on Android #79
I'm using it right now in an iOS/Android React Native app with TFLite. You can read up on how to convert a Keras model to TFLite here: https://www.tensorflow.org/lite/convert. I just had to write a small conversion script that builds the Keras model, loads the weights from the .h5 files provided in the repo, and then converts it with the tf.lite.TFLiteConverter.from_keras_model(...) API.
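The conversion script described above might look roughly like the following sketch. The helper `build_model_fn` is a placeholder for whatever code rebuilds the CREPE architecture (the comment's author doesn't share their exact script), and the weights path is assumed to point at one of the .h5 files from the repo:

```python
# Sketch of a Keras-to-TFLite conversion script, assuming a caller-supplied
# `build_model_fn` that reconstructs the CREPE architecture and a weights
# file from the repo. Names here are illustrative, not from the thread.
import tensorflow as tf

def convert_crepe_to_tflite(build_model_fn, weights_path, out_path):
    # Rebuild the Keras model and load the pretrained weights.
    model = build_model_fn()
    model.load_weights(weights_path)
    # Convert with the API mentioned in the comment above.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_bytes = converter.convert()
    # Write the flatbuffer; this file is what the Android/iOS app bundles.
    with open(out_path, "wb") as f:
        f.write(tflite_bytes)
    return len(tflite_bytes)
```

The resulting .tflite file can then be loaded on-device with the TFLite interpreter (e.g. `org.tensorflow.lite.Interpreter` on Android).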
That's great, I'll check it out. Thanks for the help!
Remy
Hey, were you able to load a model bigger than the tiny one? The bigger the model, the more latency I see during inference.
Of course, bigger models need more computation. On a low-end Android device (a Samsung Galaxy A12), I can only run the tiny model in real time. On an iPhone 11, I can use the "small" model without problems. Which device are you using?
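When comparing model capacities for real-time use, it helps to measure average per-frame inference time directly. Here is a minimal, framework-agnostic timing harness (a prototyping sketch; on Android you would time around the TFLite interpreter's run call instead):

```python
# Minimal latency harness: average wall-clock time per inference call.
# `infer_fn` is any callable that runs one inference on `frame`.
import time

def measure_latency(infer_fn, frame, n_runs=50):
    # Warm up once so one-time setup costs don't skew the average.
    infer_fn(frame)
    t0 = time.perf_counter()
    for _ in range(n_runs):
        infer_fn(frame)
    # Return the mean latency in milliseconds.
    return (time.perf_counter() - t0) / n_runs * 1000.0
```

For real-time pitch tracking, the measured latency has to stay below the hop interval between frames, which is why the tiny model is the only option on some low-end devices.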
Hi,
I use the CREPE model in a web browser, where it works pretty well, but is there a way to integrate it into an Android or Flutter web/mobile app using the TensorFlow library?
The model works nicely when the mic is accessed from the web browser (with ml5.js, for example). I'd like to connect another source, like the phone's mic. Is that possible?
Could you please help me understand?
I believe it should be possible. I don't want to re-train the model; it already works fine. I just need to understand how to adapt the input audio data from another source to fit the model.
Thanks for your help
Remy
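On adapting audio from another source: the published CREPE models expect mono audio at 16 kHz, sliced into 1024-sample frames that are normalized per frame (the 160-sample hop below corresponds to CREPE's default 10 ms step; treat these constants as assumptions to verify against the model you deploy). A NumPy-only sketch of that preprocessing:

```python
# Sketch: turn raw audio from any source (e.g. a phone mic buffer) into
# the frame format CREPE's published models expect. Assumptions: 16 kHz
# mono input, 1024-sample frames, per-frame standardization, 10 ms hop.
import numpy as np

def prepare_frames(audio, sr, model_sr=16000, frame_len=1024, hop=160):
    audio = np.asarray(audio, dtype=np.float32)
    # Resample to the model rate via linear interpolation (a simple stand-in
    # for a proper resampler such as a polyphase filter).
    if sr != model_sr:
        n_out = int(round(len(audio) * model_sr / sr))
        x_old = np.linspace(0.0, 1.0, num=len(audio), endpoint=False)
        x_new = np.linspace(0.0, 1.0, num=n_out, endpoint=False)
        audio = np.interp(x_new, x_old, audio).astype(np.float32)
    # Slice into overlapping frames and standardize each one
    # (zero mean, unit variance), as the reference implementation does.
    frames = []
    for start in range(0, max(len(audio) - frame_len + 1, 0), hop):
        f = audio[start:start + frame_len]
        f = f - f.mean()
        std = f.std()
        if std > 0:
            f = f / std
        frames.append(f)
    return np.stack(frames) if frames else np.empty((0, frame_len), np.float32)
```

Each returned row is one model input, so no re-training is needed: whatever the audio source, reshaping it into this format should be enough to feed the existing model.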