Discussion: ImageClassifier #83
To add to this list: shall we allow relative paths for loading custom Teachable Machine models?
Hi @jackbdu, yes, I agree we should support loading of local model files! The neural network module has support for this (see this old tutorial that I believe should still work with ml5 1.0). I'll note that the p5 web editor opens up some additional complexity given how "local" files are stored via an S3 bucket. I would suggest opening a new issue to track this functionality specifically. @OrpheasK, do you know what the current, expected behavior is for loading a model not from a URL?
@shiffman I believe this is currently not supported (the ImageClassifier only handles URLs beyond the four model names) but I can look into this! I have only used a URL to load TM models but I can see how it makes sense to load locally as well, especially if it's a functionality supported by other modules.
I was also wondering what it would entail for the p5 web editor. A temporary solution (live demo, source code) that worked for me for both local servers and GitHub Pages (not for the p5 web editor) is
I'm opening this thread as a place for us to discuss ongoing work on the `ImageClassifier` model. Some of the tasks are:

- Figure out how and when users can pass options (such as the `topk` variable). Should it be when they create the model? When they call `classify`/`classifyStart`? Or both? In the new version of the detector models we eliminated the ability to pass `options` to the `detect` function. Following that example we would support `ml5.imageClassifier('doodlenet', { topk: 5 })` but not `myClassifier.classify(image, 5)`. Right now passing the option to `ml5.imageClassifier()` only works for `mobilenet`.
- Decide whether to keep handling the `ml5Specs` property in the `model.json` file for custom models. That property would only be set if the user trained their model using the ml5 `FeatureExtractor`, which we haven't ported over here. But users may have trained models on the old version, so I guess we should keep it?
- Double-check our image pre-processing assumptions. Is the expected input size `224`? Is our range of normalized values correct? Can we infer these values from the model in case the user has a custom-trained model which doesn't match our expectations?