How can machine learning support people's existing creative practices? How might it expand people's creative capabilities?
Dream up and design the inputs and outputs of a real-time machine learning system for interaction and audio/visual performance. This could be an idea well beyond the scope of what you can do in a weekly exercise.
Create your own p5+ml5 sketch that trains a model with real-time interactive data. This can be a prototype of the aforementioned idea or a simple exercise that builds on this week's code examples. Here are some exercise suggestions; a rough sketch combining a few of them follows the list:
- Can you invent a more elegant and intuitive interaction for collecting real-time data than clicking buttons?
- Train a model using several PoseNet keypoints or even the full PoseNet skeleton. You can build off of the example we started in class.
- Can you design a system with multiple outputs? For example, what if you trained a model to output red, green, and blue values?
- What other real-time inputs might you consider beyond mouse position, image pixels, or face/pose tracking? Could you use real-time sensor data?
- What other real-time outputs might you consider beyond color or sound modulation? Could the output be a physical computing device?
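Here is a minimal sketch of what combining a few of these suggestions might look like, assuming the ml5.neuralNetwork and ml5.poseNet APIs from this week's examples (ml5 ~0.5) plus p5.js. It uses key presses rather than buttons to record samples, flattens 17 PoseNet keypoints (34 x/y values) into the inputs, and trains a regression with three outputs (red, green, blue). The key bindings and color labels are just illustrative choices, not part of the assignment.

```javascript
// Collect PoseNet keypoints with a key press, train a small neural network
// to output an RGB color, and draw the predicted color once training is done.
let video;
let poseNet;
let pose;                       // most recent pose from PoseNet
let brain;                      // ml5 neural network
let targetColor = [255, 0, 0];  // color label to pair with the next sample
let predictedColor = [0, 0, 0];
let state = 'collecting';       // 'collecting' -> 'training' -> 'predicting'

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // PoseNet reports 17 keypoints; their (x, y) positions become the inputs.
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (poses) => {
    if (poses.length > 0) pose = poses[0].pose;
  });

  // 34 inputs (17 keypoints * x and y), 3 outputs (r, g, b), regression task.
  brain = ml5.neuralNetwork({ inputs: 34, outputs: 3, task: 'regression', debug: true });
}

// Flatten the current pose into a single array of keypoint positions.
function poseToInputs(p) {
  const inputs = [];
  for (const kp of p.keypoints) {
    inputs.push(kp.position.x, kp.position.y);
  }
  return inputs;
}

function keyPressed() {
  // Number keys pick the color label, SPACE records a sample, T trains.
  if (key === '1') targetColor = [255, 0, 0];
  if (key === '2') targetColor = [0, 255, 0];
  if (key === '3') targetColor = [0, 0, 255];

  if (key === ' ' && pose && state === 'collecting') {
    brain.addData(poseToInputs(pose), targetColor);
  }

  if (key === 't' && state === 'collecting') {
    state = 'training';
    brain.normalizeData();
    brain.train({ epochs: 50 }, () => {
      state = 'predicting';
    });
  }
}

function draw() {
  image(video, 0, 0);
  if (state === 'predicting' && pose) {
    // Ask the model for an RGB value for the current pose (fine to do per
    // frame in a quick prototype) and show it as a swatch under the video.
    brain.predict(poseToInputs(pose), (error, results) => {
      if (!error) predictedColor = results.map((r) => r.value);
    });
    noStroke();
    fill(predictedColor[0], predictedColor[1], predictedColor[2]);
    rect(0, height - 80, width, 80);
  }
}
```

The same structure works for other inputs and outputs: swap the PoseNet keypoints for sensor readings, or map the three predicted values to something other than color, such as sound parameters or a physical computing device.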
Complete a blog post with your response, your real-time ML system design, and documentation of your code exercise, and link to it from the homework wiki.