DIY Neural Network

Session A: Real-Time Data

Objectives:

  • Evaluate and critique the models trained in the week 4 assignment.
  • Compare training and inference in real-time interactive systems to more traditional machine learning pipelines.

Pose Tracking
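
A minimal p5+ml5 sketch for real-time pose input (a sketch of the idea, not the exact class example) might look like this:

```javascript
// Minimal real-time pose input: draw every PoseNet keypoint over the webcam feed.
let video;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  // Load PoseNet on the webcam and keep the latest results
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (results) => { poses = results; });
}

function draw() {
  image(video, 0, 0, width, height);
  for (const p of poses) {
    for (const keypoint of p.pose.keypoints) {
      if (keypoint.score > 0.2) {
        noStroke();
        fill(255, 0, 0);
        ellipse(keypoint.position.x, keypoint.position.y, 10, 10);
      }
    }
  }
}
```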

Face Tracking
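
A similar sketch for real-time face input, here assuming ml5's facemesh model (the class example may use a different face tracker):

```javascript
// Minimal real-time face input: draw the face mesh points over the webcam feed.
let video;
let predictions = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  // Load the facemesh model and keep the latest predictions
  const facemesh = ml5.facemesh(video, () => console.log('facemesh ready'));
  facemesh.on('predict', (results) => { predictions = results; });
}

function draw() {
  image(video, 0, 0, width, height);
  for (const face of predictions) {
    for (const [x, y] of face.scaledMesh) {
      noStroke();
      fill(0, 255, 0);
      ellipse(x, y, 3, 3);
    }
  }
}
```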

p5.js Oscillators
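
As a stand-in for the gesture-to-sound mappings discussed in class, a minimal p5.sound sketch mapping the mouse to an oscillator might look like this:

```javascript
// Map mouse position to a p5.Oscillator: x controls frequency, y controls amplitude.
let osc;

function setup() {
  createCanvas(400, 400);
  osc = new p5.Oscillator('sine');
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio can start
  osc.start();
}

function draw() {
  background(220);
  const freq = map(mouseX, 0, width, 100, 1000);
  const amp = map(mouseY, 0, height, 1, 0);
  osc.freq(freq, 0.1); // ramp over 0.1s to avoid clicks
  osc.amp(amp, 0.1);
  text('click to start audio, then move the mouse to play', 10, 20);
}
```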

Related projects that map gesture to sound

Session B: Training the Model

Objectives:

  • Revisit and examine the concepts of classification and regression as applied to real-time interaction.

Examples
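
As a rough sketch of what a real-time training example can look like, the snippet below collects mouse data, trains a regression model, and then runs continuous inference. It assumes ml5's neuralNetwork API (ml5 0.4+); the input/output names and the frequency mapping are illustrative, not taken from the class examples.

```javascript
// Collect (mouseX, mouseY) -> frequency pairs, train a regression model,
// then predict continuously from the live mouse position.
let nn;
let trained = false;
let predicted = 0;

function setup() {
  createCanvas(400, 400);
  nn = ml5.neuralNetwork({
    inputs: ['x', 'y'],
    outputs: ['frequency'],
    task: 'regression', // switch to 'classification' for discrete labels
    debug: true,        // shows the tf.js training visor
  });
}

// Press any key to record the current mouse position as a training example
function keyPressed() {
  const target = map(mouseY, 0, height, 100, 600);
  nn.addData({ x: mouseX, y: mouseY }, { frequency: target });
}

// Click to normalize the collected data and train the model
function mousePressed() {
  nn.normalizeData();
  nn.train({ epochs: 50 }, () => { trained = true; });
}

function draw() {
  background(220);
  if (trained) {
    nn.predict({ x: mouseX, y: mouseY }, (error, results) => {
      if (!error) predicted = results[0].value;
    });
    text('predicted frequency: ' + predicted.toFixed(1) + ' Hz', 10, 20);
  } else {
    text('press keys to collect data, click to train', 10, 20);
  }
}
```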

Assignment 5: Due Sunday, October 6 at 6pm

  1. Watch Machine Learning for Human Creative Practice, Dr. Rebecca Fiebrink's talk at Eyeo 2018, and write a response to the following question posed by Dr. Fiebrink:
    • How can machine learning support people's existing creative practices? Expand people's creative capabilities?
  2. Dream up and design the inputs and outputs of a real-time machine learning system for interaction and audio/visual performance. This could be an idea well beyond the scope of what you can do in a weekly exercise.
  3. Create your own p5+ml5 sketch that trains a model with real-time interactive data. This can be a prototype of the aforementioned idea or a simple exercise that builds on this week's code examples. Here are some exercise suggestions:
    • Can you invent a more elegant and intuitive interaction for collecting real-time data than clicking buttons?
    • Train a model using several PoseNet keypoints or even the full PoseNet skeleton. You can build off of the example we started in class (see the sketch after this list).
    • Can you design a system with multiple outputs? For example, what if you train a model to output a red, green, and blue value?
    • What other real-time inputs might you consider beyond mouse position, image pixels, or face/pose tracking? Could you use real-time sensor data?
    • What other real-time outputs might you consider beyond color or sound modulation? Could the output be a physical computing device?
  4. Complete a blog post with your response, your real-time ML system design, and documentation of your code exercise, and link to it from the homework wiki.
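
As one possible starting point for the PoseNet-keypoints suggestion above, here is a rough sketch that feeds the two wrist keypoints into a two-class classifier, assuming ml5's poseNet and neuralNetwork APIs; the key bindings and class labels are illustrative only:

```javascript
// Use the left and right wrist keypoints from PoseNet as inputs to a
// two-class ml5 neural network classifier.
let video;
let pose = null;
let nn;
let classifying = false;
let predictedLabel = '';

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (results) => {
    if (results.length > 0) pose = results[0].pose;
  });
  nn = ml5.neuralNetwork({ inputs: 4, outputs: 2, task: 'classification', debug: true });
}

// Flatten the two wrist keypoints into a 4-number input vector
function wristInputs(p) {
  return [p.leftWrist.x, p.leftWrist.y, p.rightWrist.x, p.rightWrist.y];
}

// Press 'a' or 'b' to label the current pose, 't' to train
function keyPressed() {
  if (key === 't') {
    nn.normalizeData();
    nn.train({ epochs: 50 }, () => { classifying = true; });
  } else if ((key === 'a' || key === 'b') && pose) {
    nn.addData(wristInputs(pose), [key.toUpperCase()]);
  }
}

function draw() {
  image(video, 0, 0, width, height);
  if (classifying && pose) {
    nn.classify(wristInputs(pose), (error, results) => {
      if (!error) predictedLabel = results[0].label;
    });
    fill(255);
    textSize(32);
    text(predictedLabel, 20, 40);
  }
}
```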