- Learn the steps to construct a vanilla neural network and train a classification model with ml5.js.
- Understand Neural Network architecture
- What is a Perceptron?
- What is a multi-layered perceptron?
- Activation Functions
- Understand the terminology of the training process
- Training
- Learning Rate
- Epochs
- Batch size
- Loss
- Understand the difference between training and inference
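The training terms above, and the training/inference distinction, can be sketched in a few lines of plain JavaScript (not ml5.js, to keep it self-contained): a single weight fit by gradient descent, with the learning rate, epochs, batch size, and loss all made explicit. The hyperparameter values are illustrative.

```javascript
// Toy gradient descent on one weight w, fitting y = 2x.
// Hyperparameters named to match the terms above (illustrative values).
const data = [1, 2, 3, 4].map((x) => ({ x, y: 2 * x }));

const learningRate = 0.05; // how far each update moves the weight
const epochs = 50;         // full passes over the training data
const batchSize = 2;       // examples averaged per weight update

let w = 0; // the single trainable parameter

function loss(batch) {
  // mean squared error between predictions and targets
  return batch.reduce((s, d) => s + (w * d.x - d.y) ** 2, 0) / batch.length;
}

for (let epoch = 0; epoch < epochs; epoch++) {
  for (let i = 0; i < data.length; i += batchSize) {
    const batch = data.slice(i, i + batchSize);
    // gradient of the loss with respect to w, averaged over the batch
    const grad =
      batch.reduce((s, d) => s + 2 * (w * d.x - d.y) * d.x, 0) / batch.length;
    w -= learningRate * grad; // one training step
  }
}

// Training adjusts w; inference just applies the learned model:
console.log(w.toFixed(3));       // close to 2
console.log((w * 5).toFixed(2)); // prediction for x = 5, close to 10
```

The loop over epochs is "training"; the final two lines are "inference" — no weights change, the model is only applied to new input.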
- Week 4 slides
- What is a Neural Network? by 3Blue1Brown
- Invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory (original paper), the perceptron is the simplest possible neural network: a computational model of a single neuron. A perceptron consists of one or more inputs, a processor, and a single output. (More on this history below.)
- NOC Neural Network videos - 10.1 to 10.3 cover the "Perceptron", a model of a single neuron. The Perceptron forms the basis of modern multi-layer deep learning networks.
- Perceptron p5.js code
- Perceptron Slides
- NOC Neural Network chapter 10 - a written explanation of the perceptron, with accompanying code, in sections 10.1 to 10.4.
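The perceptron described above (inputs, a processor, a single output) fits in a short plain-JavaScript sketch: a weighted sum, a step activation, and the classic error-driven update rule, trained here on the AND function (which is linearly separable, so the perceptron can learn it).

```javascript
// A single Rosenblatt-style perceptron: weighted sum of inputs,
// a step activation, and the error-driven learning rule.
function activate(sum) {
  return sum >= 0 ? 1 : -1; // step activation function
}

const inputs = [[0, 0], [0, 1], [1, 0], [1, 1]];
const targets = [-1, -1, -1, 1]; // AND: only [1,1] is positive

let weights = [0, 0];
let bias = 0;
const learningRate = 0.1;

// Train until every example is classified correctly
for (let epoch = 0; epoch < 100; epoch++) {
  let errors = 0;
  inputs.forEach((x, i) => {
    const guess = activate(weights[0] * x[0] + weights[1] * x[1] + bias);
    const error = targets[i] - guess; // 0 when correct, ±2 when wrong
    if (error !== 0) errors++;
    // Nudge each weight in proportion to its input and the error
    weights[0] += learningRate * error * x[0];
    weights[1] += learningRate * error * x[1];
    bias += learningRate * error;
  });
  if (errors === 0) break;
}

inputs.forEach((x) => {
  const guess = activate(weights[0] * x[0] + weights[1] * x[1] + bias);
  console.log(`${x} -> ${guess}`);
});
```

A single perceptron can only draw one straight decision boundary, which is why functions like XOR require the multi-layered networks covered above.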
- MARtLET by Michelle Nagai
- From the Waters by Anne Hege
- This Is Not A Theremin by Guillermo Montecinos and Sofía Suazo
- Eye Conductor by Andreas Refsgaard
- Machine Learning for Human Creative Practice, talk from Dr. Rebecca Fiebrink at Eyeo 2018.
- ml5.js: Train Your Own Neural Network (35 min)
- ml5.js: Save Neural Network Training Data (12 min)
- ml5.js: Save Neural Network Trained Model (11 min)
- ml5.js: Neural Network Regression (10 min)
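The collect → train → save → classify workflow from the videos above can be sketched roughly as follows. This is a browser-only p5.js sketch, assuming p5.js and the ml5.js 0.x `ml5.neuralNetwork` API are loaded via script tags; the key bindings and the `state` variable are illustrative choices, not part of the library.

```javascript
// Sketch of the train-your-own-model workflow (assumes ml5.js 0.x in the browser).
let nn;
let state = 'collecting';

function setup() {
  createCanvas(400, 400);
  nn = ml5.neuralNetwork({
    inputs: ['x', 'y'],
    outputs: ['label'],
    task: 'classification',
    debug: true, // shows the loss chart while training
  });
}

// Collect examples: press a letter key to label the current mouse position
function keyPressed() {
  if (key === 't') {
    state = 'training';
    nn.normalizeData(); // scale inputs to a 0–1 range before training
    nn.train({ epochs: 50 }, finishedTraining);
  } else if (key === 's') {
    nn.saveData(); // download the collected examples as JSON
  } else {
    nn.addData({ x: mouseX, y: mouseY }, { label: key });
  }
}

function finishedTraining() {
  state = 'predicting';
  nn.save(); // download the trained model files for later reuse
}

function draw() {
  background(220);
  if (state === 'predicting') {
    nn.classify({ x: mouseX, y: mouseY }, (error, results) => {
      if (!error) text(results[0].label, mouseX, mouseY);
    });
  }
}
```

The videos cover each stage in depth; the saved data and model files can be reloaded in a later sketch so training doesn't have to be repeated.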
- Revisit and examine the concepts of classification and regression as applied to real-time interaction.
- 🎥 ml5.js: Pose Classification with PoseNet and ml5.neuralNetwork()
- 🎥 ml5.js: Pose Regression with PoseNet and ml5.neuralNetwork()
- Hand pose tracking + Neural Network, demo video
- ml5 Playful Examples by Andreas Refsgaard
- Creatability
- Watch Machine Learning for Human Creative Practice, Dr. Rebecca Fiebrink at Eyeo 2018. Write a response to the following question posed by Dr. Fiebrink:
- How can machine learning support people's existing creative practices? Expand people's creative capabilities?
- Dream up and design the inputs and outputs of a real-time machine learning system for interaction and audio/visual performance. This could be an idea well beyond the scope of what you can do in a weekly exercise.
- Create your own p5+ml5 sketch that trains a model with real-time interactive data. This can be a prototype of the aforementioned idea or a simple exercise where you run this week's code examples with your own data. Here are some exercise suggestions:
- Can you invent a more elegant and intuitive interaction for collecting real-time data than clicking buttons?
- What other real-time inputs might you consider beyond mouse position, image pixels, or face/pose/hand tracking? Could you use real-time sensor data?
- What other real-time outputs might you consider beyond color or sound modulation? Could the output be a physical computing device? Multiple outputs like R,G,B values?
- Improve the handPose example we built in class https://editor.p5js.org/yining/sketches/dX-aN-8E7
- Can you add more keypoints from the hand to the data collection? (All the keypoints?)
- Can you add more classification categories?
- Can you create an interface for training and for showing the model's predictions?
- Can you turn this into a regression model?
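For the last question, turning a classification sketch into a regression one mostly means changing the network options and the output handling. A sketch of the differences, assuming the ml5.js 0.x API; the continuous output name `brightness` is an illustrative choice:

```javascript
// Classification → regression in ml5.js 0.x (browser-only sketch).
const nn = ml5.neuralNetwork({
  inputs: ['x', 'y'],
  outputs: ['brightness'], // a continuous value instead of a category
  task: 'regression',
});

// Collect numeric targets instead of labels:
// nn.addData({ x: mouseX, y: mouseY }, { brightness: 128 });

// After training, use predict() instead of classify():
// nn.predict({ x: mouseX, y: mouseY }, (error, results) => {
//   if (!error) console.log(results[0].value); // the predicted number
// });
```

With `task: 'regression'` the model outputs a number you can map directly onto a visual or sonic parameter, rather than a label with a confidence score.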
- Complete a blog post with your response, your real-time ML system design, and documentation of your code exercise, and link to it from the homework wiki.