Added energy based model paper
shagunsodhani committed Mar 17, 2020
1 parent 038f118 commit 2f47d10
Showing 2 changed files with 77 additions and 0 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -5,6 +5,7 @@ I am trying a new initiative - a-paper-a-week. This repository will hold all tho

## List of papers

* [Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One](https://shagunsodhani.com/papers-I-read/Your-Classifier-is-Secretly-an-Energy-Based-Model,-and-You-Should-Treat-it-Like-One)
* [Massively Multilingual Neural Machine Translation in the Wild - Findings and Challenges](https://shagunsodhani.com/papers-I-read/Massively-Multilingual-Neural-Machine-Translation-in-the-Wild-Findings-and-Challenges)
* [Observational Overfitting in Reinforcement Learning](https://shagunsodhani.com/papers-I-read/Observational-Overfitting-in-Reinforcement-Learning)
* [Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML](https://shagunsodhani.com/papers-I-read/Rapid-Learning-or-Feature-Reuse-Towards-Understanding-the-Effectiveness-of-MAML)
@@ -0,0 +1,76 @@
---
layout: post
title: Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One
comments: True
excerpt:
tags: ['2019', 'ICLR 2020', 'Adversarial Robustness', 'Energy-Based Models', 'Generative Models', 'Hybrid Models', 'Out of Distribution', 'Outlier Detection', 'Out of Distribution Detection', 'AI', 'Adversarial', 'Calibration', 'EBM', 'ICLR', 'Robustness']

---

## Introduction

* The paper proposes a framework for the joint modeling of labels and data by reinterpreting a standard discriminative classifier *p(y\|x)* as an energy-based model of the joint distribution *p(x, y)*.

* Joint modeling provides benefits like improved calibration (i.e., the predictive confidence aligns with the misclassification rate), adversarial robustness, and out-of-distribution detection.

* [Link to the paper](https://arxiv.org/abs/1912.03263)

## Motivation

* Consider a standard classifier $f_{\theta}(x)$ which produces a k-dimensional vector of logits.

* $p_{\theta}(y \| x) = softmax(f_{\theta}(x))[y]$, i.e., the softmax over the logits, indexed at label $y$.

* Using concepts from energy-based models, we can write $p_{\theta}(x, y) = \frac{exp(-E_{\theta}(x, y))}{Z_{\theta}}$, where $E_{\theta}(x, y) = -f_{\theta}(x)[y]$ and $Z_{\theta}$ is the (intractable) partition function (a short sketch of these quantities follows this list).

* $p_{\theta}(x) = \sum_{y}{ \frac{exp(-E_{\theta}(x, y))}{Z_{\theta}}}$

* $E_{\theta}(x) = -LogSumExp_y(f_{\theta}(x)[y])$

* Note that in the standard discriminative setup, shifting all the logits $f_{\theta}(x)$ by a constant does not affect $p_{\theta}(y \| x)$, but it does change $p_{\theta}(x)$.

* Computing $p_{\theta}(y \| x)$ using $p_{\theta}(x, y)$ and $p_{\theta}(x)$ gives back the same softmax parameterization as before (the partition function $Z_{\theta}$ cancels out).

* This reinterpreted classifier is referred to as a Joint Energy-based Model (JEM).
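
These quantities are all simple functions of the classifier's logits. Below is a minimal sketch (not the authors' code) of how they could be computed; `f`, `x`, and `y` are placeholder names for the classifier, an input batch, and integer labels.

```python
import torch
import torch.nn.functional as F

def jem_quantities(f, x, y):
    """Compute log p(y|x), E(x, y), and E(x) from the logits of classifier f."""
    logits = f(x)                                        # shape: (batch, k)
    log_p_y_given_x = F.log_softmax(logits, dim=1)       # the usual softmax classifier
    e_xy = -logits.gather(1, y.unsqueeze(1)).squeeze(1)  # E(x, y) = -f(x)[y]
    e_x = -torch.logsumexp(logits, dim=1)                # E(x) = -LogSumExp_y f(x)[y]
    # log p(x) = -E(x) - log Z; log Z is intractable but cancels when forming p(y|x).
    return log_p_y_given_x, e_xy, e_x
```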

## Optimization

* The log-likelihood of the data can be factorized as $log p_{\theta}(x, y) = log p_{\theta}(x) + log p_{\theta}(y \| x)$.

* The second factor can be trained using the standard cross-entropy (CE) loss. In contrast, the first factor is trained by estimating its gradient with samples drawn via Stochastic Gradient Langevin Dynamics (SGLD), as sketched below.
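
A rough sketch of this two-part objective, assuming a PyTorch classifier `f`. The SGLD settings (number of steps, step size, noise scale) and the noise initialization are illustrative placeholders rather than the paper's exact recipe (e.g., any replay-buffer details are omitted).

```python
import torch
import torch.nn.functional as F

def sgld_sample(f, x_init, n_steps=20, step_size=1.0, noise_std=0.01):
    """Draw approximate samples from p(x) by noisy gradient descent on the energy E(x)."""
    x = x_init.clone().requires_grad_(True)
    for _ in range(n_steps):
        energy = -torch.logsumexp(f(x), dim=1).sum()     # E(x), summed over the batch
        grad = torch.autograd.grad(energy, x)[0]
        x = (x - step_size * grad + noise_std * torch.randn_like(x)).detach().requires_grad_(True)
    return x.detach()

def jem_loss(f, x, y):
    logits = f(x)
    ce = F.cross_entropy(logits, y)                          # trains log p(y|x)
    x_sample = sgld_sample(f, torch.randn_like(x))           # approximate model samples
    e_data = -torch.logsumexp(logits, dim=1).mean()          # energy on real data
    e_sample = -torch.logsumexp(f(x_sample), dim=1).mean()   # energy on model samples
    return ce + (e_data - e_sample)                          # CE + contrastive estimate of -log p(x)
```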

## Results

### Hybrid Modelling

* Datasets: CIFAR10, CIFAR100, SVHN.

* Metrics: Inception Score, Frechet Inception Distance

* JEM outperforms generative, discriminative, and hybrid models on both generative and discriminative tasks.

### Calibration

* A calibrated classifier is one where the predictive confidence aligns with the misclassification rate.

* Dataset: CIFAR100

* JEM improves calibration while retaining high accuracy (a sketch of one common calibration metric is below).
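
The summary does not name the calibration metric, but a common choice is the expected calibration error (ECE). A small illustrative sketch follows, with an assumed 15 equal-width confidence bins.

```python
import torch

def expected_calibration_error(probs, labels, n_bins=15):
    """Average |confidence - accuracy| across confidence bins, weighted by bin size."""
    conf, preds = probs.max(dim=1)
    correct = preds.eq(labels).float()
    edges = torch.linspace(0, 1, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.float().mean() * (conf[in_bin].mean() - correct[in_bin].mean()).abs()
    return ece.item()
```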

### Out of Distribution (OOD) Detection

* One way to detect OOD samples is to learn a density model that assigns a higher likelihood to in-distribution examples and a lower likelihood to out-of-distribution examples.

* JEM consistently assigns a higher likelihood to in-distribution examples.

* The paper also proposes an alternate metric called *approximate mass* to detect OOD examples.

* The intuition is that a point could have a high likelihood and yet be unlikely to be sampled because its surroundings have very low density (i.e., it carries little probability mass).

* On the other hand, the in-distribution data points would lie in a region of high probability mass.

* Hence, the norm of the gradient of the log density could provide a useful signal for detecting OOD examples, as sketched below.
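
Below is a minimal sketch of the two OOD scores discussed above, assuming a trained classifier `f` (my reading of the summary, not the paper's exact formulation): the unnormalized log-density $LogSumExp_y(f_{\theta}(x)[y])$, and the approximate-mass score based on the norm of its input gradient (the partition function does not depend on $x$, so it drops out of the gradient).

```python
import torch

def log_density_score(f, x):
    """Unnormalized log p(x): higher values suggest in-distribution inputs."""
    return torch.logsumexp(f(x), dim=1)

def approximate_mass_score(f, x):
    """Negative norm of d log p(x) / dx: flat, high-mass regions score higher."""
    x = x.clone().requires_grad_(True)
    log_px = torch.logsumexp(f(x), dim=1).sum()
    grad = torch.autograd.grad(log_px, x)[0]
    return -grad.flatten(1).norm(dim=1)
```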

### Robustness

* JEM is more robust to adversarial attacks as compared to discriminative classifiers.
