> Chapter 7: Training on Complex and Scarce Datasets

The first task in developing a new recognition model is to gather and prepare the training dataset. Building pipelines that keep data flowing smoothly through heavy training phases used to be an art, but TensorFlow's recent features (such as the `tf.data` API) make it straightforward to fetch and pre-process complex data, as demonstrated in the first notebooks of this chapter. Oftentimes, however, training data is simply unavailable. The remaining notebooks tackle such scenarios, presenting a variety of solutions.
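As a minimal sketch of such a `tf.data` input pipeline (the data here is synthetic so the snippet stays self-contained; the pre-processing steps are illustrative, not those of the chapter's notebooks):

```python
import tensorflow as tf

# Stand-in for decoded image files; a real pipeline would start from
# file paths and a decoding step (e.g., tf.io.read_file + tf.io.decode_jpeg).
images = tf.random.uniform((8, 32, 32, 3))
labels = tf.constant([0, 1, 0, 1, 0, 1, 0, 1])

def preprocess(image, label):
    # Typical per-sample pre-processing: resize, then a simple augmentation.
    image = tf.image.resize(image, (28, 28))
    image = tf.image.random_flip_left_right(image)
    return image, label

dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
           .shuffle(buffer_size=8)
           .map(preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .batch(4)
           .prefetch(tf.data.experimental.AUTOTUNE))

for batch_images, batch_labels in dataset:
    print(batch_images.shape)  # batches of shape (4, 28, 28, 3)
```

The `prefetch` call overlaps pre-processing with training, which is what keeps the accelerator fed during heavy training phases.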

📓 Notebooks

(Reminder: Notebooks are better visualized with nbviewer: click here to continue on nbviewer.jupyter.org.)

📄 Additional Files

  • cityscapes_utils.py: utility functions for the Cityscapes dataset (code presented in notebook 6.4).
  • fcn.py: functional implementation of the FCN-8s architecture (code presented in notebook 6.5).
  • keras_custom_callbacks.py: custom Keras callbacks to monitor the training of models (code presented in notebooks 4.1 and 6.2).
  • plot_utils.py: utility functions to display results (code presented in notebook 6.2).
  • renderer.py: object-oriented pipeline to render images from 3D models (code presented in notebook 7.3).
  • synthia_utils.py: utility functions for the SYNTHIA dataset (code presented in notebook 7.4).
  • tf_losses_and_metrics.py: custom losses and metrics to train/evaluate CNNs (code presented in notebooks 6.5 and 6.6).
  • tf_math.py: custom mathematical functions reused in other scripts (code presented in notebooks 6.5 and 6.6).
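To illustrate the kind of utility these files provide, a custom monitoring callback can be built by subclassing `tf.keras.callbacks.Callback`. The sketch below is hypothetical and does not reproduce the actual contents of `keras_custom_callbacks.py`:

```python
import tensorflow as tf

class SimpleLogCallback(tf.keras.callbacks.Callback):
    """Illustrative callback printing selected metrics at the end of each epoch."""

    def __init__(self, metrics_to_log=("loss",)):
        super().__init__()
        self.metrics_to_log = metrics_to_log

    def on_epoch_end(self, epoch, logs=None):
        # `logs` holds the metric values Keras computed for this epoch.
        logs = logs or {}
        values = ", ".join("{}: {:.4f}".format(name, logs[name])
                           for name in self.metrics_to_log if name in logs)
        print("Epoch {}: {}".format(epoch + 1, values))
```

Such a callback is passed to `model.fit(..., callbacks=[SimpleLogCallback()])` alongside any built-in ones.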