As promised, this week's tea talk will be given by Guillaume Desjardins. Hope to see you all there.
Date/Time: Thursday, July 22nd, 15h00 (tomorrow!)
Place: LISA lab (AA3256)
Title: Why you should use Parallel Tempering to train your models.
Abstract:
Recent work has shown that tempering methods are better suited to training Restricted and (deep) Boltzmann Machines than standard stochastic maximum likelihood (SML, a.k.a. PCD) or Contrastive Divergence. Using tempering in the negative phase of SML allows the negative Markov chain to sample from multi-modal distributions. This in turn results in better mixing of the chain, increased robustness to learning rates, and faster convergence. I will start by giving a brief overview of parallel tempering (PT) and tempered transitions, and present current results for RBM training. The remainder of the talk will then focus on ways to improve the efficiency of our algorithm. Parallel chains can be exploited to naturally form negative mini-batches via the "Virtual Swap" or "Information Retrieval" methods. The overhead of PT can also be minimized by using dynamic chains, which are spawned as needed and whose temperatures are adapted to maintain a certain level of cross-temperature state swaps.
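
For the curious, here is a minimal sketch of what tempering in the negative phase of SML looks like, written in plain NumPy. It illustrates the general PT-SML recipe, not the implementation Guillaume will present: the RBM class, the pt_negative_phase and sml_pt_step helpers, and the temperature schedule below are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary-binary RBM with energy E(v, h) = -v'Wh - b'v - c'h."""
    def __init__(self, n_vis, n_hid):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b = np.zeros(n_vis)  # visible biases
        self.c = np.zeros(n_hid)  # hidden biases

    def sample_h(self, v, beta):
        # Gibbs step for the tempered distribution p_beta(v, h) ~ exp(-beta * E).
        p = sigmoid(beta * (v @ self.W + self.c))
        return (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h, beta):
        p = sigmoid(beta * (h @ self.W.T + self.b))
        return (rng.random(p.shape) < p).astype(float)

    def energy(self, v, h):
        return -np.einsum('bi,ij,bj->b', v, self.W, h) - v @ self.b - h @ self.c

def pt_negative_phase(rbm, v_chains, betas):
    """One PT update: a Gibbs sweep per tempered chain, then Metropolis
    swap proposals between chains at neighbouring temperatures."""
    K = len(betas)
    h_chains = [rbm.sample_h(v_chains[k], betas[k]) for k in range(K)]
    v_chains = [rbm.sample_v(h_chains[k], betas[k]) for k in range(K)]
    E = [rbm.energy(v_chains[k], h_chains[k]) for k in range(K)]
    for k in range(K - 1):
        # Accept a swap with probability min(1, exp((b_k - b_{k+1}) * (E_k - E_{k+1}))).
        swap = np.log(rng.random(E[k].shape)) < (betas[k] - betas[k + 1]) * (E[k] - E[k + 1])
        for arr in (v_chains, h_chains, E):
            arr[k][swap], arr[k + 1][swap] = arr[k + 1][swap].copy(), arr[k][swap].copy()
    return v_chains

def sml_pt_step(rbm, data, v_chains, betas, lr=0.05):
    """One SML gradient step; only the beta = 1 chain feeds the negative phase."""
    h_data = sigmoid(data @ rbm.W + rbm.c)       # positive-phase statistics
    v_chains = pt_negative_phase(rbm, v_chains, betas)
    v_neg = v_chains[0]                          # persistent chain at beta = 1
    h_neg = sigmoid(v_neg @ rbm.W + rbm.c)
    rbm.W += lr * (data.T @ h_data / len(data) - v_neg.T @ h_neg / len(v_neg))
    rbm.b += lr * (data.mean(0) - v_neg.mean(0))
    rbm.c += lr * (h_data.mean(0) - h_neg.mean(0))
    return v_chains

# Example: 5 tempered chains from beta = 1.0 down to 0.2, 100 particles each.
betas = np.linspace(1.0, 0.2, 5)
rbm = RBM(n_vis=784, n_hid=500)
v_chains = [rng.random((100, 784)).round() for _ in betas]

The hotter (low-beta) chains mix freely across modes, and the swaps let those states percolate down to the beta = 1 chain that the gradient actually uses, which is where the improved mixing described in the abstract comes from.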
Cheers, Aaron