I made some nicer video demos of my models trained on MNIST and TFD. The README.txt included in the archive says the following:
These are results from VAEs that were initially trained using the normal
one-step approach, and then fine-tuned using collaborative guidance to shape
the long-term behavior of the Markov chains constructed by feeding the models'
own samples back into them.
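One step of such a chain just encodes the current sample and then decodes it again. Here's a rough sketch of what I mean, with made-up encoder/decoder names rather than anything taken from the actual code:

    import numpy as np

    def chain_step(x, encoder, decoder, rng):
        # q(z | x): encoder gives the mean and log-variance of a diagonal Gaussian
        mu, logvar = encoder(x)
        z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
        # p(x | z): decoder maps the latent sample back to pixel space
        return decoder(z)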
The MNIST model was trained with 24x the normal KLd penalty.
The TFD models were trained with 1x and 15x the normal KLd penalty.
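By "Nx the normal KLd penalty" I just mean scaling the KL term in the usual free-energy objective by a constant. Roughly, per example, assuming a diagonal Gaussian q(z | x) and a Bernoulli p(x | z) (this is only a sketch, not the training code):

    import numpy as np

    def free_energy(x, mu, logvar, x_recon, kld_scale=1.0):
        # reconstruction term: -log p(x | z) for a Bernoulli decoder
        eps = 1e-6
        nll = -np.sum(x * np.log(x_recon + eps) + (1.0 - x) * np.log(1.0 - x_recon + eps))
        # KL( q(z | x) || N(0, I) ) for a diagonal Gaussian posterior
        kld = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
        # kld_scale = 24 for the MNIST model, 1 or 15 for the TFD models
        return nll + kld_scale * kld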
For both datasets I generated: a video of unrolled chain behavior, a set of
independent samples drawn by passing the isotropic Gaussian prior through
the learned conditional p(x | z), and a plot of how the learned system
comprising q(z | x) and p(x | z) balances KLd against log-likelihood.
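The independent samples come from pushing draws from the isotropic Gaussian prior through the decoder, i.e. something like this (again just a sketch with illustrative names):

    def sample_from_prior(decoder, n_samples, z_dim, rng):
        # z ~ N(0, I), then map through the learned conditional p(x | z)
        z = rng.standard_normal((n_samples, z_dim))
        return decoder(z)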
The chain videos show nine independent chains run in parallel, with restarts
every 100 steps. At each restart, all nine chains were initialized with the
same random example from the training set.
The MNIST behavior seems pretty good. TFD with 15x KLd regularization stays
near the real distribution well enough, but lacks any real detail. TFD with
1x KLd regularization (i.e., the standard free energy) gets weird.
I thought some people might find this interesting. I've attached the demos and a copy of the slides from my talk.