Welcome to the return of the tea talk series (well, for now anyway)! This week we have Guillaume Desjardins telling us about some of his recent insights into subspace auto-encoders. Hope to see you there.
When: Thursday, April 19th (tomorrow), 15h00
Where: LISA Lab (AA3256)
Title: Subspace Auto-Encoders
In an ssRBM, each hidden unit h_i is invariant to variations within the subspace spanned by the filters it pools over. This appears to be a more powerful (and more general?) form of invariance than traditional sum, max, or mean pooling. It also has the benefit that while h_i gains a degree of invariance, knowledge about the particular configuration of the input remains accessible through the slab variables, which can be interpreted as coordinates within this subspace.
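To make the invariance claim concrete, here is a small numerical sketch (an illustration of the geometry, not the ssRBM itself): the norm of an input's projection onto a filter subspace is unchanged by rotations within that subspace, while the per-filter coordinates, playing the role of the slab variables, still record where in the subspace the input lies.

    import numpy as np

    rng = np.random.default_rng(0)
    W = np.linalg.qr(rng.standard_normal((10, 3)))[0]  # orthonormal filter set: 10-dim input, 3 filters
    x = rng.standard_normal(10)

    s = W.T @ x                   # "slab"-like coordinates within the subspace
    h = np.linalg.norm(s)         # pooled, norm-based response

    # Rotate the in-subspace component of x while leaving the rest untouched:
    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0,              0,             1]])
    x_rot = x + W @ (R @ s - s)

    print(np.allclose(np.linalg.norm(W.T @ x_rot), h))  # True: pooled response unchanged
    print(np.allclose(W.T @ x_rot, s))                  # False: coordinates track the rotation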
This view motivates a novel approach to pooling in the context of auto-encoders. I will thus present the subspace auto-encoder framework, which replaces slab variables by linear projections onto filter sets, and in which hidden units are made sensitive to the norm of these projections. I will discuss its relationship to existing methods in the literature, as well as issues encountered so far. Time allowing, I will also cover various alternative formulations which have been discussed with Pascal and Aaron.
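As a rough sketch of what such an encoder could look like (the function names, the tanh nonlinearity, and the reconstruction rule are my assumptions, not necessarily the formulation Guillaume will present):

    import numpy as np

    def subspace_encode(x, filter_sets):
        """For each filter set W, project x onto the set; the hidden unit
        is a function of the projection's norm, while the projection itself
        plays the role the slab variables do in the ssRBM."""
        hs, projections = [], []
        for W in filter_sets:                      # W: (input_dim, subspace_dim)
            s = W.T @ x                            # coordinates in the filter subspace
            hs.append(np.tanh(np.linalg.norm(s)))  # norm-sensitive hidden unit (assumed nonlinearity)
            projections.append(s)
        return np.array(hs), projections

    def subspace_decode(hs, projections, filter_sets):
        # One plausible reconstruction: re-expand each projection, gated by its hidden unit
        return sum(h * (W @ s) for h, s, W in zip(hs, projections, filter_sets))

    rng = np.random.default_rng(1)
    filter_sets = [rng.standard_normal((10, 3)) for _ in range(4)]
    x = rng.standard_normal(10)
    h, s = subspace_encode(x, filter_sets)
    x_hat = subspace_decode(h, s, filter_sets)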
Cheers, Aaron