Hello LISA members. Thank you for listening :)
A few clarifications on the questions and points raised today.
There was indeed transductive learning in the video classification as suggested by Yoshua.
This was somewhat inevitable with the leave-one-out procedure: even though the slow features were learned and the sampling was done on half of the set, the SVM leave-one-out procedure trains and tests on all videos, which automatically include the videos used for learning the features.
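To make the overlap concrete, here is a minimal sketch of that evaluation pattern, not the actual pipeline: the data, labels and the choice of scikit-learn's LeaveOneOut and SVC are stand-ins I made up, but the structure shows why every leave-one-out fold intersects the feature-learning half.

import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

n_videos, n_features = 20, 16
rng = np.random.default_rng(0)

# Hypothetical stand-ins: the first half of the videos is the half used
# to learn the slow features; signatures are then computed for every video.
feature_learning_ids = set(range(n_videos // 2))
signatures = rng.normal(size=(n_videos, n_features))   # placeholder signatures
labels = np.array([0, 1] * (n_videos // 2))            # placeholder labels

for train_idx, test_idx in LeaveOneOut().split(signatures):
    # Every SVM training fold overlaps the feature-learning half,
    # so information from those videos leaks into each evaluation.
    assert feature_learning_ids.intersection(train_idx)
    clf = SVC(kernel="linear").fit(signatures[train_idx], labels[train_idx])
    clf.predict(signatures[test_idx])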
There is also the possibility of some transductive learning in the first architecture (image classification). The sampling was done on a training set, but the SVM splits were random, which allows a test image's signature to be computed with filters learned on that very image.
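Again purely as an illustration (the split sizes and scikit-learn's ShuffleSplit are my own assumptions, not the real setup), this sketch shows how random SVM splits can place filter-learning images in the test fold:

import numpy as np
from sklearn.model_selection import ShuffleSplit

n_images = 100
# Hypothetical: the first half of the images is the training set on which
# the filters were sampled/learned.
filter_learning_ids = set(range(n_images // 2))

splitter = ShuffleSplit(n_splits=10, test_size=0.3, random_state=0)
for _, test_idx in splitter.split(np.arange(n_images).reshape(-1, 1)):
    # With random SVM splits, some test images belong to the filter-learning
    # set, so their signatures were computed with filters fit partly on them.
    leaked = filter_learning_ids.intersection(test_idx)
    print(f"test images that also served for filter learning: {len(leaked)}")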
There were 10 examples per class in the Maryland "in-the-wild" data set, which explains the factor of 10 in the results.
One last little detail: my master's degree was specialized in applied mathematics. At least that is what it is titled :) I believe I mentioned just mathematics.
Have a good evening, LISA members!