Hello all, this Friday we will be hosting *Gaël Varoquaux* from INRIA, who will be giving a talk at 2 p.m. in AA-3195.
Gaël is best known for creating the highly successful scikit-learn library and for his past work on brain imaging and neuroscience, but he will be talking primarily about his current research interests: machine learning approaches for more general types of data.
Xavier Bouthillier is coordinating the visit; please get in touch with him (xavier.bouthillier@gmail.com) if you would like to set aside time to meet the speaker.
*When:* this Friday, August 24th, at 2 p.m.
*Where:* Room 3195, Pavillon André Aisenstadt
*Title:* Simple representations for learning: factorizations and similarities
*Abstract*
Real-life data seldom comes in the ideal form for statistical learning. This talk will focus on high-dimensional problems for signals and discrete entities: when dealing with many correlated signals or entities, it is useful to extract representations that capture these correlations.
Matrix factorization models provide simple but powerful representations. They are used for recommender systems across discrete entities such as users and products, or to learn good dictionaries to represent images. However, they entail large computing costs on very high-dimensional data, such as databases with many products or high-resolution images. I will present an algorithm to factorize huge matrices based on stochastic subsampling that gives up to 10-fold speed-ups [1].
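To give a feel for the subsampling idea ahead of the talk, here is a minimal sketch, not the actual algorithm of [1] (which streams samples and handles sparsity and regularization): an alternating least-squares factorization where each iteration only touches a random subset of the columns. All sizes and names below are illustrative.

    import numpy as np

    # Toy stochastic-subsampling factorization: each iteration updates the
    # factors using only a random 10% of the columns, in the spirit of [1].
    rng = np.random.default_rng(0)
    n_samples, n_features, rank = 1000, 500, 10
    X = rng.standard_normal((n_samples, rank)) @ rng.standard_normal((rank, n_features))

    U = rng.standard_normal((n_samples, rank))
    V = rng.standard_normal((rank, n_features))
    n_sub = 50  # columns visited per iteration

    for _ in range(200):
        j = rng.choice(n_features, size=n_sub, replace=False)
        U = X[:, j] @ np.linalg.pinv(V[:, j])                 # refit U on the subsample
        V[:, j] = np.linalg.lstsq(U, X[:, j], rcond=None)[0]  # refit the touched columns

    print("relative error:", np.linalg.norm(X - U @ V) / np.linalg.norm(X))

Each iteration costs roughly a tenth of a full pass over the matrix, which is where the speed-up comes from.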
With discrete entities, the explosion of dimensionality may be due to variations in how a smaller number of categories are represented. Such a problem of "dirty categories" is typical of uncurated data sources. I will discuss how encoding this data based on similarities recovers a useful category structure with no preprocessing. I will show how it interpolates between one-hot encoding and techniques used in character-level natural language processing [2].
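To make the similarity-encoding idea concrete, here is a toy sketch; [2] studies string similarities such as n-gram overlap, and the reference and dirty category strings below are made up for illustration.

    # Toy similarity encoding: a dirty category is represented by its
    # string similarity to a few reference categories, here with a
    # Jaccard index over character 3-grams.
    def ngrams(s, n=3):
        s = " " + s.lower() + " "
        return {s[i:i + n] for i in range(len(s) - n + 1)}

    def similarity(a, b):
        ga, gb = ngrams(a), ngrams(b)
        return len(ga & gb) / len(ga | gb)

    reference = ["police officer", "fire fighter", "nurse"]
    dirty = ["Police Officer II", "senior police ofcr", "firefighter", "Nurse"]

    for entry in dirty:
        print(entry, "->", [round(similarity(entry, r), 2) for r in reference])

Misspelled or qualified variants land close to the right reference category, while exact matches give vectors in {0, 1}, recovering one-hot encoding as a special case.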
[1] A. Mensch, J. Mairal, B. Thirion, G. Varoquaux. Stochastic subsampling for factorizing huge matrices. IEEE Transactions on Signal Processing 66 (1): 113-128, 2018.
[2] P. Cerda, G. Varoquaux, B. Kégl. Similarity encoding for learning with dirty categorical variables. Machine Learning (2018): 1-18.
Hi all,
The talk will be streamed and recorded: https://mila.bluejeans.com/809027115/webrtc
You can reserve a time-slot to meet him at the following link: https://calendar.google.com/calendar/selfsched?sstoken=UUNKME5Da1BIQVRHfGRlZ...
Thank you!

Xavier Bouthillier