Deep learning algorithms have achieved great successes on many benchmarks in recent years. They rely on composing layers of latent factors into hierarchies, in which the higher levels of the hierarchy can represent more abstract, higher-level features. Nevertheless, it is still poorly understood under what conditions the learning dynamics during training give rise to those higher-level features. Motivated by cultural learning and by the effect that intermediate hints learned from other individuals have on learning high-level abstractions, we investigated the performance of several machine learning algorithms on a binary artificial dataset. In our experiments we observed that, without any prior knowledge, all of the models, including common deep learning algorithms, failed to solve the task. An interesting characteristic of the problem is that it is a composition of two nonlinear sub-tasks. We were able to solve the task by providing intermediate-level hints to the architecture and by changing the architecture. Our findings suggest that deep learning algorithms can have difficulty learning high-level abstract tasks due to inherent optimization issues.
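To make the role of the intermediate hints concrete, the sketch below shows one common way such guidance can be wired into training: an auxiliary loss that supervises an intermediate layer with the hint targets, alongside the loss on the final task. The architecture, layer sizes, and the HintedNet/hinted_loss names are hypothetical illustrations of the general idea, not the exact models used in the experiments.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HintedNet(nn.Module):
        # Hypothetical two-stage network: stage1 predicts the intermediate
        # concepts (the "hint"), stage2 combines them into the final decision.
        def __init__(self, n_in, n_hint, n_hidden=64):
            super().__init__()
            self.stage1 = nn.Sequential(
                nn.Linear(n_in, n_hidden), nn.Tanh(),
                nn.Linear(n_hidden, n_hint))   # intermediate-concept logits
            self.stage2 = nn.Sequential(
                nn.Linear(n_hint, n_hidden), nn.Tanh(),
                nn.Linear(n_hidden, 1))        # final binary logit

        def forward(self, x):
            hint_logits = self.stage1(x)
            final_logit = self.stage2(torch.sigmoid(hint_logits))
            return hint_logits, final_logit

    def hinted_loss(hint_logits, final_logit, hint_targets, y, alpha=1.0):
        # Final-task loss plus an auxiliary loss that supervises the
        # intermediate layer with the hint targets.
        task = F.binary_cross_entropy_with_logits(final_logit.squeeze(-1), y)
        hint = F.binary_cross_entropy_with_logits(hint_logits, hint_targets)
        return task + alpha * hint

Setting alpha to zero recovers training without hints, which is the setting in which the models in our experiments failed.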
We also considered a novel activation function for deep neural networks, called L_p norm pooling. The unique characteristic of this activation function is that the order p of the pooling norm is not fixed in advance but is learned as part of the training process. Learning the p values in L_p pooling can help the model generalize over other types of norm pooling methods.
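As a sketch of what is being learned, an L_p pooling unit over inputs x_1, ..., x_N computes s = ((1/N) * sum_i |x_i|^p)^(1/p); making p trainable lets a single unit interpolate between average pooling (p near 1) and, in the limit, max pooling (p -> infinity). The small example below illustrates this with one learnable p per output unit; the LpPool name and the exact parameterization (keeping p above 1 via an exponential reparameterization) are assumptions for illustration, not necessarily the formulation used in the thesis.

    import torch
    import torch.nn as nn

    class LpPool(nn.Module):
        # Hypothetical learned-norm pooling unit: one trainable p per output unit.
        def __init__(self, n_units, p_init=2.0):
            super().__init__()
            # store log(p - 1) so that p = 1 + exp(log_p) always stays above 1
            self.log_p = nn.Parameter(
                torch.full((n_units,), float(p_init) - 1.0).log())

        def forward(self, x):
            # x: (batch, n_units, pool_size); pool over the last dimension
            p = 1.0 + torch.exp(self.log_p)      # shape (n_units,)
            p = p.unsqueeze(0).unsqueeze(-1)     # broadcast over batch and pool
            eps = 1e-6                           # avoid 0^p gradients at zero
            mean_pow = (x.abs() + eps).pow(p).mean(dim=-1)
            return mean_pow.pow(1.0 / p.squeeze(-1))

Because p receives gradients like any other parameter, each unit can settle on the pooling behaviour that best fits the data rather than having it fixed by hand.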
Finally, I propose new approaches and problems for future work, specifically for tasks that involve sequential learning and structured prediction, such as natural language processing. The problems addressed in this thesis are mainly a consequence of difficult optimization and might be solved via cognitively inspired algorithms with mathematical foundations in the machine learning literature.
Best,
--
Caglar GULCEHRE