Thesis defences

MCS Thesis Examination: Adeetya Patel

Scaling Local Learning for Supervised and Self-Supervised Learning


Date & time
Tuesday, February 7, 2023
11 a.m. – 12:30 p.m.
Cost

This event is free

Organization

Department of Computer Science and Software Engineering

Contact

Leila Kosseim

Where

Online

Abstract

Traditional neural network training methods optimize a monolithic objective function jointly over all model components. This limits the potential for parallelization and can make training inefficient. Local learning is an approach to model parallelism that removes the standard end-to-end learning setup and uses local objective functions to permit parallel learning among the components of a deep network. Recent works have demonstrated that variants of local learning can lead to efficient training of modern deep networks. However, in terms of how much computation can be distributed, these approaches are typically limited by the number of layers in a network. Hence, the first study explores how local learning can be applied by splitting layers or modules into sub-components, adding a notion of width-wise modularity to the depth-wise modularity already associated with local learning. We investigate local-learning penalties that permit such models to be trained efficiently. Our experiments on various datasets demonstrate that introducing width-level modularity can yield computational advantages over existing methods and opens new opportunities for improved model-parallel distributed training. The second study focuses on adapting existing local-learning frameworks to self-supervised learning tasks, specifically using the SimCLR method. Existing local-learning frameworks underperform in this setting because task-relevant information collapses in the early layers. To address this issue, we propose modifying the local objective functions layer-wise so that the problem difficulty gradually increases with depth. We found that our method maintains performance comparable to that of the end-to-end trained model while also increasing parallelization.
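
For illustration, the following is a minimal PyTorch sketch of the local-learning idea described above: each block is trained with its own auxiliary objective, and gradients are detached between blocks so that blocks can in principle be updated in parallel. The layer sizes, auxiliary classifier heads, and dummy data are illustrative assumptions, not the architecture or code examined in the thesis.

    import torch
    import torch.nn as nn

    class LocalBlock(nn.Module):
        """One feature block with its own local auxiliary classifier (hypothetical sizes)."""
        def __init__(self, in_dim, out_dim, num_classes):
            super().__init__()
            self.features = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
            self.aux_head = nn.Linear(out_dim, num_classes)  # head for the local objective

        def forward(self, x):
            h = self.features(x)
            return h, self.aux_head(h)

    blocks = nn.ModuleList([LocalBlock(784, 512, 10),
                            LocalBlock(512, 256, 10),
                            LocalBlock(256, 128, 10)])
    # One optimizer per block: each block is updated only by its own local loss,
    # never by gradients flowing back from later blocks.
    optims = [torch.optim.SGD(b.parameters(), lr=0.1) for b in blocks]
    criterion = nn.CrossEntropyLoss()

    x = torch.randn(32, 784)          # dummy input batch
    y = torch.randint(0, 10, (32,))   # dummy labels

    h = x
    for block, opt in zip(blocks, optims):
        h, logits = block(h)
        loss = criterion(logits, y)   # local supervised objective for this block
        opt.zero_grad()
        loss.backward()
        opt.step()
        h = h.detach()                # stop gradients: no end-to-end backpropagation

In the self-supervised setting of the second study, the per-block cross-entropy objective above would be replaced by a SimCLR-style contrastive objective whose difficulty is varied with depth.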

Examining Committee

  • Dr. Essam Mansour (Chair) 
  • Dr. Eugene Belilovsky (Supervisor)
  • Dr. Ching Suen (Examiner)
  • Dr. Essam Mansour (Examiner)
     