•   When: Wednesday, March 24, 2021 from 11:00 AM to 12:00 PM
  •   Speaker: Aditya Devarakonda, Research Scientist, Institute for Data-Intensive Engineering and Science, Johns Hopkins University
  •   Location: Zoom

Abstract:

First-order methods are one of the most widely used classes of optimization methods for solving various machine learning problems. These methods typically solve an optimization problem by iteratively sampling a few data points or features from the input data, computing gradients from the subsampled data, and updating the solution. In a parallel setting, however, these methods require interprocess communication at every iteration. Our work introduces a new communication-avoiding technique that re-organizes these methods into forms that communicate every 's' iterations instead of every iteration, where 's' is a tuning parameter. This talk will present the derivation of the new methods and their theoretical and practical tradeoffs in distributed-memory parallel settings.
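
To give a feel for the "communicate every 's' iterations" idea, the sketch below is a simplified stand-in, not the derivation from the talk: it simulates P workers running mini-batch SGD on a least-squares problem and combines their local solutions only once every s iterations. All names and parameters (P, s, lr, batch) are illustrative assumptions; the methods discussed in the talk re-organize the updates themselves rather than approximating them by periodic averaging as this sketch does.

```python
# Illustrative sketch only: periodic-communication mini-batch SGD for
# least squares, with P workers simulated in a single process. This is
# a simplified analogue of communicating every s iterations, not the
# speaker's communication-avoiding derivation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem, rows split across P simulated workers.
n, d, P = 1024, 16, 4
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)
shards = np.array_split(np.arange(n), P)

s = 8            # combine local solutions every s iterations
lr = 0.01        # step size
batch = 32       # mini-batch size per worker
w = np.zeros(d)  # shared starting point

for outer in range(50):
    # Each worker takes s local SGD steps from the last shared solution.
    local_ws = []
    for p in range(P):
        w_p = w.copy()
        idx_p = shards[p]
        for _ in range(s):
            batch_idx = rng.choice(idx_p, size=batch, replace=False)
            Xb, yb = X[batch_idx], y[batch_idx]
            grad = Xb.T @ (Xb @ w_p - yb) / batch
            w_p -= lr * grad
        local_ws.append(w_p)
    # The only "communication": the P local solutions are combined once
    # per s iterations instead of once per iteration.
    w = np.mean(local_ws, axis=0)

print("relative residual:", np.linalg.norm(X @ w - y) / np.linalg.norm(y))
```

Increasing s reduces how often the workers must synchronize, at the cost of letting their local iterates drift further between combinations, which is the kind of tradeoff the abstract refers to.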

Bio:

Aditya Devarakonda is a research scientist in the Institute for Data-Intensive Engineering and Science at Johns Hopkins University. He received his Ph.D. and M.S. in Computer Science from the University of California, Berkeley, where he was an NSF Graduate Research Fellow. Before joining Berkeley, he received a B.S. in Computer Engineering from Rutgers University. His research focuses on designing communication-avoiding algorithms for high-performance computing and machine learning applications.
