Tuning the untunable: Lessons for tuning expensive deep learning functions

Hosted By
Lena A.

Details

We would like to invite you to participate in this ODSC webinar!

During this webinar, Patrick Hayes, CTO & Co-Founder of SigOpt, walks through a variety of methods for tuning models with lengthy training cycles before diving deep into multitask optimization. The rest of the talk focuses on how this type of method works and the ways in which deep learning experts are deploying it today. Finally, we will talk through the implications of early findings in this area of research and next steps for exploring this functionality further. This is a particularly valuable and interesting talk for anyone working with large data sets or complex deep learning models.

To access this webinar, please register using the link below:
https://attendee.gotowebinar.com/register/7311596589281404161

Date: Jan 10th
Time: 11 am - 1 pm PT

Agenda Detail:

Session: Tuning the untunable: Lessons for tuning expensive deep learning functions

Speaker: Patrick Hayes

Abstract:
Models with lengthy training cycles, typically found in deep learning, can be extremely expensive to train and tune. In certain instances, this high cost may even render tuning infeasible for a particular model; even when tuning is feasible, it is often prohibitively expensive. Popular methods for tuning these types of models, such as evolutionary algorithms, typically require several orders of magnitude more time and compute than other methods. And techniques like parallelism often trade away performance, ending up consuming many more expensive computational resources. This leaves most teams with few good options for tuning particularly expensive deep learning functions.

But new methods based on task sampling in the tuning process give teams a chance to dramatically lower the cost of tuning these models. This method, referred to as multitask optimization, combines the “strong anytime performance” of bandit-based methods with the “strong eventual performance” of Bayesian optimization. As a result, this process can unlock tuning for deep learning models with particularly lengthy training and tuning cycles.
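To make the idea concrete, here is a minimal sketch of the general pattern in plain Python. It is not SigOpt's implementation: the objective function, the learning-rate range, and the fidelity levels are all invented for illustration. A cheap, low-fidelity "task" (standing in for a short or partial training run) screens many candidate configurations, and only the most promising ones are promoted to a full-fidelity evaluation.

import random

def objective(learning_rate, fidelity):
    # Synthetic stand-in for validation loss after a training run.
    # fidelity in (0, 1] plays the role of a cheaper "task", e.g. a fraction
    # of the epochs or of the training data; lower fidelity is cheaper but noisier.
    true_loss = (learning_rate - 0.01) ** 2 * 1000.0   # minimized near lr = 0.01
    noise = random.gauss(0.0, (1.0 - fidelity) * 0.5)  # partial runs are noisier
    return true_loss + noise

def multitask_search(n_candidates=20, n_finalists=4, seed=0):
    random.seed(seed)
    # Stage 1: score many candidate configurations with the cheap task.
    candidates = [10 ** random.uniform(-4, -1) for _ in range(n_candidates)]
    cheap_scores = sorted((objective(lr, fidelity=0.1), lr) for lr in candidates)
    # Stage 2: promote only the best-looking candidates to a full-fidelity run.
    finalists = [lr for _, lr in cheap_scores[:n_finalists]]
    full_scores = [(objective(lr, fidelity=1.0), lr) for lr in finalists]
    return min(full_scores)

if __name__ == "__main__":
    best_loss, best_lr = multitask_search()
    print(f"best learning rate ~ {best_lr:.5f}, estimated loss ~ {best_loss:.3f}")

A real multitask optimizer would model the correlation between the cheap and expensive tasks (for example with a Bayesian surrogate) rather than using a fixed promote-the-top-k rule, but the cost structure it exploits is the same.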

ODSC Links:
• Get free access to more talks like this at LearnAI:
https://learnai.odsc.com/
• Facebook: https://www.facebook.com/OPENDATASCI/
• Twitter: https://twitter.com/odsc (@odsc)
• LinkedIn: https://www.linkedin.com/company/open-data-science/
• ODSC East Conference, Apr 30 - May 3: https://odsc.com/boston
