Building Deep Learning Applications Using PyTorch on AWS



Are you looking to accelerate the path from prototype to production for your PyTorch-based models? AWS offers services to build and scale ML models: Amazon SageMaker for a fully managed experience, Amazon Elastic Kubernetes Service (EKS) for containerized workloads, and AWS ParallelCluster for running Amazon EC2 clusters to train and deploy your workloads.

 

Distributed ML training using PyTorch on High-Performance Computing (HPC) clusters


June 29, 7:00 AM - 8:30 AM PDT | 10:00 AM - 11:30 AM EDT

 

By using the PyTorch Fully Sharded Data Parallel (FSDP) library for distributed training with powerful Amazon EC2 instances and AWS ParallelCluster, you can implement distributed training architectures that accelerate training for large ML models. Attend this hands-on workshop to learn best practices for deploying distributed training architectures on AWS using EC2 and the PyTorch FSDP library.

 

Learning objectives:

  • Learn how to get started with PyTorch on AWS
  • Learn how to create a distributed training architecture using AWS ParallelCluster
  • Learn about the PyTorch Fully Sharded Data Parallel (FSDP) library for distributed training

Who should attend: 

Data Scientists, Developers, ML Practitioners, MLOps, and Software Engineers

Speaker and Presenter Information

Pierre Yves, Specialist SA Manager, Frameworks AI/ML, AWS

 

Shubha Kumbadakone, Specialist BD, Frameworks AI/ML, AWS

 

Shashank Prasanna, Sr. Developer Advocate AI/ML, AWS

Relevant Government Agencies

Other Federal Agencies, Federal Government, State & Local Government


Event Type
Webcast


This event has no exhibitor/sponsor opportunities


When
Wed, Jun 29, 2022, 7:00am - 9:00am PT


Cost
Complimentary: $0.00




Organizer
Amazon Web Services (AWS)




