Build for the next wave of AI with purpose-built TPUs
As generative AI models grow in complexity, the computational demands of both training and inference are pushing traditional systems to their limits. Join "Powering AI inference at scale: a deep dive into Ironwood TPUs" to learn about the specialized hardware and software engineered to solve these challenges.
We will cover how to:
- Accelerate the entire AI workflow: see how Ironwood's architecture is purpose-built for both massive-scale training and high-throughput production serving to gain a strategic advantage
- Solve for inference at scale: understand Ironwood's inference-first design, engineered to remove technical bottlenecks for your most complex, high-volume models
- Enable sustainable scale: learn how a 2x improvement in performance-per-watt addresses the economic challenges of large-scale AI, maximizing your infrastructure investment
- Integrate seamlessly with your ecosystem: discover how the co-designed software stack makes Ironwood's power accessible to your teams' existing workflows in JAX, PyTorch, and vLLM
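To illustrate the ecosystem point above: JAX code is accelerator-agnostic, so the same program runs on CPU during development and on TPU hardware in production without changes. The sketch below is a minimal, hypothetical example (not Ironwood-specific and not from the event materials) showing how a JIT-compiled JAX function targets whichever backend is available.

```python
import jax
import jax.numpy as jnp

# JAX dispatches to whatever backend is present (cpu, gpu, or tpu);
# on a TPU VM, jax.devices() would list TPU devices instead.
print(jax.devices())

@jax.jit  # XLA-compiles the function for the active backend
def predict(w, x):
    # A toy linear layer standing in for a real model forward pass
    return jnp.dot(x, w)

w = jnp.ones((128, 8))   # hypothetical weights
x = jnp.ones((4, 128))   # hypothetical batch of inputs
y = predict(w, x)
print(y.shape)  # (4, 8)
```

The same portability idea applies to the PyTorch (via PyTorch/XLA) and vLLM paths mentioned above: existing model code is retargeted to TPU backends rather than rewritten.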
Speaker and Presenter Information
Leo Leung
Director, Cloud Compute
Google Cloud
Rose Zhu
Sr. Product Manager
Google Cloud
Relevant Government Agencies
Other Federal Agencies, Federal Government, State & Local Government
Event Type
Webcast
This event has no exhibitor/sponsor opportunities
When
Thu, Dec 11, 2025, 1:00pm - 1:40pm
ET
Cost
Complimentary: $0.00
Organizer
Google Cloud