CS 153: Frontier Systems
The complete AI infrastructure stack: from energy and silicon to foundation models and applications
Course Modules
Module 1: Infrastructure Foundation
Weeks 1-3 • Energy, Silicon, and Distributed Systems
Module 2: Training at Scale
Weeks 4-5 • Distributed Training and Neural Networks
Module 3: Foundation Models
Weeks 6-9 • Transformers, LLMs, and Alignment
Module 4: Production Systems
Weeks 10-12 • Deployment, Applications, and Scientific AI
Module 5: Business & Future
Weeks 13-15 • Economics, Business Models, and Future Frontiers
Getting Started
Prerequisites
- Basic programming knowledge (Python preferred)
- Undergraduate-level understanding of computer science concepts
- Familiarity with linear algebra and calculus helpful but not required
- Curiosity about how large-scale AI systems work
Learning Approach
This course follows a bottom-up approach: starting with energy and data centers, moving through hardware and distributed systems, then building up to foundation models and applications.
Each week includes 2-4 hours of video content, reading materials, and optional hands-on exercises. Total time commitment: 5-10 hours per week for 15 weeks.
Industry Speakers
Learn from leaders at:
- NVIDIA - Jensen Huang on GPU architecture and AI infrastructure
- OpenAI - Sam Altman and Andrej Karpathy on foundation models and AGI
- Google - Amin Vahdat on cloud infrastructure and distributed systems
- Microsoft - Satya Nadella on enterprise AI platforms
- Tesla - Ashok Elluswamy on autonomous systems
- Cloudflare - Matthew Prince on edge computing
- Midjourney - David Holz on generative AI
- a16z - Ben Horowitz on AI business models
About This Course
Stanford CS 153: Frontier Systems provides a comprehensive exploration of the full AI infrastructure stack. This self-study course reconstructs Stanford's curriculum through open-source materials, public talks, and research papers, making world-class AI education accessible to everyone.
Learning Objectives
By the end of this course, you will:
- Understand the full AI infrastructure stack from energy/silicon through foundation models to applications
- Gain practical knowledge of how frontier AI systems are built, trained, and deployed at scale
- Learn about the business models and economics that make modern AI possible
- Develop intuition for scaling laws, distributed systems, and production ML systems
- Understand the future trajectory of AI infrastructure and applications
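The scaling-law intuition mentioned above can be previewed with a back-of-the-envelope calculation. The sketch below is illustrative (not from the course materials) and uses two widely cited heuristics: training compute of roughly 6 FLOPs per parameter per token, and the Chinchilla-style compute-optimal budget of roughly 20 training tokens per parameter.

```python
# Illustrative sketch (not course material): rough training-compute estimate
# using the common heuristics C ~ 6 * N * D and D ~ 20 * N.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

n = 70e9        # a hypothetical 70B-parameter model
d = 20 * n      # compute-optimal token budget under the 20x heuristic
print(f"{training_flops(n, d):.2e} FLOPs")  # → 5.88e+23 FLOPs
```

Estimates like this are what make "how many GPUs for how many months" questions tractable before any training run starts.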
Tools You'll Learn
PyTorch
Deep learning framework for research and production
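As a first taste of the framework, here is a minimal sketch (not from the course materials) of a single gradient step on a toy two-layer network:

```python
import torch
import torch.nn as nn

# Toy two-layer regression network.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(16, 4)   # a batch of 16 random inputs
y = torch.randn(16, 1)   # random regression targets

loss = nn.functional.mse_loss(model(x), y)
loss.backward()          # autograd computes gradients for every parameter
optimizer.step()         # one SGD update
print(loss.item())
```

The same define-model / compute-loss / backward / step loop scales from this toy example up to distributed training of frontier models.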
HuggingFace Transformers
Pre-trained models and easy-to-use APIs
Weights & Biases
Experiment tracking and model management
Ray
Distributed computing and parallel processing
vLLM
Fast LLM serving and inference optimization