CS 153: Frontier Systems

The complete AI infrastructure stack: from energy and silicon to foundation models and applications

15 weeks · 30+ videos · 30+ readings · 15 labs

Getting Started

Prerequisites

  • Basic programming knowledge (Python preferred)
  • Undergraduate-level understanding of computer science concepts
  • Familiarity with linear algebra and calculus helpful but not required
  • Curiosity about how large-scale AI systems work

Learning Approach

This course follows a bottom-up approach: starting with energy and data centers, moving through hardware and distributed systems, then building up to foundation models and applications.

Each week includes 2-4 hours of video content, reading materials, and optional hands-on exercises. Total time commitment: 5-10 hours per week for 15 weeks.

Industry Speakers

Learn from leaders at:

  • NVIDIA - Jensen Huang on GPU architecture and AI infrastructure
  • OpenAI - Sam Altman and Andrej Karpathy on foundation models and AGI
  • Google - Amin Vahdat on cloud infrastructure and distributed systems
  • Microsoft - Satya Nadella on enterprise AI platforms
  • Tesla - Ashok Elluswamy on autonomous systems
  • Cloudflare - Matthew Prince on edge computing
  • Midjourney - David Holz on generative AI
  • a16z - Ben Horowitz on AI business models

About This Course

Stanford CS 153: Frontier Systems provides a comprehensive exploration of the full AI infrastructure stack. This self-study course reconstructs Stanford's curriculum through open-source materials, public talks, and research papers, making world-class AI education accessible to everyone.

Learning Objectives

By the end of this course, you will:

  • Understand how energy and data-center constraints shape large-scale AI infrastructure
  • Explain how GPUs, networking, and distributed systems support training and serving foundation models
  • Describe how foundation models are built, deployed, and applied in products
  • Gain hands-on familiarity with tools such as PyTorch, Ray, and vLLM

Tools You'll Learn

  • PyTorch - Deep learning framework for research and production
  • HuggingFace Transformers - Pre-trained models and easy-to-use APIs
  • Weights & Biases - Experiment tracking and model management
  • Ray - Distributed computing and parallel processing
  • vLLM - Fast LLM serving and inference optimization
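As a taste of the optional hands-on exercises, here is a minimal PyTorch sketch: building a tiny network and running a forward pass. The layer sizes are illustrative choices for this example, not taken from the course materials.

```python
import torch
import torch.nn as nn

# A small feed-forward block, the basic building unit the PyTorch
# exercises work with; sizes here are arbitrary for illustration.
model = nn.Sequential(
    nn.Linear(16, 32),  # project 16 input features to a 32-unit hidden layer
    nn.ReLU(),
    nn.Linear(32, 4),   # map the hidden layer to 4 output features
)

x = torch.randn(8, 16)  # a batch of 8 random example inputs
with torch.no_grad():   # inference only, so skip gradient tracking
    out = model(x)

print(out.shape)  # torch.Size([8, 4])
```

The same few lines scale conceptually to the foundation models covered later in the course; the difference is layer count, parameter count, and the distributed-systems machinery needed to train and serve them.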