# Student Workflow

This guide explains the day-to-day workflow for building your ML framework with TinyTorch.
## The Core Workflow

TinyTorch follows a simple three-step cycle:

```mermaid
graph LR
    A[Edit Modules<br/>modules/NN_name/] --> B[Export to Package<br/>tito module complete N]
    B --> C[Validate with Milestones<br/>Run milestone scripts]
    C --> A
    style A fill:#e3f2fd
    style B fill:#f0fdf4
    style C fill:#fef3c7
```
## Step 1: Edit Modules

Work on module notebooks in `modules/`:

```bash
# Example: Working on Module 03 (Layers)
cd modules/03_layers
jupyter lab layers_dev.ipynb
```
Each module is a Jupyter notebook that you edit interactively. You’ll:

- Implement the required functionality
- Add docstrings and comments
- Run and test your code inline
- See immediate feedback
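As a flavor of that inline feedback loop, a notebook cell might implement a small function and immediately assert its behavior. This is an illustrative sketch, not a specific module’s actual API:

```python
# Hypothetical notebook cell: implement, then check inline.
# (Function name and behavior are illustrative, not a TinyTorch API.)

def relu(x):
    """Element-wise ReLU on a list of numbers."""
    return [max(0.0, v) for v in x]

# Run the cell and get immediate feedback.
assert relu([-2.0, 0.0, 3.0]) == [0.0, 0.0, 3.0]
print("relu works!")
```

Tight loops like this are why the modules are notebooks rather than plain scripts: you see breakage the moment you introduce it.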
## Step 2: Export to Package

Once your module implementation is complete, export it to the main TinyTorch package:

```bash
tito module complete MODULE_NUMBER
```

This command:

- Converts your source files to the `tinytorch/` package
- Validates NBGrader metadata
- Makes your implementation available for import

Example:

```bash
tito module complete 03  # Export Module 03 (Layers)
```

After export, your code is importable:

```python
from tinytorch.layers import Linear  # YOUR implementation!
```
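Conceptually, a linear layer like the one you export computes `y = x @ W + b`. Here is a framework-free NumPy sketch of that idea; the constructor signature and attribute names are assumptions for illustration, not TinyTorch’s actual API:

```python
import numpy as np

class Linear:
    """Minimal sketch of a fully connected layer.
    Signature and attribute names are assumptions, not TinyTorch's API."""

    def __init__(self, in_features, out_features):
        # Small random weights; a real implementation uses a proper init scheme.
        rng = np.random.default_rng(0)
        self.W = rng.normal(scale=0.1, size=(in_features, out_features))
        self.b = np.zeros(out_features)

    def forward(self, x):
        # Affine map: y = x @ W + b
        return x @ self.W + self.b

layer = Linear(4, 2)
out = layer.forward(np.ones((3, 4)))  # batch of 3 inputs, 4 features each
print(out.shape)  # (3, 2)
```

Your TinyTorch version is the one that milestones actually import; this sketch just shows the computation a linear layer performs.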
## Step 3: Validate with Milestones

Run milestone scripts to prove your implementation works:

```bash
cd milestones/01_1957_perceptron
python 01_rosenblatt_forward.py   # Uses YOUR Tensor (M01)
python 02_rosenblatt_trained.py   # Uses YOUR layers (M01-M07)
```
Each milestone has a README explaining:

- Required modules
- Historical context
- Expected results
- What you’re learning
See Milestones Guide for the full progression.
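To preview what the first milestone validates: Rosenblatt’s perceptron forward pass is a weighted sum followed by a threshold. A dependency-free sketch (the milestone scripts use your TinyTorch Tensor instead of plain Python lists):

```python
def perceptron_forward(x, w, b):
    """Classic perceptron: fire (1) if the weighted sum clears the threshold."""
    activation = sum(xi * wi for xi, wi in zip(x, w)) + b
    return 1 if activation > 0 else 0

# A perceptron computing logical AND (weights chosen by hand, not learned).
w, b = [1.0, 1.0], -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron_forward(x, w, b))  # only (1, 1) fires
```

The trained variant in `02_rosenblatt_trained.py` learns `w` and `b` instead of hand-picking them, which is where your later modules come in.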
## Module Progression

TinyTorch has 20 modules organized in three tiers:
### Foundation (Modules 01-07)

Core ML infrastructure: tensors, autograd, training loops.

Milestones unlocked:

- M01: Perceptron (after Module 07)
- M02: XOR Crisis (after Module 07)
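The training-loop idea behind this tier can be sketched without any framework: compute a loss, differentiate, and descend. The gradients here are written by hand; the autograd you build in the Foundation tier automates exactly those lines:

```python
import numpy as np

# Manual gradient descent fitting y = 2x + 1 with a mean-squared-error loss.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=50)
y = 2 * X + 1

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    pred = w * X + b
    err = pred - y
    # Hand-derived gradients of loss = mean(err**2); autograd automates this.
    w -= lr * 2 * np.mean(err * X)
    b -= lr * 2 * np.mean(err)

print(round(w, 2), round(b, 2))  # ≈ 2.0 and 1.0
```

Every later milestone, from the perceptron to transformers, is a variation on this loop with richer models and gradients computed by your autograd instead of by hand.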
### Architecture (Modules 08-13)

Neural network architectures: data loading, CNNs, transformers.

Milestones unlocked:

- M03: MLPs (after Module 08)
- M04: CNNs (after Module 09)
- M05: Transformers (after Module 13)
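As a flavor of what this tier builds, a two-layer MLP forward pass is just stacked linear maps with a nonlinearity between them. A framework-free sketch (TinyTorch’s real layer API will differ):

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Two-layer MLP: linear -> ReLU -> linear. Illustrative sketch only."""
    h = np.maximum(0.0, x @ W1 + b1)  # hidden layer with ReLU activation
    return h @ W2 + b2                # output layer (no activation)

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))           # batch of 3 inputs, 4 features each
out = mlp_forward(x,
                  rng.normal(size=(4, 8)), np.zeros(8),
                  rng.normal(size=(8, 2)), np.zeros(2))
print(out.shape)  # (3, 2)
```

The hidden nonlinearity is what lets an MLP solve XOR, resolving the crisis from milestone M02.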
### Optimization (Modules 14-19)

Production optimization: profiling, quantization, benchmarking.

Milestones unlocked:

- M06: Torch Olympics (after Module 18)
### Capstone Competition (Module 20)

Apply all optimizations in the Torch Olympics Competition.
## Typical Development Session

Here’s what a typical session looks like:

```bash
# 1. Work on a module
cd modules/05_autograd
jupyter lab autograd_dev.ipynb
# Edit your implementation interactively

# 2. Export when ready
tito module complete 05

# 3. Validate with existing milestones
cd ../../milestones/01_1957_perceptron
python 01_rosenblatt_forward.py   # Should still work!

# 4. Continue to the next module or milestone
```
## TITO Commands Reference

The most important commands you’ll use:

```bash
# Export module to package
tito module complete MODULE_NUMBER

# Check module status (optional capability tracking)
tito checkpoint status

# System information
tito system info

# Join the community and benchmark
tito community join
tito benchmark baseline
```
For complete command documentation, see TITO CLI Reference.
## Checkpoint System (Optional)

TinyTorch includes an optional checkpoint system for tracking progress:

```bash
tito checkpoint status   # View completion tracking
```

This is helpful for self-assessment but not required for the core workflow. The essential cycle remains: edit → export → validate.
## Notebook Platform Options

TinyTorch notebooks work with multiple platforms, but there is an important distinction:

### Online Notebooks (Viewing & Exploration)

- Jupyter/MyBinder: Click “Launch Binder” on any notebook page - great for viewing
- Google Colab: Click “Launch Colab” for GPU access - good for exploration
- Marimo: Click “🍃 Open in Marimo” for reactive notebooks - excellent for learning

⚠️ Important: Online notebooks are for viewing and learning. They don’t have the full TinyTorch package installed, so you can’t:

- Run milestone validation scripts
- Import from `tinytorch.*` modules
- Execute full experiments
- Use the complete CLI tools
### Local Setup (Required for Full Package)

To actually build and experiment, you need a local installation:

```bash
# Clone and set up locally
git clone https://github.com/mlsysbook/TinyTorch.git
cd TinyTorch
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -e .   # Install TinyTorch package
```
Why local?

- ✅ Full `tinytorch.*` package available
- ✅ Run milestone validation scripts
- ✅ Use `tito` CLI commands
- ✅ Execute complete experiments
- ✅ Export modules to package
- ✅ Full development workflow
Note for NBGrader assignments: Submit .ipynb files (not Marimo’s .py format) to preserve grading metadata.
## Community & Benchmarking

### Join the Community

After completing setup, join the global TinyTorch community:

```bash
# Join with optional information
tito community join

# View your profile and progress
tito community profile

# Update your information
tito community update
```
Privacy: All information is optional. Data is stored locally in the `.tinytorch/` directory. See Community Guide for details.
### Benchmark Your Progress

Validate your setup and track performance:

```bash
# Quick baseline benchmark (after setup)
tito benchmark baseline

# Full capstone benchmarks (after Module 20)
tito benchmark capstone --track all
```
Baseline Benchmark: Quick validation that your setup works correctly - your “Hello World” moment!
Capstone Benchmark: Full performance evaluation across speed, compression, accuracy, and efficiency tracks.
See Community Guide for complete community and benchmarking features.
## Instructor Integration

TinyTorch supports NBGrader for classroom use. See the Instructor Guide for complete setup and grading workflows.
For now, focus on the student workflow: building your implementations and validating them with milestones.
## What’s Next?

- Start with Module 01: See Getting Started
- Follow the progression: Each module builds on previous ones
- Run milestones: Prove your implementations work
- Build intuition: Understand ML systems from first principles
The goal isn’t just to write code - it’s to understand how modern ML frameworks work by building one yourself.