Deeper Technical Track: 6 Weeks

ML-Oriented Track

Applied AI + ML fundamentals: understand how models work, how to fine-tune them, and how to design deeper ML systems.

PyTorch · Transformers · Fine-tuning (LoRA) · RAG Systems · AI Agents

Who This Track Is For

Companies that want Lead AI Engineers who can:

  • Build LLM applications (RAG, agents)
  • Understand model internals (fine-tuning, evaluation)
  • Make architecture decisions (when to fine-tune vs prompt)
  • Lead ML/AI teams (technical credibility)
  • Bridge product & research (cross-functional collaboration)

Target Companies

Type              | Examples              | Why ML Matters
Big Tech          | Google, Apple, Amazon | Expect ML depth
AI-First Startups | Mistral, Aleph Alpha  | Building models
Automotive AI     | BMW, Bosch            | Custom models for edge
Research-Adjacent | DeepMind, Anthropic   | ML fundamentals required

Skills Matrix

Core (Must Have)

Skill                | Priority
Python for ML        | Critical
RAG Systems          | Critical
LLM APIs & Prompting | Critical
PyTorch Basics       | High
Fine-tuning (LoRA)   | High
AI Agents            | High
LLM Evals            | High

Supporting (Good to Have)

Skill                     | Priority
Transformers Architecture | Medium
Training Basics           | Medium
MLOps                     | Medium
Classical ML              | Low

6-Week Curriculum

Week 1: Python + ML Foundations

Goal: solid Python for ML and an understanding of how models work

Day 1-2: Python for ML (10 hours)

  • NumPy (arrays, broadcasting)
  • Pandas (DataFrames)
  • Async/await
  • Type hints + Pydantic
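
To make a couple of these concrete, here is a small sketch combining NumPy broadcasting with a Pydantic model (the Prediction class is just an illustrative example):

import numpy as np
from pydantic import BaseModel

# Broadcasting: shapes (3, 1) and (2,) combine into (3, 2)
a = np.array([[1.0], [2.0], [3.0]])
b = np.array([10.0, 20.0])
print(a * b)  # elementwise product over the broadcast shape

# Pydantic validates and coerces typed data at runtime
class Prediction(BaseModel):
    label: str
    score: float

p = Prediction(label="cat", score="0.93")  # the numeric string is coerced to float
print(p.score)  # 0.93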

Day 3-4: PyTorch Fundamentals (10 hours)

import torch
import torch.nn as nn

# Tensors
x = torch.tensor([1, 2, 3])
y = x.cuda() if torch.cuda.is_available() else x  # move to GPU when available

# Autograd
x = torch.tensor([2.0], requires_grad=True)
y = x ** 2
y.backward()
print(x.grad)  # tensor([4.]) since dy/dx = 2x = 4

# Neural Network
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 1)

    def forward(self, x):
        return self.fc(x)

Day 5-6: Transformers Architecture (8 hours)

Understand conceptually:

  • Self-attention mechanism
  • Multi-head attention
  • Positional encoding
  • Encoder vs Decoder
  • Why transformers scale
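
To make self-attention concrete, here is a minimal single-head implementation in PyTorch (a teaching sketch, not an optimized kernel):

import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    # x: (seq_len, d_model); w_*: (d_model, d_k) projection matrices
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    weights = F.softmax(scores, dim=-1)  # each row sums to 1
    return weights @ v

x = torch.randn(5, 16)  # 5 tokens, d_model = 16
w_q, w_k, w_v = (torch.randn(16, 8) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([5, 8])

Multi-head attention runs several such heads in parallel with different projections and concatenates their outputs.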

Day 7: LLM APIs + Prompting (6 hours)

Set up the OpenAI and Anthropic APIs. Practice 10+ prompt patterns.
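
A minimal setup sketch for both SDKs (model names are examples; substitute whatever current models you have access to):

import os
from openai import OpenAI
import anthropic

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
resp = openai_client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": "Summarize RAG in one sentence."}],
)
print(resp.choices[0].message.content)

anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
msg = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",  # example model name
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize RAG in one sentence."}],
)
print(msg.content[0].text)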

Week 1 Deliverables

  • [ ] PyTorch basics working
  • [ ] Trained simple model (MNIST)
  • [ ] Can explain transformer architecture
  • [ ] LLM APIs set up

Week 2: RAG Systems + Embeddings Deep Dive

Goal: build RAG systems with an understanding of how embeddings work

Day 1-2: Embeddings Theory + Practice (8 hours)

  • Word embeddings (Word2Vec concept)
  • Sentence embeddings (how they're trained)
  • Contrastive learning basics
  • Embedding dimensions and similarity

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(["This is a test", "Another sentence"])
sim = cosine_similarity([embeddings[0]], [embeddings[1]])  # closer to 1 = more semantically similar

Day 3-4: Vector Databases + Retrieval (8 hours)

Understand: HNSW algorithm, ANN vs exact search, metadata filtering, hybrid search
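
A small FAISS sketch contrasting HNSW (approximate) with exact search, assuming faiss-cpu is installed and using random vectors as stand-ins for real embeddings:

import numpy as np
import faiss

d = 384  # embedding dimension
xb = np.random.rand(10_000, d).astype("float32")  # corpus vectors
xq = np.random.rand(5, d).astype("float32")       # query vectors

# HNSW: approximate nearest neighbors via a navigable small-world graph
hnsw = faiss.IndexHNSWFlat(d, 32)  # 32 = graph connectivity (M)
hnsw.add(xb)
distances, ids = hnsw.search(xq, 4)

# Exact search for comparison: brute-force L2 over every vector
flat = faiss.IndexFlatL2(d)
flat.add(xb)
exact_distances, exact_ids = flat.search(xq, 4)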

Day 5-6: LangChain RAG (8 hours)

from langchain.text_splitter import RecursiveCharacterTextSplitter

# Understand WHY these parameters
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,      # Why 1000? Roughly a paragraph: enough context while staying on one topic
    chunk_overlap=200,    # Why overlap? Sentences that straddle a boundary stay intact in one chunk
    separators=["\n\n", "\n", " ", ""]  # Priority order: paragraphs, then lines, then words, then characters
)

Week 2 Deliverables

  • [ ] Understand how embeddings work (can explain)
  • [ ] RAG with multiple retrieval strategies
  • [ ] Benchmark different chunking approaches

Week 3: Fine-tuning + AI Agents

Goal: know when and how to fine-tune; build agents

Day 1-2: Fine-tuning Fundamentals (10 hours)

Use Case           | Prompting | Fine-tuning
General tasks      | Yes       | No
Domain terminology | Maybe     | Yes
Cost optimization  | No        | Yes (smaller model)

# LoRA: Low-Rank Adaptation
# Instead of updating all weights W, learn small matrices A and B
# W' = W + BA where B is (d x r) and A is (r x k), r << d

from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=8,                    # Rank (smaller = fewer params)
    lora_alpha=32,          # Scaling factor
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, config)  # base_model: a loaded HF transformers model
# Only ~0.1% of parameters are trainable!

Day 3-4: LangChain Agents (8 hours)

Build agents with tools: search, calculator, database queries.
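
A sketch of tool definitions using LangChain's @tool decorator (lookup_order is a hypothetical stub standing in for a real database query):

from langchain_core.tools import tool

@tool
def calculator(expression: str) -> str:
    """Evaluate a basic arithmetic expression."""
    # Demo only: never eval untrusted input in production
    return str(eval(expression, {"__builtins__": {}}, {}))

@tool
def lookup_order(order_id: str) -> str:
    """Fetch an order's status from the database."""
    return f"Order {order_id}: shipped"  # stub; a real tool would hit the DB

tools = [calculator, lookup_order]  # pass these to your agent constructor of choice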

Day 5-6: LangGraph Workflows (8 hours)

Stateful agents with planning, execution, review nodes.
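
A minimal LangGraph sketch with plan and execute nodes (the node bodies are stubs; in practice each would call an LLM, and a review node would follow the same pattern):

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    task: str
    plan: str
    result: str

def plan_node(state: AgentState) -> dict:
    return {"plan": f"Steps for: {state['task']}"}  # nodes return partial state updates

def execute_node(state: AgentState) -> dict:
    return {"result": f"Executed: {state['plan']}"}

builder = StateGraph(AgentState)
builder.add_node("plan", plan_node)
builder.add_node("execute", execute_node)
builder.add_edge(START, "plan")
builder.add_edge("plan", "execute")
builder.add_edge("execute", END)

graph = builder.compile()
print(graph.invoke({"task": "summarize a paper"}))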

Week 3 Deliverables

  • [ ] Fine-tuned small model on custom task
  • [ ] Understand when fine-tuning makes sense
  • [ ] Agent with 3+ tools
  • [ ] LangGraph workflow with state

Week 4: Evaluation + ML System Design

Goal: measure quality and design ML systems

Day 1-2: LLM Evaluation (8 hours)

Use RAGAS for RAG evaluation and DeepEval for general LLM evaluation.
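
A minimal RAGAS sketch (column names and metric imports have shifted across RAGAS versions; this follows the older 0.1-style API and assumes an OpenAI key is set for the judge LLM):

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

data = Dataset.from_dict({
    "question": ["What is LoRA?"],
    "answer": ["LoRA trains small low-rank matrices instead of all weights."],
    "contexts": [["LoRA learns low-rank update matrices A and B ..."]],
})
scores = evaluate(data, metrics=[faithfulness, answer_relevancy])
print(scores)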

Day 3-4: ML System Design (10 hours)

User Query
    |
    v
+------------------+
| Query Analysis   | <-- Intent, entities, query rewrite
+--------+---------+
         |
    +----+----+
    v         v
+-------+ +-------+
|Vector | |Keyword| <-- Hybrid retrieval
|Search | |Search |
+---+---+ +---+---+
    |         |
    +----+----+
         v
+------------------+
|   Reranking      | <-- Cross-encoder
+--------+---------+
         v
+------------------+
|  LLM Generation  | <-- With context
+--------+---------+
         v
+------------------+
|   Evaluation     | <-- Faithfulness check
+------------------+
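
The reranking stage above maps directly to a cross-encoder, which scores each (query, document) pair jointly: slower than bi-encoder retrieval, but more accurate. A sketch with sentence-transformers:

from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
query = "How do I reset my password?"
candidates = ["Password reset instructions ...", "Shipping policy ...", "Account security FAQ ..."]

# Score every (query, document) pair, then sort documents by score
scores = reranker.predict([(query, doc) for doc in candidates])
reranked = [doc for _, doc in sorted(zip(scores, candidates), key=lambda t: t[0], reverse=True)]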

Practice Questions

  • "Design a document search system for legal documents"
  • "Design a chatbot with RAG for customer support"
  • "Design a recommendation system using embeddings"

Week 4 Deliverables

  • [ ] Evaluation pipeline for RAG
  • [ ] ML system design practice (3 designs)
  • [ ] Production API with monitoring
  • [ ] Can explain trade-offs in interviews

Week 5: Portfolio + Advanced ML Topics

Goal: polish the portfolio and learn advanced topics for interviews

Day 1-2: Advanced ML Concepts (8 hours)

  • Attention mechanism: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V
  • Loss functions: cross-entropy, MSE, contrastive
  • Optimization: Adam, LR scheduling, mixed precision
  • Scaling: data/model parallelism, checkpointing
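
A short sketch tying the optimization items together: Adam with a cosine schedule and mixed-precision training (requires a CUDA GPU; the model and data here are toy placeholders):

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(128, 10).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)
scaler = torch.cuda.amp.GradScaler()  # keeps fp32 master weights while computing in fp16

x = torch.randn(32, 128, device="cuda")
y = torch.randint(0, 10, (32,), device="cuda")

for step in range(1000):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # run the forward pass in fp16 where safe
        loss = F.cross_entropy(model(x), y)
    scaler.scale(loss).backward()    # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)
    scaler.update()
    scheduler.step()                 # cosine-decay the learning rate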

Day 3-5: Capstone Project (15 hours)

  • Option A: Domain-Specific RAG + Fine-tuning
  • Option B: Multi-Agent Research System
  • Option C: ML Pipeline Demo

Week 5 Deliverables

  • [ ] Capstone project complete
  • [ ] 3 portfolio projects
  • [ ] Blog posts published
  • [ ] Demo videos

Week 6: Job Search + Interview Prep

Goal: apply to roles and prepare for ML-depth interviews

Interview Questions Bank

ML Fundamentals

  • "Explain how transformers work"
  • "How does LoRA reduce training cost?"
  • "When would you fine-tune vs use RAG?"

System Design

  • "Design a RAG system for customer support"
  • "Design an ML serving system"

Week 6 Deliverables

  • [ ] 25+ applications
  • [ ] Interview prep complete
  • [ ] Mock interviews done
  • [ ] Network connections made

Success Metrics

By Week 6, You Should:

  • Explain the transformer architecture
  • Implement RAG from scratch
  • Know when to fine-tune vs use RAG
  • Have built an evaluation pipeline
  • Have 3 portfolio projects with ML depth
  • Be confident in ML system design

Ready to Start?

Choose your track and begin your journey