VIP Cheatsheet: Transformers & Large Language Models

https://github.com/afshinea/stanford-cme-295-transformers-large-language-models/tree/main/en

Transformers: self-attention, architecture, variants, optimization techniques (sparse attention, low-rank attention, FlashAttention)
LLMs: prompting, fine-tuning (SFT, LoRA), preference tuning, optimization techniques (mixture of experts, distillation, quantization)
Applications: LLM-as-a-judge, RAG, agents, reasoning models (train-time and test-time scaling, as in DeepSeek-R1)
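The self-attention mentioned above reduces, at its core, to scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch (function name and toy shapes are illustrative, not from the cheatsheet):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # (n_q, n_k) similarity scores
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                        # weighted sum of the values

# Toy self-attention: 3 tokens, d_model = 4, so Q = K = V come from the same X
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # → (3, 4)
```

In a real Transformer layer, Q, K, and V are separate learned projections of X, and this computation is replicated across multiple heads.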
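The LoRA fine-tuning listed above keeps the pretrained weight W frozen and trains only a low-rank update, h = xW + (alpha/r) x A B. A sketch of the forward pass (dimensions and the `lora_forward` name are assumptions for illustration):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """LoRA forward pass: h = x W + (alpha / r) * x A B.
    W (d_in, d_out) is frozen; only A (d_in, r) and B (r, d_out) are trained."""
    r = A.shape[1]
    return x @ W + (alpha / r) * (x @ A) @ B

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 2
W = rng.normal(size=(d_in, d_out))       # frozen pretrained weight
A = rng.normal(size=(d_in, r)) * 0.01    # low-rank factor A (trainable)
B = np.zeros((r, d_out))                 # B starts at zero, so the adapter
                                         # is a no-op before training begins
x = rng.normal(size=(1, d_in))
assert np.allclose(lora_forward(x, W, A, B), x @ W)  # identical at init
```

With r much smaller than d_in and d_out, the trainable parameter count drops from d_in*d_out to r*(d_in + d_out), which is why LoRA is listed as a lightweight alternative to full SFT.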
