Hook line: Cut through the jargon—get plain-English definitions for every AI term that trips beginners up.
Primary CTA: Browse the Glossary → (jump to the A-to-Z index)
Table of Contents
- Why a Glossary Still Matters in 2025
- How We Curate & Update Definitions
- AI Lingo at a Glance (Top-20 Cheat-Sheet)
- Concept Clusters (Foundation Models, Tokens, Embeddings …)
- Micro-Posts: “AI in 60 Seconds” Series
- Visual Aids & Code Mini-Blocks
- Common Misconceptions (and Clarifications)
- Beginner FAQ
- Browse the Glossary → A-to-Z Index
- Next Steps & Deep-Dive Links
1 Why a Glossary Still Matters in 2025
LLM release notes move faster than most blog update cycles. Newcomers face an alphabet soup of RAG, RLHF, LoRA, MoE, KV-cache. Without a single, trustworthy reference they’ll bounce or buy the wrong tool. mysideproject.works keeps the glossary:
Benefit | What You Get |
---|---|
Up-to-date | Monthly refresh aligned to OpenAI, Anthropic & Google model drops |
Beginner-friendly | Zero math unless essential; everyday analogies |
Action-linked | Each term deep-links to tutorials, templates, or prompt playbooks |
AdSense-ready | Short, scannable entries → high viewability without fluff |
2 How We Curate & Update Definitions
- Source Radar – release notes, academic abstracts, Discord trend scraping.
- Plain-English Draft – writer converts spec jargon into 80-word lay summary.
- Expert Pass – Suraj or guest PhD reviews for accuracy.
- Link Mapping – term → related tutorial/tool → internal links for EEAT.
- Update Log – change history appended; deprecated terms flagged.
3 AI Lingo at a Glance (Top-20 Cheat-Sheet)
Term | TL;DR (≤15 words) |
---|---|
Token | Smallest chunk an LLM reads—≈ 4 characters or 0.75 words. |
Context Window | Max tokens model remembers per prompt + reply. |
Embedding | Numeric fingerprint of text for semantic search. |
RAG | Retrieval-Augmented Generation—pull docs → feed LLM → cite. |
RLHF | Reinforcement Learning from Human Feedback—fine-tunes model behaviour. |
LoRA | Low-Rank Adaptation—lightweight finetuning method. |
KV-Cache | Key-Value cache that speeds repeated inference calls. |
MoE | Mixture of Experts—routes tokens through specialised subnetworks. |
Temperature | Randomness dial; 0 = deterministic, 1 = creative chaos. |
Top-p | Probabilistic nucleus sampling threshold; trims unlikely tokens. |
(Full cheat-sheet lives at the top of the glossary page for instant scanning.)
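The Token and Context Window rows above can be turned into a tiny planning helper. This is only a sketch built on the cheat-sheet's own heuristics (≈ 4 characters or ≈ 0.75 words per token); the function names and the 8,192-token default are illustrative, and real counts require the model's actual tokeniser.

```python
# Rough token estimate using the cheat-sheet heuristics:
# ~4 characters per token, ~0.75 words per token. For exact counts,
# use the model vendor's tokeniser; this is a planning estimate only.

def estimate_tokens(text: str) -> int:
    """Average the character-based and word-based estimates."""
    by_chars = len(text) / 4          # ≈ 4 characters per token
    by_words = len(text.split()) / 0.75  # ≈ 0.75 words per token
    return round((by_chars + by_words) / 2)

def fits_context(text: str, context_window: int = 8192) -> bool:
    """Check whether the estimated token count fits a context window."""
    return estimate_tokens(text) <= context_window

print(estimate_tokens("ChatGPT turns text into tokens"))  # → 7
```

A 1,000-word article lands around 1,300 estimated tokens with this heuristic, in line with the "1,000 words ≈ 750–1,333 tokens" range the FAQ quotes.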
4 Concept Clusters
4.1 Foundation Models
Definitions for GPT-4o, Claude 3 Opus, Gemini 1.5 Pro, Llama 3, plus links to model cards and supported tools.
4.2 Prompt Anatomy
Terms like system prompt, few-shot, chain-of-thought, role priming, output contract—each cross-linked to the Prompt Engineering Playbooks.
4.3 Data & Training
Fine-tuning, LoRA, Q-LoRA, distillation, synthetic data, cosine similarity—links to tutorials on dataset prep.
4.4 Deployment & Scaling
KV-cache, batching, quantisation, GPUs vs TPUs, serverless inference—points to “Deploy on a Budget” guide.
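The quantisation entry in the Deployment & Scaling cluster is easy to demystify with a few lines of code. Below is a minimal sketch of symmetric int8 quantisation in pure Python, purely for intuition: the function names are made up, and production stacks use dedicated libraries rather than hand-rolled loops.

```python
# Minimal sketch of symmetric int8 quantisation — the core idea behind
# shrinking model weights for cheap deployment. One scale factor maps
# floats into the int8 range; dequantising recovers approximate values.

def quantise_int8(weights: list[float]) -> tuple[list[int], float]:
    """Map floats into [-127, 127] using a single scale factor."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantise(q: list[int], scale: float) -> list[float]:
    """Recover approximate floats; small rounding error is the trade-off."""
    return [v * scale for v in q]

q, scale = quantise_int8([0.5, -1.27, 0.02])
approx = dequantise(q, scale)  # close to the originals, ~4x smaller storage
```

Storing one int8 per weight instead of one float32 is where the roughly 4x memory saving comes from, at the cost of the small rounding error visible in `approx`.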
5 Micro-Posts: “AI in 60 Seconds” Series
Each micro-post = <200 words, 1 graphic, 1 “Why it matters”.
Slug pattern: /glossary/60s/<term>
Great for drip content, social snippets, and SERP-feature capture.
6 Visual Aids & Code Mini-Blocks
- Token Visualiser GIF – shows “ChatGPT” → `['Chat', 'G', 'PT']`.
- Embedding Scatter Plot – Python snippet plots sentence clusters.
- RAG Flow Diagram – arrows: query → search index → context → LLM.
(Diagrams exported as SVG or WebP; lazy-loaded to protect CLS.)
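A candidate code mini-block for the embedding entry: a toy version of "numeric fingerprint + semantic search". The 3-D vectors here are invented for illustration (real systems get high-dimensional vectors from an embedding model); only the cosine-similarity step is the real technique.

```python
# Toy illustration of embeddings as "numeric fingerprints": score a
# query vector against stored vectors by cosine similarity and pick
# the closest match. The 3-D vectors are made up for demonstration.

import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

vectors = {
    "refund policy":  [0.9, 0.1, 0.0],
    "return an item": [0.8, 0.2, 0.1],
    "gpu benchmarks": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # pretend this embeds "how do I get my money back?"
best = max(vectors, key=lambda k: cosine(query, vectors[k]))
print(best)  # → refund policy
```

The same loop, run over thousands of stored vectors, is exactly what a vector DB optimises; that distinction is picked up again in the Beginner FAQ.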
7 Common Misconceptions
Myth | Reality | Quick Proof |
---|---|---|
“Tokens = words” | Avg word ≈ 1.33 tokens | Tokeniser demo table |
“Higher temperature always better” | >0.8 often destroys structure | Comparison GIF |
“Finetuning beats prompting” | For ≤500 examples, prompting cheaper | Cost table |
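The temperature myth in the table can be demonstrated in a few lines: temperature rescales the logits before softmax, so high values flatten the distribution and let unlikely tokens through. The logits below are invented for illustration; only the softmax-with-temperature mechanics are the real thing.

```python
# Why temperature > 0.8 often "destroys structure": higher temperature
# flattens the next-token distribution, so low-probability tokens get
# sampled far more often. Toy logits, real mechanics.

import math

def softmax_with_temperature(logits: list[float], temp: float) -> list[float]:
    """Scale logits by 1/temp, then softmax. temp → 0 approaches argmax."""
    scaled = [l / temp for l in logits]
    m = max(scaled)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 0.5]                 # model strongly prefers token 0
low = softmax_with_temperature(logits, 0.2)
high = softmax_with_temperature(logits, 1.5)
print(round(low[0], 3), round(high[0], 3))  # top token dominates at low temp
```

At temperature 0.2 the top token takes essentially all the probability mass; at 1.5 it drops to roughly three-quarters, leaving a real chance of sampling the weaker tokens on every step.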
8 Beginner FAQ
Q: How many tokens is 1 000 English words?
A: Roughly 750 tokens—use our calculator tool embedded below the entry.
Q: Is embedding the same as vector DB?
A: No. Embedding = vector generation; vector DB = storage/query layer.
Q: Do I need RAG for small websites?
A: Below ~50 KB of text, plain prompts with full context are cheaper and simpler.
(Full FAQ lives under /glossary/faq.)
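The "Do I need RAG?" answer can be grounded with a minimal sketch of the retrieve-then-prompt loop: score documents against the query, keep the top-k, and assemble the prompt. The keyword-overlap scorer below stands in for a real embedding search, and all names and sample documents are illustrative.

```python
# Minimal sketch of the RAG loop the FAQ contrasts with full-context
# prompting. A real system would score documents with embeddings;
# keyword overlap is a stand-in to keep the example self-contained.

def score(query: str, doc: str) -> int:
    """Count shared lowercase words between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Keep the k best-scoring documents and wrap them in a prompt."""
    top = sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Shipping takes 3-5 business days.",
    "Refunds are issued within 14 days.",
    "Our office dog is named Biscuit.",
]
prompt = build_prompt("How long do refunds take?", docs)
```

For a site whose entire text fits comfortably in the context window, skipping the retrieval step and pasting everything into the prompt is simpler — which is exactly the FAQ's ~50 KB rule of thumb.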
9 Browse the Glossary → A-to-Z Index
- Filter: Category, Model Vendor, Difficulty
- Jump Letters: A • B • C … Z
- Export: CSV / JSON for dev use
10 Next Steps & Deep-Dive Links
- Read “Token Limit” → flows into ChatGPT Review.
- Check “RAG” → pairs with Customer-Support Chatbot Tutorial.
- **Download the Free AI Side-Project Starter Playbook** – cheat-sheets for top glossary terms.
- Request a Term – ping #glossary-request in Discord; new entries ship every Monday.
Master the lingo, master the build.
Browse the Glossary →