
Playbook · LLM Fundamentals

What are foundation models, and how have they changed AI engineering?


Senior · High frequency · 8 min read · Free
Practical answer framework for AI engineer interview loops.

01 Interview context

The interviewer is not asking for a textbook definition. They want to know whether you understand why the engineering stack changed once large pre-trained models became usable product primitives instead of research artifacts.

02 The 90-second answer

Foundation models are broad pre-trained models that can be adapted through prompting, retrieval, fine-tuning, and tool use. They changed AI engineering because the bottleneck moved away from training one narrow model per task and toward building reliable systems around a general model that is powerful but imperfect.

03 What changed in the engineering stack

Before foundation models, many teams trained separate models for classification, ranking, extraction, or summarization. Once the base model already had broad language capability, the hard engineering work moved up the stack. The new questions became: how do you ground it with fresh data, enforce structured output, evaluate regressions, manage cost and latency, and keep unsafe behavior under control?
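One of those questions, enforcing structured output, can be sketched without any framework: validate the model's reply against a schema and retry with a corrective hint on failure. A minimal sketch, where `call_model` is a hypothetical stand-in for any LLM client (here it returns a canned reply so the example runs):

```python
import json

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM client call.
    # Returns a canned reply so the sketch is self-contained.
    return '{"sentiment": "positive", "confidence": 0.9}'

REQUIRED_KEYS = {"sentiment", "confidence"}

def extract_structured(prompt: str, max_retries: int = 2) -> dict:
    """Ask for JSON, validate it, and retry with a tighter instruction on failure."""
    for _ in range(max_retries + 1):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
            if REQUIRED_KEYS <= data.keys():
                return data
        except json.JSONDecodeError:
            pass
        # Tighten the instruction and try again instead of failing the request.
        prompt += "\nReturn only valid JSON with keys: sentiment, confidence."
    raise ValueError("model never produced valid structured output")

print(extract_structured("Classify: 'Great battery life.' Return JSON."))
```

In production the validation step is the point: the general model is powerful but imperfect, so the system around it owns correctness.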

That shift changed team design too. The center of gravity moved from model training infrastructure toward product systems: prompt versioning, retrieval, evals, observability, routing, and guardrails. My short version is that foundation models compressed model-building work and expanded systems work.
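The "evals" part of that systems work can start as something very small: a pinned suite of prompt/expectation pairs run on every prompt or model change, with deploys gated on the pass count. A minimal regression-check sketch, where `run_model` is a hypothetical stand-in for the prompt plus model under test:

```python
# Minimal eval harness: run a pinned suite of cases and report regressions.
# run_model is a hypothetical stand-in for the real prompt + model under test;
# here it returns canned answers so the sketch is self-contained.
def run_model(prompt: str) -> str:
    return {"capital of France?": "Paris", "2 + 2?": "4"}.get(prompt, "")

EVAL_SUITE = [
    {"prompt": "capital of France?", "must_contain": "Paris"},
    {"prompt": "2 + 2?", "must_contain": "4"},
]

def run_evals(suite):
    """Return (pass count, failed cases) for a suite of substring checks."""
    failures = [case for case in suite
                if case["must_contain"] not in run_model(case["prompt"])]
    return len(suite) - len(failures), failures

passed, failures = run_evals(EVAL_SUITE)
print(f"{passed}/{len(EVAL_SUITE)} passed")  # gate deploys on this number
```

Real eval suites use richer scoring than substring checks, but the shape is the same: a versioned dataset, a runner, and a number that blocks regressions.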

04 Weak vs. strong answer

Weak answer

"Foundation models are very large models trained on lots of data, and now people use them for chatbots."

Strong answer

"The important shift is architectural. We spend less time training a bespoke model for every task and more time building evals, retrieval, guardrails, and cost controls around a strong general model that still needs product discipline."

05 Tradeoffs that matter

The useful comparison is not just open versus closed models. It is also prompt-only versus retrieval versus fine-tuning.

Approach      When it works                            Risk
Prompt only   Fast MVP with low operational overhead   Brittle on domain-specific tasks
RAG           Fresh knowledge and source attribution   Retrieval errors become product errors
Fine-tuning   Stable behavior or domain adaptation     Data quality, cost, and rollback complexity
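The RAG row can be made concrete with a toy retriever: the model only sees what retrieval returns, which is exactly why a retrieval miss surfaces as a product error. A minimal sketch using naive keyword overlap (a real system would use embeddings); the document names and contents are invented for illustration:

```python
# Toy corpus standing in for a real document store.
DOCS = {
    "pricing.md": "Enterprise plans start at $500 per month, billed annually.",
    "security.md": "All data is encrypted at rest with AES-256.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(DOCS.items(),
                    key=lambda kv: len(words & set(kv[1].lower().split())),
                    reverse=True)
    return [name for name, _ in scored[:k]]

def build_prompt(query: str) -> str:
    sources = retrieve(query)
    context = "\n".join(f"[{s}] {DOCS[s]}" for s in sources)
    # The model answers only from retrieved context: a miss here
    # becomes a wrong or unsupported answer in the product.
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

print(build_prompt("How is data encrypted at rest?"))
```

The same structure also delivers the attribution benefit from the table: every answer can cite the `[source]` labels it was grounded on.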

06 Follow-up questions to expect

  1. When would you choose fine-tuning over RAG on top of a foundation model?
  2. Why did foundation models make prompt engineering and evaluation more important?
  3. What new production risks did foundation models introduce?