# Challenge questions
Hands-on questions backed by a real recipe. The candidate works inside a fresh VM and you see what they actually did.
A Challenge drops the candidate into a real Linux VM provisioned from a recipe you pick. They get a terminal (and optionally VS Code via a stack), do the work, and submit when they think it's done. Strongest signal for engineering depth.
## When to use
- "Fix this broken Postgres replica" - pick a Puzzle recipe.
- "Wire up the missing route in this Express app" - pick a Base recipe and clone the repo via the GitHub stack.
- "Investigate why these pods won't schedule" - pick a k8s recipe with kubectl + k9s pre-installed.
## Form fields
| Field | Type | Description |
|---|---|---|
| Question Name (required) | text | Short title shown in the test outline and the candidate panel. |
| Instruction (required) | rich text | The text the candidate reads. Replaces the generic Question Body label for this type. |
| Recipe (required) | picker | The recipe to boot for this challenge. The picker shows two kinds (Puzzle, Base); see below. |
| Stacks | multi | Optional stacks layered on top of the recipe (GitHub clone, VS Code, Docker Image, etc.). Same catalog as the workspace flow. |
| Difficulty | enum | Easy, Medium, Hard, Expert. |
| Time | minutes | Time budget shown on the question header. |
| Tags | multi | Searchable labels for the bank. |
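Pulled together, the fields above might serialize to something like the following. This is a hypothetical sketch for orientation only; the key names and the recipe name are assumptions, not the product's actual schema.

```yaml
# Hypothetical challenge-question shape; keys and values are illustrative.
question:
  type: challenge
  name: Fix the broken replica
  instruction: |
    The standby on this VM is not replicating. Find out why and fix it.
  recipe: summit_postgres_replica   # a Puzzle recipe (name is illustrative)
  stacks: []
  difficulty: Medium
  time_minutes: 45
  tags: [postgres, debugging]
```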
## Puzzle vs Base recipes
The recipe picker badges each entry as one of two kinds:
- Puzzle (`summit_*` recipes): intentionally misconfigured. Boot one and something is broken on purpose; the candidate's job is to find and fix it.
- Base: standard well-formed recipes (Python, Node, Postgres, etc.). Use these when you want a clean machine the candidate builds something on top of, often paired with a GitHub stack to clone a starter repo.
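On a Puzzle VM, a candidate's first move is usually basic triage. A minimal sketch of one such check, assuming the Postgres scenario from the examples above (port 5432); the helper name and approach are illustrative, not something the platform provides:

```shell
# Hypothetical triage helper: probe a local TCP port using bash's
# /dev/tcp redirection. A refused connection means the service behind
# the port is down or misconfigured.
check_port() {
  local port="$1"
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port open"
  else
    echo "port $port closed"
  fi
}

check_port 5432   # on a broken-replica puzzle, "closed" points at the fault
```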
## Stacks attached to the challenge
The stacks UI inside the question modal mirrors the workspace AddBox flow: pick the stacks, fill in their config (a GitHub repo URL, a Docker image list, a Bash script body). The chosen stacks run on every candidate's machine when their interview starts.
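As a rough mental model, the stack configs described above (a repo URL, an image list, a script body) could be pictured as a shape like this. The keys and values are assumptions for illustration, not the product's actual schema:

```yaml
# Hypothetical stack configuration; every key name here is illustrative.
stacks:
  - kind: github
    repo_url: https://github.com/example/starter-app
  - kind: docker_image
    images:
      - postgres:16
  - kind: bash_script
    body: |
      npm install
```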
## Pass condition
For auto-graded challenges, attach a pass-condition recipe that runs a verification command on submission. The candidate's environment is graded the moment they hit Submit; the result is stored against their interview item.
```yaml
# example pass-condition shape
pass_condition:
  type: command
  cmd: pg_isready -h localhost -p 5432
  expect_exit: 0
  timeout_seconds: 5
```
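In terms of behavior, a command-type pass condition amounts to: run `cmd` under the timeout and compare its exit code to `expect_exit`. A minimal sketch of that logic, assuming this is how the grader behaves (the real implementation is not documented here):

```shell
# Hypothetical evaluator for a command-type pass condition.
# Runs the command under a timeout and compares its exit code.
check_pass_condition() {
  local cmd="$1" expect_exit="$2" timeout_s="$3"
  timeout "$timeout_s" bash -c "$cmd" >/dev/null 2>&1
  local actual=$?
  if [ "$actual" -eq "$expect_exit" ]; then
    echo "PASS"
  else
    echo "FAIL (exit $actual, expected $expect_exit)"
  fi
}

# Portable stand-in for the pg_isready example; substitute the real cmd.
check_pass_condition "true" 0 5   # prints PASS
```

Note that `timeout` itself exits 124 when the command runs over budget, so a hung verification command naturally fails the exit-code comparison.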