AI dev environments on real machines
Pair Claude, open models, or any AI agent with a real Linux VM, VS Code, and your repo. Bring your own account. Run anything. Break it safely.
Available now
3 categories, 15+ recipes
Claude
Anthropic Claude with a real Linux machine.
Bring your own Claude account. Sign in, clone your repo, and pair Claude with VS Code on a real Linux machine. Side machines, full root, real failure modes.
Open Models
Run Llama, Qwen, DeepSeek, Gemma, Granite, and more.
One-click recipes for the top open-weight models on Ollama. Spin up a GPU-ready machine, pull a model, start serving it. No quotas, no API keys, no rate limits.
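Once the machine is up, the recipe flow above boils down to a few commands. A minimal sketch, assuming Ollama is installed on the box and the model tag is current:

```shell
# Start the Ollama server (listens on localhost:11434 by default)
ollama serve &

# Pull an open-weight model; swap in qwen2.5, gemma2, deepseek-r1, etc.
ollama pull llama3.1

# Query it over the local HTTP API -- no API keys, no rate limits
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.1", "prompt": "Explain systemd in one sentence.", "stream": false}'
```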
AI Personal Assistants
OpenClaw, OpenCode, OpenFang, ZeroClaw, and more.
A growing catalog of open-source AI coding agents and assistants, pre-wired to your Ollama models. Browser-based control, full shell access, hosted on EasyEnv.
Coming soon
In active development
Gemini
Google Gemini with the same first-class machine experience as Claude.
Codex
OpenAI Codex agent paired with a real Linux machine and your repo.
Why EasyEnv AI
The AI agent runs the system, not a chat window
Real Linux, real root
Every AI agent runs inside a real VM. Root access, systemd, nested Docker and K8s. Not a stubbed sandbox.
Bring your own account
Sign in with your own Claude, OpenAI, or Hugging Face account. Your usage, your keys, your data.
Side machines on tap
Wire a database, a worker, and an app machine together on a private VPN. The agent operates across the whole topology.
Ephemeral by default
Spin a machine up for an hour, destroy it when done. Break things on purpose. Production stays untouched.
Pick your AI. Get a real machine.
Start with Claude, open models, or one of the personal assistants. Spin up an environment in seconds and destroy it when you're done.
The catalog
Every AI in one place
See it run
An AI agent that actually runs the system
Watch a full agent loop: spin up a workspace, SSH into a machine, point Claude at the repo, fix the failing tests, tear it down.
- Real shell, real services, not a stubbed sandbox
- Side machines wired in on a private VPN
- Ephemeral: spin it up, tear it down, no cleanup
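The loop in the demo can be sketched from a terminal. The hostname and repo URL below are placeholders, and `claude` refers to Anthropic's Claude Code CLI:

```shell
# SSH into the workspace machine (hostname is a placeholder)
ssh root@my-workspace.example.com

# Clone the repo and confirm the failing tests
git clone https://github.com/acme/widget-service.git && cd widget-service
pytest   # observe the failures

# Point Claude Code at the repo and let it iterate until green
claude -p "Run the test suite, fix the failing tests, and re-run until everything passes."
```

Because the machine is ephemeral, the teardown step is just destroying the workspace; there is no cleanup.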
What people build
Three workflows that fit on EasyEnv AI
Refactor a tangled service
Claude on a real machine, your repo cloned in, full test suite runnable. The agent edits files, runs tests, fixes regressions until it's green.
- Live test runs
- Full repo context
- Multi-file edits
Run a private LLM for your team
Spin up an Ollama box with DeepSeek, Llama, Qwen, or Gemma. Pair Open WebUI as the chat front-end. No API keys, no quotas, your data never leaves the box.
- 11+ open models
- GPU-ready
- Chat UI included
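One way to wire that up by hand, assuming Docker and Ollama are already running on the box (the image tag is Open WebUI's published default):

```shell
# Pull a model to serve to the team
ollama pull deepseek-r1

# Run Open WebUI as the chat front-end, pointed at the host's Ollama
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# Team chat UI is now at http://localhost:3000; data never leaves the box
```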
Hand a ticket to an autonomous agent
OpenClaw or OpenCode on a machine wired to your Ollama box. Drop a Jira ticket in, walk away, come back to a pull request.
- Hands-off operation
- Pull-request output
- Bring your own model
Compare
How does EasyEnv AI stack up?
A side-by-side look at the AI dev environments engineers reach for, and where EasyEnv AI fits.
FAQ
Questions, answered
What is EasyEnv AI?
EasyEnv AI is a catalog of AI dev environments. Each one is a real Linux machine in the browser with an AI agent already wired in: Claude, an open-source model on Ollama, or an open-source agent like OpenClaw. Pick one, get a machine, get to work.
Which AIs can I use?
Claude (sign in with your own Anthropic account), open-weight models on Ollama (DeepSeek, Llama 3.1/3.2, Qwen 2.5, Gemma, Granite, TinyLlama, and more), and open-source agents like OpenClaw, OpenCode, OpenFang, and ZeroClaw, plus Open WebUI as a chat front-end. Gemini and Codex are on the way.
Do I need my own account or API keys?
For Claude, yes. You sign in with your own Anthropic account and we never see your conversations. For open models and open-source agents, no. You run them on your EasyEnv machine, no keys, no quotas.
Where does the AI agent actually run?
Inside a real Linux VM that you control. Root access, systemd, nested Docker and K8s. The agent runs shell commands, edits files in your repo, runs tests, and ships changes. Not a stubbed sandbox.
Can the agent work across multiple machines?
Yes. Workspaces are clusters. Wire a database box, a worker box, and an app box together on a private VPN, and the AI agent operates across the whole topology, not just one container.
Is my code private?
Yes. Each workspace runs in an isolated environment. Only people you explicitly invite can access it. We never train any model on your code, and we never share workspace contents with third parties.
Do workspaces persist between sessions?
Workspaces are ephemeral by default. Spin one up, experiment, destroy it. No cleanup, no leftover state. Perfect for spikes, risky changes, and one-off agent runs.
What does it cost?
There is a free tier to try things out. Paid plans scale with workspace count and resources. See /pricing for current numbers.
More from EasyEnv
Looking for something else?
EasyEnv Workspace
Pre-configured dev environments for your whole team.
Spin up Postgres, Redis, Kubernetes, Kafka, anything your stack needs. One workspace, every machine ready in seconds.
EasyEnv Interview
Live, hands-on technical interviews on real machines.
Move past whiteboards. Run interviews in real environments with shared terminals, video, and full session replay.
EasyEnv Academy
Hands-on courses on real Linux machines with an AI tutor.
Learn Python, Go, Kubernetes, and more. Every lesson runs on a real Linux machine, in 11 languages, with a tutor that reads your code.
