This recipe serves the 8-billion-parameter Llama 3.1 checkpoint locally through Ollama. It requires roughly 16 GB of RAM and suits longer-context reasoning workloads on GPU-capable hosts.
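Once the workspace is up, the model answers over Ollama's local HTTP API, which listens on port 11434 by default. The request below is a minimal sketch: it assumes the model was pulled under the tag llama3.1:8b, and uses Ollama's num_ctx option to raise the context window for a single request.

$ curl http://localhost:11434/api/generate -d '{
    "model": "llama3.1:8b",
    "prompt": "Summarize the trade-offs of self-hosting an LLM.",
    "stream": false,
    "options": { "num_ctx": 32768 }
  }'

Larger num_ctx values trade memory for context: on a 16 GB host you may need to lower this, while a GPU-equipped box with more headroom can push closer to the model's full window.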
Every EasyEnv recipe spins up in seconds on a real Linux VM, not a stripped-down sandbox. The Ollama Long Context Llama 3.1:8b recipe is provisioned by an open Ansible role, so the machine that boots for you is reproducible, inspectable, and matches what you would get in production.
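For illustration, applying such a role looks like any other Ansible run; the role and playbook names below are hypothetical stand-ins, since the recipe's actual repository isn't quoted here.

$ # Hypothetical names; substitute the role the recipe actually publishes
$ ansible-galaxy role install example.ollama_llama31
$ ansible-playbook -i inventory.ini provision.yml

The point is less the exact commands than the property they buy you: the same role EasyEnv runs can be replayed against your own hosts.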
$ easyenv workspace create --recipe ollama_llama3.1_8b --name ollama_llama3.1_8b-demo
Provisioning Ollama Long Context Llama 3.1:8b...
Workspace ready in ~45s
$ easyenv workspace ssh ollama_llama3.1_8b-demo
Connected. You're on the machine.
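Once connected, a quick sanity check is to confirm the service and the model. This sketch assumes the recipe installs Ollama as a systemd service and pulls the model under the tag llama3.1:8b:

$ systemctl status ollama --no-pager   # the Ollama service should be active
$ ollama list                          # llama3.1:8b should appear in the table
$ ollama run llama3.1:8b "Reply with one short sentence."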
This recipe fits practitioners who:
- build and deploy AI applications, integrate LLMs into products, and manage self-hosted AI infrastructure
- develop and deploy machine learning models and AI systems for production environments
- build and maintain data pipelines, ETL processes, and data infrastructure for analytics
Related recipes:
- Self-hosted LLM inference with Ollama and OpenWebUI for private AI workloads
- Self-hosted AI document assistant powered by AnythingLLM and Ollama
- GPU-accelerated machine learning workstation with Python, Ollama, and object storage