Resume in 18ms.
Real Linux microVMs for AI agents and the people who build them. Snap a box, stop it, pay nothing while it's off — then resume in 18 milliseconds with disk, IP, and SSH keys intact. Not a container that vanishes when the task ends. Not a VPS that bills around the clock.
Free during beta · no card · ssh in 90 seconds
The same four verbs in the CLI, the SDK, the dashboard, and the REST API. Every state change appends to an event log you can replay, audit, or rewind from any tag.
$ froglet new --template ubuntu-24.04 my-agent
creating sandbox my-agent (template ubuntu-24.04)
ready  ssh root@my-agent.froglet.sh
cold start 312ms

$ froglet volume attach my-agent models /models
attached models → /models  hot-mount
no reboot · 127ms

$ froglet snap my-agent --tag pre-deploy
snapshot captured sha256:7a9c…
412ms · 184 MB

$ froglet clone --from pre-deploy my-agent-replay
ready my-agent-replay
warm resume 18ms
Most platforms have one start path — cold, every time. Froglet routes each boot through one of three. The fastest one wins nine times out of ten.
From the warm pool. Each host keeps a handful of paused VMs hot in memory. Allocation is an unpause plus a network remap. This is how nine out of ten of your boots actually happen.
Restore from a snapshot. Pages fault in on access — we don't load the whole RAM image, just the working set. A 2 GB box maps 30–80 MB before it starts running.
Full kernel boot. A microVM-tuned Linux 6.x with no firmware to walk through. The path you'll rarely see — reserved for brand-new template builds and the first boot of a fresh image.
E2B, Daytona, and Modal solved fire-and-forget — spin one up, run a task, tear it down. A regular VPS solves always-on — and bills you for it 24 hours a day. Froglet is the middle. A real Linux box your agent can actually live in. Paused when there’s nothing to do. Back in 18ms when there is.
Each host keeps a pool of paused VMs hot in memory. We unpause, remap the network, hand the box back. Five to twenty milliseconds wall-clock — faster than the first token of any LLM response.
Every sandbox runs its own KVM-isolated guest kernel. Not a syscall filter. Not gVisor. Not someone else’s host kernel one CVE away. The boundary you actually want around an agent with shell access.
Pause for a minute, an hour, a week. Pay only for snapshot storage while it’s off. Resume with the same disk, the same IP, the same SSH keys, the same hostname — the exact machine, not a restored copy.
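What that loop looks like from a terminal, assuming froglet stop and froglet resume as the subcommand names. Only new, volume attach, snap, and clone appear in the demo above, so treat these two verbs as illustrative:

$ froglet stop my-agent           # assumed verb: pause the box, pay only for snapshot storage
$ froglet resume my-agent         # assumed verb: disk, IP, SSH keys, and hostname come back as-is
$ ssh root@my-agent.froglet.sh    # the bookmark from before the pause still works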
Anywhere an agent needs more than a request-response. Anywhere a workflow pauses overnight and wants to pick up exactly where it left off — same disk, same process tree, same address.
Your agent edits files, runs scripts, breaks things. Give it a box it can live in. Snapshot before risky moves, replay from any tag. The agent thinks in days; the bill thinks in seconds.
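One way to wire that guardrail with the snap and clone verbs from the demo above; the tag name and the command the agent runs are illustrative:

$ froglet snap my-agent --tag before-migration           # checkpoint before the risky move
$ ssh root@my-agent.froglet.sh 'python migrate.py'       # let the agent do the dangerous thing
$ froglet clone --from before-migration my-agent-retry   # if it broke, replay from the tag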
Tool-calls that exec shell. A real Linux user-space — apt, pip, node, anything from a Dockerfile — instead of a 200-syscall container. The model gets the box it expects, you get the kernel boundary you want.
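Concretely, a tool-call handler can forward the model's command to the exec endpoint shown in the API section further down; the sandbox id and the command here are illustrative:

# The model asked for a shell; run it inside the sandbox, not on your host
$ curl https://api.froglet.sh/v1/sandboxes/sb_a7f3c2/exec \
    -H "Authorization: Bearer fl_..." \
    -d '{ "cmd": "pip install requests && python tool.py" }'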
Spin a box per pull request, hand back an SSH-ready hostname, pause when idle. Eighteen milliseconds later a reviewer drops by and the environment is right where they left it.
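A CI job can do that with the same verbs from the demo; the PR number and the preview command are illustrative:

$ froglet new --template ubuntu-24.04 pr-1234    # one box per pull request
$ ssh root@pr-1234.froglet.sh 'make preview'     # hostname follows the sandbox name
$ froglet snap pr-1234 --tag review-ready        # tag the green state before it goes idle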
If you've shipped a VM in production, you'll recognize every layer. We didn't invent a runtime — we picked battle-tested components and tuned the whole stack for one number: how fast a paused box becomes a running one.
Rust-based VMM. Live snapshot and restore. Smaller attack surface than QEMU, in a memory-safe language.
The Linux kernel's own virtualization. Not a syscall translator. Not gVisor. Not a sandbox-shaped pile of seccomp filters.
Pages fault in on access. A 2 GB sandbox maps 30–80 MB at resume — the rest arrives only as the workload actually touches it.
50 sandboxes from one 2 GB base image = ~3 GB of RAM total, not 100 GB. The host page cache does the work.
POSIX filesystems mountable on many sandboxes. Hot-attached via virtio-fs at runtime — no reboot, no shutdown window.
No DHCP. No IP roundtrips. CID assigned at schedule time. The sandbox talks to its host on a private socket.
IP and MAC reserved before boot, remapped at resume. Your sandbox keeps its address across stop and start — SSH bookmarks survive.
Whole-device assignment to one sandbox. No driver shim, no sharing. Your CUDA stack sees the card as if you owned the host.
Volumes are POSIX filesystems you can mount on as many sandboxes as you want. Cache models, share datasets, build shared workspaces for an agent swarm. Files survive sandbox deletion. Files survive the loss of the entire host they were attached to.
Mount the same /models volume on fifty boxes. They all see the same files. They all see each other’s writes. POSIX semantics, not eventual-consistency object storage.
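Fanning one volume out to a fleet is a loop over the same volume attach verb from the demo; the worker names are illustrative:

# Same volume, same mount path, fifty boxes
$ for i in $(seq 1 50); do froglet volume attach worker-$i models /models; done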
ch-remote add-fs at runtime. Attach a volume mid-job and the new mount appears under your chosen path. No reboot. No shutdown window. No state lost on the running process tree.
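ch-remote is Cloud Hypervisor's control CLI. A host-side hot-attach looks roughly like this, with the socket paths as illustrative values; the guest then mounts the new tag over virtiofs:

# Host side: expose a virtiofsd-backed share to a running VM (paths illustrative)
$ ch-remote --api-socket /run/froglet/sb_a7f3c2.sock \
    add-fs tag=models,socket=/run/virtiofsd/models.sock

# Guest side: the tag mounts like any filesystem, no reboot
$ mount -t virtiofs models /models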
JuiceFS over Cloudflare R2 for the bytes, Redis for the metadata. You see a real POSIX tree. We see object-storage economics at any scale you can grow into.
SOC2 was an input, not an afterthought. We picked Bearer + scopes over OAuth-shaped ceremony, append-only audit over ad-hoc logging, and tag-based invalidation so forgetting a user actually means forgetting them.
One tenant per workspace. owner, admin, member, viewer map to scope bundles. Layer per-member grants or denials on top. The dashboard, the API, and the CLI all read from the same scope check — least-privilege without writing your own policy engine.
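Purely as an illustration of how a role plus a per-member grant reduces to one scope set (not the actual API shape; only the three scope names also listed in the API section come from this page):

{ "member": "dev@example.com",
  "role": "member",
  "grants": ["volumes:attach"],
  "scopes": ["sandboxes:read", "sandboxes:write", "volumes:attach"] }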
Bring *.agents.yourcompany.com and your sandboxes get certs minted at the edge. The hostname your agent prints is the URL your customer hits. Built on Cloudflare for SaaS — set the CNAME, verify the TXT, ship.
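On the DNS side that is two records; the CNAME target and the validation token below are placeholders, not Froglet's real values:

; route the wildcard to the edge (target is a placeholder)
*.agents.yourcompany.com.                     CNAME  customers.froglet.sh.
; prove ownership before certs are minted (token is a placeholder)
_cf-custom-hostname.agents.yourcompany.com.   TXT    "<validation-token>"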
Every permission decision, every sandbox lifecycle event, every secret touch — appended to an immutable ledger. Cache entries are classified at write time; restricted reads fire audit hooks. GDPR Article 17 erasure is one tag invalidation away.
REST. Bearer auth. Per-key scopes — grant sandboxes:read, sandboxes:write, volumes:attach à la carte. The CLI and SDKs are thin wrappers around the same endpoints. Switch between them mid-script.
# Spawn an Ubuntu 24.04 box, size s (2 vCPU · 2 GB · 20 GB)
$ curl https://api.froglet.sh/v1/sandboxes \
    -H "Authorization: Bearer fl_..." \
    -d '{ "template": "ubuntu-24.04", "size": "s" }'
{ "id": "sb_a7f3c2", "slug": "calm-otter-2412", "state": "provisioning",
  "ssh": "ssh root@calm-otter-2412.froglet.sh", "ports": [] }

# Hot-attach a shared volume — no reboot, mount appears at /models
$ curl -X POST https://api.froglet.sh/v1/sandboxes/sb_a7f3c2/volumes \
    -H "Authorization: Bearer fl_..." \
    -d '{ "volume_id": "vol_models", "mount_path": "/models" }'
{ "attached": true, "tag": "models", "elapsedMs": 127 }

# Run something inside it
$ curl https://api.froglet.sh/v1/sandboxes/sb_a7f3c2/exec \
    -H "Authorization: Bearer fl_..." \
    -d '{ "cmd": "pip install vllm && python serve.py" }'
{ "exitCode": 0, "durationMs": 8423 }

# Snapshot it, stop it, walk away
$ curl -X POST https://api.froglet.sh/v1/sandboxes/sb_a7f3c2/snapshot \
    -H "Authorization: Bearer fl_..." \
    -d '{ "label": "pre-deploy" }'
{ "snapshotId": "snap_8c1d", "sizeBytes": 192937984, "hash": "sha256:9fb1…" }
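Mixing the two mid-script looks like this, reusing the box created over REST above and assuming the CLI accepts the slug as the sandbox handle:

# Created over REST above; inspected over SSH; snapshotted with the CLI
$ ssh root@calm-otter-2412.froglet.sh 'uname -r'
$ froglet snap calm-otter-2412 --tag pre-deploy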
Free during beta. No card. SSH the moment it boots — your first box is ready before your terminal stops blinking.