froglet.
Public beta · v0.1 · fsn1 · all systems normal

Stop your sandbox.
Resume in 18ms.

Real Linux microVMs for AI agents and the people who build them. Snap a box, stop it, pay nothing while it's off — then resume in 18 milliseconds with disk, IP, and SSH keys intact. Not a container that vanishes when the task ends. Not a VPS that bills around the clock.

Free during beta · no card · ssh in 90 seconds

90 seconds to your first box
$ npm i -g @froglet/cli
or brew install froglet
warm resume
18ms
p50 · from paused
cold from snapshot
312ms
p50 · userfaultfd
api create → ssh
100ms
p50 · first byte
fleet
18 hosts
fsn1 · all green
The whole loop

Boot. Mount. Snap. Clone.

The same four verbs in the CLI, the SDK, the dashboard, and the REST API. Every state change appends to an event log you can replay, audit, or rewind from any tag.

~/projects/agent · froglet · zsh · 02:24:37
$ froglet new --template ubuntu-24.04 my-agent
creating sandbox my-agent (template ubuntu-24.04)
ready · ssh root@my-agent.froglet.sh · cold start 312ms
$ froglet volume attach my-agent models /models
attached models → /models · hot-mount · no reboot · 127ms
$ froglet snap my-agent --tag pre-deploy
snapshot captured sha256:7a9c… · 412ms · 184 MB
$ froglet clone --from pre-deploy my-agent-replay
ready my-agent-replay · warm resume 18ms
Three ways to start

Every boot is fast. Most never see disk.

Most platforms have one start path — cold, every time. Froglet routes each boot through one of three paths, and the fastest available one wins nine times out of ten.

warm resume
18ms
p50 · from paused

From the warm pool. Each host keeps a handful of paused VMs hot in memory. Allocation is an unpause plus a network remap. Nine out of ten boots happen this way.

cold from snapshot
312ms
p50 · userfaultfd

Restore a saved blob. Pages fault in on access — we don't load the whole RAM image, just the working set. A 2 GB box maps 30–80 MB before it starts running.

cold from scratch
500ms
p50 · microvm 6.x

Full kernel boot. A microvm-tuned Linux 6.x with no firmware to walk through. The one path you'll rarely see — reserved for brand-new template builds and the first boot of a fresh image.
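The routing between those three paths reads roughly like this sketch. The function name `choose_boot_path` and the stubbed pool and snapshot store are our illustration, not Froglet's scheduler:

```python
def choose_boot_path(warm_pool: list[str],
                     snapshots: dict[str, bytes],
                     template: str) -> str:
    """Pick the fastest available start path for a sandbox request."""
    if warm_pool:                    # ~18 ms: unpause + network remap
        warm_pool.pop()
        return "warm-resume"
    if template in snapshots:        # ~312 ms: lazy, fault-in restore
        return "snapshot-restore"
    return "cold-boot"               # ~500 ms: full kernel boot

pool = ["vm-1", "vm-2"]
snaps = {"ubuntu-24.04": b"..."}

path_1 = choose_boot_path(pool, snaps, "ubuntu-24.04")   # pool has a VM
path_2 = choose_boot_path([], snaps, "ubuntu-24.04")     # pool empty, snapshot exists
path_3 = choose_boot_path([], snaps, "debian-13")        # nothing cached
```

The fallthrough order is the whole design: the expensive path only runs when both cheaper ones are unavailable.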

Why froglet

Sandboxes either die or bill 24/7. There's a third option.

E2B, Daytona, and Modal solved fire-and-forget — spin one up, run a task, tear it down. A regular VPS solves always-on — and bills you for it 24 hours a day. Froglet is the middle. A real Linux box your agent can actually live in. Paused when there’s nothing to do. Back in 18ms when there is.

(01)

Resume in 18ms.

Each host keeps a pool of paused VMs hot in memory. We unpause, remap the network, hand the box back. Five to twenty milliseconds wall-clock — faster than the first token of any LLM response.

(02)

Real kernel. Not a container.

Every sandbox runs its own KVM-isolated guest kernel. Not a syscall filter. Not gVisor. Not someone else’s host kernel one CVE away. The boundary you actually want around an agent with shell access.

(03)

Stop it. Come back to the same box.

Pause for a minute, an hour, a week. Pay only for snapshot storage while it’s off. Resume with the same disk, the same IP, the same SSH keys, the same hostname — the exact machine, not a restored copy.

Where this fits

Built for boxes that come back.

Anywhere an agent needs more than a request-response. Anywhere a workflow pauses overnight and wants to pick up exactly where it left off — same disk, same process tree, same address.

(01)

Agent runtimes

Your agent edits files, runs scripts, breaks things. Give it a box it can live in. Snapshot before risky moves, replay from any tag. The agent thinks in days; the bill thinks in seconds.

(02)

Code execution for LLMs

Tool-calls that exec shell. A real Linux user-space — apt, pip, node, anything from a Dockerfile — instead of a 200-syscall container. The model gets the box it expects, you get the kernel boundary you want.

(03)

Per-PR preview environments

Spin a box per pull request, hand back an SSH-ready hostname, pause when idle. Eighteen milliseconds later a reviewer drops by and the environment is right where they left it.

Under the hood

Real Linux primitives. All the way down.

If you've shipped a VM in production, you'll recognize every layer. We didn't invent a runtime — we picked battle-tested components and tuned the whole stack for one number: how fast a paused box becomes a running one.

VMM · Cloud Hypervisor

Rust-based VMM. Live snapshot and restore. Smaller attack surface than QEMU, in a memory-safe language.

Isolation · KVM

The Linux kernel's own virtualization. Not a syscall translator. Not gVisor. Not a sandbox-shaped pile of seccomp filters.

Memory restore · userfaultfd

Pages fault in on access. A 2 GB sandbox maps 30–80 MB at resume — the rest pays for itself as work actually happens.

Base image share · virtio-fs

50 sandboxes from one 2 GB base image = ~3 GB of RAM total, not 100 GB. The host page cache does the work.

Persistent volumes · JuiceFS over R2

POSIX filesystems mountable on many sandboxes. Hot-attached via virtio-fs at runtime — no reboot, no shutdown window.

Control channel · AF_VSOCK

No DHCP. No IP roundtrips. CID assigned at schedule time. The sandbox talks to its host on a private socket.

Networking · Static IPAM

IP and MAC reserved before boot, remapped at resume. Your sandbox keeps its address across stop and start — SSH bookmarks survive.

GPU passthrough · VFIO

Whole-device assignment to one sandbox. No driver shim, no sharing. Your CUDA stack sees the card as if you owned the host.
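The virtio-fs sharing claim is back-of-envelope arithmetic. Here is the check, assuming roughly 20 MB of private writable pages per sandbox (our assumption for illustration, not a published figure):

```python
base_image_gb = 2.0      # one Ubuntu base image, held once in host page cache
sandboxes = 50
private_delta_mb = 20    # assumed per-sandbox writable overhead

# Naive: every VM loads its own full copy of the image into RAM
naive_gb = base_image_gb * sandboxes

# Shared: one cached copy plus each sandbox's private delta
shared_gb = base_image_gb + sandboxes * private_delta_mb / 1024

print(f"naive: {naive_gb:.0f} GB, shared: {shared_gb:.1f} GB")
```

That lands on roughly 3 GB instead of 100 GB, which is the figure in the card above.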

Persistent storage

Filesystems that outlive the box.

Volumes are POSIX filesystems you can mount on as many sandboxes as you want. Cache models, share datasets, build shared workspaces for an agent swarm. Files survive sandbox deletion. They even survive the loss of the host they were attached to.

one volume
50 sandboxes

Mount the same /models volume on fifty boxes. They all see the same files. They all see each other’s writes. POSIX semantics, not eventual-consistency object storage.

hot-attach
127ms

ch-remote add-fs at runtime. Attach a volume mid-job and the new mount appears under your chosen path. No reboot. No shutdown window. No state lost on the running process tree.

backed by
JuiceFS · R2

JuiceFS over Cloudflare R2 for the bytes, Redis for the metadata. You see a real POSIX tree. We see object-storage economics at any scale you can grow into.

Built for teams

Workspaces, scopes, and an audit trail your security team won't hate.

SOC2 was an input, not an afterthought. We picked Bearer + scopes over OAuth-shaped ceremony, append-only audit over ad-hoc logging, and tag-based invalidation so forgetting a user actually means forgetting them.

(01)

Workspaces, roles, scope overlays.

One tenant per workspace. owner, admin, member, viewer map to scope bundles. Layer per-member grants or denials on top. The dashboard, the API, and the CLI all read from the same scope check — least-privilege without writing your own policy engine.

(02)

Custom domains, your TLS.

Bring *.agents.yourcompany.com and your sandboxes get certs minted at the edge. The hostname your agent prints is the URL your customer hits. Built on Cloudflare for SaaS — set the CNAME, verify the TXT, ship.

(03)

Audit log shaped for SOC2.

Every permission decision, every sandbox lifecycle event, every secret touch — appended to an immutable ledger. Cache entries are classified at write time; restricted reads fire audit hooks. GDPR Article 17 erasure is one tag invalidation away.
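The bundle-plus-overlay resolution from (01) is plain set algebra. The bundle contents below are illustrative, not Froglet's actual scope list:

```python
# Hypothetical role-to-scope bundles, for illustration only
ROLE_BUNDLES: dict[str, set[str]] = {
    "viewer": {"sandboxes:read"},
    "member": {"sandboxes:read", "sandboxes:write"},
    "admin":  {"sandboxes:read", "sandboxes:write", "volumes:attach"},
}

def effective_scopes(role: str,
                     grants: frozenset[str] = frozenset(),
                     denials: frozenset[str] = frozenset()) -> set[str]:
    # role bundle, plus per-member grants, minus per-member denials
    return (ROLE_BUNDLES[role] | set(grants)) - set(denials)

# A viewer granted one extra scope
extra = effective_scopes("viewer", grants=frozenset({"volumes:attach"}))

# A member explicitly denied write access
restricted = effective_scopes("member", denials=frozenset({"sandboxes:write"}))
```

Because the dashboard, API, and CLI all call the same resolution, a denial applied here holds everywhere at once.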

The API

POST a box. Mount a volume. Snapshot it.

REST. Bearer auth. Per-key scopes — grant sandboxes:read, sandboxes:write, volumes:attach à la carte. The CLI and SDKs are thin wrappers around the same endpoints. Switch between them mid-script.

REST · v1 · api.froglet.sh · curl
# Spawn an Ubuntu 24.04 box, size s (2 vCPU · 2 GB · 20 GB)
$ curl https://api.froglet.sh/v1/sandboxes \
    -H "Authorization: Bearer fl_..." \
    -d '{ "template": "ubuntu-24.04", "size": "s" }'
{ "id": "sb_a7f3c2", "slug": "calm-otter-2412", "state": "provisioning", "ssh": "ssh root@calm-otter-2412.froglet.sh", "ports": [] }
# Hot-attach a shared volume — no reboot, mount appears at /models
$ curl -X POST https://api.froglet.sh/v1/sandboxes/sb_a7f3c2/volumes \
    -H "Authorization: Bearer fl_..." \
    -d '{ "volume_id": "vol_models", "mount_path": "/models" }'
{ "attached": true, "tag": "models", "elapsedMs": 127 }
# Run something inside it
$ curl https://api.froglet.sh/v1/sandboxes/sb_a7f3c2/exec \
    -H "Authorization: Bearer fl_..." \
    -d '{ "cmd": "pip install vllm && python serve.py" }'
{ "exitCode": 0, "durationMs": 8423 }
# Snapshot it, stop it, walk away
$ curl -X POST https://api.froglet.sh/v1/sandboxes/sb_a7f3c2/snapshot \
    -H "Authorization: Bearer fl_..." \
    -d '{ "label": "pre-deploy" }'
{ "snapshotId": "snap_8c1d", "sizeBytes": 192937984, "hash": "sha256:9fb1…" }
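Because the SDKs are thin wrappers, a client is mostly request construction. Here is a minimal Python sketch that builds (but does not send) the same POST as the first curl example; the endpoint path and header shapes are taken from the examples above, and the helper name is ours:

```python
import json
from urllib.request import Request

API = "https://api.froglet.sh/v1"

def build_request(token: str, path: str, payload: dict) -> Request:
    """Assemble a Bearer-authenticated JSON POST for the v1 API."""
    return Request(
        f"{API}{path}",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("fl_...", "/sandboxes",
                    {"template": "ubuntu-24.04", "size": "s"})
```

Sending it is one `urllib.request.urlopen(req)` away; swapping the path and payload gives you the volume, exec, and snapshot calls.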
90 seconds from now

Get a sandbox.

Free during beta. No card. SSH the moment it boots — your first box is ready before your terminal stops blinking.

$ npm i -g @froglet/cli