Plan S - Starter
- VRAM: 24 GB
- RAM: 32 GB
- vCPU: 8
- Storage: 100 GB
Spin up A100, H100, H200 and L40 GPUs in seconds. Deploy inference endpoints with per-token billing. All on EU-based infrastructure, with pricing visible from the first click.
Operational truths
Same hardware you would rent from a hyperscaler, at a fraction of the complexity. Every plan ships with a browser-ready JupyterLab and SSH access.
Set minimum VRAM, CPU, RAM, storage and a price ceiling. We find the best available fit without the catalog dance.
No quotas, no tickets, no sales calls. Self-serve from the first click.
Browse the catalog or filter by VRAM, CPU, RAM, storage and price ceiling. See the €/hour before you commit.
A clean provisioning flow without hidden steps. Your environment is reachable from the browser in under a minute, with SSH and JupyterLab pre-wired.
Live metrics, visible cost and a dashboard that tells you exactly what's running. Per-minute billing, stop any time.
```shell
$ curl -X POST https://api.clodei.com/v1/instances \
    -H "Authorization: Bearer $CLODEI_TOKEN" \
    -d '{"tier":"L","region":"eu-west","duration_h":2}'
```

```json
{
  "instance_id": "i-a1b2c3d4",
  "status": "provisioning",
  "jupyter_url": "https://i-a1b2c3d4.clodei.com/lab",
  "price_per_hour": "€0.45",
  "estimated_ready_at": "2026-04-11T14:32:17Z"
}
```
Automate everything with a REST API, a Python SDK and webhooks. No dashboards required once your workflow is dialled in.
Full CRUD over instances, plans, metrics and billing.
Typed client with async support and streaming logs.
Subscribe to instance lifecycle, billing events and alerts.
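The create-then-poll flow behind the curl example above can be scripted end to end. Below is a minimal sketch using only the Python standard library: the `POST /v1/instances` payload mirrors the documented example, while the `GET /v1/instances/{id}` polling route, the 5-second poll interval, and the status values other than `provisioning` are assumptions for illustration, not documented API.

```python
import json
import os
import time
import urllib.request

API = "https://api.clodei.com/v1"


def _auth_headers() -> dict:
    # Token comes from the environment, as in the curl example.
    return {
        "Authorization": f"Bearer {os.environ.get('CLODEI_TOKEN', '')}",
        "Content-Type": "application/json",
    }


def create_instance_request(tier: str, region: str, duration_h: int) -> urllib.request.Request:
    """Build the POST /v1/instances request shown in the curl example."""
    body = json.dumps({"tier": tier, "region": region, "duration_h": duration_h}).encode()
    return urllib.request.Request(
        f"{API}/instances", data=body, headers=_auth_headers(), method="POST"
    )


def launch_and_wait(tier: str = "L", region: str = "eu-west",
                    duration_h: int = 2, timeout_s: int = 120) -> dict:
    """Create an instance, then poll it (assumed GET route) until it leaves 'provisioning'."""
    with urllib.request.urlopen(create_instance_request(tier, region, duration_h)) as resp:
        created = json.load(resp)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            f"{API}/instances/{created['instance_id']}", headers=_auth_headers()
        )
        with urllib.request.urlopen(req) as resp:
            info = json.load(resp)
        if info.get("status") != "provisioning":
            return info  # instance record, including jupyter_url
        time.sleep(5)
    raise TimeoutError(f"{created['instance_id']} still provisioning after {timeout_s}s")
```

Calling `launch_and_wait()` would return the instance record with its `jupyter_url` ready to open; in practice you would use the Python SDK rather than raw HTTP.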
Whether you're fine-tuning a 70B model or running a weekend image-gen project, Clodei gives you the right GPU for the job.
Multi-GPU training runs with fast interconnect. Bring your own dataset and checkpoint.
vLLM-backed serving with continuous batching. Scale to zero when idle.
Stable Diffusion, SDXL, Flux and friends. Low-latency GPUs with plenty of VRAM.
Notebook-ready environments for ML research, numerical simulations and scientific compute.
Real on-demand prices verified against each provider's public pricing page. Clodei's rate is live from our backend.
| Provider | GPU | Price | Provision | EU | MIG | Per-min |
|---|---|---|---|---|---|---|
| Clodei (live) | A100 80GB (lowest €/h) | €1.69/h | ~60s | Yes | Yes | Yes |
| AWS | p4de.24xlarge (A100 80GB, per-GPU pro-rata) | €4.62/h | ~3–5 min | Yes | No | No |
| Lambda Labs | gpu_1x_a100_sxm4 (A100 80GB) | €1.69/h | ~2 min | No | No | Yes |
| RunPod | A100 80GB SXM (Community) | €1.89/h | ~1–2 min | No | No | Yes |
| CoreWeave | A100 80GB SXM (on-demand) | €2.07/h | ~2 min | Yes | No | Yes |
Competitor rates reflect published on-demand pricing for a single NVIDIA A100 80GB equivalent as of the verification date. The Clodei rate is fetched live from our backend.
Every decision on Clodei is documented, auditable and explained in plain language. No surprises.
Infrastructure in European data centres with GDPR-native terms. No offshore data transfers unless you explicitly opt in.
MIG-partitioned GPUs, Cloudflare Access for SSH, encrypted volumes. Your workloads never share a kernel with another tenant.
Per-minute billing, no egress fees, and a live cost widget on every instance. Know what you're paying before and after launch.
Still unclear? Reach us at hello@clodei.com — a real human replies.
Per-minute billing with a 1-minute minimum. The €/hour rate shown in the catalog is the rate you pay, pro-rated. No egress fees, no hidden platform surcharges.
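The pro-rated math above is easy to sanity-check. A short sketch, using the €1.69/h A100 rate from the comparison table as an example figure:

```python
def prorated_cost(rate_per_hour: float, minutes: int) -> float:
    """Per-minute billing with a 1-minute minimum, pro-rated from the hourly rate."""
    billed_minutes = max(1, minutes)  # 1-minute minimum
    return round(rate_per_hour * billed_minutes / 60, 2)

# A 37-minute run on a €1.69/h A100 bills about €1.04, not a full hour.
print(prorated_cost(1.69, 37))  # → 1.04
```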
Most instances are reachable from the browser in under 60 seconds. If the underlying pool is warm, the same request lands in under 20 seconds.
Yes. Every instance ships with a pre-configured JupyterLab accessible from the browser. You also get SSH access as the `clodei` user, with your configured SSH keys.
Today we run in EU-West. Additional EU regions are on the roadmap. The inference product will launch EU-only first.
Single-instance availability is best-effort today. We publish monthly uptime numbers on status.clodei.com, and a formal SLA is coming with the enterprise tier.
We're rolling it out to waitlist members during Q3 2026. Join the waitlist below and we'll reach out when it's your turn.
Pay per 1M tokens, not per GPU hour. vLLM under the hood, MIG isolation, an OpenAI-compatible API. Built for teams that don't want to run their own inference cluster.
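Because the endpoint is OpenAI-compatible, any OpenAI-style client can talk to it by pointing at Clodei's base URL. Below is a minimal standard-library sketch of the `/chat/completions` call; the `inference.clodei.com` base URL and the model name are placeholder assumptions, not documented values.

```python
import json
import os
import urllib.request

# Assumed base URL for illustration; the real inference endpoint may differ.
BASE = "https://inference.clodei.com/v1"


def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible POST /chat/completions request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('CLODEI_TOKEN', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def ask(model: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(chat_request(model, prompt)) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]
```

Equally, the official `openai` Python client works unchanged once its `base_url` is set to the Clodei endpoint, which is the point of an OpenAI-compatible API.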