On-demand GPU & AI cloud

Your AI infrastructure,
on-demand.

Spin up A100, H100, H200 and L40 GPUs in seconds. Deploy inference endpoints with per-token billing. All on EU-based infrastructure, with pricing visible from the first click.

Operational truths

Uptime
99.9%
Provisioning
~60s
Region
EU-based
Egress
Zero fees
GPU catalog

Pick a tier or build your own.

Same hardware you would rent from a hyperscaler, at a fraction of the complexity. Every plan ships with a browser-ready JupyterLab and SSH access.

Starter

Plan S - Starter

VRAM
24 GB
RAM
32 GB
vCPU
8 vCPU
Storage
100 GB
From
€0.38/hour
Launch
Standard

Plan M - Standard

VRAM
48 GB
RAM
64 GB
vCPU
16 vCPU
Storage
200 GB
From
€0.63/hour
Launch
Power

Plan L - Professional

VRAM
80 GB
RAM
128 GB
vCPU
32 vCPU
Storage
500 GB
From
€1.69/hour
Launch
Max

Plan XL - Enterprise

VRAM
160 GB
RAM
256 GB
vCPU
64 vCPU
Storage
1000 GB
From
€3.38/hour
Launch
Custom

Need something else?

Set minimum VRAM, CPU, RAM, storage and a price ceiling. We find the best available fit without the catalog dance.

Configure
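The custom configurator described above amounts to a constraint match over the catalog: cheapest plan that satisfies every minimum and stays under the ceiling. A minimal sketch of that selection logic (the plan data mirrors the public catalog on this page; the function and field names are illustrative, not Clodei's API):

```python
# Illustrative sketch: pick the cheapest catalog plan that meets the
# requested minimums for VRAM/RAM/vCPU/storage and a price ceiling.
# Plan figures are taken from the catalog shown above.
CATALOG = [
    {"plan": "S",  "vram": 24,  "ram": 32,  "vcpu": 8,  "storage": 100,  "eur_per_h": 0.38},
    {"plan": "M",  "vram": 48,  "ram": 64,  "vcpu": 16, "storage": 200,  "eur_per_h": 0.63},
    {"plan": "L",  "vram": 80,  "ram": 128, "vcpu": 32, "storage": 500,  "eur_per_h": 1.69},
    {"plan": "XL", "vram": 160, "ram": 256, "vcpu": 64, "storage": 1000, "eur_per_h": 3.38},
]

def best_fit(min_vram=0, min_ram=0, min_vcpu=0, min_storage=0,
             max_eur_per_h=float("inf")):
    """Return the cheapest plan satisfying every minimum, or None."""
    candidates = [
        p for p in CATALOG
        if p["vram"] >= min_vram and p["ram"] >= min_ram
        and p["vcpu"] >= min_vcpu and p["storage"] >= min_storage
        and p["eur_per_h"] <= max_eur_per_h
    ]
    return min(candidates, key=lambda p: p["eur_per_h"], default=None)
```

For example, `best_fit(min_vram=48, max_eur_per_h=1.00)` lands on Plan M, and a request no plan can satisfy returns `None` rather than over-provisioning.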
How it works

From idea to running job in three steps.

No quotas, no tickets, no sales calls. Self-serve from the first click.

01

Pick your GPU

Browse the catalog or use filters for VRAM, CPU, RAM, storage and a price ceiling. See the €/hour before you commit.

02

Launch in seconds

A clean provisioning flow without hidden steps. Your environment is reachable from the browser in under a minute, with SSH and JupyterLab pre-wired.

03

Run and monitor

Live metrics, visible cost and a dashboard that tells you exactly what's running. Per-minute billing, stop any time.

clodei — deploy.sh
$ curl -X POST https://api.clodei.com/v1/instances \
    -H "Authorization: Bearer $CLODEI_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"tier":"L","region":"eu-west","duration_h":2}'
201 Created
{
  "instance_id": "i-a1b2c3d4",
  "status": "provisioning",
  "jupyter_url": "https://i-a1b2c3d4.clodei.com/lab",
  "price_per_hour": "€1.69",
  "estimated_ready_at": "2026-04-11T14:32:17Z"
}
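After the create call, a client typically polls the instance until it leaves `provisioning`. A hedged Python sketch, using only the standard library — the `GET /instances/{id}` path and the `"running"` status value are inferred from the example response above, not a documented contract:

```python
import json
import time
import urllib.request

API = "https://api.clodei.com/v1"  # base URL from the curl example above

def is_ready(instance: dict) -> bool:
    """An instance is usable once it reports 'running' (assumed status value)."""
    return instance.get("status") == "running"

def wait_for_instance(instance_id: str, token: str, timeout_s: int = 120) -> dict:
    """Poll GET /instances/{id} (assumed endpoint) until ready or timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            f"{API}/instances/{instance_id}",
            headers={"Authorization": f"Bearer {token}"},
        )
        with urllib.request.urlopen(req) as resp:
            instance = json.load(resp)
        if is_ready(instance):
            return instance
        time.sleep(5)  # provisioning usually lands well inside a minute
    raise TimeoutError(f"{instance_id} not ready after {timeout_s}s")
```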
Developer-first

Deploy programmatically.

Automate everything with a REST API, a Python SDK and webhooks. No dashboards required once your workflow is dialled in.

  • REST API

    Full CRUD over instances, plans, metrics and billing.

  • Python SDK

    Typed client with async support and streaming logs.

  • Webhooks

    Subscribe to instance lifecycle, billing events and alerts.

Read the API docs
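A webhook consumer should verify a signature before trusting an event. The sketch below assumes a common convention (HMAC-SHA256 over the raw body, hex-encoded) and illustrative event names like `instance.ready` — Clodei's webhook docs would define the actual header and signing scheme:

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, signature: str) -> bool:
    """Check an HMAC-SHA256 webhook signature (scheme is an assumption)."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def handle_event(event: dict) -> str:
    """Route a lifecycle or billing event (event type names are illustrative)."""
    kind = event.get("type", "")
    if kind.startswith("instance."):
        return f"instance {event['data']['instance_id']}: {kind}"
    if kind.startswith("billing."):
        return f"billing event: {kind}"
    return "ignored"
```

`hmac.compare_digest` is used instead of `==` so the comparison runs in constant time, which avoids leaking the signature through timing differences.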
Workloads

Built for the work you actually do.

Whether you're fine-tuning a 70B model or running a weekend image-gen project, Clodei gives you the right GPU for the job.

LLM training & fine-tuning

Multi-GPU training runs with fast interconnect. Bring your own dataset and checkpoint.

High-throughput inference

vLLM-backed serving with continuous batching. Scale to zero when idle.

Image & video generation

Stable Diffusion, SDXL, Flux and friends. Low-latency GPUs with plenty of VRAM.

Research & experimentation

Notebook-ready environments for ML research, numerical simulations and scientific compute.

How we stack up

Same hardware. Better everything else.

Real on-demand prices, verified against each provider's public pricing page. Clodei's rate is fetched live from our backend.

Prices verified on April 11, 2026
Provider      | GPU                                         | Price   | Provision | EU  | MIG | Per-min
Clodei (live) | A100 80GB (lowest €/h)                      | €1.69/h | ~60s      | Yes | Yes | Yes
AWS           | p4de.24xlarge (A100 80GB, per-GPU pro-rata) | €4.62/h | ~3–5 min  | Yes | No  | No
Lambda Labs   | gpu_1x_a100_sxm4 (A100 80GB)                | €1.69/h | ~2 min    | No  | No  | Yes
RunPod        | A100 80GB SXM (Community)                   | €1.89/h | ~1–2 min  | No  | No  | Yes
CoreWeave     | A100 80GB SXM (on-demand)                   | €2.07/h | ~2 min    | Yes | No  | Yes

Competitor rates reflect published on-demand pricing for a single NVIDIA A100 80GB equivalent as of the verification date. The Clodei rate is fetched live from our backend.

Built on trust

EU-hosted. GDPR-native. Transparent by design.

Every decision on Clodei is documented, auditable and explained in plain language. No surprises.

EU data sovereignty

Infrastructure in European data centres with GDPR-native terms. No offshore data transfers unless you explicitly opt in.

Per-tenant isolation

MIG-partitioned GPUs, Cloudflare Access for SSH, encrypted volumes. Your workloads never share a kernel with another tenant.

Transparent billing

Per-minute billing, no egress fees, and a live cost widget on every instance. Know what you're paying before and after launch.

FAQ

Questions answered, plainly.

Still unclear? Reach us at hello@clodei.com — a real human replies.

How does billing work?

Per-minute billing with a 1-minute minimum. The €/hour rate shown in the catalog is the rate you pay, pro-rated. No egress fees, no hidden platform surcharges.
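The pro-rating above is simple enough to sketch. One assumption: the FAQ states per-minute granularity and a 1-minute floor but not how partial minutes round, so this sketch rounds them up:

```python
import math

def cost_eur(rate_per_hour: float, runtime_s: int) -> float:
    """Per-minute billing with a 1-minute minimum.

    Rounding partial minutes up is an assumption; the page only states
    per-minute granularity and the 1-minute floor.
    """
    minutes = max(1, math.ceil(runtime_s / 60))
    return round(rate_per_hour * minutes / 60, 2)
```

For example, 47 minutes on Plan M (€0.63/hour) costs €0.49, and a 10-second run on Plan L (€1.69/hour) bills the 1-minute minimum, about €0.03.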

How fast is provisioning?

Most instances are reachable from the browser in under 60 seconds. If the underlying pool is warm, the same request lands in under 20 seconds.

Do I get JupyterLab and SSH access?

Yes. Every instance ships with a pre-configured JupyterLab accessible from the browser. You also get SSH access with a clodei user and your configured SSH keys.

Which regions do you run in?

Today we run in EU-West. Additional EU regions are on the roadmap. The inference product will launch EU-only first.

Is there an SLA?

Single-instance availability is best-effort today. We publish monthly uptime numbers on status.clodei.com, and a formal SLA is coming with the enterprise tier.

When does serverless inference launch?

We're rolling it out to waitlist members during Q3 2026. Join the waitlist below and we'll reach out when it's your turn.

Coming soon

Serverless inference, EU-native.

Pay per 1M tokens, not per GPU hour. vLLM under the hood, MIG isolation, an OpenAI-compatible API. Built for teams that don't want to run their own inference cluster.

  • Pay per 1M tokens
  • MIG-isolated
  • OpenAI-compatible
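"OpenAI-compatible" means existing client code only needs a different base URL. A hedged standard-library sketch that builds (but does not send) a chat-completions request — the inference hostname and model name are placeholders, since the product has not launched; the `/v1/chat/completions` body shape is what OpenAI compatibility implies:

```python
import json
import urllib.request

# Placeholder endpoint — the real inference URL is not yet announced.
INFERENCE_URL = "https://inference.clodei.com/v1/chat/completions"

def chat_request(model: str, prompt: str, token: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request (not sent here)."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        INFERENCE_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Because the shape matches OpenAI's, off-the-shelf clients should also work by pointing their base URL at the Clodei endpoint.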

Your email is stored to contact you about inference availability. No marketing emails.

Clodei

On-demand GPU & AI cloud for engineers who ship. EU-hosted, per-minute billing, visible pricing.

All systems operational

© 2026 Clodei. All rights reserved.