Last updated 2026-05-10

Privacy Policy

Dynoyard runs your AI workloads on cloud GPUs. Your prompts and the model's completions are processed by your inference endpoint, running on hardware Dynoyard provisions on your behalf. Depending on your tier, that endpoint may be shared or dedicated (see Multi-tenancy below).

What we collect

What we don't do

Where your data lives

Inference endpoints run on cloud GPUs we provision (TensorDock, Hyperstack, RunPod) in regions you select (e.g. eu-fra, us-east). The control plane runs on Railway. The customer-facing edge router runs on Cloudflare Workers. All data in transit is TLS-encrypted.

Multi-tenancy

Hobby and Standard tiers run on shared inference pools: multiple customers share one GPU instance via vLLM continuous batching. Authentication is per-app, and your sk-dyno-... key never reaches the upstream model. Prompts from different customers never mix in the model's context window.

Pro tier uses a 5-customer mini-pool with the same isolation guarantees. Performance tier is dedicated hardware — no neighbors.
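The per-app authentication described above can be pictured as a key-swapping step at the edge router: the customer's key is validated, then replaced with a pool-internal credential before the request is forwarded upstream. This is an illustrative sketch only, not Dynoyard's actual code; the names `forwardHeaders` and `POOL_KEY` are invented for the example.

```typescript
// Hypothetical pool-scoped secret; in practice this would come from a secret store.
const POOL_KEY = "internal-pool-credential";

// Build the headers forwarded upstream. The customer's sk-dyno-... key is
// validated here at the edge and swapped out, so it never reaches the model server.
function forwardHeaders(incoming: Record<string, string>): Record<string, string> {
  const auth = incoming["authorization"] ?? "";
  if (!auth.startsWith("Bearer sk-dyno-")) {
    throw new Error("invalid API key");
  }
  // ...per-app key lookup and rate limiting would happen here...
  const out = { ...incoming };
  out["authorization"] = `Bearer ${POOL_KEY}`; // internal credential replaces the customer key
  return out;
}
```

The upstream model only ever sees the internal pool credential, which is how one shared vLLM instance can serve many apps without any customer secret crossing the tenant boundary.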

Retention

Subprocessors

We use the following subprocessors:

- TensorDock, Hyperstack, and RunPod (GPU hosting for inference endpoints)
- Railway (control plane hosting)
- Cloudflare (edge routing via Workers)

Compliance

A SOC 2 Type I report is in progress (target: Q3 2026). We can sign a Data Processing Agreement (DPA) on request; email [email protected].

Your rights

You can export, correct, or delete your data at any time. Contact us at [email protected] for any privacy request — we respond within 7 days.

Changes

We'll email you 30 days before any material change. The current version is always at dynoyard.app/privacy.

Questions? [email protected].