Deploy secure applications with hardware-guaranteed privacy using TEE technology. Built for confidential AI, private cloud compute, and secure data processing.
Docker support means no code changes required. Package your existing applications and deploy them securely in minutes.
Purpose-built for confidential AI with TEE GPU support. Run private AI models on NVIDIA GPUs with hardware-guaranteed confidentiality.
Open source, independently audited by security experts, with secure services out-of-the-box.
Every application comes with cryptographic attestation and a Trust Center for real-time verification.
Deploy confidential applications with just a few clicks - no complex setup required
Copy your existing docker-compose.yml file - no modifications needed

Select your TEE hardware and deployment options

Your app runs with hardware-guaranteed confidentiality
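The deployment steps above start from a standard Compose file. A minimal docker-compose.yml sketch (the service name, image, and port here are placeholders, not requirements):

```yaml
# Minimal docker-compose.yml sketch; service name and image are placeholders.
services:
  web:
    image: myorg/myapp:latest   # your existing, unmodified application image
    ports:
      - "8080:8080"
    environment:
      - LOG_LEVEL=info
```

The same file you run locally with `docker compose up` is what gets deployed into the TEE, which is why no code changes are needed.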

Complete Transparency. Every deployed application comes with a comprehensive Trust Center for full verification.
Review the exact source code running in your TEE environment
Detailed specs of the TEE hardware running your application
Complete network topology and security settings
Cryptographic proof of execution environment integrity

Independent security audit by the zkSecurity team. Review our comprehensive security analysis and recommendations.


zkSecurity Team Audit
Comprehensive security analysis and vulnerability assessment
Compare dstack with traditional cloud providers and other confidential computing solutions
dstack: Open-source confidential computing
AWS / GCP / Azure: Cloud providers
Confidential Containers: CNCF project
Others: Alternative solutions
Everything you need to know about confidential orchestration
dstack is Phala's confidential orchestration layer — it manages GPU runners, scheduling, and attestation for AI workloads. Think of it as a trustless Kubernetes that can launch and verify fine-tuning, inference, or agent jobs across Phala's decentralized GPU network.
Traditional orchestrators (like Docker Swarm or Kubernetes) manage containers but can't prove what actually ran. dstack extends orchestration with TEE attestation, encrypted storage, and verifiable job logs — so every AI job can be proven to run securely and untampered.
dstack supports any containerized AI workload — from fine-tuning LLMs (PyTorch / TensorFlow) to serving inference APIs or autonomous agents. It handles both CPU and GPU nodes (H100 / H200 / A100 / A10) and integrates with Phala's confidential compute runtime for full isolation.
Yes. dstack's core runner and job orchestration framework are open source and being upstreamed to the Phala ecosystem. Developers can self-host it or use Phala Cloud's managed control plane.
You can start any containerized workload with a single command or YAML file. For example: dstack run --gpu H200 --image unslothai/unsloth:latest --mount data:/mnt/data train.py. This spins up a verified GPU node, mounts your encrypted dataset, and runs inside a TEE.
Yes. dstack is framework-agnostic — it runs jobs that use PyTorch, TensorFlow, JAX, or even custom CUDA code. It automatically configures environment variables for CUDA, NCCL, and secure communication between GPUs.
Absolutely. Any OCI-compatible container can be used. You can publish your own image (with dependencies, model code, etc.) and dstack will pull and execute it inside an enclave. Images can also be signed and verified to prevent tampering.
All data volumes and environment secrets are encrypted. Decryption keys are only unsealed inside the TEE after successful remote attestation. This ensures that neither operators nor other workloads can access your raw data.
Every dstack job produces a Phala Attestation Bundle — a JSON report signed by the enclave hardware. It includes the code hash, image ID, node signature, and timestamps. You can verify it programmatically or attach it to your output artifacts for auditing.
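The exact bundle schema isn't documented here, so the field names below (`code_hash`, `image_id`, `node_signature`, `timestamp`) are assumptions based on the contents listed above. A minimal programmatic check might look like:

```python
import hashlib
import json

def verify_bundle(bundle_json: str, expected_image_id: str, code: bytes) -> bool:
    """Sketch of checking a (hypothetical) attestation bundle.

    Field names are assumptions; real verification would also validate the
    enclave hardware signature chain, which is elided here.
    """
    bundle = json.loads(bundle_json)
    # Recompute the hash of the code we believe ran and compare.
    if bundle["code_hash"] != hashlib.sha256(code).hexdigest():
        return False
    # Pin the container image the job was supposed to use.
    if bundle["image_id"] != expected_image_id:
        return False
    # Signature and timestamp verification elided in this sketch.
    return "node_signature" in bundle and "timestamp" in bundle

# Self-consistent sample bundle for illustration:
code = b"print('train')"
sample = json.dumps({
    "code_hash": hashlib.sha256(code).hexdigest(),
    "image_id": "sha256:abc123",
    "node_signature": "...",
    "timestamp": "2024-01-01T00:00:00Z",
})
print(verify_bundle(sample, "sha256:abc123", code))  # True
```

Attaching the bundle next to output artifacts lets an auditor re-run this kind of check later without access to the original node.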
Yes. dstack supports distributed GPU training and multi-node clusters using PyTorch DDP or TensorFlow's collective ops. It automatically provisions encrypted interconnects between enclaves, so gradients never leave the secure boundary.
dstack checkpoints encrypted model states periodically. If a node fails, a new enclave can resume from the last checkpoint — preserving both progress and confidentiality.
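The resume logic can be sketched as a toy illustration. In dstack the serialized state would be encrypted before leaving the enclave and unsealed only inside the replacement enclave; encryption is elided here and state is plain JSON:

```python
import json
import os
import tempfile
from typing import Optional

def save_checkpoint(dir_path: str, step: int, state: dict) -> None:
    # Real checkpoints would be encrypted model states; elided in this sketch.
    os.makedirs(dir_path, exist_ok=True)
    with open(os.path.join(dir_path, f"ckpt-{step:08d}.json"), "w") as f:
        json.dump({"step": step, "state": state}, f)

def resume_latest(dir_path: str) -> Optional[dict]:
    # A replacement enclave lists checkpoints and resumes from the newest;
    # zero-padded step numbers make lexicographic sort equal to numeric sort.
    ckpts = sorted(f for f in os.listdir(dir_path) if f.startswith("ckpt-"))
    if not ckpts:
        return None
    with open(os.path.join(dir_path, ckpts[-1])) as f:
        return json.load(f)

with tempfile.TemporaryDirectory() as d:
    save_checkpoint(d, 100, {"loss": 0.9})
    save_checkpoint(d, 200, {"loss": 0.5})
    latest = resume_latest(d)
    print(latest["step"])  # 200
```

The key design point is that progress and confidentiality are preserved together: the new enclave must pass attestation before it can unseal the checkpoint it resumes from.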
dstack streams secure logs to your dashboard, but filters sensitive data. You can see real-time metrics (GPU usage, loss curves, throughput) while ensuring no raw dataset or intermediate tensor is ever exposed.
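As an illustration of that kind of filtering (the key names below are hypothetical, not dstack's actual redaction rules), metrics pass through while payload-bearing fields are masked:

```python
# Hypothetical field names that could leak raw data or secrets.
SENSITIVE_KEYS = {"dataset_row", "tensor", "api_key"}

def redact(record: dict) -> dict:
    """Mask sensitive fields; keep numeric training metrics untouched."""
    return {k: ("<redacted>" if k in SENSITIVE_KEYS else v)
            for k, v in record.items()}

record = {"step": 42, "loss": 0.31, "gpu_util": 0.97, "tensor": [0.1, 0.2]}
print(redact(record))  # 'tensor' becomes '<redacted>'; metrics unchanged
```

Filtering at the enclave boundary, before logs reach the dashboard, is what keeps intermediate tensors inside the TEE.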
Join the open-source community building the future of secure computing. Get started with confidential AI and private cloud applications.