Centralized training infrastructure exposes datasets and model IP. Phala enables consortium learning and regulated-industry training with hardware isolation.

Traditional cloud infrastructure exposes sensitive information to operators and administrators.
Hardware-enforced isolation prevents unauthorized access while maintaining computational efficiency.
End-to-end encryption protects data in transit, at rest, and, critically, during computation.
Cryptographic verification ensures code integrity and proves execution in genuine TEE hardware.
Load training datasets directly from private sources inside TEEs. Your sensitive data never leaves the secure enclave during the entire training pipeline.
Training gradients stay encrypted end-to-end. Hardware attestation proves your model updates never leaked.
Every training run generates cryptographic proof that your data remained confidential throughout the process.
Train on distributed TEE clusters worldwide. Scale your confidential training workloads across secure data centers with hardware-level isolation.
Launch distributed pre-training jobs on confidential GPU clusters. Slurm and Kubernetes templates with TEE attestation and sealed checkpoint storage.
# Deploy distributed training on a TEE cluster
docker run -d \
  --name phala-training \
  --gpus all \
  --device=/dev/tdx_guest \
  -v $(pwd)/data:/data \
  -v $(pwd)/checkpoints:/checkpoints \
  -e WORLD_SIZE=8 \
  -e RANK=0 \
  -e MASTER_ADDR=10.0.1.100 \
  -e MASTER_PORT=29500 \
  -e MODEL_CONFIG=/data/llama-70b.json \
  -e TRAINING_DATA='/data/consortium/*.jsonl' \
  -e CHECKPOINT_DIR=/checkpoints \
  phalanetwork/training:latest
# Monitor training progress
docker logs -f phala-training
# Training output from sealed environment
# Epoch 1/10: Loss 2.134 | Throughput 1.2M tok/s
# Epoch 2/10: Loss 1.876 | Throughput 1.2M tok/s
# Checkpoint saved: /checkpoints/epoch-2.bin
# Attestation signed: 0x8a9b7c6d...

Generate cryptographic proofs of your training process. Verify cluster attestation, dataset hashes, and reproducible build IDs for auditors and consortium partners.
# Get cluster attestation and training lineage
curl -X POST https://cloud-api.phala.network/api/v1/training/verify \
  -H "Content-Type: application/json" \
  -d '{
    "job_id": "train-consortium-llama-70b",
    "verify_cluster_attestation": true,
    "verify_dataset_hashes": true,
    "verify_checkpoint_lineage": true
  }'
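To cross-check the dataset_hashes field in the verifier's response, you can compute the SHA-256 digest of each local shard before upload. A minimal sketch; the shard filename and the 0x prefix convention are illustrative:

```shell
# Create a sample shard (illustrative filename), then hash it.
printf 'example training record\n' > partner-a.jsonl

# sha256sum prints "<hex digest>  <filename>"; keep only the digest
# and add the 0x prefix used in the attestation report.
sha256sum partner-a.jsonl | awk '{print "0x" $1}'
```

Comparing this digest with the corresponding entry in dataset_hashes confirms the enclave trained on exactly the shard you supplied.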
# Attestation proves sealed training
{
  "verified": true,
  "cluster_size": 8,
  "tee_type": "Intel TDX",
  "dataset_hashes": [
    "0x8a9b7c6d...",
    "0x1a2b3c4d..."
  ],
  "checkpoint_lineage": "llama-70b-base -> epoch-10.bin",
  "reproducible_build_id": "0xfe7d8c9b...",
  "timestamp": "2025-01-15T14:30:00Z"
}

Meeting the highest compliance requirements for your business
Discover how Phala Network enables privacy-preserving AI across different use cases
Everything you need to know about Confidential Training
How much performance overhead does TEE training add?
GPU TEE overhead is typically 5-15% compared to bare metal. Memory encryption happens at hardware speed with Intel TDX/AMD SEV, and a high-speed RDMA interconnect keeps gradient synchronization efficient even across encrypted enclaves.
How much does confidential training cost?
TEE infrastructure adds a 10-20% premium over standard GPU instances. However, consortium learning splits costs across partners while each party maintains data custody, which is often more economical than each party training separately.
How is data isolated between consortium partners?
Each partner's data stays in separate sealed storage. The training orchestrator coordinates gradient updates without exposing raw data across parties, and remote attestation proves proper isolation before any party sends datasets.
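One way to realize that layout with the container from the example above is one read-only mount per partner; the partner names and host paths here are illustrative:

```shell
# Each partner's sealed dataset is a separate read-only volume.
# Inside the TEE the orchestrator can read both mounts, but host
# operators cannot inspect enclave memory. (Paths are illustrative.)
docker run -d \
  --name phala-training-consortium \
  --gpus all \
  --device=/dev/tdx_guest \
  -v /sealed/partner-a:/data/partner-a:ro \
  -v /sealed/partner-b:/data/partner-b:ro \
  -e TRAINING_DATA='/data/partner-*/*.jsonl' \
  phalanetwork/training:latest
```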
Can operators or administrators access my data during training?
No. Hardware memory encryption in TEEs prevents any operator access to runtime state. Gradients are computed and synchronized inside encrypted enclaves, with cryptographic proofs of isolation.
How are gradients protected from leaking information?
Gradients are computed inside TEEs and never leave in plaintext. Differential privacy techniques can be applied within the enclave, and only final model checkpoints are exported, with a signed attestation lineage.
Which parallelism strategies are supported?
Tensor parallelism, data parallelism, pipeline parallelism, and hybrid strategies. Phala's confidential scheduler supports PyTorch FSDP, DeepSpeed, and Megatron-LM inside TEE clusters.
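For the data-parallel case, a standard torchrun launch matches the WORLD_SIZE=8 setting from the container example (shown here as a single node with eight GPUs; train.py stands in for your own FSDP- or DeepSpeed-enabled script):

```shell
# One process per GPU; total world size 8, matching the example above.
torchrun \
  --nnodes=1 \
  --nproc_per_node=8 \
  --master_addr=10.0.1.100 \
  --master_port=29500 \
  train.py --config /data/llama-70b.json
```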
Do I need to modify my training code?
Only minimally: wrap your training code in our confidential container and configure attestation policies. Standard frameworks (PyTorch, TensorFlow, JAX) run as-is inside TEEs.
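A minimal wrapping pattern under those assumptions: the unmodified script is mounted into the confidential container, and attestation runs at container start, before the script does (the entrypoint command is illustrative):

```shell
# Mount unmodified training code read-only into the TEE container.
docker run --rm \
  --gpus all \
  --device=/dev/tdx_guest \
  -v $(pwd)/train.py:/app/train.py:ro \
  phalanetwork/training:latest \
  python /app/train.py
```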
Can I monitor training without exposing sensitive data?
Yes. Enclave-safe telemetry exports training metrics without exposing sensitive data. TensorBoard and Weights & Biases integrations are available, with differential privacy filters for published metrics.
Train large-scale AI models on sensitive datasets with multi-GPU TEE clusters and hardware-enforced encryption.
Deploy on Phala