Private Inference
Privasys · AI/ML
TDX Container
Attestation verified
About
Private Inference runs large language models and retrieval pipelines inside Confidential VMs with hardware-encrypted memory. Model weights and user data never leave the trust boundary. Clients verify the inference service through a standard RA-TLS connection.
Verify this application
Connect to this application using any of our RA-TLS verification libraries. The attestation evidence is embedded in the TLS certificate and verified during the standard handshake.
# Python example
from ratls_client import RaTlsClient

# The attestation evidence embedded in the server's certificate is
# verified during the TLS handshake; the connection fails if the
# evidence does not check out.
client = RaTlsClient("service.example.com", 443)
response = client.get("/")
Source code
This application is open source. Inspect the code, audit the builds, and verify that what runs inside the enclave matches what is published.
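The build-audit step above boils down to a measurement comparison: rebuild the published source locally, hash the resulting artifact, and check it against the measurement reported in the attestation evidence. The sketch below illustrates only that comparison; the function names and the use of a plain SHA-256 digest are illustrative assumptions, not part of any published Privasys API.

```python
import hashlib


def artifact_digest(data: bytes) -> str:
    """Hex-encoded SHA-256 of a locally built artifact (illustrative)."""
    return hashlib.sha256(data).hexdigest()


def matches_published(data: bytes, published_digest: str) -> bool:
    """True when the local build's digest equals the published measurement."""
    return artifact_digest(data) == published_digest
```

In practice the published measurement would come from the project's release page, and the value to compare against would be extracted from the verified attestation evidence rather than computed over raw bytes, but the check is the same equality test.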