This repository provides minimal "Hello, World!" examples for several popular AI and numerical computing frameworks, each isolated in its own folder with a `Dockerfile` and `app.py`.
These examples are:
- ✅ Dockerized for reproducible, isolated testing
- ✅ Designed for Intel® TDX (Trust Domain Extensions) compatibility
- ✅ Tested on the iExec Confidential Computing infrastructure
⚠️ While some frameworks include a `sconify.sh` script for SGX compatibility, execution support is currently verified only for TDX. See notes below.
```
ai-frameworks-hello-world/
├── tensorflow/
│   ├── app.py
│   └── Dockerfile
├── pytorch/
│   ├── app.py
│   └── Dockerfile
├── scikit/
│   ├── app.py
│   ├── Dockerfile
│   └── sconify.sh   (not working on SGX)
├── openvino/
│   ├── app.py
│   ├── Dockerfile
│   └── sconify.sh   (not working on SGX)
└── numpy/
    ├── app.py
    ├── Dockerfile
    └── sconify.sh   (not working on SGX)
```
Each `app.py` file contains a minimal script that initializes and runs a simple function with the corresponding framework (e.g., tensor addition, model inference, or classification).
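To illustrate that pattern, here is a minimal `app.py` in the style of the NumPy example (a hypothetical sketch, not the repository's actual file):

```python
import numpy as np


def main():
    # Minimal "Hello, World!" for NumPy: element-wise tensor addition.
    a = np.array([1, 2, 3])
    b = np.array([4, 5, 6])
    result = np.add(a, b)
    print("Hello, World! from NumPy")
    print("a + b =", result)  # [5 7 9]


if __name__ == "__main__":
    main()
```

The other frameworks follow the same shape: import the library, run one trivial operation, and print the result so a successful container run is easy to verify from the logs.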
| Framework | Version | Status on TDX | Notes on SGX |
|---|---|---|---|
| TensorFlow | 2.19.0 | ✅ Supported | – |
| PyTorch | 2.7.0+cu126 | ✅ Supported | – |
| Scikit-Learn | 1.6.1 | ✅ Supported | ✅ Supported, `sconify.sh` included |
| OpenVINO | 2024.6.0 | ✅ Supported | `sconify.sh` included, but execution issues on SGX |
| NumPy | 2.0.2 | ✅ Supported | ✅ Supported, `sconify.sh` included |
All examples are compatible and tested with Intel® TDX.
📖 To learn how to build and run these containers inside a TDX guest environment, follow this guide:
👉 Deploying Confidential iApp with TDX
To build and run a specific framework's container:
```sh
cd tensorflow   # or pytorch, scikit, etc.
docker build -t hello-tensorflow .
# Note: docker run options must come before the image name
docker run --rm -e IEXEC_OUT=/iexec_out -v ./iexec_out:/iexec_out hello-tensorflow
```
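The `-e IEXEC_OUT=/iexec_out` and `-v ./iexec_out:/iexec_out` options follow the iExec convention of writing results to a mounted output directory. A hedged sketch of how an `app.py` might honor that variable (the `computed.json` file name and `deterministic-output-path` key are taken from the general iExec app convention, not from this repository's files):

```python
import json
import os

# IEXEC_OUT is injected by `docker run -e IEXEC_OUT=/iexec_out`;
# fall back to a local directory when running outside the container.
iexec_out = os.environ.get("IEXEC_OUT", "./iexec_out")
os.makedirs(iexec_out, exist_ok=True)

# Write the framework's result to the output directory.
result_path = os.path.join(iexec_out, "result.txt")
with open(result_path, "w") as f:
    f.write("Hello, World!")

# iExec workers typically expect a computed.json describing the output
# (check the iExec documentation for the exact schema).
with open(os.path.join(iexec_out, "computed.json"), "w") as f:
    json.dump({"deterministic-output-path": result_path}, f)
```

With the volume mount above, the result then appears in `./iexec_out` on the host after the container exits.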
For frameworks with a `sconify.sh` script, you can attempt to prepare the image for SCONE/SGX:

```sh
# Edit the Docker image name in sconify.sh beforehand
./sconify.sh   # ⚠️ May not work due to SGX limitations
```
- Python 3 is used in all images
- Designed for experimentation and integration in confidential computing pipelines
- All containers are based on minimal Linux images to ensure reproducibility and compatibility with Intel TDX. The following base images are used across the frameworks:
  - `python:3.9-bullseye`
  - `debian:bullseye-slim` (for compatibility with SCONE and Intel SGX tooling)
⚠️ Some frameworks include `sconify.sh` scripts for SGX compatibility, but execution may not succeed on SGX for OpenVINO, Scikit-Learn, and NumPy due to runtime limitations.
Feel free to open issues or PRs to add:
- More frameworks (e.g., ONNX, XGBoost)
- SGX fixes
- Extended examples