
Commit 1621b41: additional features section and misc updates

1 parent: 8ea634b


docs/open-source-self-hosting.mdx

Lines changed: 65 additions & 47 deletions
@@ -11,7 +11,7 @@ description: "You can self-host Trigger.dev on your own infrastructure."
<img src="/images/self-hosting.png" alt="Self-hosting architecture" />
</Frame>

-The self-hosting guide comes in two parts. The first part is a simple setup where you run everything on one server. In the second part, the webapp and worker components are split on two separate machines.
+The self-hosting guide covers two alternative setups. The first option uses a simple setup where you run everything on one server. With the second option, the webapp and worker components are split across two separate machines.

You're going to need at least one Debian (or derivative) machine with Docker and Docker Compose installed. We'll also use Ngrok to expose the webapp to the internet.
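
Before going further, it's worth confirming that Docker and the Compose plugin are actually present on each machine. The commands below are a generic sketch rather than part of the Trigger.dev setup: the convenience script is Docker's own installer and works on Debian and its derivatives.

```bash
# Check what's already installed
docker --version
docker compose version

# If either is missing, Docker's convenience script covers Debian and derivatives
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```
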
@@ -55,7 +55,7 @@ Should the burden ever get too much, we'd be happy to see you on [Trigger.dev cl

You will also need a way to expose the webapp to the internet. This can be done with a reverse proxy, or with a service like Ngrok. We will be using the latter in this guide.
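
For a quick test, an Ngrok tunnel along these lines is usually enough. This is only a sketch: the port is an assumption and should match whatever port your webapp is actually published on, and the full instructions live in the tunnelling section of this guide.

```bash
# Expose the locally running webapp through an Ngrok tunnel.
# Replace 3030 with the port your webapp is published on.
ngrok http 3030
```
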
-## Part 1: Single server
+## Option 1: Single server

This is the simplest setup. You run everything on one server. It's a good option if you have spare capacity on an existing machine, and have no need to independently scale worker capacity.

@@ -167,55 +167,104 @@ DEPLOY_REGISTRY_NAMESPACE=<your_dockerhub_username>
3. Log in to Docker Hub both locally and on your server. For the split setup, this will be the worker machine. You may want to create an [access token](https://hub.docker.com/settings/security) for this.

```bash
-docker login -u <your_dockerhub_username>
+docker login -u <your_dockerhub_username> docker.io
```

-4. Restart the services
+4. Ensure the `docker-provider` container is logged in as well:
+
+```bash
+docker exec -ti \
+  trigger-docker-provider-1 \
+  docker login -u <your_dockerhub_username> docker.io
+```
+
+5. Restart the services

```bash
./stop.sh && ./start.sh
```

-5. You can now deploy v3 projects using the CLI with these flags:
+6. You can now deploy v3 projects using the CLI with these flags:

```
npx trigger.dev@latest deploy --self-hosted --push
```

-## Part 2: Split services
+## Option 2: Split services

With this setup, the webapp will run on a different machine than the worker components. This allows independent scaling of your workload capacity.

### Webapp setup

-All steps are the same as in Part 1, except for the following:
+All steps are the same as for a single server, except for the following:

-1. Run the start script with the `webapp` argument
+1. **Startup.** Run the start script with the `webapp` argument

```bash
./start.sh webapp
```

-2. Tunnelling is now _required_. Please follow the tunnelling section from above.
+2. **Tunnelling.** This is now _required_. Please follow the [tunnelling](/open-source-self-hosting#tunnelling) section.

### Worker setup

-1. Copy your `.env` file from the webapp to the worker machine
+1. **Environment variables.** Copy your `.env` file from the webapp to the worker machine:

```bash
# an example using scp
scp -3 root@<webapp_machine>:docker/.env root@<worker_machine>:docker/.env
```

-2. Run the start script with the `worker` argument
+2. **Startup.** Run the start script with the `worker` argument

```bash
./start.sh worker
```

-2. Tunnelling is _not_ required for the worker components.
+3. **Tunnelling.** This is _not_ required for the worker components.
+
+4. **Registry setup.** Follow the [registry setup](/open-source-self-hosting#registry-setup) section, but run the last command on the worker machine; note that the container name is different:
+
+```bash
+docker exec -ti \
+  trigger-worker-docker-provider-1 \
+  docker login -u <your_dockerhub_username> docker.io
+```
+
+## Additional features
+
+### Large payloads
+
+By default, payloads over 512KB will be offloaded to S3-compatible storage. If you don't provide the required env vars, runs with payloads larger than this will fail.
+
+For example, using Cloudflare R2:
+
+```bash
+OBJECT_STORE_BASE_URL="https://<bucket>.<account>.r2.cloudflarestorage.com"
+OBJECT_STORE_ACCESS_KEY_ID="<r2 access key with read/write access to bucket>"
+OBJECT_STORE_SECRET_ACCESS_KEY="<r2 secret key>"
+```
+
+Alternatively, you can increase the threshold:

-## Checkpoint support
+```bash
+# size in bytes, example with 5MB threshold
+TASK_PAYLOAD_OFFLOAD_THRESHOLD=5242880
+```
+
+### Version locking
+
+There are several reasons to lock the version of your Docker images:
+- **Backwards compatibility.** We try our best to maintain compatibility with older CLI versions, but it's not always possible. If you don't want to update your CLI, you can lock your Docker images to that specific version.
+- **Ensuring full feature support.** Sometimes, new CLI releases will also require new or updated platform features. Running unlocked images can make any issues difficult to debug. Using a specific tag can help here as well.
+
+By default, the images will point at the latest versioned release via the `v3` tag. You can override this by specifying a different tag in your `.env` file. For example:
+
+```bash
+TRIGGER_IMAGE_TAG=v3.0.4
+```
+
+### Checkpoint support

<Warning>
This requires an _experimental Docker feature_. Successfully checkpointing a task today does not
@@ -225,14 +274,14 @@ scp -3 root@<webapp_machine>:docker/.env root@<worker_machine>:docker/.env
Checkpointing allows you to save the state of a running container to disk and restore it later. This can be useful for
long-running tasks that need to be paused and resumed without losing state. Think fan-out and fan-in, or long waits in email campaigns.
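
To make the idea concrete, this is roughly what the underlying experimental Docker feature looks like when driven by hand. The container and checkpoint names here are made up, and the platform normally does this for you, so treat it as illustration only:

```bash
# Requires the Docker daemon's experimental CRIU integration to be enabled.
# Freeze a running container to disk under the name "cp1"...
docker checkpoint create my-task-container cp1

# ...and later resume it from exactly where it left off.
docker start --checkpoint cp1 my-task-container
```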

-The checkpoints will be pushed to the same registry as the deployed images. Please see the [Registry setup](#registry-setup) section for more information.
+The checkpoints will be pushed to the same registry as the deployed images. Please see the [registry setup](#registry-setup) section for more information.

-### Requirements
+#### Requirements

- Debian, **NOT** a derivative like Ubuntu
- Additional storage space for the checkpointed containers
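
To sanity-check both requirements on a candidate machine, generic commands like the following work; `/var/lib/docker` is only Docker's usual default data root, so adjust the path if yours differs:

```bash
# Confirm the OS really is Debian, not a derivative
grep -E '^(ID|PRETTY_NAME)=' /etc/os-release

# Check free space where Docker stores images and containers
df -h /var/lib/docker
```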

-### Setup
+#### Setup

Under the hood this uses Checkpoint and Restore in Userspace, or [CRIU](https://github.com/checkpoint-restore/criu) for short. We'll have to do a few things to get this working:

@@ -289,25 +338,6 @@ echo "FORCE_CHECKPOINT_SIMULATION=0" >> .env
./stop.sh worker && ./start.sh worker
```

-## Large payloads
-
-By default, payloads over 512KB will be offloaded to S3-compatible storage. If you don't provide the required env vars, runs with payloads larger than this will fail.
-
-For example, using Cloudflare R2:
-
-```bash
-OBJECT_STORE_BASE_URL="https://<bucket>.<account>.r2.cloudflarestorage.com"
-OBJECT_STORE_ACCESS_KEY_ID="<r2 access key with read/write access to bucket>"
-OBJECT_STORE_SECRET_ACCESS_KEY="<r2 secret key>"
-```
-
-Alternatively, you can increase the threshold:
-
-```bash
-# size in bytes, example with 5MB threshold
-TASK_PAYLOAD_OFFLOAD_THRESHOLD=5242880
-```
-
## Updating

Once you have everything set up, you will periodically want to update your Docker images. You can easily do this by running the update script and restarting your services:
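
As a rough sketch, assuming the update script is named `update.sh` and sits alongside `start.sh` and `stop.sh` in the docker repo (an assumption worth verifying against your checkout), the cycle looks like this:

```bash
# Pull in the latest images and config (script name assumed)
./update.sh

# Restart the services so they pick up the new images
./stop.sh && ./start.sh
```
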
@@ -354,18 +384,6 @@ git stash pop
./stop.sh && ./start.sh
```

-## Version locking
-
-There are several reasons to lock the version of your Docker images:
-- **Backwards compatibility.** We try our best to maintain compatibility with older CLI versions, but it's not always possible. If you don't want to update your CLI, you can lock your Docker images to that specific version.
-- **Ensuring full feature support.** Sometimes, new CLI releases will also require new or updated platform features. Running unlocked images can make any issues difficult to debug. Using a specific tag can help here as well.
-
-By default, the images will point at the latest versioned release via the `v3` tag. You can override this by specifying a different tag in your `.env` file. For example:
-
-```bash
-TRIGGER_IMAGE_TAG=v3.0.4
-```
-
## CLI usage

This section highlights some of the CLI commands and options that are useful when self-hosting. Please check the [CLI reference](/cli-introduction) for more in-depth documentation.
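
For example, the CLI needs to be pointed at your self-hosted instance rather than the Trigger.dev cloud when logging in. The flag and domain below are assumptions for illustration, so double-check them against the CLI's own help output:

```bash
# Log the CLI in against a self-hosted webapp (URL is a placeholder)
# Verify the flag with: npx trigger.dev@latest login --help
npx trigger.dev@latest login -a https://trigger.example.com
```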
