
Commit 155608d

v3: add 3MB limit on batch and single payloads
1 parent: f8b0b37

File tree

- apps/webapp/app/env.server.ts
- apps/webapp/app/routes/api.v1.tasks.$taskId.batch.ts
- apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts
- docs/v3/limits.mdx
- docs/v3/triggering.mdx

5 files changed: +79 -3 lines changed

apps/webapp/app/env.server.ts

Lines changed: 1 addition & 0 deletions

```diff
@@ -208,6 +208,7 @@ const EnvironmentSchema = z.object({
   MAXIMUM_LIVE_RELOADING_EVENTS: z.coerce.number().int().default(1000),
   MAXIMUM_TRACE_SUMMARY_VIEW_COUNT: z.coerce.number().int().default(25_000),
   TASK_PAYLOAD_OFFLOAD_THRESHOLD: z.coerce.number().int().default(524_288), // 512KB
+  TASK_PAYLOAD_MAXIMUM_SIZE: z.coerce.number().int().default(3_145_728), // 3MB
 });

 export type Environment = z.infer<typeof EnvironmentSchema>;
```
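Note on the schema entry above: environment variables always arrive as strings, so `z.coerce.number().int()` converts the value, and `.default()` supplies the 3MB fallback when `TASK_PAYLOAD_MAXIMUM_SIZE` is unset. A minimal standalone sketch of that behavior (not the webapp's actual bootstrapping):

```ts
import { z } from "zod";

// Same shape as the new schema entry: coerce the env string to an
// integer, defaulting to 3MB when the variable is not set.
const TaskPayloadMaximumSize = z.coerce.number().int().default(3_145_728);

console.log(TaskPayloadMaximumSize.parse(undefined)); // 3145728 (default)
console.log(TaskPayloadMaximumSize.parse("10485760")); // 10485760 (coerced from string)
```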

apps/webapp/app/routes/api.v1.tasks.$taskId.batch.ts

Lines changed: 7 additions & 0 deletions

```diff
@@ -7,6 +7,7 @@ import { authenticateApiRequest } from "~/services/apiAuth.server";
 import { logger } from "~/services/logger.server";
 import { BatchTriggerTaskService } from "~/v3/services/batchTriggerTask.server";
 import { HeadersSchema } from "./api.v1.tasks.$taskId.trigger";
+import { env } from "~/env.server";

 const ParamsSchema = z.object({
   taskId: z.string(),
@@ -43,6 +44,12 @@ export async function action({ request, params }: ActionFunctionArgs) {

   const { taskId } = ParamsSchema.parse(params);

+  const contentLength = request.headers.get("content-length");
+
+  if (!contentLength || parseInt(contentLength) > env.TASK_PAYLOAD_MAXIMUM_SIZE) {
+    return json({ error: "Request body too large" }, { status: 413 });
+  }
+
   // Now parse the request body
   const anyBody = await request.json();
```

apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts

Lines changed: 2 additions & 1 deletion

```diff
@@ -2,6 +2,7 @@ import type { ActionFunctionArgs } from "@remix-run/server-runtime";
 import { json } from "@remix-run/server-runtime";
 import { TriggerTaskRequestBody } from "@trigger.dev/core/v3";
 import { z } from "zod";
+import { env } from "~/env.server";
 import { authenticateApiRequest } from "~/services/apiAuth.server";
 import { logger } from "~/services/logger.server";
 import { parseRequestJsonAsync } from "~/utils/parseRequestJson.server";
@@ -36,7 +37,7 @@ export async function action({ request, params }: ActionFunctionArgs) {

   const contentLength = request.headers.get("content-length");

-  if (!contentLength || parseInt(contentLength) > 10 * 1000 * 1000) {
+  if (!contentLength || parseInt(contentLength) > env.TASK_PAYLOAD_MAXIMUM_SIZE) {
     return json({ error: "Request body too large" }, { status: 413 });
   }
```
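Client-side, an oversized single trigger that previously passed under the old 10MB ceiling now fails with the same 413 JSON body as the batch route. A sketch of handling that response when calling the endpoint directly; the URL and auth shape here are illustrative assumptions derived from the route filename, not a documented contract, and most callers should use the SDK instead:

```ts
// Hypothetical direct call to the trigger route; adjust the host and
// task ID for your deployment.
const response = await fetch("https://api.trigger.dev/api/v1/tasks/my-task/trigger", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.TRIGGER_SECRET_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ payload: { some: "data" } }),
});

if (response.status === 413) {
  // { error: "Request body too large" }: shrink the payload, or store it
  // yourself and pass a URL instead (see docs/v3/triggering.mdx below).
}
```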

docs/v3/limits.mdx

Lines changed: 10 additions & 0 deletions

```diff
@@ -37,3 +37,13 @@ If you add them dynamically using code make sure you add a `deduplicationKey` so
 If you're creating schedules for your user you will definitely need to request more schedules from us.

 <Snippet file="v3/soft-limit.mdx" />
+
+## Task payloads and outputs
+
+| Limit                  | Details                                       |
+| ---------------------- | --------------------------------------------- |
+| Single trigger payload | Must not exceed 3MB                           |
+| Batch trigger payload  | The total of all payloads must not exceed 3MB |
+| Task outputs           | Must not exceed 10MB                          |
+
+Payloads and outputs that exceed 512KB will be offloaded to object storage, and a presigned URL will be provided to download the data when calling `runs.retrieve`. However, you don't need to do anything to handle this in your tasks: we transparently upload and download the data during operation.
```
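For reference, the offload behavior described above surfaces through `runs.retrieve`: when a payload crossed the 512KB threshold, the retrieved run carries a `payloadPresignedUrl` for the stored data (the triggering docs below show the same pattern). A short sketch, with a placeholder run ID:

```ts
import { runs } from "@trigger.dev/sdk/v3";

const run = await runs.retrieve("run_1234"); // placeholder run ID

// Present only when the payload was offloaded to object storage (>512KB).
if (run.payloadPresignedUrl) {
  const response = await fetch(run.payloadPresignedUrl);
  const payload = await response.json();
  console.log(payload);
}
```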

docs/v3/triggering.mdx

Lines changed: 59 additions & 2 deletions

````diff
@@ -539,7 +539,7 @@ export async function create() {

 ## Large Payloads

-We recommend keeping your task payloads as small as possible. We currently have a hard limit on task payloads above 10MB.
+We recommend keeping your task payloads as small as possible. We currently have a hard limit on task payloads above 3MB.

 If your payload size is larger than 512KB, instead of saving the payload to the database, we will upload it to an S3-compatible object store and store the URL in the database.

@@ -560,5 +560,62 @@ if (run.payloadPresignedUrl) {

 <Note>
   We also use this same system for dealing with large task outputs, and subsequently will return a
-  corresponding `outputPresignedUrl`
+  corresponding `outputPresignedUrl`. Task outputs are limited to 10MB.
 </Note>
+
+If you need to pass larger payloads, you'll need to upload the payload to your own storage and pass a URL to the file in the payload instead. For example, upload the file to S3, then send the task a presigned URL that expires in 1 hour:
+
+<CodeGroup>
+
+```ts /yourServer.ts
+import { myTask } from "./trigger/myTasks";
+import { s3Client, getSignedUrl, PutObjectCommand, GetObjectCommand } from "./s3";
+import { createReadStream } from "node:fs";
+
+// Upload file to S3
+await s3Client.send(
+  new PutObjectCommand({
+    Bucket: "my-bucket",
+    Key: "my-file.json",
+    Body: createReadStream("large-payload.json"),
+  })
+);
+
+// Create presigned URL
+const presignedUrl = await getSignedUrl(
+  s3Client,
+  new GetObjectCommand({
+    Bucket: "my-bucket",
+    Key: "my-file.json",
+  }),
+  {
+    expiresIn: 3600, // expires in 1 hour
+  }
+);
+
+// Now send the URL to the task
+const handle = await myTask.trigger({
+  url: presignedUrl,
+});
+```
+
+```ts /trigger/myTasks.ts
+import { task } from "@trigger.dev/sdk/v3";
+
+export const myTask = task({
+  id: "my-task",
+  run: async (payload: { url: string }) => {
+    // Download the file from the URL
+    const response = await fetch(payload.url);
+    const data = await response.json();
+
+    // Do something with the data
+  },
+});
+```
+
+</CodeGroup>
+
+### Batch Triggering
+
+When using `batchTrigger` or `batchTriggerAndWait`, the total size of all payloads cannot exceed 3MB. This means that if you send a batch of 100 runs, each payload should be less than 30KB.
````
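Because the 3MB cap applies to the batch as a whole, it can be worth estimating the combined encoded size before calling `batchTrigger`. A rough client-side pre-check: it approximates the request body the server's `Content-Length` guard sees, not the exact size of the full request envelope.

```ts
const MAX_BATCH_BYTES = 3_145_728; // 3MB, matching the server default

// Approximate the combined JSON-encoded size of all batch items.
function estimateBatchBytes(items: Array<{ payload: unknown }>): number {
  return new TextEncoder().encode(JSON.stringify(items)).byteLength;
}

const items = Array.from({ length: 100 }, (_, i) => ({
  payload: { userId: `user_${i}` }, // keep each item well under ~30KB
}));

if (estimateBatchBytes(items) > MAX_BATCH_BYTES) {
  // Split into several smaller batches, or store payloads externally
  // and pass URLs instead (see the S3 example above).
}
```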
