
Commit 39fa099

jobs/downloads/process_log: Decrease batch size (#9195)
On our production servers we've been seeing warnings like:

> Failed to fill temp_downloads table: error encoding message to server: value too large to transmit

It looks like this error is returned by `tokio-postgres`, which can't handle the amount of data we throw at it when using a batch size of 10k rows. This commit decreases the batch size in the hope that this makes the background job succeed. If not, we might have to decrease it further.

A potentially better solution would be to use a `COPY` query, but that wasn't available in `diesel` yet when this code was originally written.
1 parent e669bdb commit 39fa099

File tree

1 file changed (+4, -5)

src/worker/jobs/downloads/process_log.rs

Lines changed: 4 additions & 5 deletions
```diff
@@ -268,11 +268,10 @@ fn create_temp_downloads_table(conn: &mut impl Conn) -> QueryResult<usize> {
     fields(message = "INSERT INTO temp_downloads ...")
 )]
 fn fill_temp_downloads_table(downloads: DownloadsMap, conn: &mut impl Conn) -> QueryResult<()> {
-    // Postgres has a limit of 65,535 parameters per query, so we have to
-    // insert the downloads in batches. Since we fill four columns per
-    // [NewDownload] we can only insert 16,383 rows at a time. To be safe we
-    // use a maximum batch size of 10,000.
-    const MAX_BATCH_SIZE: usize = 10_000;
+    // `tokio-postgres` has a limit on the size of values it can send to the
+    // database. To avoid hitting this limit, we insert the downloads in
+    // batches.
+    const MAX_BATCH_SIZE: usize = 5_000;
 
     let map = downloads
         .into_vec()
```
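
The hunk above ends before the loop that actually consumes `MAX_BATCH_SIZE`. Below is a minimal sketch of the batching pattern the commit message describes, assuming a simplified `temp_downloads` schema and a hypothetical `NewDownload` insertable rather than the actual crates.io definitions:

```rust
use diesel::prelude::*;

// Hypothetical, simplified schema; the real crates.io table has more columns.
diesel::table! {
    temp_downloads (version_id) {
        version_id -> Int4,
        downloads -> Int4,
    }
}

// Hypothetical insertable matching the simplified schema above.
#[derive(Insertable)]
#[diesel(table_name = temp_downloads)]
struct NewDownload {
    version_id: i32,
    downloads: i32,
}

fn fill_in_batches(rows: Vec<NewDownload>, conn: &mut PgConnection) -> QueryResult<()> {
    // Each row binds one parameter per column, and `tokio-postgres` also
    // limits how large an encoded message may be, so we insert the rows in
    // chunks instead of one giant statement.
    const MAX_BATCH_SIZE: usize = 5_000;

    for chunk in rows.chunks(MAX_BATCH_SIZE) {
        diesel::insert_into(temp_downloads::table)
            .values(chunk)
            .execute(conn)?;
    }

    Ok(())
}
```

The `COPY` alternative mentioned in the commit message would stream rows over the wire instead of binding them as statement parameters, which sidesteps both the 65,535-parameter limit per query and the message-size limit in `tokio-postgres`.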
