feat: Add taskbroker + worker + scheduler #3738
Conversation
Add a taskbroker, a scheduler, and a worker pod. While these pods aren't doing much right now, they will in a subsequent release, and the cron/worker containers will be removed or replaced by these new containers. Refs getsentry/taskbroker#267
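For anyone upgrading, a quick sanity check after pulling this change; a sketch only: the taskbroker and taskworker service names appear in the compose diff below, but the scheduler's exact service name is not shown here, so it is omitted.

# Sketch: confirm the new containers came up alongside the existing cron/worker ones.
docker compose ps taskbroker taskworker
docker compose logs --tail=20 taskbroker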
Codecov Report

All modified and coverable lines are covered by tests ✅
✅ All tests successful. No failed tests found.

@@           Coverage Diff           @@
##           master    #3738   +/-   ##
=======================================
  Coverage   99.45%   99.45%
=======================================
  Files           3        3
  Lines         183      183
=======================================
  Hits          182      182
  Misses          1        1

☔ View full report in Codecov by Sentry.
# This volume stores task data that is inflight
# It should persist across restarts. If this volume is
# deleted, up to ~2048 tasks will be lost.
This begs a question: if the volume is deleted, how should one recover the ~2048 lost tasks?
You would have to reset offsets in Kafka.
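For reference, a rough sketch of what that reset could look like, assuming the bundled Kafka container ships the standard kafka-consumer-groups CLI; the consumer group and topic names below are placeholders, not the values taskbroker actually uses.

# Sketch: rewind the taskbroker consumer group so the lost tasks are re-consumed from Kafka.
# Stop taskbroker first; offsets can only be reset while the consumer group is inactive.
docker compose exec kafka kafka-consumer-groups \
  --bootstrap-server localhost:9092 \
  --group <taskbroker-consumer-group> \
  --topic <task-topic> \
  --reset-offsets --to-earliest --dry-run
# Replace --dry-run with --execute to actually apply the reset.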
taskworker:
  <<: *sentry_defaults
  command: run taskworker --concurrency=4 --rpc-host=taskbroker:50051 --num-brokers=1
Where is this concurrency=4 coming from?
No, I mean: why does it have to be exactly 4?
It doesn't have to be 4. In SaaS, we typically run either 16 or 32 concurrent workers. If left undefined, the default is 1, which will limit the throughput of a self-hosted install, as tasks are used extensively in ingest. 4 felt like a decent balance of providing more processing power without consuming a ton of memory.
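If an operator wants to raise this on their own install, a docker-compose.override.yml is probably the least invasive way; a sketch assuming the service keeps the taskworker name and the same command shape.

# Sketch: bump worker concurrency without editing docker-compose.yml.
# 16 matches the lower end of what SaaS runs; pick a value based on available memory.
# Merge this into an existing docker-compose.override.yml if you already have one.
cat > docker-compose.override.yml <<'EOF'
services:
  taskworker:
    command: run taskworker --concurrency=16 --rpc-host=taskbroker:50051 --num-brokers=1
EOF
docker compose up -d taskworker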
docker-compose.yml
ports:
  - "$SENTRY_BIND:50051/tcp"
This must not be exposed.
Suggested change (remove the host port mapping):
ports:
  - "$SENTRY_BIND:50051/tcp"