
feat: Use S3 node store with garage #3498


Draft: wants to merge 14 commits into master

Conversation

BYK (Member) commented Dec 31, 2024

Note

This patch may or may not make it to the main branch so please do not rely on this yet. You are, however, free to use it as a blueprint for your own, custom S3 or S3-like variations.

Enables S3 node store using Garage and sentry-nodestore-s3 by @stayallive

This should alleviate all the issues stemming from (ab)using PostgreSQL as the node store.

  • We should implement the 90-day retention through S3 lifecycle options (see the sketch after this list): https://garagehq.deuxfleurs.fr/
  • We should find a good default for the node store size and make it variable (currently hard-coded at 100G)
  • We should have a proper migration path for existing installs
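As a rough sketch of the retention item above, and assuming Garage accepts a standard S3 lifecycle configuration on its S3 endpoint (this still needs to be verified against the Garage docs), the rule could be applied with the AWS CLI pointed at the bundled Garage service:

# Hedged sketch: 90-day expiration on the nodestore bucket via the generic S3 lifecycle API.
# Whether Garage honors PutBucketLifecycleConfiguration needs to be confirmed first.
aws --endpoint-url http://garage:3900 s3api put-bucket-lifecycle-configuration \
  --bucket nodestore \
  --lifecycle-configuration '{"Rules": [{"ID": "nodestore-retention", "Status": "Enabled", "Filter": {"Prefix": ""}, "Expiration": {"Days": 90}}]}'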


codecov bot commented Dec 31, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 98.06%. Comparing base (8c1653d) to head (8d7c1ff).
Report is 59 commits behind head on master.

✅ All tests successful. No failed tests found.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #3498   +/-   ##
=======================================
  Coverage   98.06%   98.06%           
=======================================
  Files           3        3           
  Lines         207      207           
=======================================
  Hits          203      203           
  Misses          4        4           

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

aldy505 (Collaborator) commented Dec 31, 2024

Any reason why you didn't use SeaweedFS per what you said yesterday?


if [[ $($garage bucket list | tail -1 | awk '{print $1}') != 'nodestore' ]]; then
  # First-time setup: register this node in the cluster layout with a hard-coded 100G capacity
  node_id=$($garage status | tail -1 | awk '{print $1}')
  $garage layout assign -z dc1 -c 100G "$node_id"
Collaborator

Should this 100G be a variable somewhere?

Member Author

Yes, I think we should add a new GARAGE_STORAGE_SIZE env var to .env. That said, I'm not sure that makes much sense, as we would not honor any changes to it after the initial installation. Unless this actually reserves 100G, I think leaving it hard-coded to a "good enough" value and then documenting how to change it if needed would be the better option.

Thoughts?

Member Author

@aldy505 added the env var regardless. Do you think 100G is a good size for the average self-hosted operator?
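For reference, a minimal sketch of how that env var could feed the bootstrap step above (the 100G fallback default is an assumption):

# Read the capacity from .env, falling back to 100G when GARAGE_STORAGE_SIZE is unset
GARAGE_STORAGE_SIZE="${GARAGE_STORAGE_SIZE:-100G}"
$garage layout assign -z dc1 -c "$GARAGE_STORAGE_SIZE" "$node_id"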

Collaborator

I don't know if 100G will be immediately allocated by Garage. If it's not (meaning the current storage space won't be modified), then I think it's fine. If it is, I think it's better to allocate just 25G.

Comment on lines 91 to 104
SENTRY_NODESTORE = "sentry_nodestore_s3.S3PassthroughDjangoNodeStorage"
SENTRY_NODESTORE_OPTIONS = {
    "delete_through": True,
    "write_through": False,
    "read_through": True,
    "compression": False,  # we have compression enabled in Garage itself
    "endpoint_url": "http://garage:3900",
    "bucket_path": "nodestore",
    "bucket_name": "nodestore",
    "retry_attempts": 3,
    "region_name": "garage",
    "aws_access_key_id": "<GARAGE_KEY_ID>",
    "aws_secret_access_key": "<GARAGE_SECRET_KEY>",
}
Collaborator

(Docs) should we provide ways for the user to offload these to actual S3 or something?

Member Author

Maybe something under the "experimental" part?

Collaborator

Probably.

Collaborator

Probably would be better if we put it on the Experimental -> External Storage page. Will backlog that.
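For reference, offloading to an external S3 service should only require swapping the endpoint, region, and credentials in the same options block. A hedged sketch for such a docs page (endpoint, bucket, and region values are placeholders, not part of this PR):

# Hypothetical variant of the config above, pointing at AWS S3 instead of the bundled Garage
SENTRY_NODESTORE = "sentry_nodestore_s3.S3PassthroughDjangoNodeStorage"
SENTRY_NODESTORE_OPTIONS = {
    "delete_through": True,
    "write_through": False,
    "read_through": True,
    "compression": True,  # no Garage-side compression here, so let the node store compress
    "endpoint_url": "https://s3.us-east-1.amazonaws.com",
    "bucket_path": "nodestore",
    "bucket_name": "my-sentry-nodestore",  # placeholder bucket name
    "retry_attempts": 3,
    "region_name": "us-east-1",
    "aws_access_key_id": "<AWS_ACCESS_KEY_ID>",
    "aws_secret_access_key": "<AWS_SECRET_ACCESS_KEY>",
}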

BYK and others added 2 commits December 31, 2024 15:23
BYK (Member Author) commented Dec 31, 2024

@aldy505

Any reason why you didn't use SeaweedFS per what you said yesterday?

Well I started with that and realized 3 things:

  1. It really is not geared towards single-node setups and has nodes with different roles, which makes it more challenging to scale up or set up in our environment
  2. It has a paid admin interface. Not a deal breaker, but it is clear that it is geared towards more "professional" setups
  3. Its S3 API support is not really great

Garage fits the bill much better: it is explicitly designed for smaller setups like this, is easy to expand without specialized roles, has no paid components, and offers more decent and familiar S3 API support.

doc-sheet (Contributor)

It really is not geared towards single-node setups and has nodes with different roles, which makes it more challenging to scale up or set up in our environment

When I tried SeaweedFS last time (and I still use it for sourcemap/profile storage, tbh) it had single-node ability via the weed server command, like:

weed server -filer=true -s3=true -master=true -volume=true

Some of them are enabled by default.

doc-sheet (Contributor) commented May 25, 2025

I think Garage/MinIO are simpler for small setups; SeaweedFS looks necessary for mid-to-large setups, because all the other services I know keep files as-is.

And thousands upon thousands of small files like profiles are not ideal to store on most popular filesystems, I guess.

aldy505 (Collaborator) commented Jun 4, 2025

I think Garage/MinIO are simpler for small setups; SeaweedFS looks necessary for mid-to-large setups, because all the other services I know keep files as-is.

@doc-sheet Hey, I'm going to work on this PR. I think SeaweedFS is better for self-hosted Sentry. One thing I don't like about Garage is that we need to specify the storage allocation beforehand: if we set it to 100GB, some people might have more data than 100GB, and I don't want that to cause any issues.

That said, since you've used SeaweedFS before: How was your experience? How does it compare to MinIO or Ceph?

And thousands upon thousands of small files like profiles are not ideal to store on most popular filesystems, I guess.

Yeah if we set up an object storage, we might as well move filestore & profiles there too. But let's focus on nodestore first.

doc-sheet (Contributor)

How was your experience? How does it compare to MinIO or Ceph?

It is a bit strange sometimes. But it is fine.

It has multiple options for the filer store.
I didn't try leveldb storage, aiming for fault tolerance.

At first I tried Redis; it worked for several months and then... I just lost all data.
It was there physically but wasn't available from the API (S3 or web) - each list call returned different results.

I don't know if the issue was in Redis or weed itself. I suspect a bug with TTL could be the reason too.

But after that incident I wiped the cluster and started a new one with Scylla as the filer backend, and it has worked fine for almost a year already despite that TTL bug.

SeaweedFS has multiple build variants, like

  • 3.89
  • 3.89_full
  • 3.89_large_disk
  • 3.89_large_disk_full

I suggest always using large_disk. The documentation is not clear, but it is easy to hit that limit:
https://github.com/seaweedfs/seaweedfs/wiki/FAQ#how-to-configure-volumes-larger-than-30gb

I don't know the difference between full and normal, and just use _large_disk_full builds :)

Also, I don't use S3 auth - I was too lazy to set it up.

Other than all that, I have had no problems and have barely touched it after the initial setup. It just works.
I have added some volumes but not removed any yet.

As for MinIO and Ceph:
I never used Ceph.

But MinIO was the reason I looked for alternatives.

Tons of profiles from the JS SDK, stored as separate files, started to affect my monitoring script, and soon it might start to affect MinIO performance too.

And it is not that easy to scale MinIO, and probably impossible to optimize it for small-file storage. At least in my low-cost setup.

doc-sheet (Contributor) commented Jun 4, 2025

let's focus on nodestore first.

If SeaweedFS were to control the TTL, there is another catch:
I'm not sure if it is possible to control TTL with the S3 API yet.

weed has its own settings for collections, and it creates a collection for each S3 bucket.
https://github.com/seaweedfs/seaweedfs/wiki/S3-API-FAQ#setting-ttl

But if Sentry itself cleans up old data, I guess there is no difference.

aldy505 (Collaborator) commented Jun 5, 2025

How was your experience? How does it compare to MinIO or Ceph?

It is a bit strange sometimes. But it is fine.

It has multiple options for the filer store. I didn't try leveldb storage, aiming for fault tolerance.

At first I tried Redis; it worked for several months and then... I just lost all data. It was there physically but wasn't available from the API (S3 or web) - each list call returned different results.

I don't know if the issue was in Redis or weed itself. I suspect a bug with TTL could be the reason too.

But after that incident I wiped the cluster and started a new one with Scylla as the filer backend, and it has worked fine for almost a year already despite that TTL bug.

SeaweedFS has multiple build variants, like

  • 3.89
  • 3.89_full
  • 3.89_large_disk
  • 3.89_large_disk_full

I suggest always using large_disk. The documentation is not clear, but it is easy to hit that limit: https://github.com/seaweedfs/seaweedfs/wiki/FAQ#how-to-configure-volumes-larger-than-30gb

I don't know the difference between full and normal, and just use _large_disk_full builds :)

Also, I don't use S3 auth - I was too lazy to set it up.

Other than all that, I have had no problems and have barely touched it after the initial setup. It just works. I have added some volumes but not removed any yet.

Good to know about SeaweedFS.

As for MinIO and Ceph: I never used Ceph.

But MinIO was the reason I looked for alternatives.

Tons of profiles from the JS SDK, stored as separate files, started to affect my monitoring script, and soon it might start to affect MinIO performance too.

And it is not that easy to scale MinIO, and probably impossible to optimize it for small-file storage. At least in my low-cost setup.

Ah, so everyone has the same experience with MinIO.

let's focus on nodestore first.

If SeaweedFS were to control the TTL, there is another catch: I'm not sure if it is possible to control TTL with the S3 API yet.

weed has its own settings for collections, and it creates a collection for each S3 bucket. https://github.com/seaweedfs/seaweedfs/wiki/S3-API-FAQ#setting-ttl

But if Sentry itself cleans up old data, I guess there is no difference.

The Sentry cleanup job only cleans up data on the filesystem. If we're using S3, it won't clean up anything, so we need to configure S3 data cleanup on our own.

doc-sheet (Contributor) commented Jun 6, 2025

Looks like I missed that SeaweedFS now has the ability to control TTL via the S3 API. And I even linked to the correct section of the FAQ. :)

I'd like to look into a new integration with SeaweedFS.

And by the way, I like the idea of extending the Sentry images.

I myself install some extra packages and modules.

Maybe an extra step in the install script to build user-provided Dockerfiles.
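For illustration, such a user-provided Dockerfile could be tiny; a hypothetical sentry/Dockerfile (the pip package name for @stayallive's node store is an assumption, not part of this PR):

# Hypothetical sentry/Dockerfile built as an extra install step
ARG SENTRY_IMAGE
FROM ${SENTRY_IMAGE}
RUN pip install sentry-nodestore-s3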

aldy505 (Collaborator) commented Jun 7, 2025

And by the way, I like the idea of extending the Sentry images.

I myself install some extra packages and modules.

Maybe an extra step in the install script to build user-provided Dockerfiles.

Yes, but I don't think people would go for a non-default setup if they don't need to.
