Has anyone used S3fs to make an S3 bucket a mount point on an EC2 instance for Velociraptor #2749
-
Hello, I'm in the middle of building out the architecture for our Velociraptor infrastructure and wanted to know if anyone has installed Velociraptor on a dedicated mount point (in this case an S3 bucket) and had Velociraptor function as designed. The end goal is to have the S3 bucket mounted to the EC2 instance and have the artifacts sent from the S3 bucket to OpenSearch, to then be analyzed in Timesketch (a rough sketch of that leg is below). Also, instead of mounting /opt/velociraptor, has anyone mounted just the /opt/velociraptor/clients directory and still had Velociraptor function as designed?
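For reference, this is roughly the S3-to-OpenSearch leg I have in mind. It's only a sketch: the bucket, key, index name, and OpenSearch endpoint are placeholders, and it assumes the completed collections land in the bucket as JSONL (one result row per line).

```python
# Sketch of the S3 -> OpenSearch ingestion leg (hypothetical bucket, key,
# index, and endpoint names; assumes collections are stored as JSONL).
import json

import boto3
from opensearchpy import OpenSearch, helpers

BUCKET = "velociraptor-collections"        # hypothetical bucket name
KEY = "C.12345/F.ABCDEF/results.jsonl"     # hypothetical object key
INDEX = "velociraptor-artifacts"           # hypothetical index name

s3 = boto3.client("s3")
client = OpenSearch(
    hosts=[{"host": "search.example.internal", "port": 9200}],  # placeholder endpoint
    http_auth=("admin", "changeme"),                            # placeholder credentials
    use_ssl=True,
)


def index_collection(bucket: str, key: str, index: str) -> None:
    # Pull the finished collection from S3 and bulk-index each result row.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    actions = (
        {"_index": index, "_source": json.loads(line)}
        for line in body.splitlines()
        if line.strip()
    )
    helpers.bulk(client, actions)


if __name__ == "__main__":
    index_collection(BUCKET, KEY, INDEX)
```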
-
I'm not sure this will work well. Velociraptor writes by appending to files all the time, and as far as I know an S3 object cannot be appended to once finalized.

We have the S3 backup artifacts that automatically upload collections to S3 once completed, and that's how most people implement that architecture (push to Timesketch via S3). But I don't think having Velociraptor use S3 for storage directly is going to work well.

Additionally, at scale we really need fast storage so Velociraptor can store data coming from clients quickly and serve more clients. S3 is not going to be fast enough. We can use EFS, which is also slow, and we have a lot of tricks to get performance up with EFS, so that might work, but I'm not sure the same would be good enough for S3.
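To illustrate the append problem, here is a minimal sketch (bucket and key names are placeholders) of what an "append" costs on S3 compared to a local file: since a finalized S3 object is immutable, a FUSE layer like s3fs effectively has to read the whole object back and rewrite it every time Velociraptor appends a few bytes.

```python
# Contrast a local append with an "append" over S3 (hypothetical bucket/key).
# Local append is a cheap write; S3 requires a full read-modify-write of the
# object, which is why an append-heavy datastore over s3fs performs poorly.
import boto3

BUCKET = "velociraptor-datastore"                     # hypothetical bucket name
KEY = "clients/C.12345/monitoring/results.json"       # hypothetical datastore path

s3 = boto3.client("s3")


def local_append(path: str, record: bytes) -> None:
    # Local filesystem: appending costs roughly the size of the new record.
    with open(path, "ab") as f:
        f.write(record)


def s3_append(bucket: str, key: str, record: bytes) -> None:
    # S3: objects cannot be appended to once written, so every "append" is a
    # read-modify-write of the entire object.
    try:
        existing = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    except s3.exceptions.NoSuchKey:
        existing = b""
    s3.put_object(Bucket=bucket, Key=key, Body=existing + record)


if __name__ == "__main__":
    s3_append(BUCKET, KEY, b'{"event": "example"}\n')
```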