blog: Add IPFS experiment post #497
Conversation
Any progress on this?
Nope, I've been distracted :) I'm still continuously mirroring haiku / repos / release images to IPFS. I'll try and finish this post up.
Force-pushed from 778ea18 to 98c1f21.
Updated some of the language based on @nielx's feedback. Tried to make the language a little more common and less technical. I think the article is a little long? Might need to be trimmed down a bit.
Is this ready to merge?
With the addition of package management in 2013, Haiku's amount of data to manage has been growing.
In ~2018 I moved our Haiku package repositories (and nightly images, and release images) to S3 object storage. This helped to reduce the large amount of data we were lugging around on our core infrastructure, and offloaded it onto an externally mananged service which we could progmatically manage. All of our CI/CD could securely and pragmatically build artifacts into these S3 buckets. We found a great vendor which let us host a lot of data with unlimited egress (outbound) bandwidth for an amazing price. This worked great through 2021, however the vendor recently began walking back their "unlimited egress bandwidth" position. In late April 2021, they shutdown our buckets resulting in a repo + nightly outage of ~24 hours while we negotiated with their support team.
Suggested change:
Around 2018 I moved our Haiku package repositories (and nightly images, and release images) to S3 object storage. This helped to reduce the large amount of data we were lugging around on our core infrastructure, and offloaded it onto an externally managed service which we could programmatically manage. All of our CI/CD could securely and programmatically build artifacts into these S3 buckets. We found a great vendor which let us host a lot of data with unlimited egress (outbound) bandwidth for an amazing price. This worked great through 2021, however the vendor recently began walking back their "unlimited egress bandwidth" position. In late April 2021, they shutdown our buckets resulting in a repo + nightly outage of ~24 hours while we negotiated with their support team.
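As an aside, for anyone curious what "programmatically build artifacts into these S3 buckets" looks like from a CI/CD job, here is a minimal sketch using the AWS Go SDK. The bucket name, endpoint, region, and artifact path below are placeholders, not Haiku's actual configuration.

```go
package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	// Open a locally built artifact (hypothetical path).
	f, err := os.Open("haiku-nightly-anyboot.iso")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Credentials come from the environment (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY),
	// as a CI/CD job would typically provide them.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:   aws.String("us-east-1"),              // placeholder region
		Endpoint: aws.String("https://s3.example.com"), // placeholder S3-compatible endpoint
	}))

	// Upload the artifact into the bucket under a nightly key.
	out, err := s3manager.NewUploader(sess).Upload(&s3manager.UploadInput{
		Bucket: aws.String("haiku-artifacts-example"), // placeholder bucket
		Key:    aws.String("nightly/haiku-nightly-anyboot.iso"),
		Body:   f,
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("uploaded to", out.Location)
}
```

In practice something like this would run at the end of the build pipeline, with credentials injected by the job environment rather than hard-coded.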
* Notice, 3+ minutes is longer than the default HTTP timeout (30 seconds).
* Gateway timeouts can happen until the IPFS gateway "locates" the data.
* IPFS has a steep learning curve for anyone mirroring. It takes time to find out how to do what you need to do (a minimal pinning sketch follows below).
* Haiku's go-lang port needs a lot more work before we can build IPFS on Haiku.
Suggested change:
* Haiku's Golang port needs a lot more work before we can build IPFS on Haiku.
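To make the "steep learning curve" point above a bit more concrete: mirroring in IPFS terms essentially means pinning a published root CID so your local node fetches and keeps the whole data set. A minimal sketch driving the stock `ipfs` CLI from Go (the CID is a made-up placeholder):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical root CID that Haiku would publish for a repository snapshot.
	rootCID := "QmExampleRepoRootCid"

	// "ipfs pin add" fetches the whole DAG behind the CID and keeps it on the
	// local node, which is essentially what "mirroring" means in IPFS terms.
	// For a multi-GB repository this can run for a long time with little
	// feedback, which is part of the learning-curve complaint above.
	cmd := exec.Command("ipfs", "pin", "add", "--progress", rootCID)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("pin failed: %v", err)
	}
	log.Println("repository snapshot pinned:", rootCID)
}
```

Once pinned, the node serves that content to other peers and to HTTP gateways, which is what would take load off the S3 buckets.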
are not going away as long as we can continue to host data from our S3 buckets. I'm hopeful we can get enough people playing with the new system to reduce S3 bandwidth and give us some time to investigate this alternative path.
A few users have mentioned adding native IPFS support to pkgman.. this would enable Haiku to obtain updates
Suggested change:
A few users have mentioned adding native IPFS support to pkgman...this would enable Haiku to obtain updates
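For a rough idea of what the simplest form of such support could look like, a package could be fetched through an IPFS HTTP gateway instead of the S3 mirror. This is only an illustrative sketch, not an actual pkgman feature; the gateway, CID, and package name are invented:

```go
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// Hypothetical: a repository root CID published by Haiku, resolved through
	// a public IPFS HTTP gateway instead of the S3-backed mirror.
	url := "https://ipfs.io/ipfs/QmExampleRepoRootCid/packages/example_package-1.0-x86_64.hpkg"

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("gateway returned %s", resp.Status)
	}

	// Save the package locally, as a package manager would before installing it.
	out, err := os.Create("example_package-1.0-x86_64.hpkg")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		log.Fatal(err)
	}
	log.Println("package downloaded via IPFS gateway")
}
```

A real integration would need to handle repository metadata, verification, and fallback to HTTPS, which is well beyond this sketch.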
Let's stop holding up blog posts over minor things. I don't think this is too long. I fixed a few issues in this one with my suggestions above. If I don't hear any feedback in a few hours I will apply those and merge.
I think there is no feedback ... ping :)
Honestly... I'm fizzling out a bit on this. While the idea is sound (and I used IPFS for the distribution of R1/Beta3), in actual practice at scale IPFS sucks for "big data sets". Getting huge amounts of data pinned is unclear and unreliable from the perspective of the person trying to mirror the data. Being able to let users "simply pin + seed our package repos" was really appealing... but given the issues above it's a bit less appealing. Some example issues:
I'm still mirroring our package repositories on IPFS, but I'm not sure how hard we should push it. tl;dr: The idea is awesome and looks like something we badly need. The execution is a bit less awesome.
I guess we can probably note the shortcomings of IPFS in the blog post then - it's an experiment so there are bound to be things that need improving.