What do you guys think about the idea of asking the Flatpak project to integrate IPFS?
I was wondering if there's interest in exploring the possibility of storing the Flatpak store on IPFS - ideally without any compression, so that buzhash can figure out the diff of an update.
If IPFS could be mounted, e.g. via NFS (see ipfs/roadmap#83), IPFS could become the storage for the applications and fetch new versions on the fly the next time an app is opened.
An example of how it could work:
The app would be added to IPFS unpacked and published under an IPNS name.
On an update, the IPNS name would be resolved to the latest CID and the corresponding IPFS path would be mounted to ~/.local/share/flatpak/app/$app-id/
IPFS would be asked to store the app in the MFS, for example under /flatpak-apps/$app-id/, and to fetch it recursively.
You could start the app immediately after an update, while IPFS is still fetching the differences; the same goes for installations. But a warning about degraded performance while the fetch is still running would be good.
Since we could use buzhash as the chunker and the files are stored uncompressed as a directory structure, this would automatically give us differential updates.
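A rough sketch of these steps with the go-ipfs CLI — the app id `org.example.App`, the IPNS key name, the MFS path and the symlink location are made-up examples, and none of this is an existing Flatpak feature:

```sh
## Publisher side ##

# Add the unpacked app tree with the buzhash chunker and without compression,
# so unchanged files keep their chunk CIDs across releases.
CID=$(ipfs add -r -Q --chunker=buzhash ./org.example.App/)

# Publish the new root under an IPNS key dedicated to this app.
ipfs key gen flatpak-org.example.App        # one-time setup
ipfs name publish --key=flatpak-org.example.App /ipfs/"$CID"

## Client side (update step) ##

APP_ID=org.example.App
PUBLISHER_KEY="<the publisher's IPNS key id>"   # placeholder
NEW=$(ipfs name resolve /ipns/"$PUBLISHER_KEY" | sed 's|^/ipfs/||')

# Reference the new root in the MFS and fetch it recursively in the background.
ipfs files rm -r /flatpak-apps/"$APP_ID" 2>/dev/null || true
ipfs files cp /ipfs/"$NEW" /flatpak-apps/"$APP_ID"
ipfs pin add --progress /ipfs/"$NEW"

# With the FUSE mount active (ipfs mount), the tree is readable under
# /ipfs/$NEW and could be exposed to Flatpak, e.g. via a symlink:
ln -sfn /ipfs/"$NEW" ~/.local/share/flatpak/app/"$APP_ID"/current-ipfs
```

Note that `ipfs files cp` only records the reference lazily; the `ipfs pin add` (or an `ipfs refs -r`) is what actually pulls the missing blocks.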
Starting an app before it's fully downloaded would likely fetch a lot of small files sequentially - I doubt this would have tolerable performance.
With single IPFS nodes struggling to provide many small files (see this and a few comments below), hosting all the Flatpak packages on IPFS might be non-trivial.
A similar problem exists with IPFS nodes hosting large NFT collections: unless you connect directly to the node, only newly added pictures are found by the IPFS network.
Well, since the root CID changes with each update, you would automatically connect to nodes which already have the new version. Bitswap will do the rest.
Since 0.9, performance has been extremely good for my Arch package mirror, despite it storing quite a lot of files.
Also, websites like ipfs.io are stored on IPFS and fetch quite fast.
I don't see why this should be an issue.
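That claim is easy to sanity-check once a new root CID is known: ask the DHT which peers already provide it (the IPNS key below is a placeholder).

```sh
# Resolve the publisher's IPNS name to the current root CID,
# then list peers announcing ("providing") that CID on the DHT.
PUBLISHER_KEY="<the publisher's IPNS key id>"   # placeholder
CID=$(ipfs name resolve /ipns/"$PUBLISHER_KEY" | sed 's|^/ipfs/||')
ipfs dht findprovs "$CID" | head
```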
@jcaesar to your first point: true, the first start would be very slow, but as soon as you install it the app would be fetched recursively, so it might "stall" for a short while from time to time if it requests files which are not yet available. But that's exactly why I think it makes sense to show a warning while IPFS is still fetching.
Maybe, if the user doesn't press "continue anyway", the warning could automatically go away as soon as IPFS is done fetching and the app starts normally :)
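Such a warning could be driven by how much of the tree is already local — assuming the app was copied into the MFS under /flatpak-apps/$app-id as sketched above, `ipfs files stat --with-local` reports exactly that:

```sh
APP_ID=org.example.App   # hypothetical app id

# --with-local walks the DAG and reports how much of it is stored locally;
# below 100% the app may still stall on blocks that haven't arrived yet.
ipfs files stat --with-local /flatpak-apps/"$APP_ID"
```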
I already built a similar project with https://github.com/RubenKelevra/pacman.store - the discussion which led to this is archived here: #84