[2021 Theme Proposal] Push for mainstream browser integration #81
Comments
This is a great idea and a perfectly valid long-term plan, especially with all the detailed steps you laid out. Still good that we discuss all this!
@JonasKruckenberg On 2, I disagree completely, for two reasons. First, choosing Facebook is a strawman argument. Any big website would be a strong showcase; it doesn't have to be Facebook. CloudFlare, StackExchange, Wikipedia, or the Internet Archive, among others, might be more aligned with our goals and would do the trick. Second, I think you underestimate the capacity of GAFAM+ to adopt a new technology and adapt or pivot. Facebook is the opposite of privacy, yet they run an Onion Service to access their website. They are the opposite of decentralisation, yet they are launching a cryptocurrency. Of course, they do it their way, and not necessarily with the same values as ours. But I'm sure their army of engineers will figure out a way to leverage the benefits of Web 3.0 while still protecting their business.

A very obvious idea: decentralize all the fetching of content for clients, but also tell them to fetch a script saying: "report everything that you do on this decentralized page to our centralized servers / decentralized DB we hold the key to. Store it for next time if you're offline for now." They would make huge gains in network/storage costs, yet be able to carry on their creepy business. I'm sure my idea is flawed somewhere, but I'm also sure an army of well-paid engineers will figure something out.
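To make that hybrid idea concrete, here is a minimal sketch of such a tracking script, assuming a browser context; the collector endpoint and event shape are invented for illustration and do not correspond to any existing API.

```typescript
// Hypothetical sketch of the pattern described above: the page itself is
// fetched from a decentralized network, but this script still reports user
// activity to a centralized collector, queuing events locally while offline.
// The endpoint URL and event shape are made up for illustration.

type TrackingEvent = { page: string; action: string; at: number };

const queue: TrackingEvent[] = [];

function track(page: string, action: string): void {
  queue.push({ page, action, at: Date.now() });
  void flush();
}

async function flush(): Promise<void> {
  if (!navigator.onLine || queue.length === 0) return; // "store it for next time"
  const batch = queue.splice(0, queue.length);
  try {
    await fetch("https://tracker.example.com/collect", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(batch),
    });
  } catch {
    queue.unshift(...batch); // send failed: keep the events for a later retry
  }
}

// Flush the backlog whenever connectivity comes back.
window.addEventListener("online", () => void flush());
```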
Yeah, alright, Facebook was chosen as an example very deliberately, and you may be right that a lot of companies could benefit from IPFS. My thought was rather that companies that depend on ads won't be the first to adopt web3 and maybe won't "make it", so to speak.
No, it's not bad that old companies are replaced. I wish some of them were. But don't count them dead just yet. Anyway, we're drifting away from the issue.
Yes, you're right, so the goal is either making IPFS-enabled browsers mainstream and/or making mainstream browsers IPFS-enabled 👍🏻
Partly, yes. Great adoption means a lot of users using a lot of services. Some other proposals focus on easing development on IPFS, to increase the number, quality, and reach of IPFS-enabled websites and services. This is the "supply" side of the equation. This proposal also wants to increase the "demand" side, by making the use of existing and future IPFS-enabled sites mainstream. Thinking more about it, I guess the task is so ambitious that we may need to improve DX first to scale up the developing power of the community (#67, #63, #77, #62, #82, #74, #61), summarized in #88. Since each of them is a very tough problem in itself, maybe this proposal is too ambitious.
Haha, nothing wrong with ambitious! We propose a slightly different and maybe simpler approach: create a new set of methods that can be used by ALL browsers and that are built on IPFS+SPAN-AI (#79). The fundamental ones would be Network Put (NPUT) and Network Get (NGET). This assumes a network-centric view that includes (distributed) storage.

NPUT is interpreted as a network storage command by the routing protocol (AI-HARD) and uses the IPFS network for high-quality storage/"publication". It has arguments for QoS such as reach and availability (e.g. 99.9%), endurance (e.g. 30 days), stream encode rate, etc. It will return bid cost(s) in FIL and may return a hash and location when paid.

NGET uses name requests to retrieve content in real time using SPAN-AI, e.g. SPAN://disney/batman/directors_cut/rhett/bondi, which is a fully self-certifying name and route request and can be routed by SPAN-AI using NDN-like methods. Increased efficiency is "built in", as all routers/switches/servers/nodes in the path have the option of storing all or part of the requested content. Machine learning optimises this at a local and global level. It requires a unified naming and directory system, but it doesn't have to be deterministic. Publishers can publish where their content is stored. The SPAN-AI ML agents learn where content is and what demand is, and base routing and storage decisions on that in real time. NPUSH requests can boost distribution and storage for a short period for big events such as a new GoT season or the Super Bowl! Multicast and load balancing are inherent, as is mobility.

We think all the pieces are available to build this, including open switch platforms from Nokia and Nvidia. Our current estimate is a team of about 20 (very smart) people over a year. That may be a bit under-cooked, but not by a lot. At the end, we have a global hyper-scale distributed content storage and real-time delivery system. Love to know everyone's thoughts.
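For clarity, here is a rough TypeScript sketch of what an NPUT/NGET interface along those lines might look like; every name, type, and parameter below is an illustrative assumption, since no such API exists yet.

```typescript
// Illustrative only: NPUT/NGET as described above are a proposal, not an
// existing API, so all of these types and methods are hypothetical.

interface QoS {
  availability: number;      // e.g. 0.999 for 99.9% availability
  enduranceDays: number;     // e.g. 30 days of guaranteed storage
  streamEncodeRate?: string; // e.g. "4Mbps" for streamed content
}

interface NputBid {
  costFIL: number; // bid cost returned by the network, in FIL
  accept(): Promise<{ hash: string; location: string }>; // pay and store
}

interface SpanNetwork {
  // NPUT: ask the routing layer to store/"publish" content with a given QoS.
  nput(content: Uint8Array, qos: QoS): Promise<NputBid[]>;

  // NGET: resolve a self-certifying SPAN name and stream the content back,
  // letting nodes along the path cache all or part of it.
  nget(name: string): AsyncIterable<Uint8Array>;
}

// Example usage against some hypothetical implementation.
async function demo(net: SpanNetwork): Promise<void> {
  const bids = await net.nput(new TextEncoder().encode("hello"), {
    availability: 0.999,
    enduranceDays: 30,
  });
  const { hash } = await bids[0].accept();
  console.log("stored as", hash);

  for await (const chunk of net.nget("SPAN://disney/batman/directors_cut/rhett/bondi")) {
    console.log("received", chunk.byteLength, "bytes");
  }
}
```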
Interesting proposal, and even though this is probably discussed in #79, I'll share my browser-specific thoughts.
Reading a bit further into the paper you linked, @RhettSampson, it seems that since your solution touches quite a few areas, it might be worth distinguishing clearly which idea fills which slot in the IPFS stack and laying down how those bits are supposed to communicate across layers. This would ensure compatibility and may show more clearly which areas of IPFS need or could use improvement and how your solutions fit into the picture.
Good points @JonasKruckenberg, agree totally. We do cross all the layers; in fact, that is a requirement for virtual networks according to Forrester. And yes, we are "condensing" it for simplicity. It's all about what happens at the layer-3 network, or as the NDN folks say, the "narrow waist" (see the diagram below from them). And yes, it will require putting this into standards, but it's a bit "chicken and egg". As you say, we need to build it first and then propose the standards, which we fully intend to do. It will also require some tweaks to IP, but so far we think those are very small. We did try to map IPFS and IPND as closely as possible to SPAN-AI in the architecture, but agree a diagram is worth many thousands of words. I'll give that a go, but it might take a while! Appreciate your thoughtful comments, thank you.
Note, this is part of the 2021 IPFS project planning process - feel free to add other potential 2021 themes for the IPFS project by opening a new issue or discuss this proposed theme in the comments, especially other example workstreams that could fit under this theme for 2021. Please also review others’ proposed themes and leave feedback here!
Theme description
The Brave browser is integrating a full IPFS node in an upcoming release. While this should be celebrated as a great milestone, the goal is for IPFS to be fully integrated in all browsers, with priority on Google's Chrome, Mozilla's Firefox, and Apple's Safari.
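As a rough illustration of why native integration matters, here is a minimal sketch of what page code has to do today, assuming a placeholder CID and the public ipfs.io gateway; whether `fetch()` accepts `ipfs://` URLs at all depends on the browser.

```typescript
// Minimal sketch: try native ipfs:// resolution, fall back to an HTTP gateway.
// The CID is a placeholder and ipfs.io is just one public gateway among many.

const cid = "<some-cid>"; // replace with a real content identifier

async function fetchFromIpfs(cid: string): Promise<Response> {
  try {
    // Works only where the browser resolves ipfs:// URLs itself.
    return await fetch(`ipfs://${cid}`);
  } catch {
    // Everywhere else, fall back to a centralized HTTP gateway.
    return await fetch(`https://ipfs.io/ipfs/${cid}`);
  }
}

fetchFromIpfs(cid).then(async (res) => console.log(await res.text()));
```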
Hypothesis
Brave's integration is both proof that this is technically realistic and a playground to test full integration. Widespread adoption of IPFS will take off when both sides of the coin are minted: services and clients. The rise of IPFS and Filecoin as the go-to decentralized storage in the web3.0 stack enables new forms of websites, web apps, and services, so the services are being built. The IPFS community now has to focus on the client part. By cooperating with the major browsers on integration, lobbying them, and treating IPFS in the browser as a priority, IPFS could come to be seen by non-web3.0 developers as THE technology modern web developers should build upon. Firefox and Google integration would send a clear signal: IPFS is the new HTTPS, the new WebRTC, the new browser cache.
Vision statement
In a few years, IPFS is almost the only storage layer used in new websites. The server/client architecture is rightfully called "legacy" by the industry, and bandwidth-heavy websites have made the switch, enjoying massive savings on network, CDN, and backup costs. Seeing that, more and more web services adopt IPFS thanks to an exhaustive and well-designed web dev suite. Users enjoy faster loading because they fetch from nearby and idle nodes first and because servers are under lighter loads. Without really knowing why, they feel HTTPS websites are "slow" and "less reliable".
Why focus this year
The browser experience is one of the most ambitious use cases from a UX point of view, because its users are the general public, not technical people. Slow loading, lack of visual feedback, unreliable fetching, a router being killed or restarted, a heavy need for configuration, or a steep learning curve before use are a no-go for the average user. Opaque tracking cookies and other tracking technologies are also a concern. Setting the ambitious goal of making IPFS a first-class citizen of browsers will force the community to solve technical challenges as well as listen to the average Joe, and hence abstract away problems for both end-users and developers. IPFS also has to find and implement a reasonable solution for users' privacy before it can go mainstream.
Example workstreams
Other content