Unix Pipeline in OP_RETURN

Example: "Upload a video, store its filename and frame rate, and then set admin rights, all in a single transaction, but using 3 separate protocols."

This pipeline idea does NOT require mandatory use of the Bitcom convention (using a Bitcoin address as the protocol "prefix" instead of a centralized registry); any protocol can adopt it today.
That said, the Bitcom CLI intends to implement it, and protocols that adopt this pipeline scheme will be much more synergistic if they also adopt the Bitcom convention, since the pipeline proposal assumes a world with an unbounded number of protocols, which is impossible if all protocols are managed in a central repository, in a human-curated fashion.
The pipeline can potentially solve the immediate problem of how to "scale" the B:// protocol without introducing technical debt, while leaving room for extreme flexibility, such as adding filename, fps (frames per second), resolution, etc.
The key insight of the Bitcom pipeline:
We have started looking at Bitcom as an "embeddable OS" rather than as a standalone protocol in itself.
Following the Unix philosophy, each module (a protocol) stays as simple and as modular as possible.
By simply adopting the convention, any OP_RETURN protocol can communicate with other app protocols, just as the Unix pipeline allows various Unix programs to intercommunicate through pipes.
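To make the convention concrete, here is a minimal sketch in Python of how a pipelined OP_RETURN payload could be represented as a flat list of push-data fields, with a literal "|" field separating protocol segments. The encoding shown is an assumption for illustration; the B:// prefix is the address commonly associated with that protocol, and the second prefix is the metadata protocol address used later in this proposal.

```python
# Sketch: a pipelined OP_RETURN payload as a flat list of push-data fields.
# A literal "|" field separates one protocol segment from the next.

B_PREFIX = "19HxigV4QyBv3tHpQVcUEQyq1pzZVdoAut"      # B:// protocol prefix (assumed)
META_PREFIX = "1EKrfyTD6UoXR85vpfxZ7e8h2h8C5XEroy"   # hypothetical metadata protocol

payload = [
    B_PREFIX, "<binary data>", "video/mp4",             # segment 1: B:// (DATA, Media Type)
    "|",
    META_PREFIX, "filename", "video.mp4", "fps", "60",  # segment 2: attach metadata
]

def split_segments(fields):
    """Split the flat push-data list into one list per protocol segment."""
    segments, current = [], []
    for field in fields:
        if field == "|":
            segments.append(current)
            current = []
        else:
            current.append(field)
    segments.append(current)
    return segments

# Each segment begins with its protocol's prefix, just like a Unix command name.
```

Exactly as in a shell pipeline, the separator is all the downstream indexer needs in order to recover the per-protocol structure.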
How it Works
You can pipe two protocols together like Unix programs.
Let's take the B:// protocol as an example. Instead of packing every possible piece of metadata into a single protocol, we could:
- keep B:// very minimal: only [DATA] and [Content-Type]
- use the pipeline system to pass the resulting blob into another protocol
Example
Before
Without a pipeline feature, whenever we want to add a new feature or attribute, we have to extend the base protocol itself; the protocol grows larger and larger over time, collects technical debt, and becomes a nightmare to maintain.
But what if we could make each protocol as minimal as possible, and then pipe them into one another?

After

With a piping feature, we can create a combination of two or more protocols that pipe linearly from one to the next.

1. Write File

We can write a file using the same simple B:// protocol:
Now, the protocol ONLY has 2 arguments: DATA and Media Type.
The rest can be determined by passing the B:// output to another protocol and letting it take care of them. For example, let's say we want to create a video storage system:
2. Write Metadata
Let's think of a hypothetical protocol that lets you:

- take ANY input object,
- attach more metadata to it,
- and return the resulting object.

It would be like JavaScript's Object.assign().
For example, if the following command is executed independently, it would return an object with just video.mp4 and 60 as attributes.
However, if we pass in the File object from the b:// protocol instead, it would return an aggregate object containing all of the [DATA], video/mp4, video.mp4, and 60 attributes.
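The Object.assign()-style merge can be sketched in Python with dict merging. The field names (data, contentType, filename, fps) are illustrative, not part of any spec:

```python
# Sketch: the metadata protocol behaves like JavaScript's Object.assign(),
# merging its own attributes onto whatever object is piped in.

def b_protocol(data, content_type):
    """B:// as a pure producer: returns a minimal File object."""
    return {"data": data, "contentType": content_type}

def metadata_protocol(obj, **attrs):
    """Accepts ANY input object and returns it with extra attributes attached."""
    return {**obj, **attrs}

# Run independently, the metadata protocol yields only its own attributes:
standalone = metadata_protocol({}, filename="video.mp4", fps=60)

# Piped after b://, it yields the aggregate object:
piped = metadata_protocol(
    b_protocol("<binary data>", "video/mp4"),
    filename="video.mp4",
    fps=60,
)
```

The second call is the pipeline case: the File object produced upstream flows in as the input object, and the result carries both the blob fields and the metadata fields.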
This command is saying:

1. First, write the binary data using b://.
2. Pass the output to the next protocol, 1EKrfyTD6UoXR85vpfxZ7e8h2h8C5XEroy, which happens to be a video metadata protocol that attaches additional metadata.
3. Post the result as a single atomic OP_RETURN output.
With this we can ensure that this blob is atomically associated with this set of metadata.
Important: note that the ordering matters. In this scenario, the metadata protocol can accept any arbitrary object, but the b:// protocol is a pure data producer and cannot accept input from another protocol. This is explained in the FAQ below.
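The producer-versus-consumer distinction can be expressed as a simple validity check. The role labels below are illustrative only:

```python
# Sketch: b:// is a pure producer (accepts no piped input), while the
# metadata protocol is a consumer that accepts any object. A pipeline is
# therefore only well-formed if producers appear first.

ROLES = {
    "b://": "producer",       # sole data producer, accepts no input
    "metadata": "consumer",   # accepts any input object
}

def pipeline_is_valid(stages):
    """A producer may only appear as the first stage of the pipeline."""
    return all(ROLES[stage] != "producer" for stage in stages[1:])
```

Under this rule, `b:// | metadata` is valid while `metadata | b://` is not, which is exactly the ordering constraint described above.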
Also, if you simply want to store a blob with a media type and nothing else, you can do so as well; you just need to end with B://. You have a choice.
Extensibility

By following the Unix pipeline system, we can create a sequence of as many OP_RETURN protocols as we want. For example, there can be 3 protocols in the pipeline, such as B:// for writing a data blob, a naming protocol, and an admin-rights protocol.
The important part here is that each protocol can be a standalone module, but in this particular case we can write an OP_RETURN program that pipes one into another.
Lastly, if you decide to adopt the Bitcom OS protocol in your application protocol, you can use the $ echo A > B command to define the schema in a standardized manner, so that other apps can look it up and make sense of what the protocol's inputs and outputs represent within the context of the Bitcom pipeline.
Decentralized Protocol Emergence instead of Central Planning
The great part about this approach is that you DO NOT have to decide how your protocol will interact with the outside world from the beginning.
At first you can just define a self-contained protocol that doesn't interact with the outside world but does one thing.
And as you become more comfortable with opening up access, you can define a schema that describes:

- how the protocol processes incoming input
- what output it produces and passes on to the next program in the pipeline
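Such a schema might look like the following. The structure is purely hypothetical, since this proposal does not fix a schema format:

```python
# Sketch: a hypothetical pipeline schema for the b:// protocol, describing
# what it accepts from upstream and what it emits downstream. The format
# is illustrative; the proposal leaves the actual schema definition open.

b_schema = {
    "protocol": "b://",
    "input": None,  # pure producer: accepts nothing from the pipeline
    "output": {
        "type": "File",
        "fields": ["DATA", "Content-Type"],
    },
}

metadata_schema = {
    "protocol": "1EKrfyTD6UoXR85vpfxZ7e8h2h8C5XEroy",
    "input": {"type": "any"},   # Object.assign-style: accepts any object
    "output": {"type": "any"},
}

def can_pipe(upstream, downstream):
    """Upstream can feed downstream only if upstream emits output
    and downstream declares that it accepts input."""
    return upstream["output"] is not None and downstream["input"] is not None
```

Once a schema like this is published (e.g. on-chain via $ echo), other protocol developers can check compatibility mechanically before composing with your protocol.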
For example, I can create a blob upload protocol like b:// WITHOUT any pipeline schema.
Then people may want to either fork the protocol to build their own, or ask me for certain features they want to use from the b:// protocol. Based on this feedback, I can come up with an "Open API" for the b:// protocol, which is essentially a schema definition of its inputs and outputs within the pipeline system.
And once defined, other protocol developers can easily reference the schema and build their own application protocols by mashing up with my protocol.
And the best part: this schema itself can be stored on-chain using the Bitcom $ echo command.
FAQ
1. Isn't this more like a & than a |?
There's a subtle but important difference.
& implies that there is no order between commands, whereas | assumes a single linear order of execution. Without a fixed order, various apps may use these protocols in various orders, which will make it hard to query for them.
For example, let's say we're using & instead of | to describe a file upload + naming + admin-rights assignment. Since the three protocols are unordered, the same combination can be expressed in 3! = 6 different ways:
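The count follows directly from permutations, as a quick Python check shows (the protocol names are placeholders for the three protocols in the example):

```python
from itertools import permutations

protocols = ["upload", "naming", "admin"]

# With '&' (no ordering), every permutation is a distinct on-chain pattern:
unordered_patterns = list(permutations(protocols))

# With '|' (fixed linear order), there is exactly one pattern to index:
pipeline_pattern = ("upload", "naming", "admin")

print(len(unordered_patterns))  # 6 patterns to match with '&', versus 1 with '|'
```

An indexer matching the '&' style would need a query per permutation; the pipeline collapses all of them into one.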
This will make it very difficult to query for the pattern on the blockchain through indexing services such as bitdb or realtime push subscription services such as bitsocket.
However, with a fixed sequence of what each protocol expects via the pipeline (|), there is only one possible sequence,
which means we only need one query to handle this specific template.
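For illustration only, that single query could look roughly like the following MongoDB-style positional filter. The field names mirror the habit of indexers like bitdb of exposing output push-data positionally, but the exact shape here is an assumption, not bitdb's documented API:

```python
# Sketch: with a fixed pipeline order, one positional filter matches every
# transaction that follows the template. Field names (out.s1, out.s5) are
# assumed positional push-data fields in a bitdb-like index.

UPLOAD_PREFIX = "19HxigV4QyBv3tHpQVcUEQyq1pzZVdoAut"   # assumed b:// prefix
NAMING_PREFIX = "1EKrfyTD6UoXR85vpfxZ7e8h2h8C5XEroy"   # metadata protocol

query = {
    "find": {
        "out.s1": UPLOAD_PREFIX,   # segment 1 always starts with the producer
        "out.s5": NAMING_PREFIX,   # segment 2 always follows the "|" separator
    }
}

def matches(tx_pushdata, q):
    """Check a flat push-data list against the positional filter."""
    for key, want in q["find"].items():
        idx = int(key.split(".s")[1]) - 1
        if idx >= len(tx_pushdata) or tx_pushdata[idx] != want:
            return False
    return True

tx = [UPLOAD_PREFIX, "<data>", "video/mp4", "|", NAMING_PREFIX, "video.mp4", "60"]
```

Because the pipeline fixes the order, one filter like this covers the template; with '&' we would need one such filter per permutation.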
Of course, the & feature also has great potential use cases and will be supported in the future. But at this stage, if we were to introduce just one feature that solves as many problems as possible, the | makes more sense.
Conclusion
The main value propositions of the pipeline system are:

- Keep each protocol as minimal as possible.
- Make protocols interoperable through a standardized pipeline interface.
- Get rid of central planning in protocol design and let designs emerge based on usage.