From 52a03813f9d28dfde0640e3b6d368335328de24b Mon Sep 17 00:00:00 2001
From: May Rosenbaum
Date: Wed, 28 Jun 2023 18:00:30 +0300
Subject: [PATCH] remove kafka from documentation

Signed-off-by: May Rosenbaum
---
 docs/source/config_update.md            |   4 +-
 .../create_channel_participation.md     |   2 +-
 docs/source/kafka_raft_migration.md     | 246 +-----------------
 docs/source/orderer/ordering_service.md |  93 +------
 4 files changed, 18 insertions(+), 327 deletions(-)

diff --git a/docs/source/config_update.md b/docs/source/config_update.md
index 129f891cd81..66f37d7f0d7 100644
--- a/docs/source/config_update.md
+++ b/docs/source/config_update.md
@@ -910,11 +910,11 @@ Governs configuration parameters unique to the ordering service and requires a m
 * **Block validation**. This policy specifies the signature requirements for a block to be considered valid. By default, it requires a signature from some member of the ordering org.

-* **Consensus type**. To enable the migration of Kafka based ordering services to Raft based ordering services, it is possible to change the consensus type of a channel. For more information, check out [Migrating from Kafka to Raft](./kafka_raft_migration.html).
+* **Consensus type**. Kafka is no longer supported in v3.x, so migrating a Kafka-based ordering service to a Raft-based ordering service is not possible in this release. Migration is only possible in earlier versions of Fabric.

 * **Raft ordering service parameters**. For a look at the parameters unique to a Raft ordering service, check out [Raft configuration](./raft_configuration.html).

-* **Kafka brokers** (where applicable). When `ConsensusType` is set to `kafka`, the `brokers` list enumerates some subset (or preferably all) of the Kafka brokers for the orderer to initially connect to at startup.
+* **Kafka brokers**. Because Kafka is no longer supported in v3.x, `ConsensusType` can no longer be set to `kafka`, and the `brokers` list is likewise no longer supported.
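For reference, the `ConsensusType` value discussed in the bullets above lives in the channel configuration under the `Orderer` group. A minimal, illustrative sketch of how it typically appears in a decoded channel configuration (for example, at `.channel_group.groups.Orderer.values.ConsensusType` after `configtxlator proto_decode`) is shown below; the field values are examples, not values taken from this patch:

```json
{
  "mod_policy": "Admins",
  "value": {
    "metadata": {
      "consenters": ["... one entry per Raft ordering node ..."],
      "options": { "tick_interval": "500ms" }
    },
    "state": "STATE_NORMAL",
    "type": "etcdraft"
  },
  "version": "0"
}
```

In v3.x the `type` field can no longer be set to `kafka`, which is why the `brokers` list described above has been removed.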
 #### `Channel`
diff --git a/docs/source/create_channel/create_channel_participation.md b/docs/source/create_channel/create_channel_participation.md
index a4a5c3de822..f81795097bf 100644
--- a/docs/source/create_channel/create_channel_participation.md
+++ b/docs/source/create_channel/create_channel_participation.md
@@ -200,7 +200,7 @@ OrdererEndpoints:
 #### Orderer:

-- **Orderer.OrdererType** Set this value to `etcdraft`. As mentioned before, this process does not work with Solo or Kafka ordering nodes.
+- **Orderer.OrdererType** Set this value to `etcdraft`. As mentioned before, this process does not work with Solo ordering nodes.

 - **Orderer.EtcdRaft.Consenters** Provide the list of ordering node addresses, in the form of `host:port`, that are considered active members of the consenter set. All orderers that are listed in this section will become active "consenters" on the channel when they join the channel. For example:
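The example that follows "For example:" in the source file is outside this diff's context. For orientation, the following is a minimal sketch of what an `EtcdRaft.Consenters` block in `configtx.yaml` typically looks like; the hostnames, ports, and certificate paths are placeholders only:

```yaml
EtcdRaft:
  Consenters:
    # One entry per ordering node that should act as a consenter on the channel.
    - Host: orderer1.example.com
      Port: 7050
      ClientTLSCert: ../organizations/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/server.crt
      ServerTLSCert: ../organizations/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/server.crt
    - Host: orderer2.example.com
      Port: 7050
      ClientTLSCert: ../organizations/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt
      ServerTLSCert: ../organizations/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt
```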
diff --git a/docs/source/kafka_raft_migration.md b/docs/source/kafka_raft_migration.md
index 4e63eb4840d..893ca476f10 100644
--- a/docs/source/kafka_raft_migration.md
+++ b/docs/source/kafka_raft_migration.md
@@ -1,249 +1,7 @@
 # Migrating from Kafka to Raft
-**Note: this document presumes a high degree of expertise with channel
-configuration update transactions. As the process for migration involves
-several channel configuration update transactions, do not attempt to migrate
-from Kafka to Raft without first familiarizing yourself with the [Add an
-Organization to a Channel](channel_update_tutorial.html) tutorial, which
-describes the channel update process in detail.**
-
-For users who want to transition channels from using Kafka-based ordering
-services to [Raft-based](./orderer/ordering_service.html#Raft) ordering services,
-nodes at v1.4.2 or higher allow this to be accomplished through a series of configuration update
-transactions on each channel in the network.
-
-This tutorial will describe this process at a high level, calling out specific
-details where necessary, rather than show each command in detail.
-
-## Assumptions and considerations
-
-Before attempting migration, take the following into account:
-
-1. This process is solely for migration from Kafka to Raft. Migrating between
-any other orderer consensus types is not currently supported.
-
-2. Migration is one way. Once the ordering service is migrated to Raft, and
-starts committing transactions, it is not possible to go back to Kafka.
-
-3. Because the ordering nodes must go down and be brought back up, downtime must
-be allowed during the migration.
-
-4. Recovering from a botched migration is possible only if a backup is taken at
-the point in migration prescribed later in this document. If you do not take a
-backup, and migration fails, you will not be able to recover your previous state.
-
-5. All channels must be migrated during the same maintenance window. It is not
-possible to migrate only some channels before resuming operations.
-
-6. At the end of the migration process, every channel will have the same
-consenter set of Raft nodes. This is the same consenter set that will exist in
-the ordering system channel. This makes it possible to diagnose a successful
-migration.
-
-7. Migration is done in place, utilizing the existing ledgers for the deployed
-ordering nodes. Addition or removal of orderers should be performed after the
-migration.
-
-## High level migration flow
-
-Migration is carried out in five phases.
-
-1. The system is placed into a maintenance mode where application transactions
-   are rejected and only ordering service admins can make changes to the channel
-   configuration.
-2. The system is stopped, and a backup is taken in case an error occurs during
-   migration.
-3. The system is started, and each channel has its consensus type and metadata
-   modified.
-4. The system is restarted and is now operating on Raft consensus; each channel
-   is checked to confirm that it has successfully achieved a quorum.
-5. The system is moved out of maintenance mode and normal function resumes.
-
-## Preparing to migrate
-
-There are several steps you should take before attempting to migrate.
-
-* Design the Raft deployment, deciding which ordering service nodes are going to
-  remain as Raft consenters. You should deploy at least three ordering nodes in
-  your cluster, but note that deploying a consenter set of at least five nodes
-  will maintain high availability should a node goes down, whereas a three node
-  configuration will lose high availability once a single node goes down for any
-  reason (for example, as during a maintenance cycle).
-* Prepare the material for building the Raft `Metadata` configuration.
-  **Note: all the channels should receive the same Raft `Metadata` configuration**.
-  Refer to the [Raft configuration guide](raft_configuration.html)
-  for more information on these fields. Note: you may find it easiest to bootstrap
-  a new ordering network with the Raft consensus protocol, then copy and modify
-  the consensus metadata section from its config. In any case, you will need
-  (for each ordering node):
-    - `hostname`
-    - `port`
-    - `server certificate`
-    - `client certificate`
-* Compile a list of all channels (system and application) in the system. Make
-  sure you have the correct credentials to sign the configuration updates. For
-  example, the relevant ordering service admin identities.
-* Ensure all ordering service nodes are running the same version of Fabric, and
-  that this version is v1.4.2 or greater.
-* Ensure all peers are running at least v1.4.2 of Fabric. Make sure all channels
-  are configured with the channel capability that enables migration.
-    - Orderer capability `V1_4_2` (or above).
-    - Channel capability `V1_4_2` (or above).
-
-### Entry to maintenance mode
-
-Prior to setting the ordering service into maintenance mode, it is recommended
-that the peers and clients of the network be stopped. Leaving peers or clients
-up and running is safe, however, because the ordering service will reject all of
-their requests, their logs will fill with benign but misleading failures.
-
-Follow the process in the [Add an Organization to a Channel](channel_update_tutorial.html)
-tutorial to pull, translate, and scope the configuration of **each channel,
-starting with the system channel**. The only field you should change during
-this step is in the channel configuration at `/Channel/Orderer/ConsensusType`.
-In a JSON representation of the channel configuration, this would be
-`.channel_group.groups.Orderer.values.ConsensusType`.
-
-The `ConsensusType` is represented by three values: `Type`, `Metadata`, and
-`State`, where:
-
-  * `Type` is either `kafka` or `etcdraft` (Raft). This value can only be
-    changed while in maintenance mode.
-  * `Metadata` will be empty if the `Type` is kafka, but must carry valid Raft
-    metadata if the `ConsensusType` is `etcdraft`. More on this below.
-  * `State` is either `STATE_NORMAL`, when the channel is processing transactions, or
-    `STATE_MAINTENANCE`, during the migration process.
-
-In the first step of the channel configuration update, only change the `State`
-from `STATE_NORMAL` to `STATE_MAINTENANCE`. Do not change the `Type` or the `Metadata` field
-yet. Note that the `Type` should currently be `kafka`.
-
-While in maintenance mode, normal transactions, config updates unrelated to
-migration, and `Deliver` requests from the peers used to retrieve new blocks are
-rejected. This is done in order to prevent the need to both backup, and if
-necessary restore, peers during migration, as they only receive updates once
-migration has successfully completed. In other words, we want to keep the
-ordering service backup point, which is the next step, ahead of the peer’s ledger,
-in order to be able to perform rollback if needed. However, ordering node admins
-can issue `Deliver` requests (which they need to be able to do in order to
-continue the migration process).
-
-**Verify** that each ordering service node has entered maintenance mode on each
-of the channels. This can be done by fetching the last config block and making
-sure that the `Type`, `Metadata`, `State` on each channel is `kafka`, empty
-(recall that there is no metadata for Kafka), and `STATE_MAINTENANCE`, respectively.
-
-If the channels have been updated successfully, the ordering service is now
-ready for backup.
-
-#### Backup files and shut down servers
-
-Shut down all ordering nodes, Kafka servers, and Zookeeper servers. It is
-important to **shutdown the ordering service nodes first**. Then, after allowing
-the Kafka service to flush its logs to disk (this typically takes about 30
-seconds, but might take longer depending on your system), the Kafka servers
-should be shut down. Shutting down the Kafka brokers at the same time as the
-orderers can result in the filesystem state of the orderers being more recent
-than the Kafka brokers which could prevent your network from starting.
-
-Create a backup of the file system of these servers. Then restart the Kafka
-service and then the ordering service nodes.
-
-### Switch to Raft in maintenance mode
-
-The next step in the migration process is another channel configuration update
-for each channel. In this configuration update, switch the `Type` to `etcdraft`
-(for Raft) while keeping the `State` in `STATE_MAINTENANCE`, and fill in the
-`Metadata` configuration. It is highly recommended that the `Metadata` configuration be
-identical on all channels. If you want to establish different consenter sets
-with different nodes, you will be able to reconfigure the `Metadata` configuration
-after the system is restarted into `etcdraft` mode. Supplying an identical metadata
-object, and hence, an identical consenter set, means that when the nodes are
-restarted, if the system channel forms a quorum and can exit maintenance mode,
-other channels will likely be able do the same. Supplying different consenter
-sets to each channel can cause one channel to succeed in forming a cluster while
-another channel will fail.
-
-Then, validate that each ordering service node has committed the `ConsensusType`
-change configuration update by pulling and inspecting the configuration of each
-channel.
-
-Note: For each channel, the transaction that changes the `ConsensusType` must be the last
-configuration transaction before restarting the nodes (in the next step). If
-some other configuration transaction happens after this step, the nodes will
-most likely crash on restart, or result in undefined behavior.
-
-#### Restart and validate leader
-
-Note: exit of maintenance mode **must** be done **after** restart.
-
-After the `ConsensusType` update has been completed on each channel, stop all
-ordering service nodes, stop all Kafka brokers and Zookeepers, and then restart
-only the ordering service nodes. They should restart as Raft nodes, form a cluster per
-channel, and elect a leader on each channel.
-
-**Note**: Since the Raft-based ordering service uses client and server TLS certificates for
-authentication between orderer nodes, **additional configurations** are required before
-you start them again, see
-[Section: Local Configuration](raft_configuration.html#local-configuration) for more details.
-
-After restart process finished, make sure to **validate** that a
-leader has been elected on each channel by inspecting the node logs (you can see
-what to look for below). This will confirm that the process has been completed
-successfully.
-
-When a leader is elected, the log will show, for each channel:
-
-```
-"Raft leader changed: 0 -> node-number channel=channel-name
-node=node-number "
-```
-
-For example:
-
-```
-2019-05-26 10:07:44.075 UTC [orderer.consensus.etcdraft] serveRequest ->
-INFO 047 Raft leader changed: 0 -> 1 channel=testchannel1 node=2
-```
-
-In this example `node 2` reports that a leader was elected (the leader is
-`node 1`) by the cluster of channel `testchannel1`.
-
-### Switch out of maintenance mode
-
-Perform another channel configuration update on each channel (sending the config
-update to the same ordering node you have been sending configuration updates to
-until now), switching the `State` from `STATE_MAINTENANCE` to `STATE_NORMAL`. Start with the
-system channel, as usual. If it succeeds on the ordering system channel,
-migration is likely to succeed on all channels. To verify, fetch the last config
-block of the system channel from the ordering node, verifying that the `State`
-is now `STATE_NORMAL`. For completeness, verify this on each ordering node.
-
-When this process is completed, the ordering service is now ready to accept all
-transactions on all channels. If you stopped your peers and application as
-recommended, you may now restart them.
-
-## Abort and rollback
-
-If a problem emerges during the migration process **before exiting maintenance
-mode**, simply perform the rollback procedure below.
-
-1. Shut down the ordering nodes and the Kafka service (servers and Zookeeper
-   ensemble).
-2. Rollback the file system of these servers to the backup taken at maintenance
-   mode before changing the `ConsensusType`.
-3. Restart said servers, the ordering nodes will bootstrap to Kafka in
-   maintenance mode.
-4. Send a configuration update exiting maintenance mode to continue using Kafka
-   as your consensus mechanism, or resume the instructions after the point of
-   backup and fix the error which prevented a Raft quorum from forming and retry
-   migration with corrected Raft configuration `Metadata`.
-
-There are a few states which might indicate migration has failed:
-
-1. Some nodes crash or shutdown.
-2. There is no record of a successful leader election per channel in the logs.
-3. The attempt to flip to `STATE_NORMAL` mode on the system channel fails.
+Since Kafka is no longer supported as a consensus type in v3.x, migration from Kafka to Raft is not possible in this release.
+Migration is only possible in previous versions of Fabric; refer to the documentation for those releases for the procedure.
diff --git a/docs/source/orderer/ordering_service.md b/docs/source/orderer/ordering_service.md
index 918da3ce29e..c8ef8e35356 100644
--- a/docs/source/orderer/ordering_service.md
+++ b/docs/source/orderer/ordering_service.md
@@ -195,13 +195,8 @@ implementation the node will be used in), check out [our documentation on
 deploy up and manage than Kafka-based ordering services, and their design allows
 different organizations to contribute nodes to a distributed ordering service.

-* **Kafka** (deprecated in v2.x)
-
-  Similar to Raft-based ordering, Apache Kafka is a CFT implementation that uses
-  a "leader and follower" node configuration. Kafka utilizes a ZooKeeper
-  ensemble for management purposes. The Kafka based ordering service has been
-  available since Fabric v1.0, but many users may find the additional
-  administrative overhead of managing a Kafka cluster intimidating or undesirable.
+* **Kafka**
+  Kafka was deprecated in v2.x and is no longer supported in v3.x.

 * **Solo** (deprecated in v2.x)
@@ -231,56 +226,21 @@ centers and even locations. For example, by putting one node in three
 different data centers. That way, if a data center or entire location becomes
 unavailable, the nodes in the other data centers continue to operate.

-From the perspective of the service they provide to a network or a channel, Raft
-and the existing Kafka-based ordering service (which we'll talk about later) are
-similar. They're both CFT ordering services using the leader and follower
-design. If you are an application developer, smart contract developer, or peer
-administrator, you will not notice a functional difference between an ordering
-service based on Raft versus Kafka. However, there are a few major differences worth
-considering, especially if you intend to manage an ordering service.
-
-* Raft is easier to set up. Although Kafka has many admirers, even those
-admirers will (usually) admit that deploying a Kafka cluster and its ZooKeeper
-ensemble can be tricky, requiring a high level of expertise in Kafka
-infrastructure and settings. Additionally, there are many more components to
-manage with Kafka than with Raft, which means that there are more places where
-things can go wrong. Kafka also has its own versions, which must be coordinated
-with your orderers. **With Raft, everything is embedded into your ordering node**.
-
-* Kafka and Zookeeper are not designed to be run across large networks. While
-Kafka is CFT, it should be run in a tight group of hosts. This means that
-practically speaking you need to have one organization run the Kafka cluster.
-Given that, having ordering nodes run by different organizations when using Kafka
-(which Fabric supports) doesn't decentralize the nodes because ultimately
-the nodes all go to a Kafka cluster which is under the control of a
-single organization. With Raft, each organization can have its own ordering
-nodes, participating in the ordering service, which leads to a more decentralized
-system.
-
-* Kafka is supported natively, which means that users are required to get the requisite images and
-learn how to use Kafka and ZooKeeper on their own. Likewise, support for
-Kafka-related issues is handled through [Apache](https://kafka.apache.org/), the
-open-source developer of Kafka, not Hyperledger Fabric. The Fabric Raft implementation,
-on the other hand, has been developed and will be supported within the Fabric
+* The Fabric Raft implementation has been developed and will be supported within the Fabric
 developer community and its support apparatus.

-* Where Kafka uses a pool of servers (called "Kafka brokers") and the admin of
-the orderer organization specifies how many nodes they want to use on a
-particular channel, Raft allows the users to specify which ordering nodes will
+* Raft allows users to specify which ordering nodes will
 be deployed to which channel. In this way, peer organizations can make sure
-that, if they also own an orderer, this node will be made a part of a ordering
+that, if they also own an orderer, this node will be made a part of an ordering
 service of that channel, rather than trusting and depending on a central admin
-to manage the Kafka nodes.
+to manage the nodes.

 * Raft is the first step toward Fabric's development of a byzantine fault tolerant (BFT) ordering service. As we'll see, some decisions in the development of Raft were driven by this. If you are interested in BFT, learning how to use Raft should ease the transition.

-For all of these reasons, support for Kafka-based ordering service is being
-deprecated in Fabric v2.x.
-
-Note: Similar to Solo and Kafka, a Raft ordering service can lose transactions
+Note: Similar to Solo, a Raft ordering service can lose transactions
 after acknowledgement of receipt has been sent to a client. For example, if the leader crashes at approximately the same time as a follower provides acknowledgement of receipt. Therefore, application clients should listen on peers
@@ -292,10 +252,8 @@ collect a new set of endorsements upon such a timeout.
 ### Raft concepts

-While Raft offers many of the same features as Kafka --- albeit in a simpler and
-easier-to-use package --- it functions substantially different under the covers
-from Kafka and introduces a number of new concepts, or twists on existing
-concepts, to Fabric.
+Raft offers many features in a simple and easy-to-use package
+and introduces a number of new concepts, or twists on existing concepts, to Fabric.

 **Log entry**. The primary unit of work in a Raft ordering service is a "log entry", with the full sequence of such entries known as the "log". We consider the log consistent if a majority (a quorum, in other words) of members agree on the entries and their order, making the logs on the various orderers replicated.

 **Consenter set**. The ordering nodes actively participating in the consensus mechanism for a given channel and receiving replicated logs for the channel.

 **Finite-State Machine (FSM)**. Every ordering node in Raft has an FSM and collectively they're used to ensure that the sequence of logs in the various ordering nodes is deterministic (written in the same sequence).

 **Quorum**. Describes the minimum number of consenters that need to affirm a proposal so that transactions can be ordered. For every consenter set, this is a majority of nodes. In a cluster with five nodes, three must be available for there to be a quorum. If a quorum of nodes is unavailable for any reason, the ordering service cluster becomes unavailable for both read and write operations on the channel, and no new logs can be committed.
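The quorum described above is a simple majority of the consenter set. As an illustration (this is a standalone sketch, not Fabric code), the following Go snippet makes the arithmetic explicit: a five-node consenter set needs three available nodes and can therefore tolerate two failures.

```go
package main

import "fmt"

// quorum returns the minimum number of consenters that must be available
// for a consenter set of the given size: a simple majority.
func quorum(consenters int) int {
	return consenters/2 + 1
}

func main() {
	for _, n := range []int{1, 3, 5, 7} {
		q := quorum(n)
		// A set of n consenters keeps ordering as long as q nodes are up,
		// so it tolerates n-q failed nodes.
		fmt.Printf("consenters=%d quorum=%d tolerated failures=%d\n", n, q, n-q)
	}
}
```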
-**Leader**. This is not a new concept --- Kafka also uses leaders ---
-but it's critical to understand that at any given time, a channel's consenter set
-elects a single node to be the leader (we'll describe how this happens in Raft
-later). The leader is responsible for ingesting new log entries, replicating
-them to follower ordering nodes, and managing when an entry is considered
+**Leader**. This is not a new concept, but it's critical to understand that at any given time,
+a channel's consenter set elects a single node to be the leader (we'll describe how
+this happens in Raft later). The leader is responsible for ingesting new log entries,
+replicating them to follower ordering nodes, and managing when an entry is considered
 committed. This is not a special **type** of orderer. It is only a role that an orderer may have at certain times, and then not others, as circumstances determine.
@@ -384,30 +341,6 @@ therefore receive block `180` from `L` and then make a `Deliver` request for
 blocks `101` to `180`. Blocks `180` to `196` would then be replicated to `R1`
 through the normal Raft protocol.
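To make the catch-up fragment above concrete, here is a small illustrative Go sketch (again, not Fabric code) of the same arithmetic, assuming, as the fragment implies, that replica `R1` was at block 100, the latest snapshot block was 180, and the leader `L` was at block 196:

```go
package main

import "fmt"

// catchUpPlan sketches the catch-up flow described above: a lagging replica
// first receives the leader's most recent snapshot block, pulls the blocks it
// is missing up to that point with a Deliver request, and gets the remaining
// blocks through normal Raft replication.
func catchUpPlan(replicaHeight, snapshotBlock, leaderHeight uint64) {
	fmt.Printf("receive snapshot block %d from the leader\n", snapshotBlock)
	fmt.Printf("Deliver request for blocks %d to %d\n", replicaHeight+1, snapshotBlock)
	fmt.Printf("blocks %d to %d arrive via normal Raft replication\n", snapshotBlock, leaderHeight)
}

func main() {
	// Values implied by the example: R1 is at block 100, the latest snapshot
	// block is 180, and the leader L is at block 196.
	catchUpPlan(100, 180, 196)
}
```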
-### Kafka (deprecated in v2.x)
-
-The other crash fault tolerant ordering service supported by Fabric is an
-adaptation of a Kafka distributed streaming platform for use as a cluster of
-ordering nodes. You can read more about Kafka at the [Apache Kafka Web site](https://kafka.apache.org/intro),
-but at a high level, Kafka uses the same conceptual "leader and follower"
-configuration used by Raft, in which transactions (which Kafka calls "messages")
-are replicated from the leader node to the follower nodes. In the event the
-leader node goes down, one of the followers becomes the leader and ordering can
-continue, ensuring fault tolerance, just as with Raft.
-
-The management of the Kafka cluster, including the coordination of tasks,
-cluster membership, access control, and controller election, among others, is
-handled by a ZooKeeper ensemble and its related APIs.
-
-Kafka clusters and ZooKeeper ensembles are notoriously tricky to set up, so our
-documentation assumes a working knowledge of Kafka and ZooKeeper. If you decide
-to use Kafka without having this expertise, you should complete, *at a minimum*,
-the first six steps of the [Kafka Quickstart guide](https://kafka.apache.org/quickstart) before experimenting with the
-Kafka-based ordering service. You can also consult
-[this sample configuration file](https://github.com/hyperledger/fabric/blob/release-1.1/bddtests/dc-orderer-kafka.yml)
-for a brief explanation of the sensible defaults for Kafka and ZooKeeper.
-
-To learn how to bring up a Kafka-based ordering service, check out [our documentation on Kafka](../kafka.html).