From 17ea89db9f2e08a5766abf6a7cad032372323e62 Mon Sep 17 00:00:00 2001 From: Alex Dadgar Date: Fri, 10 Feb 2017 15:30:34 -0800 Subject: [PATCH] Add outage guide and move bootstrapping --- website/source/docs/cluster/automatic.html.md | 116 ----------- .../source/docs/cluster/bootstrapping.html.md | 24 --- website/source/docs/cluster/federation.md | 28 --- website/source/docs/cluster/manual.html.md | 65 ------ .../source/docs/cluster/requirements.html.md | 71 ------- .../operator-raft-list-peers.html.md.erb | 3 +- .../operator-raft-remove-peer.html.md.erb | 2 +- website/source/guides/index.html.markdown | 12 +- website/source/guides/outage.html.markdown | 189 ++++++++++++++++++ .../guides/some-guides/group.html.markdown | 15 -- .../guides/some-guides/one.html.markdown | 28 --- .../guides/some-guides/three.html.markdown | 28 --- .../guides/some-guides/two.html.markdown | 28 --- website/source/layouts/guides.erb | 27 +-- 14 files changed, 214 insertions(+), 422 deletions(-) delete mode 100644 website/source/docs/cluster/automatic.html.md delete mode 100644 website/source/docs/cluster/bootstrapping.html.md delete mode 100644 website/source/docs/cluster/federation.md delete mode 100644 website/source/docs/cluster/manual.html.md delete mode 100644 website/source/docs/cluster/requirements.html.md create mode 100644 website/source/guides/outage.html.markdown delete mode 100644 website/source/guides/some-guides/group.html.markdown delete mode 100644 website/source/guides/some-guides/one.html.markdown delete mode 100644 website/source/guides/some-guides/three.html.markdown delete mode 100644 website/source/guides/some-guides/two.html.markdown diff --git a/website/source/docs/cluster/automatic.html.md b/website/source/docs/cluster/automatic.html.md deleted file mode 100644 index 442e35337db..00000000000 --- a/website/source/docs/cluster/automatic.html.md +++ /dev/null @@ -1,116 +0,0 @@ ---- -layout: "docs" -page_title: "Automatically Bootstrapping a Nomad Cluster" -sidebar_current: "docs-cluster-automatic" -description: |- - Learn how to automatically bootstrap a Nomad cluster using Consul. By having - a Consul agent installed on each host, Nomad can automatically discover other - clients and servers to bootstrap the cluster without operator involvement. ---- - -# Automatic Bootstrapping - -To automatically bootstrap a Nomad cluster, we must leverage another HashiCorp -open source tool, [Consul](https://www.consul.io/). Bootstrapping Nomad is -easiest against an existing Consul cluster. The Nomad servers and clients -will become informed of each other's existence when the Consul agent is -installed and configured on each host. As an added benefit, integrating Consul -with Nomad provides service and health check registration for applications which -later run under Nomad. - -Consul models infrastructures as datacenters and multiple Consul datacenters can -be connected over the WAN so that clients can discover nodes in other -datacenters. Since Nomad regions can encapsulate many datacenters, we recommend -running a Consul cluster in every Nomad datacenter and connecting them over the -WAN. Please refer to the Consul guide for both -[bootstrapping](https://www.consul.io/docs/guides/bootstrapping.html) a single -datacenter and [connecting multiple Consul clusters over the -WAN](https://www.consul.io/docs/guides/datacenters.html). - -If a Consul agent is installed on the host prior to Nomad starting, the Nomad -agent will register with Consul and discover other nodes. 
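As a minimal sketch of the assumed starting point, each host runs a local Consul agent that has already joined an existing Consul cluster; the data directory and join address below are hypothetical:

```shell
# Hypothetical example: a local Consul agent on each host, joined to an
# existing Consul cluster, which Nomad can then use for discovery.
$ consul agent -data-dir=/opt/consul -retry-join=10.0.1.10
```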
- -For servers, we must inform the cluster how many servers we expect to have. This -is required to form the initial quorum, since Nomad is unaware of how many peers -to expect. For example, to form a region with three Nomad servers, you would use -the following Nomad configuration file: - -```hcl -# /etc/nomad.d/server.hcl - -server { - enabled = true - bootstrap_expect = 3 -} -``` - -This configuration would be saved to disk and then run: - -```shell -$ nomad agent -config=/etc/nomad.d/server.hcl -``` - -A similar configuration is available for Nomad clients: - -```hcl -# /etc/nomad.d/client.hcl - -datacenter = "dc1" - -client { - enabled = true -} -``` - -The agent is started in a similar manner: - -```shell -$ nomad agent -config=/etc/nomad.d/client.hcl -``` - -As you can see, the above configurations include no IP or DNS addresses between -the clients and servers. This is because Nomad detected the existence of Consul -and utilized service discovery to form the cluster. - -## Internals - -~> This section discusses the internals of the Consul and Nomad integration at a -very high level. Reading is only recommended for those curious to the -implementation. - -As discussed in the previous section, Nomad merges multiple configuration files -together, so the `-config` may be specified more than once: - -```shell -$ nomad agent -config=base.hcl -config=server.hcl -``` - -In addition to merging configuration on the command line, Nomad also maintains -its own internal configurations (called "default configs") which include sane -base defaults. One of those default configurations includes a "consul" block, -which specifies sane defaults for connecting to and integrating with Consul. In -essence, this configuration file resembles the following: - -```hcl -# You do not need to add this to your configuration file. This is an example -# that is part of Nomad's internal default configuration for Consul integration. -consul { - # The address to the Consul agent. - address = "127.0.0.1:8500" - - # The service name to register the server and client with Consul. - server_service_name = "nomad" - client_service_name = "nomad-client" - - # Enables automatically registering the services. - auto_advertise = true - - # Enabling the server and client to bootstrap using Consul. - server_auto_join = true - client_auto_join = true -} -``` - -Please refer to the [Consul -documentation](/docs/agent/configuration/consul.html) for the complete set of -configuration options. diff --git a/website/source/docs/cluster/bootstrapping.html.md b/website/source/docs/cluster/bootstrapping.html.md deleted file mode 100644 index d94aab68f9b..00000000000 --- a/website/source/docs/cluster/bootstrapping.html.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -layout: "docs" -page_title: "Bootstrapping a Nomad Cluster" -sidebar_current: "docs-cluster-bootstrap" -description: |- - Learn how to bootstrap a Nomad cluster. ---- - -# Bootstrapping a Nomad Cluster - -Nomad models infrastructure into regions and datacenters. Servers reside at the -regional layer and manage all state and scheduling decisions for that region. -Regions contain multiple datacenters, and clients are registered to a single -datacenter (and thus a region that contains that datacenter). For more details on -the architecture of Nomad and how it models infrastructure see the [architecture -page](/docs/internals/architecture.html). - -There are two strategies for bootstrapping a Nomad cluster: - -1. Automatic bootstrapping -1. 
Manual bootstrapping - -Please refer to the specific documentation links above or in the sidebar for -more detailed information about each strategy. diff --git a/website/source/docs/cluster/federation.md b/website/source/docs/cluster/federation.md deleted file mode 100644 index 06435f77ab9..00000000000 --- a/website/source/docs/cluster/federation.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -layout: "docs" -page_title: "Federating a Nomad Cluster" -sidebar_current: "docs-cluster-federation" -description: |- - Learn how to join Nomad servers across multiple regions so users can submit - jobs to any server in any region using global federation. ---- - -# Federating a Cluster - -Because Nomad operates at a regional level, federation is part of Nomad core. -Federation enables users to submit jobs or interact with the HTTP API targeting -any region, from any server, even if that server resides in a different region. - -Federating multiple Nomad clusters is as simple as joining servers. From any -server in one region, issue a join command to a server in a remote region: - -```shell -$ nomad server-join 1.2.3.4:4648 -``` - -Note that only one join command is required per region. Servers across regions -discover other servers in the cluster via the gossip protocol and hence it's -enough to join just one known server. - -If bootstrapped via Consul and the Consul clusters in the Nomad regions are -federated, then federation occurs automatically. diff --git a/website/source/docs/cluster/manual.html.md b/website/source/docs/cluster/manual.html.md deleted file mode 100644 index 0b32c3ee1f2..00000000000 --- a/website/source/docs/cluster/manual.html.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -layout: "docs" -page_title: "Manually Bootstrapping a Nomad Cluster" -sidebar_current: "docs-cluster-manual" -description: |- - Learn how to manually bootstrap a Nomad cluster using the server-join - command. This section also discusses Nomad federation across multiple - datacenters and regions. ---- - -# Manual Bootstrapping - -Manually bootstrapping a Nomad cluster does not rely on additional tooling, but -does require operator participation in the cluster formation process. When -bootstrapping, Nomad servers and clients must be started and informed with the -address of at least one Nomad server. - -As you can tell, this creates a chicken-and-egg problem where one server must -first be fully bootstrapped and configured before the remaining servers and -clients can join the cluster. This requirement can add additional provisioning -time as well as ordered dependencies during provisioning. - -First, we bootstrap a single Nomad server and capture its IP address. After we -have that nodes IP address, we place this address in the configuration. - -For Nomad servers, this configuration may look something like this: - -```hcl -server { - enabled = true - bootstrap_expect = 3 - - # This is the IP address of the first server we provisioned - retry_join = [":4648"] -} -``` - -Alternatively, the address can be supplied after the servers have all been -started by running the [`server-join` command](/docs/commands/server-join.html) -on the servers individual to cluster the servers. All servers can join just one -other server, and then rely on the gossip protocol to discover the rest. - -``` -$ nomad server-join -``` - -For Nomad clients, the configuration may look something like: - -```hcl -client { - enabled = true - servers = [":4647"] -} -``` - -At this time, there is no equivalent of the server-join command for -Nomad clients. 
- -The port corresponds to the RPC port. If no port is specified with the IP -address, the default RCP port of `4647` is assumed. - -As servers are added or removed from the cluster, this information is pushed to -the client. This means only one server must be specified because, after initial -contact, the full set of servers in the client's region are shared with the -client. diff --git a/website/source/docs/cluster/requirements.html.md b/website/source/docs/cluster/requirements.html.md deleted file mode 100644 index 8f22143dfba..00000000000 --- a/website/source/docs/cluster/requirements.html.md +++ /dev/null @@ -1,71 +0,0 @@ ---- -layout: "docs" -page_title: "Nomad Client and Server Requirements" -sidebar_current: "docs-cluster-requirements" -description: |- - Learn about Nomad client and server requirements such as memory and CPU - recommendations, network topologies, and more. ---- - -# Cluster Requirements - -## Resources (RAM, CPU, etc.) - -**Nomad servers** may need to be run on large machine instances. We suggest -having 8+ cores, 32 GB+ of memory, 80 GB+ of disk and significant network -bandwidth. The core count and network recommendations are to ensure high -throughput as Nomad heavily relies on network communication and as the Servers -are managing all the nodes in the region and performing scheduling. The memory -and disk requirements are due to the fact that Nomad stores all state in memory -and will store two snapshots of this data onto disk. Thus disk should be at -least 2 times the memory available to the server when deploying a high load -cluster. - -**Nomad clients** support reserving resources on the node that should not be -used by Nomad. This should be used to target a specific resource utilization per -node and to reserve resources for applications running outside of Nomad's -supervision such as Consul and the operating system itself. - -Please see the [reservation configuration](/docs/agent/configuration/client.html#reserved) for -more detail. - -## Network Topology - -**Nomad servers** are expected to have sub 10 millisecond network latencies -between each other to ensure liveness and high throughput scheduling. Nomad -servers can be spread across multiple datacenters if they have low latency -connections between them to achieve high availability. - -For example, on AWS every region comprises of multiple zones which have very low -latency links between them, so every zone can be modeled as a Nomad datacenter -and every Zone can have a single Nomad server which could be connected to form a -quorum and a region. - -Nomad servers uses Raft for state replication and Raft being highly consistent -needs a quorum of servers to function, therefore we recommend running an odd -number of Nomad servers in a region. Usually running 3-5 servers in a region is -recommended. The cluster can withstand a failure of one server in a cluster of -three servers and two failures in a cluster of five servers. Adding more servers -to the quorum adds more time to replicate state and hence throughput decreases -so we don't recommend having more than seven servers in a region. - -**Nomad clients** do not have the same latency requirements as servers since they -are not participating in Raft. Thus clients can have 100+ millisecond latency to -their servers. This allows having a set of Nomad servers that service clients -that can be spread geographically over a continent or even the world in the case -of having a single "global" region and many datacenter. 
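As an illustrative sketch of the topology described above, each zone's server could share a single region while setting a zone-specific datacenter; the region and datacenter names here are hypothetical:

```hcl
# Hypothetical example: one Nomad server per AWS availability zone, all
# joined into a single Nomad region.
region     = "us-east"
datacenter = "us-east-1a" # the zone this particular server runs in

server {
  enabled          = true
  bootstrap_expect = 3
}
```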
- -## Ports Used - -Nomad requires 3 different ports to work properly on servers and 2 on clients, -some on TCP, UDP, or both protocols. Below we document the requirements for each -port. - -* HTTP API (Default 4646). This is used by clients and servers to serve the HTTP - API. TCP only. - -* RPC (Default 4647). This is used by servers and clients to communicate amongst - each other. TCP only. - -* Serf WAN (Default 4648). This is used by servers to gossip over the WAN to - other servers. TCP and UDP. diff --git a/website/source/docs/commands/operator-raft-list-peers.html.md.erb b/website/source/docs/commands/operator-raft-list-peers.html.md.erb index f83ccc984f5..bd95f54b847 100644 --- a/website/source/docs/commands/operator-raft-list-peers.html.md.erb +++ b/website/source/docs/commands/operator-raft-list-peers.html.md.erb @@ -11,7 +11,7 @@ description: > The Raft list-peers command is used to display the current Raft peer configuration. -See the [Outage Recovery](TODO alexdadgar) guide for some examples of how +See the [Outage Recovery](/guides/outage.html) guide for some examples of how this command is used. For an API to perform these operations programatically, please see the documentation for the [Operator](/docs/http/operator.html) endpoint. @@ -43,7 +43,6 @@ Node ID Address State Voter nomad-server01.global 10.10.11.5:4647 10.10.11.5:4647 follower true nomad-server02.global 10.10.11.6:4647 10.10.11.6:4647 leader true nomad-server03.global 10.10.11.7:4647 10.10.11.7:4647 follower true - ``` * `Node` is the node name of the server, as known to Nomad, or "(unknown)" if diff --git a/website/source/docs/commands/operator-raft-remove-peer.html.md.erb b/website/source/docs/commands/operator-raft-remove-peer.html.md.erb index f1b30e71972..fae613d2bfe 100644 --- a/website/source/docs/commands/operator-raft-remove-peer.html.md.erb +++ b/website/source/docs/commands/operator-raft-remove-peer.html.md.erb @@ -19,7 +19,7 @@ to clean up by simply running [`nomad server-force-leave`](/docs/commands/server-force-leave.html) instead of this command. -See the [Outage Recovery](TODO alexdadgar) guide for some examples of how +See the [Outage Recovery](/guides/outage.html) guide for some examples of how this command is used. For an API to perform these operations programatically, please see the documentation for the [Operator](/docs/http/operator.html) endpoint. diff --git a/website/source/guides/index.html.markdown b/website/source/guides/index.html.markdown index 0c65518896b..f5e9c6187c1 100644 --- a/website/source/guides/index.html.markdown +++ b/website/source/guides/index.html.markdown @@ -1,11 +1,15 @@ --- layout: "guides" page_title: "Guides" -sidebar_current: "what" +sidebar_current: "guides-home" description: |- - Welcome to the Nomad guides! + Welcome to the Nomad guides! The section provides various guides for common + Nomad workflows and actions. --- -# This is the home page for Guides +# Nomad Guides -Here is some content! +Welcome to the Nomad guides! If you are just getting started with Nomad, please +start with the [Nomad introduction](/intro/index.html) instead and then continue +on to the guides. The guides provide examples for common Nomad workflows and +actions for both users and operators of Nomad. 
diff --git a/website/source/guides/outage.html.markdown b/website/source/guides/outage.html.markdown new file mode 100644 index 00000000000..d1f18876b9e --- /dev/null +++ b/website/source/guides/outage.html.markdown @@ -0,0 +1,189 @@ +--- +layout: "guides" +page_title: "Outage Recovery" +sidebar_current: "guides-outage-recovery" +description: |- + Don't panic! This is a critical first step. Depending on your deployment + configuration, it may take only a single server failure for cluster + unavailability. Recovery requires an operator to intervene, but recovery is + straightforward. +--- + +# Outage Recovery + +Don't panic! This is a critical first step. + +Depending on your +[deployment configuration](/docs/internals/consensus.html#deployment_table), it +may take only a single server failure for cluster unavailability. Recovery +requires an operator to intervene, but the process is straightforward. + +~> This guide is for recovery from a Nomad outage due to a majority +of server nodes in a datacenter being lost. If you are just looking to +add or remove servers, see the [bootstrapping +guide](/guides/cluster/bootstrapping.html). + +## Failure of a Single Server Cluster + +If you had only a single server and it has failed, simply restart it. A +single server configuration requires the +[`-bootstrap-expect=1`](/docs/agent/configuration/server.html#bootstrap_expect) +flag. If the server cannot be recovered, you need to bring up a new +server. See the [bootstrapping guide](/guides/cluster/bootstrapping.html) +for more detail. + +In the case of an unrecoverable server failure in a single server cluster, data +loss is inevitable since data was not replicated to any other servers. This is +why a single server deploy is **never** recommended. + +## Failure of a Server in a Multi-Server Cluster + +If you think the failed server is recoverable, the easiest option is to bring +it back online and have it rejoin the cluster with the same IP address, returning +the cluster to a fully healthy state. Similarly, even if you need to rebuild a +new Nomad server to replace the failed node, you may wish to do that immediately. +Keep in mind that the rebuilt server needs to have the same IP address as the failed +server. Again, once this server is online and has rejoined, the cluster will return +to a fully healthy state. + +Both of these strategies involve a potentially lengthy time to reboot or rebuild +a failed server. If this is impractical or if building a new server with the same +IP isn't an option, you need to remove the failed server. Usually, you can issue +a [`nomad server-force-leave`](/docs/commands/server-force-leave.html) command +to remove the failed server if it's still a member of the cluster. + +If [`nomad server-force-leave`](/docs/commands/server-force-leave.html) isn't +able to remove the server, you have two methods available to remove it, +depending on your version of Nomad: + +* In Nomad 0.5.5 and later, you can use the [`nomad operator raft + remove-peer`](/docs/commands/operator-raft-remove-peer.html) command to remove + the stale peer server on the fly with no downtime. + +* In versions of Nomad prior to 0.5.5, you can manually remove the stale peer +server using the `raft/peers.json` recovery file on all remaining servers. See +the [section below](#peers.json) for details on this procedure. This process +requires Nomad downtime to complete. 
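As a sketch of the first option above, the stale server can be removed by its Raft address; the failed peer's address (10.10.11.8:4647) is hypothetical, and the `-peer-address` flag is the one documented for the remove-peer command:

```
$ nomad operator raft remove-peer -peer-address=10.10.11.8:4647
```

The remaining peers can then be verified with the list-peers output shown below.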
+ +In Nomad 0.5.5 and later, you can use the [`nomad operator raft +list-peers`](/docs/commands/operator-raft-list-peers.html) command to inspect +the Raft configuration: + +``` +$ nomad operator raft list-peers +Node ID Address State Voter +nomad-server01.global 10.10.11.5:4647 10.10.11.5:4647 follower true +nomad-server02.global 10.10.11.6:4647 10.10.11.6:4647 leader true +nomad-server03.global 10.10.11.7:4647 10.10.11.7:4647 follower true +``` + +## Failure of Multiple Servers in a Multi-Server Cluster + +In the event that multiple servers are lost, causing a loss of quorum and a +complete outage, partial recovery is possible using data on the remaining +servers in the cluster. There may be data loss in this situation because multiple +servers were lost, so information about what's committed could be incomplete. +The recovery process implicitly commits all outstanding Raft log entries, so +it's also possible to commit data that was uncommitted before the failure. + +See the [section below](#peers.json) for details of the recovery procedure. You +simply include just the remaining servers in the `raft/peers.json` recovery file. +The cluster should be able to elect a leader once the remaining servers are all +restarted with an identical `raft/peers.json` configuration. + +Any new servers you introduce later can be fresh with totally clean data directories +and joined using Nomad's `server-join` command. + +In extreme cases, it should be possible to recover with just a single remaining +server by starting that single server with itself as the only peer in the +`raft/peers.json` recovery file. + +Prior to Nomad 0.5.5 it wasn't always possible to recover from certain +types of outages with `raft/peers.json` because this was ingested before any Raft +log entries were played back. In Nomad 0.5.5 and later, the `raft/peers.json` +recovery file is final, and a snapshot is taken after it is ingested, so you are +guaranteed to start with your recovered configuration. This does implicitly commit +all Raft log entries, so should only be used to recover from an outage, but it +should allow recovery from any situation where there's some cluster data available. + + +## Manual Recovery Using peers.json + +To begin, stop all remaining servers. You can attempt a graceful leave, +but it will not work in most cases. Do not worry if the leave exits with an +error. The cluster is in an unhealthy state, so this is expected. + +In Nomad 0.5.5 and later, the `peers.json` file is no longer present +by default and is only used when performing recovery. This file will be deleted +after Nomad starts and ingests this file. Nomad 0.5.5 also uses a new, automatically- +created `raft/peers.info` file to avoid ingesting the `raft/peers.json` file on the +first start after upgrading. Be sure to leave `raft/peers.info` in place for proper +operation. + +Using `raft/peers.json` for recovery can cause uncommitted Raft log entries to be +implicitly committed, so this should only be used after an outage where no +other option is available to recover a lost server. Make sure you don't have +any automated processes that will put the peers file in place on a +periodic basis. + +The next step is to go to the +[`-data-dir`](/docs/agent/configuration/index.html#data_dir) of each Nomad +server. Inside that directory, there will be a `raft/` sub-directory. We need to +create a `raft/peers.json` file. 
It should look something like: + +```javascript +[ +"10.0.1.8:4647", +"10.0.1.6:4647", +"10.0.1.7:4647" +] +``` + +Simply create entries for all remaining servers. You must confirm +that servers you do not include here have indeed failed and will not later +rejoin the cluster. Ensure that this file is the same across all remaining +server nodes. + +At this point, you can restart all the remaining servers. In Nomad 0.5.5 and +later you will see them ingest recovery file: + +```text +... +2016/08/16 14:39:20 [INFO] nomad: found peers.json file, recovering Raft configuration... +2016/08/16 14:39:20 [INFO] nomad.fsm: snapshot created in 12.484µs +2016/08/16 14:39:20 [INFO] snapshot: Creating new snapshot at /tmp/peers/raft/snapshots/2-5-1471383560779.tmp +2016/08/16 14:39:20 [INFO] nomad: deleted peers.json file after successful recovery +2016/08/16 14:39:20 [INFO] raft: Restored from snapshot 2-5-1471383560779 +2016/08/16 14:39:20 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:10.212.15.121:4647 Address:10.212.15.121:4647}] +... +``` + +If any servers managed to perform a graceful leave, you may need to have them +rejoin the cluster using the [`server-join`](/docs/commands/server-join.html) command: + +```text +$ nomad server-join +Successfully joined cluster by contacting 1 nodes. +``` + +It should be noted that any existing member can be used to rejoin the cluster +as the gossip protocol will take care of discovering the server nodes. + +At this point, the cluster should be in an operable state again. One of the +nodes should claim leadership and emit a log like: + +```text +[INFO] nomad: cluster leadership acquired +``` + +In Nomad 0.5.5 and later, you can use the [`nomad operator raft +list-peers`](/docs/commands/operator-raft-list-peers.html) command to inspect +the Raft configuration: + +``` +$ nomad operator raft list-peers +Node ID Address State Voter +nomad-server01.global 10.10.11.5:4647 10.10.11.5:4647 follower true +nomad-server02.global 10.10.11.6:4647 10.10.11.6:4647 leader true +nomad-server03.global 10.10.11.7:4647 10.10.11.7:4647 follower true +``` diff --git a/website/source/guides/some-guides/group.html.markdown b/website/source/guides/some-guides/group.html.markdown deleted file mode 100644 index fa29eb3b5e2..00000000000 --- a/website/source/guides/some-guides/group.html.markdown +++ /dev/null @@ -1,15 +0,0 @@ ---- -layout: "guides" -page_title: "Guide Index" -sidebar_current: "some-guides" -description: |- - Index for a group in a group. ---- - -# The index for a group of guides - -It is an undisputed fact that distributed systems are hard; building -one is error-prone and time-consuming. As a result, few organizations -build a scheduler due to the inherent challenges. However, -most organizations must develop a means of deploying applications -and typically this evolves into an ad hoc deployment platform. diff --git a/website/source/guides/some-guides/one.html.markdown b/website/source/guides/some-guides/one.html.markdown deleted file mode 100644 index 53645a8b4ae..00000000000 --- a/website/source/guides/some-guides/one.html.markdown +++ /dev/null @@ -1,28 +0,0 @@ ---- -layout: "guides" -page_title: "Guide One" -sidebar_current: "some-guides" -description: |- - First in a group. ---- - -# Guide to being rad - -It is an undisputed fact that distributed systems are hard; building -one is error-prone and time-consuming. As a result, few organizations -build a scheduler due to the inherent challenges. 
However, -most organizations must develop a means of deploying applications -and typically this evolves into an ad hoc deployment platform. - -These deployment platforms are typically special cased to the needs -of the organization at the time of development, reduce future agility, -and require time and resources to build and maintain. - -Nomad provides a high-level job specification to easily deploy applications. -It has been designed to work at large scale, with multi-datacenter and -multi-region support built in. Nomad also has extensible drivers giving it -flexibility in the workloads it supports, including Docker. - -Nomad provides organizations of any size a solution for deployment -that is simple, robust, and scalable. It reduces the time and effort spent -re-inventing the wheel and users can focus instead on their business applications. diff --git a/website/source/guides/some-guides/three.html.markdown b/website/source/guides/some-guides/three.html.markdown deleted file mode 100644 index 213d163a782..00000000000 --- a/website/source/guides/some-guides/three.html.markdown +++ /dev/null @@ -1,28 +0,0 @@ ---- -layout: "guides" -page_title: "Guide One" -sidebar_current: "some-guides" -description: |- - First in a group. ---- - -# Guide to being rad three - -It is an undisputed fact that distributed systems are hard; building -one is error-prone and time-consuming. As a result, few organizations -build a scheduler due to the inherent challenges. However, -most organizations must develop a means of deploying applications -and typically this evolves into an ad hoc deployment platform. - -These deployment platforms are typically special cased to the needs -of the organization at the time of development, reduce future agility, -and require time and resources to build and maintain. - -Nomad provides a high-level job specification to easily deploy applications. -It has been designed to work at large scale, with multi-datacenter and -multi-region support built in. Nomad also has extensible drivers giving it -flexibility in the workloads it supports, including Docker. - -Nomad provides organizations of any size a solution for deployment -that is simple, robust, and scalable. It reduces the time and effort spent -re-inventing the wheel and users can focus instead on their business applications. diff --git a/website/source/guides/some-guides/two.html.markdown b/website/source/guides/some-guides/two.html.markdown deleted file mode 100644 index a5350f6b068..00000000000 --- a/website/source/guides/some-guides/two.html.markdown +++ /dev/null @@ -1,28 +0,0 @@ ---- -layout: "guides" -page_title: "Guide One" -sidebar_current: "some-guides" -description: |- - First in a group. ---- - -# Guide to being rad twoooooo - -It is an undisputed fact that distributed systems are hard; building -one is error-prone and time-consuming. As a result, few organizations -build a scheduler due to the inherent challenges. However, -most organizations must develop a means of deploying applications -and typically this evolves into an ad hoc deployment platform. - -These deployment platforms are typically special cased to the needs -of the organization at the time of development, reduce future agility, -and require time and resources to build and maintain. - -Nomad provides a high-level job specification to easily deploy applications. -It has been designed to work at large scale, with multi-datacenter and -multi-region support built in. 
Nomad also has extensible drivers giving it
-flexibility in the workloads it supports, including Docker.
-
-Nomad provides organizations of any size a solution for deployment
-that is simple, robust, and scalable. It reduces the time and effort spent
-re-inventing the wheel and users can focus instead on their business applications.
diff --git a/website/source/layouts/guides.erb b/website/source/layouts/guides.erb
index bc2ef4a045a..e18758d1053 100644
--- a/website/source/layouts/guides.erb
+++ b/website/source/layouts/guides.erb
@@ -2,24 +2,27 @@
 <% content_for :sidebar do %>
 <% end %>