doc,man: typos found by codespell
Signed-off-by: Dimitri Papadopoulos <[email protected]>
DimitriPapadopoulos committed Dec 15, 2021
1 parent 82a77ef commit 7677651
Showing 58 changed files with 94 additions and 94 deletions.
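The commit message credits codespell, which works from a dictionary of known misspellings rather than a full spell checker. A minimal sketch of that idea follows; the function name and the tiny dictionary are illustrative only (its entries are taken from hunks in this commit), while the real codespell dictionary contains tens of thousands of corrections:

```python
# Sketch of dictionary-based typo detection in the spirit of codespell.
# The TYPOS entries come from hunks in this commit; codespell's actual
# dictionary is far larger and also handles case and inline suggestions.
TYPOS = {
    "statment": "statement",
    "converstion": "conversion",
    "deamon": "daemon",
    "hierachy": "hierarchy",
    "enviroment": "environment",
}

PUNCT = ".,:;()`\"'"

def find_typos(text):
    """Return (misspelling, suggestion) pairs found in text."""
    hits = []
    for raw in text.split():
        word = raw.strip(PUNCT).lower()  # drop surrounding punctuation
        if word in TYPOS:
            hits.append((word, TYPOS[word]))
    return hits

print(find_typos("Before starting the converstion process"))
```

In practice, fixes like the ones below are typically produced by running codespell itself over the doc tree (for example `codespell doc/ man/`, with `-w` to write corrections in place, assuming codespell is installed).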
4 changes: 2 additions & 2 deletions doc/cephadm/adoption.rst
@@ -4,7 +4,7 @@ Converting an existing cluster to cephadm
=========================================

It is possible to convert some existing clusters so that they can be managed
-with ``cephadm``. This statment applies to some clusters that were deployed
+with ``cephadm``. This statement applies to some clusters that were deployed
with ``ceph-deploy``, ``ceph-ansible``, or ``DeepSea``.

This section of the documentation explains how to determine whether your
@@ -51,7 +51,7 @@ Preparation

cephadm ls

-Before starting the converstion process, ``cephadm ls`` shows all existing
+Before starting the conversion process, ``cephadm ls`` shows all existing
daemons to have a style of ``legacy``. As the adoption process progresses,
adopted daemons will appear with a style of ``cephadm:v1``.

4 changes: 2 additions & 2 deletions doc/cephadm/host-management.rst
@@ -82,7 +82,7 @@ All osds on the host will be scheduled to be removed. You can check osd removal

see :ref:`cephadm-osd-removal` for more details about osd removal

-You can check if there are no deamons left on the host with the following:
+You can check if there are no daemons left on the host with the following:

.. prompt:: bash #

@@ -202,7 +202,7 @@ Setting the initial CRUSH location of host
==========================================

Hosts can contain a ``location`` identifier which will instruct cephadm to
-create a new CRUSH host located in the specified hierachy.
+create a new CRUSH host located in the specified hierarchy.

.. code-block:: yaml
2 changes: 1 addition & 1 deletion doc/cephadm/operations.rst
@@ -524,7 +524,7 @@ Purging a cluster

.. danger:: THIS OPERATION WILL DESTROY ALL DATA STORED IN THIS CLUSTER

-In order to destory a cluster and delete all data stored in this cluster, pause
+In order to destroy a cluster and delete all data stored in this cluster, pause
cephadm to avoid deploying new daemons.

.. prompt:: bash #
2 changes: 1 addition & 1 deletion doc/cephadm/services/index.rst
@@ -435,7 +435,7 @@ Consider the following service specification:
count: 3
label: myfs
-This service specifcation instructs cephadm to deploy three daemons on hosts
+This service specification instructs cephadm to deploy three daemons on hosts
labeled ``myfs`` across the cluster.

If there are fewer than three daemons deployed on the candidate hosts, cephadm
4 changes: 2 additions & 2 deletions doc/cephadm/services/mon.rst
@@ -170,8 +170,8 @@ network ``10.1.2.0/24``, run the following commands:

ceph orch apply mon --placement="newhost1,newhost2,newhost3"

-Futher Reading
-==============
+Further Reading
+===============

* :ref:`rados-operations`
* :ref:`rados-troubleshooting-mon`
8 changes: 4 additions & 4 deletions doc/cephadm/services/osd.rst
@@ -768,8 +768,8 @@ layout, it is recommended to apply different OSD specs matching only one
set of hosts. Typically you will have a spec for multiple hosts with the
same layout.

-The sevice id as the unique key: In case a new OSD spec with an already
-applied service id is applied, the existing OSD spec will be superseeded.
+The service id as the unique key: In case a new OSD spec with an already
+applied service id is applied, the existing OSD spec will be superseded.
cephadm will now create new OSD daemons based on the new spec
definition. Existing OSD daemons will not be affected. See :ref:`cephadm-osd-declarative`.

@@ -912,8 +912,8 @@ activates all existing OSDs on a host.

This will scan all existing disks for OSDs and deploy corresponding daemons.

-Futher Reading
-==============
+Further Reading
+===============

* :ref:`ceph-volume`
* :ref:`rados-index`
2 changes: 1 addition & 1 deletion doc/cephadm/services/rgw.rst
@@ -156,7 +156,7 @@ High availability service for RGW
=================================

The *ingress* service allows you to create a high availability endpoint
-for RGW with a minumum set of configuration options. The orchestrator will
+for RGW with a minimum set of configuration options. The orchestrator will
deploy and manage a combination of haproxy and keepalived to provide load
balancing on a floating virtual IP.

4 changes: 2 additions & 2 deletions doc/cephadm/troubleshooting.rst
@@ -273,7 +273,7 @@ To call miscellaneous like ``ceph-objectstore-tool`` or
0: [v2:127.0.0.1:3300/0,v1:127.0.0.1:6789/0] mon.myhostname

This command sets up the environment in a way that is suitable
-for extended daemon maintenance and running the deamon interactively.
+for extended daemon maintenance and running the daemon interactively.

.. _cephadm-restore-quorum:

@@ -324,7 +324,7 @@ Get the container image::

ceph config get "mgr.hostname.smfvfd" container_image

-Create a file ``config-json.json`` which contains the information neccessary to deploy
+Create a file ``config-json.json`` which contains the information necessary to deploy
the daemon:

.. code-block:: json
2 changes: 1 addition & 1 deletion doc/cephfs/capabilities.rst
@@ -123,7 +123,7 @@ clients allowed, even some capabilities are not needed or wanted by the clients,
as pre-issuing capabilities could reduce latency in some cases.

If there is only one client, usually it will be the loner client for all the inodes.
-While in multiple clients case, the MDS will try to caculate a loner client out for
+While in multiple clients case, the MDS will try to calculate a loner client out for
each inode depending on the capabilities the clients (needed | wanted), but usually
it will fail. The loner client will always get all the capabilities.

6 changes: 3 additions & 3 deletions doc/cephfs/cephfs-mirroring.rst
@@ -115,7 +115,7 @@ To stop a mirroring directory snapshots use::
$ ceph fs snapshot mirror remove <fs_name> <path>

Only absolute directory paths are allowed. Also, paths are normalized by the mirroring
-module, therfore, `/a/b/../b` is equivalent to `/a/b`.
+module, therefore, `/a/b/../b` is equivalent to `/a/b`.

$ mkdir -p /d0/d1/d2
$ ceph fs snapshot mirror add cephfs /d0/d1/d2
@@ -124,7 +124,7 @@ module, therfore, `/a/b/../b` is equivalent to `/a/b`.
Error EEXIST: directory /d0/d1/d2 is already tracked

Once a directory is added for mirroring, its subdirectory or ancestor directories are
-disallowed to be added for mirorring::
+disallowed to be added for mirroring::

$ ceph fs snapshot mirror add cephfs /d0/d1
Error EINVAL: /d0/d1 is a ancestor of tracked path /d0/d1/d2
@@ -301,7 +301,7 @@ E.g., adding a regular file for synchronization would result in failed status::

This allows a user to add a non-existent directory for synchronization. The mirror daemon
would mark the directory as failed and retry (less frequently). When the directory comes
-to existence, the mirror daemons would unmark the failed state upon successfull snapshot
+to existence, the mirror daemons would unmark the failed state upon successful snapshot
synchronization.

When mirroring is disabled, the respective `fs mirror status` command for the file system
2 changes: 1 addition & 1 deletion doc/cephfs/disaster-recovery-experts.rst
@@ -187,7 +187,7 @@ It is **important** to ensure that all workers have completed the
scan_extents phase before any workers enter the scan_inodes phase.

After completing the metadata recovery, you may want to run cleanup
-operation to delete ancillary data geneated during recovery.
+operation to delete ancillary data generated during recovery.

::

6 changes: 3 additions & 3 deletions doc/cephfs/fs-volumes.rst
@@ -10,7 +10,7 @@ storage administrators among others can use the common CLI provided by the
ceph-mgr volumes module to manage the CephFS exports.

The ceph-mgr volumes module implements the following file system export
-abstactions:
+abstractions:

* FS volumes, an abstraction for CephFS file systems

@@ -359,13 +359,13 @@ To delete a partial clone use::
$ ceph fs subvolume rm <vol_name> <clone_name> [--group_name <group_name>] --force

.. note:: Cloning only synchronizes directories, regular files and symbolic links. Also, inode timestamps (access and
-modification times) are synchronized upto seconds granularity.
+modification times) are synchronized up to seconds granularity.

An `in-progress` or a `pending` clone operation can be canceled. To cancel a clone operation use the `clone cancel` command::

$ ceph fs clone cancel <vol_name> <clone_name> [--group_name <group_name>]

-On successful cancelation, the cloned subvolume is moved to `canceled` state::
+On successful cancellation, the cloned subvolume is moved to `canceled` state::

$ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
$ ceph fs clone cancel cephfs clone1
2 changes: 1 addition & 1 deletion doc/cephfs/health-messages.rst
@@ -64,7 +64,7 @@ performance issues::
MDS_SLOW_REQUEST 1 MDSs report slow requests
mds.fs-01(mds.0): 5 slow requests are blocked > 30 secs

-Where, for intance, ``MDS_SLOW_REQUEST`` is the unique code representing the
+Where, for instance, ``MDS_SLOW_REQUEST`` is the unique code representing the
condition where requests are taking long time to complete. And the following
description shows its severity and the MDS daemons which are serving these
slow requests.
4 changes: 2 additions & 2 deletions doc/cephfs/lazyio.rst
@@ -23,7 +23,7 @@ Using LazyIO
============

LazyIO includes two methods ``lazyio_propagate()`` and ``lazyio_synchronize()``.
-With LazyIO enabled, writes may not be visble to other clients until
+With LazyIO enabled, writes may not be visible to other clients until
``lazyio_propagate()`` is called. Reads may come from local cache (irrespective of
changes to the file by other clients) until ``lazyio_synchronize()`` is called.

@@ -59,7 +59,7 @@ particular client/file descriptor in a parallel application:
/* The barrier makes sure changes associated with all file descriptors
are propagated so that there is certainty that the backing file
-is upto date */
+is up to date */
application_specific_barrier();

char in_buf[40];
2 changes: 1 addition & 1 deletion doc/dev/cephadm/scalability-notes.rst
@@ -8,7 +8,7 @@

This document does NOT define a specific proposal or some future work.
Instead it merely lists a few thoughts that MIGHT be relevant for future
-cephadm enhacements.
+cephadm enhancements.

*******
Intro
6 changes: 3 additions & 3 deletions doc/dev/cephfs-mirroring.rst
@@ -161,7 +161,7 @@ To stop a mirroring directory snapshots use::
$ ceph fs snapshot mirror remove <fs_name> <path>

Only absolute directory paths are allowed. Also, paths are normalized by the mirroring
-module, therfore, `/a/b/../b` is equivalent to `/a/b`.
+module, therefore, `/a/b/../b` is equivalent to `/a/b`.

$ mkdir -p /d0/d1/d2
$ ceph fs snapshot mirror add cephfs /d0/d1/d2
@@ -170,7 +170,7 @@ module, therfore, `/a/b/../b` is equivalent to `/a/b`.
Error EEXIST: directory /d0/d1/d2 is already tracked

Once a directory is added for mirroring, its subdirectory or ancestor directories are
-disallowed to be added for mirorring::
+disallowed to be added for mirroring::

$ ceph fs snapshot mirror add cephfs /d0/d1
Error EINVAL: /d0/d1 is a ancestor of tracked path /d0/d1/d2
@@ -355,7 +355,7 @@ E.g., adding a regular file for synchronization would result in failed status::

This allows a user to add a non-existent directory for synchronization. The mirror daemon
would mark the directory as failed and retry (less frequently). When the directory comes
-to existence, the mirror daemons would unmark the failed state upon successfull snapshot
+to existence, the mirror daemons would unmark the failed state upon successful snapshot
synchronization.

When mirroring is disabled, the respective `fs mirror status` command for the file system
8 changes: 4 additions & 4 deletions doc/dev/continuous-integration.rst
@@ -92,7 +92,7 @@ Shaman
is a server offering RESTful API allowing the clients to query the
information of repos hosted by chacra nodes. Shaman is also known
for its `Web UI`_. But please note, shaman does not build the
-packages, it justs offers information of the builds.
+packages, it just offers information on the builds.

As the following shows, `chacra`_ manages multiple projects whose metadata
are stored in a database. These metadata are exposed via Shaman as a web
@@ -199,7 +199,7 @@ libraries in our dist tarball. They are
- pmdk

``make-dist`` is a script used by our CI pipeline to create dist tarball so the
-tarball can be used to build the Ceph packages in a clean room environmet. When
+tarball can be used to build the Ceph packages in a clean room environment. When
we need to upgrade these third party libraries, we should

- update the CMake script
@@ -231,8 +231,8 @@ ref
a unique id of a given version of a set packages. This id is used to reference
the set packages under the ``<project>/<branch>``. It is a good practice to
version the packaging recipes, like the ``debian`` directory for building deb
-packages and the ``spec`` for building rpm packages, and use ths sha1 of the
-packaging receipe for the ``ref``. But you could also the a random string for
+packages and the ``spec`` for building rpm packages, and use the sha1 of the
+packaging receipe for the ``ref``. But you could also use a random string for
``ref``, like the tag name of the built source tree.

distro
2 changes: 1 addition & 1 deletion doc/dev/crimson/crimson.rst
@@ -171,7 +171,7 @@ pg stats reported to mgr
------------------------

Crimson collects the per-pg, per-pool, and per-osd stats in a `MPGStats`
-messsage, and send it over to mgr, so that the mgr modules can query
+message, and send it over to mgr, so that the mgr modules can query
them using the `MgrModule.get()` method.

asock command
8 changes: 4 additions & 4 deletions doc/dev/crimson/poseidonstore.rst
@@ -254,7 +254,7 @@ Comparison
* Worst case

- At least three writes are required additionally on WAL, object metadata, and data blocks.
-- If the flush from WAL to the data parition occurs frequently, radix tree onode structure needs to be update
+- If the flush from WAL to the data partition occurs frequently, radix tree onode structure needs to be update
in many times. To minimize such overhead, we can make use of batch processing to minimize the update on the tree
(the data related to the object has a locality because it will have the same parent node, so updates can be minimized)

@@ -285,7 +285,7 @@ Detailed Design

.. code-block:: c
-stuct onode {
+struct onode {
extent_tree block_maps;
b+_tree omaps;
map xattrs;
@@ -380,7 +380,7 @@ Detailed Design

* Omap and xattr
In this design, omap and xattr data is tracked by b+tree in onode. The onode only has the root node of b+tree.
-The root node contains entires which indicate where the key onode exists.
+The root node contains entries which indicate where the key onode exists.
So, if we know the onode, omap can be found via omap b+tree.

* Fragmentation
@@ -437,7 +437,7 @@ Detailed Design
WAL
---
Each SP has a WAL.
-The datas written to the WAL are metadata updates, free space update and small data.
+The data written to the WAL are metadata updates, free space update and small data.
Note that only data smaller than the predefined threshold needs to be written to the WAL.
The larger data is written to the unallocated free space and its onode's extent_tree is updated accordingly
(also on-disk extent tree). We statically allocate WAL partition aside from data partition pre-configured.
4 changes: 2 additions & 2 deletions doc/dev/dev_cluster_deployement.rst
Original file line number Diff line number Diff line change
Expand Up @@ -51,7 +51,7 @@ Options

.. option:: -k

-Keep old configuration files instead of overwritting theses.
+Keep old configuration files instead of overwriting these.

.. option:: -K, --kstore

@@ -135,7 +135,7 @@ Environment variables

{OSD,MDS,MON,RGW}

-Theses environment variables will contains the number of instances of the desired ceph process you want to start.
+These environment variables will contains the number of instances of the desired ceph process you want to start.

Example: ::

8 changes: 4 additions & 4 deletions doc/dev/developer_guide/running-tests-locally.rst
@@ -137,12 +137,12 @@ Running Workunits Using vstart_enviroment.sh

Code can be tested by building Ceph locally from source, starting a vstart
cluster, and running any suite against it.
-Similar to S3-Tests, other workunits can be run against by configuring your enviroment.
+Similar to S3-Tests, other workunits can be run against by configuring your environment.

-Set up the enviroment
-^^^^^^^^^^^^^^^^^^^^^
+Set up the environment
+^^^^^^^^^^^^^^^^^^^^^^

-Configure your enviroment::
+Configure your environment::

$ . ./build/vstart_enviroment.sh

@@ -48,7 +48,7 @@ A job failure might be caused by one or more of the following reasons:

* environment setup (`testing on varied
systems <https://github.com/ceph/ceph/tree/master/qa/distros/supported>`_):
-testing compatibility with stable realeases for supported versions.
+testing compatibility with stable releases for supported versions.

* permutation of config values: for instance, `qa/suites/rados/thrash
<https://github.com/ceph/ceph/tree/master/qa/suites/rados/thrash>`_ ensures
2 changes: 1 addition & 1 deletion doc/dev/documenting.rst
@@ -5,7 +5,7 @@
User documentation
==================

-The documentation on docs.ceph.com is generated from the restructuredText
+The documentation on docs.ceph.com is generated from the reStructuredText
sources in ``/doc/`` in the Ceph git repository.

Please make sure that your changes are written in a way that is intended
2 changes: 1 addition & 1 deletion doc/dev/mon-on-disk-formats.rst
@@ -64,7 +64,7 @@ AuthMonitor::upgrade_format() called by `PaxosService::_active()`::
boil down
---------

-* if `format_version >= current_version` then format is uptodate, return.
+* if `format_version >= current_version` then format is up-to-date, return.
* if `features doesn't contain LUMINOUS` then `current_version = 1`
* else if `features doesn't contain MIMIC` then `current_version = 2`
* else `current_version = 3`
2 changes: 1 addition & 1 deletion doc/dev/msgr2.rst
@@ -578,7 +578,7 @@ Compression will not be possible when using secure mode, unless configured speci

Post-compression frame format
-----------------------------
-Depending on the negotiated connection mode from TAG_COMPRESSION_DONE, the connection is able to acccept/send compressed frames or process all frames as decompressed.
+Depending on the negotiated connection mode from TAG_COMPRESSION_DONE, the connection is able to accept/send compressed frames or process all frames as decompressed.

# msgr2.x-force mode
