doc: fix typos
Signed-off-by: Kefu Chai <[email protected]>
tchaikov committed Sep 21, 2018
1 parent 98e5354 commit 5ee1fd2
Showing 33 changed files with 51 additions and 51 deletions.
6 changes: 3 additions & 3 deletions README.FreeBSD
@@ -2,7 +2,7 @@
Last updated: 2017-04-08

The FreeBSD build will build most of the tools in Ceph.
-Note that the (kernel) RBD dependant items will not work
+Note that the (kernel) RBD dependent items will not work

I started looking into Ceph, because the HAST solution with CARP and
ggate did not really do what I was looking for. But I'm aiming for
@@ -70,7 +70,7 @@ Build Prerequisites
11-RELEASE will also work. And Clang is at 3.8.0.
It uses the CLANG toolset that is available, 3.7 is no longer tested,
but was working when that was with 11-CURRENT.
-Clang 3.4 (on 10.2-STABLE) does not have all required capabilites to
+Clang 3.4 (on 10.2-STABLE) does not have all required capabilities to
compile everything

The following setup will get things running for FreeBSD:
@@ -158,5 +158,5 @@ Task to do:
with all the packages FreeBSD already has in place. Lots of minute
details to figure out

-- Design a vitual disk implementation that can be used with behyve and
+- Design a virtual disk implementation that can be used with behyve and
attached to an RBD image.
2 changes: 1 addition & 1 deletion doc/architecture.rst
@@ -1436,7 +1436,7 @@ Ceph Clients include a number of service interfaces. These include:

- **Filesystem**: The :term:`Ceph Filesystem` (CephFS) service provides
a POSIX compliant filesystem usable with ``mount`` or as
-a filesytem in user space (FUSE).
+a filesystem in user space (FUSE).

Ceph can run additional instances of OSDs, MDSs, and monitors for scalability
and high availability. The following diagram depicts the high-level
2 changes: 1 addition & 1 deletion doc/cephfs/cache-size-limits.rst
@@ -19,4 +19,4 @@ Be aware that the cache limit is not a hard limit. Potential bugs in the CephFS
The memory tracking used is currently imprecise by a constant factor. This
will be addressed in http://tracker.ceph.com/issues/22599. MDS deployments
with large `mds_cache_memory_limit` (64GB+) should underallocate RAM to
-accomodate.
+accommodate.
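
For context on the limit discussed in this hunk, a minimal sketch of raising it is shown below; the 64 GiB figure and the ``ceph config set`` form are illustrative assumptions, not part of this commit. ::

    # hypothetical example: set the MDS cache target to 64 GiB and leave
    # headroom in physical RAM for the imprecise accounting noted above
    ceph config set mds mds_cache_memory_limit 68719476736
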
2 changes: 1 addition & 1 deletion doc/cephfs/client-auth.rst
@@ -56,7 +56,7 @@ the shell.

See `User Management - Add a User to a Keyring`_. for additional details on user management

-To restrict a client to the specfied sub-directory only, we mention the specified
+To restrict a client to the specified sub-directory only, we mention the specified
directory while mounting using the following syntax. ::

./ceph-fuse -n client.*client_name* *mount_path* -r *directory_to_be_mounted*
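
As a concrete instance of the mount syntax shown above (the client name and paths are hypothetical, not taken from this commit): ::

    # mount only /projects/alpha of the file system for client.alice
    ./ceph-fuse -n client.alice /mnt/mycephfs -r /projects/alpha
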
2 changes: 1 addition & 1 deletion doc/cephfs/mds-config-ref.rst
@@ -279,7 +279,7 @@

``mds bal fragment interval``

-:Description: The delay (in seconds) between a fragment being elegible for split
+:Description: The delay (in seconds) between a fragment being eligible for split
or merge and executing the fragmentation change.
:Type: 32-bit Integer
:Default: ``5``
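
For illustration only, the option documented in this hunk could be tuned in ceph.conf as sketched below; the 10-second value is an arbitrary example, not a recommendation from this commit. ::

    [mds]
        # wait 10 seconds (default 5) before acting on an eligible fragment
        mds bal fragment interval = 10
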
2 changes: 1 addition & 1 deletion doc/cephfs/multimds.rst
@@ -123,7 +123,7 @@ to. A default value of ``-1`` indicates the directory is not pinned.

A directory's export pin is inherited from its closest parent with a set export
pin. In this way, setting the export pin on a directory affects all of its
-children. However, the parents pin can be overriden by setting the child
+children. However, the parents pin can be overridden by setting the child
directory's export pin. For example:

::
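
The example that follows in the file is truncated here; a hedged sketch of the pinning behaviour described above, using hypothetical directory names, would be: ::

    mkdir -p a/b
    setfattr -n ceph.dir.pin -v 1 a      # pin "a" and its children to rank 1
    setfattr -n ceph.dir.pin -v 0 a/b    # override the inherited pin for "a/b"
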
2 changes: 1 addition & 1 deletion doc/cephfs/upgrading.rst
@@ -7,7 +7,7 @@ assertions or other faults due to incompatible messages or other functional
differences. For this reason, it's necessary during any cluster upgrade to
reduce the number of active MDS for a file system to one first so that two
active MDS do not communicate with different versions. Further, it's also
-necessary to take standbys offline as any new CompatSet flags will propogate
+necessary to take standbys offline as any new CompatSet flags will propagate
via the MDSMap to all MDS and cause older MDS to suicide.

The proper sequence for upgrading the MDS cluster is:
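
(The sequence list that follows in the file is truncated here.) A hedged sketch of the "reduce the number of active MDS to one" step described above; the file system name ``cephfs`` is an assumption. ::

    # shrink the MDS cluster to one active daemon before upgrading
    ceph fs set cephfs max_mds 1
    ceph status   # wait until only rank 0 remains active
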
2 changes: 1 addition & 1 deletion doc/dev/blkin.rst
@@ -109,7 +109,7 @@ You may want to check that ceph is up.::

./ceph status

-Now put something in usin rados, check that it made it, get it back, and remove it.::
+Now put something in using rados, check that it made it, get it back, and remove it.::

./ceph osd pool create test-blkin 8
./rados put test-object-1 ./vstart.sh --pool=test-blkin
2 changes: 1 addition & 1 deletion doc/dev/cephfs-snapshots.rst
@@ -78,7 +78,7 @@ Generating a SnapContext
------------------------
A RADOS `SnapContext` consists of a snapshot sequence ID (`snapid`) and all
the snapshot IDs that an object is already part of. To generate that list, we
-combine `snapids` associated with the SnapRealm and all vaild `snapids` in
+combine `snapids` associated with the SnapRealm and all valid `snapids` in
`past_parent_snaps`. Stale `snapids` are filtered out by SnapClient's cached
effective snapshots.

2 changes: 1 addition & 1 deletion doc/dev/config.rst
@@ -122,7 +122,7 @@ Default values

There is a default value for every config option. In some cases, there may
also be a *daemon default* that only applies to code that declares itself
-as a daemon (in thise case, the regular default only applies to non-daemons).
+as a daemon (in this case, the regular default only applies to non-daemons).

Safety
------
2 changes: 1 addition & 1 deletion doc/dev/mds_internals/data-structures.rst
@@ -36,7 +36,7 @@ quite large. Please be careful if you want to add new fields to them.

*OpenFileTable*
Open file table tracks open files and their ancestor directories. Recovering
-MDS can easily get open files' pathes, significantly reducing the time of
+MDS can easily get open files' paths, significantly reducing the time of
loading inodes for open files. Each entry in the table corresponds to an inode,
it records linkage information (parent inode and dentry name) of the inode. MDS
can constructs the inode's path by recursively lookup parent inode's linkage.
8 changes: 4 additions & 4 deletions doc/dev/osd_internals/erasure_coding/proposals.rst
@@ -77,7 +77,7 @@ object keys. Perhaps some modeling here can help resolve this
issue. The data of the temporary object wants to be located as close
to the data of the base object as possible. This may be best performed
by adding a new ObjectStore creation primitive that takes the base
-object as an addtional parameter that is a hint to the allocator.
+object as an additional parameter that is a hint to the allocator.

Sam: I think that the short lived thing may be a red herring. We'll
be updating the donor and primary objects atomically, so it seems like
@@ -224,7 +224,7 @@ code necessarily has designated parity shards which see every write
might be desirable to rotate the shards based on object hash). Even
if you chose to designate a shard as witnessing all writes, the pg
might be degraded with that particular shard missing. This is a bit
-tricky, currently reads and writes implicitely return the most recent
+tricky, currently reads and writes implicitly return the most recent
version of the object written. On reads, we'd have to read K shards
to answer that question. We can get around that by adding a "don't
tell me the current version" flag. Writes are more problematic: we
@@ -254,7 +254,7 @@ user version assert on ec for now (I think? Only user is rgw bucket
indices iirc, and those will always be on replicated because they use
omap).

-We can avoid (1) by maintaining the missing set explicitely. It's
+We can avoid (1) by maintaining the missing set explicitly. It's
already possible for there to be a missing object without a
corresponding log entry (Consider the case where the most recent write
is to an object which has not been updated in weeks. If that write
@@ -355,7 +355,7 @@ though. It's a bit silly since all "shards" see all writes, but it
would still let us implement and partially test the augmented backfill
code as well as the extra pg log entry fields -- this depends on the
explicit pg log entry branch having already merged. It's not entirely
-clear to me that this one is worth doing seperately. It's enough code
+clear to me that this one is worth doing separately. It's enough code
that I'd really prefer to get it done independently, but it's also a
fair amount of scaffolding that will be later discarded.

2 changes: 1 addition & 1 deletion doc/dev/osd_internals/last_epoch_started.rst
@@ -26,7 +26,7 @@ Thus, the minimum last_update across all infos with
info.last_epoch_started >= MAX(history.last_epoch_started) must be an
upper bound on writes reported as committed to the client.

-We update info.last_epoch_started with the intial activation message,
+We update info.last_epoch_started with the initial activation message,
but we only update history.last_epoch_started after the new
info.last_epoch_started is persisted (possibly along with the first
write). This ensures that we do not require an osd with the most
2 changes: 1 addition & 1 deletion doc/dev/osd_internals/log_based_pg.rst
@@ -27,7 +27,7 @@ writes on overlapping regions), we might as well serialize writes on
the whole PG since it lets us represent the current state of the PG
using two numbers: the epoch of the map on the primary in which the
most recent write started (this is a bit stranger than it might seem
-since map distribution itself is asyncronous -- see Peering and the
+since map distribution itself is asynchronous -- see Peering and the
concept of interval changes) and an increasing per-pg version number
-- this is referred to in the code with type eversion_t and stored as
pg_info_t::last_update. Furthermore, we maintain a log of "recent"
2 changes: 1 addition & 1 deletion doc/dev/osd_internals/osd_throttles.rst
@@ -12,7 +12,7 @@ included in FileStore as FileStore::wbthrottle. The intention is to
bound the amount of outstanding IO we need to do to flush the journal.
At the same time, we don't want to necessarily do it inline in case we
might be able to combine several IOs on the same object close together
-in time. Thus, in FileStore::_write, we queue the fd for asyncronous
+in time. Thus, in FileStore::_write, we queue the fd for asynchronous
flushing and block in FileStore::_do_op if we have exceeded any hard
limits until the background flusher catches up.

4 changes: 2 additions & 2 deletions doc/dev/rados-client-protocol.rst
@@ -77,9 +77,9 @@ A backoff request has four properties:
#. hobject_t end

There are two types of backoff: a *PG* backoff will plug all requests
-targetting an entire PG at the client, as described by a range of the
+targeting an entire PG at the client, as described by a range of the
hash/hobject_t space [begin,end), while an *object* backoff will plug
-all requests targetting a single object (begin == end).
+all requests targeting a single object (begin == end).

When the client receives a *block* backoff message, it is now
responsible for *not* sending any requests for hobject_ts described by
4 changes: 2 additions & 2 deletions doc/dev/radosgw/s3_compliance.rst
@@ -166,7 +166,7 @@ S3 Documentation reference : http://docs.aws.amazon.com/AmazonS3/latest/API/REST
+--------+------------------------+------------+------------------------------------------------------------------------------------------------------------+-------------+
| GET | Bucket requestPayment | No | | |
+--------+------------------------+------------+------------------------------------------------------------------------------------------------------------+-------------+
-| GET | Bucket versionning | No | | |
+| GET | Bucket versioning | No | | |
+--------+------------------------+------------+------------------------------------------------------------------------------------------------------------+-------------+
| GET | Bucket website | No | | |
+--------+------------------------+------------+------------------------------------------------------------------------------------------------------------+-------------+
@@ -209,7 +209,7 @@ S3 Documentation reference : http://docs.aws.amazon.com/AmazonS3/latest/API/REST
+--------+------------------------+------------+------------------------------------------------------------------------------------------------------------+-------------+
| PUT | Bucket requestPayment | No | | |
+--------+------------------------+------------+------------------------------------------------------------------------------------------------------------+-------------+
-| PUT | Bucket versionning | No | | |
+| PUT | Bucket versioning | No | | |
+--------+------------------------+------------+------------------------------------------------------------------------------------------------------------+-------------+
| PUT | Bucket website | No | | |
+--------+------------------------+------------+------------------------------------------------------------------------------------------------------------+-------------+
2 changes: 1 addition & 1 deletion doc/glossary.rst
@@ -103,7 +103,7 @@ reflect either technical terms or legacy ways of referring to Ceph systems.
``fsid`` term is used interchangeably with ``uuid``

OSD uuid
-Just like the OSD fsid, this is the OSD unique identifer and is used
+Just like the OSD fsid, this is the OSD unique identifier and is used
interchangeably with ``fsid``

bluestore
2 changes: 1 addition & 1 deletion doc/mgr/dashboard.rst
@@ -385,7 +385,7 @@ User accounts are also associated with a set of roles that define which
dashboard functionality can be accessed by the user.

The Dashboard functionality/modules are grouped within a *security scope*.
-Security scopes are predefined and static. The current avaliable security
+Security scopes are predefined and static. The current available security
scopes are:

- **hosts**: includes all features related to the ``Hosts`` menu
2 changes: 1 addition & 1 deletion doc/mgr/influx.rst
@@ -62,7 +62,7 @@ Additional optional configuration settings are:
Debugging
---------

-By default, a few debugging statments as well as error statements have been set to print in the log files. Users can add more if necessary.
+By default, a few debugging statements as well as error statements have been set to print in the log files. Users can add more if necessary.
To make use of the debugging option in the module:

- Add this to the ceph.conf file.::
2 changes: 1 addition & 1 deletion doc/mgr/orchestrator_modules.rst
@@ -17,7 +17,7 @@ provides the ability to discover devices and create Ceph services. This
includes external projects such as ceph-ansible, DeepSea, and Rook.

An *orchestrator module* is a ceph-mgr module (:ref:`mgr-module-dev`)
-which implements common managment operations using a particular
+which implements common management operations using a particular
orchestrator.

Orchestrator modules subclass the ``Orchestrator`` class: this class is
2 changes: 1 addition & 1 deletion doc/rados/configuration/mon-config-ref.rst
@@ -1197,7 +1197,7 @@ Miscellaneous

:Description: Largest number of PGs per "involved" OSD to let split create.
When we increase the ``pg_num`` of a pool, the placement groups
-will be splitted on all OSDs serving that pool. We want to avoid
+will be split on all OSDs serving that pool. We want to avoid
extreme multipliers on PG splits.
:Type: Integer
:Default: 300
2 changes: 1 addition & 1 deletion doc/rados/configuration/pool-pg-config-ref.rst
@@ -6,7 +6,7 @@

When you create pools and set the number of placement groups for the pool, Ceph
uses default values when you don't specifically override the defaults. **We
-recommend** overridding some of the defaults. Specifically, we recommend setting
+recommend** overriding some of the defaults. Specifically, we recommend setting
a pool's replica size and overriding the default number of placement groups. You
can specifically set these values when running `pool`_ commands. You can also
override the defaults by adding new ones in the ``[global]`` section of your
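
For instance, the kind of override discussed in this hunk might look like the following in ceph.conf; the values are illustrative only, not recommendations from this commit. ::

    [global]
        osd pool default size = 3        # example replica count
        osd pool default min size = 2
        osd pool default pg num = 128    # example placement-group count
        osd pool default pgp num = 128
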
2 changes: 1 addition & 1 deletion doc/rados/operations/erasure-code-jerasure.rst
@@ -63,7 +63,7 @@ Where:
``packetsize={bytes}``

:Description: The encoding will be done on packets of *bytes* size at
-a time. Chosing the right packet size is difficult. The
+a time. Choosing the right packet size is difficult. The
*jerasure* documentation contains extensive information
on this topic.

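
A sketch of choosing a packet size when creating a jerasure profile; the profile name, k/m values and technique are hypothetical examples, not taken from this commit. ::

    # 2048-byte packets; k/m/technique chosen only for illustration
    ceph osd erasure-code-profile set myjerasure \
        plugin=jerasure k=4 m=2 technique=reed_sol_van packetsize=2048
    ceph osd erasure-code-profile get myjerasure
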
4 changes: 2 additions & 2 deletions doc/rados/operations/erasure-code-lrc.rst
@@ -159,7 +159,7 @@ Low level plugin configuration

The sum of **k** and **m** must be a multiple of the **l** parameter.
The low level configuration parameters do not impose such a
-restriction and it may be more convienient to use it for specific
+restriction and it may be more convenient to use it for specific
purposes. It is for instance possible to define two groups, one with 4
chunks and another with 3 chunks. It is also possible to recursively
define locality sets, for instance datacenters and racks into
@@ -280,7 +280,7 @@ The steps found in the layers description::
step 3 ____cDDD

are applied in order. For instance, if a 4K object is encoded, it will
-first go thru *step 1* and be divided in four 1K chunks (the four
+first go through *step 1* and be divided in four 1K chunks (the four
uppercase D). They are stored in the chunks 2, 3, 6 and 7, in
order. From these, two coding chunks are calculated (the two lowercase
c). The coding chunks are stored in the chunks 1 and 5, respectively.
2 changes: 1 addition & 1 deletion doc/rados/operations/monitoring-osd-pg.rst
@@ -413,7 +413,7 @@ Stale
While Ceph uses heartbeats to ensure that hosts and daemons are running, the
``ceph-osd`` daemons may also get into a ``stuck`` state where they are not
reporting statistics in a timely manner (e.g., a temporary network fault). By
-default, OSD daemons report their placement group, up thru, boot and failure
+default, OSD daemons report their placement group, up through, boot and failure
statistics every half second (i.e., ``0.5``), which is more frequent than the
heartbeat thresholds. If the **Primary OSD** of a placement group's acting set
fails to report to the monitor or if other OSDs have reported the primary OSD
2 changes: 1 addition & 1 deletion doc/radosgw/elastic-sync-module.rst
@@ -146,7 +146,7 @@ than string.
POST /{bucket}?mdsearch
x-amz-meta-search: <key [; type]> [, ...]

-Multiple metadata fields must be comma seperated, a type can be forced for a
+Multiple metadata fields must be comma separated, a type can be forced for a
field with a `;`. The currently allowed types are string(default), integer and
date

6 changes: 3 additions & 3 deletions doc/releases/cuttlefish.rst
@@ -354,7 +354,7 @@ Please see `Upgrading from Bobtail to Cuttlefish`_ for details.

* The sysvinit script now uses the ceph.conf file on the remote host
when starting remote daemons via the '-a' option. Note that if '-a'
-is used in conjuction with '-c path', the path must also be present
+is used in conjunction with '-c path', the path must also be present
on the remote host (it is not copied to a temporary file, as it was
previously).

@@ -472,7 +472,7 @@ Notable changes from v0.56 "Bobtail"
* mds: many fixes (Yan Zheng)
* mds: misc bug fixes with clustered MDSs and failure recovery
* mds: misc bug fixes with readdir
-* mds: new encoding for all data types (to allow forward/backward compatbility) (Greg Farnum)
+* mds: new encoding for all data types (to allow forward/backward compatibility) (Greg Farnum)
* mds: store and update backpointers/traces on directory, file objects (Sam Lang)
* mon: 'osd crush add|link|unlink|add-bucket ...' commands
* mon: ability to tune leveldb
@@ -665,7 +665,7 @@ Notable Changes
* radosgw: fix object copy onto self (Yehuda Sadeh)
* radosgw: ACL grants in headers (Caleb Miles)
* radosgw: ability to listen to fastcgi via a port (Guilhem Lettron)
-* mds: new encoding for all data types (to allow forward/backward compatbility) (Greg Farnum)
+* mds: new encoding for all data types (to allow forward/backward compatibility) (Greg Farnum)
* mds: fast failover between MDSs (enforce unique mds names)
* crush: ability to create, remove rules via CLI
* many many cleanups (Danny Al-Gaaf)
2 changes: 1 addition & 1 deletion doc/start/quick-ceph-deploy.rst
@@ -201,7 +201,7 @@ A Ceph Storage Cluster requires at least one Ceph Monitor and Ceph
Manager to run. For high availability, Ceph Storage Clusters typically
run multiple Ceph Monitors so that the failure of a single Ceph
Monitor will not bring down the Ceph Storage Cluster. Ceph uses the
-Paxos algorithm, which requires a majority of monitors (i.e., greather
+Paxos algorithm, which requires a majority of monitors (i.e., greater
than *N/2* where *N* is the number of monitors) to form a quorum.
Odd numbers of monitors tend to be better, although this is not required.

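
To make the quorum arithmetic above concrete, a simple worked example (not part of the commit): ::

    majority = floor(N / 2) + 1
    N = 3 monitors -> majority = 2 (tolerates 1 failure)
    N = 5 monitors -> majority = 3 (tolerates 2 failures)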