This repository was archived by the owner on Jan 20, 2018. It is now read-only.

Releases: spjmurray/puppet-ceph

Version 4.0.0

13 Sep 09:14

Major new release for Ceph Luminous LTS. The most significant (and breaking) API changes relate to the handling of keys: Ceph keys are no longer keyed on file path but on user name, the path now being optional. Capabilities are now passed around as a hash rather than as individual keys per service, which makes the underlying code cleaner and more extensible.
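As a purely hypothetical illustration of the new style (the user name, capabilities and placeholder secret below are invented for this example; consult the module's README for the actual schema), a key definition in hiera might look something like:

```yaml
# Hypothetical hiera example of the 4.x key style: keyed on user name,
# with capabilities expressed as a hash. The secret is a placeholder.
ceph::keys:
  'client.glance':
    key: 'AQBLuL1ZAAAAABAAplaceholdersecretdonotuse00=='
    caps:
      mon: 'allow r'
      osd: 'allow rwx pool=images'
```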

Upgrading from Ceph Jewel

This is a fairly simple operation. Refer to the official documentation if any of this is unclear.

Begin by ensuring the required flags are set:

ceph osd set sortbitwise
ceph osd set require_jewel_osds

It is recommended that monitors be running 10.2.8 or later, as those releases report a warning condition if these flags are not set.
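The flags can be verified before proceeding; a minimal check, assuming the ceph CLI is installed and the cluster is reachable:

```shell
# List the OSD map flags; both sortbitwise and require_jewel_osds
# should appear in the output before continuing with the upgrade
ceph osd dump | grep ^flags
```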

With your existing puppet-ceph module, update ceph::repo_version to luminous and let it propagate across your cluster; this will non-destructively update the repositories in preparation for the upgrade step.

Disable puppet across all ceph nodes.

Manually upgrade the monitor nodes:

apt-get -y install ceph
systemctl restart ceph-mon.target

Upgrade the ceph puppet module to 4.x.x. You will need to update your ceph key configuration and also create a client.bootstrap-mgr key which lives in /var/lib/ceph/bootstrap-mgr/ceph.keyring. Consult the README for a configuration example.
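The bootstrap-mgr key can be generated on a monitor and written to the expected path; a sketch, assuming the usual bootstrap capability profile (check the README and your own security policy before copying this):

```shell
# Create (or fetch) the client.bootstrap-mgr key and install it where
# the module expects it. The 'allow profile bootstrap-mgr' capability
# follows the standard bootstrap key convention.
mkdir -p /var/lib/ceph/bootstrap-mgr
ceph auth get-or-create client.bootstrap-mgr mon 'allow profile bootstrap-mgr' \
  -o /var/lib/ceph/bootstrap-mgr/ceph.keyring
```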

Run puppet on the monitors; this will enable and activate the ceph manager daemons.

Upgrade the OSDs: first set the noout flag, then upgrade and restart the daemons:

ceph osd set noout
apt-get -y install ceph
systemctl restart ceph-osd.target

Once complete, you can clear the monitor warnings with the following:

ceph osd require-osd-release luminous
ceph osd unset noout

Finally, upgrade your rados gateways, metadata servers and, in the case of OpenStack integration, any clients:

apt-get -y install ceph
systemctl restart ceph-radosgw.target

Puppet can now be re-enabled across all systems. It's recommended that you perform a --noop run first to ensure nothing unexpected is going to change.
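Assuming the standard puppet agent, the dry run and re-enablement might look like:

```shell
# Dry run: report what would change without applying anything
puppet agent --test --noop
# Once happy with the report, re-enable the agent and apply for real
puppet agent --enable
puppet agent --test
```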

The final task is to upgrade those several hundred OSDs to BlueStore and enjoy the performance benefits!

Version 3.1.2

07 Jul 11:56

Fixes a critical regression in the 3.x.x series where support for absolute device paths was disabled; this is of primary concern to users of NVMe devices.

Version 3.1.1

26 Jun 13:53

Hot fix for devices behind an LSI 3xxx SAS expander; type pattern checking was accidentally broken.

Version 3.1.0

26 Jun 11:22

Minor release which modifies how OSDs can be defined. Backwards compatible with releases from v2.0.0 onwards, it allows the definition of a defaults OSD whose journal and params are applied to all other OSD resources defined. In essence a cosmetic change to keep your YAML definitions less verbose.

Version 3.0.1

26 Jun 10:05

Key features are a hotfix to the typing system so that valueless OSD parameters work again, and a tightening of constraints around the RGW ID parameter.

Version 3.0.0

26 Jun 10:03

This major new release drops support for Puppet 3. Why? Because static typing makes code far easier to test and validate. For legacy deployments I'd recommend upgrading Puppet; however, version 2.x.x of this module should work for the foreseeable future.

Version 2.0.2

18 Nov 09:07

Adds full rspec-puppet support and fixes an issue where keys did not have their ownership explicitly set.

Version 2.0.1

17 Nov 09:11

A couple of bug fixes for enclosure slot device mapping and empty journal parameters in the OSD type.

Version 2.0.0

15 Nov 14:23

This is a major new release and is not backwards compatible with v1.x.x releases. This brings improvements across the board. Please read below before upgrading.

The version 2.x.x series will be the last to support Puppet 3

OSD Type

The API has changed quite significantly. The OSD name is now the OSD path only, not a composite with the journal device; the journal device is optionally specified via the journal parameter. Other parameters are specified as a params hash, the keys of which map directly to the options of ceph-disk with the leading double hyphen stripped. A value of undef translates to an option without an argument. The net gain is that we can now support dmcrypt, BlueStore and any other future extension without code modifications. Your manifests/hiera should look like the following:

ceph_osd { '2:0:0:0':
  journal => '12:0:0:0',
  params  => {
    'fs-type' => 'xfs',
    'dmcrypt' => undef,
  },
}

Or equivalently in hiera:

ceph::osds:
  '2:0:0:0':
    journal: '12:0:0:0'
    params:
      bluestore: ~

Absolute paths to devices are now supported, with the caveat that in production they should only be used for /dev/nvme* class devices. SCSI enumeration methods remain the preferred way of addressing HDD/SSD devices.
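For instance, an NVMe OSD declared with absolute paths might look like the following (the device names are illustrative, not a recommendation for your hardware):

```puppet
# Absolute device paths: acceptable for NVMe-class devices only
ceph_osd { '/dev/nvme0n1':
  journal => '/dev/nvme1n1',
  params  => {
    'fs-type' => 'xfs',
  },
}
```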

Meta-data Server & Ceph FS

Support is now included for an mds service. Simply set ceph::mds: true to provision a node into the cluster. You will need to have previously defined a bootstrap-mds key on your monitors and mds nodes.
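In hiera this is simply:

```yaml
# Provision this node as a metadata server; requires a bootstrap-mds
# key to already exist on the monitors and mds nodes
ceph::mds: true
```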

Rados Gateway

Rados gateway nodes are now provisioned in a similar way to metadata servers. You will need to define a bootstrap-rgw key on the monitors and gateways. As a side effect, rgw_id parameters must be in the form rgw.${name} due to how ceph auth get-or-create works during initial provisioning. However, you no longer need to manually manage key paths in ceph.conf. The upgrade path should be similar to the following:

  • Shutdown old rgw instance, disable service if using systemd
  • Delete old /var/lib/puppet/radosgw/${cluster}-radosgw.${hostname}
  • Revoke old key
  • Update puppet to remove the keyring from ceph.conf and update the client name, then finally update rgw_id to rgw.${hostname}
  • Run puppet
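On a gateway the steps above might translate into something like the following sketch; the unit name, keyring filename and old client name convention are assumptions, so adjust them for your deployment:

```shell
# 1. Stop and disable the old rgw instance (systemd target name assumed)
systemctl stop ceph-radosgw.target
systemctl disable ceph-radosgw.target
# 2. Delete the old keyring ('ceph' assumed as the cluster name)
rm -f "/var/lib/puppet/radosgw/ceph-radosgw.$(hostname)"
# 3. Revoke the old key (the client name here is an assumption)
ceph auth del "client.radosgw.$(hostname)"
# 4. After updating the puppet configuration as described above, apply it
puppet agent --test
```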

Version 1.5.3

04 Oct 11:54

Adds support for SAS topologies which register disk array slots as DISK00 rather than Slot 01 in sysfs.