Node resource
We already have the frontend configured in Puppet, but we still need a resource definition for the nodes.
$ sudo vim /etc/puppet/manifests/site.pp
node /^node\d+\.cluster\.domain$/ {
    class {'cluster_software':}
}
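It can be worth a quick syntax check before the agents pick this up. Assuming a reasonably recent Puppet (2.7 or later), the parser can validate the manifest directly:
$ sudo puppet parser validate /etc/puppet/manifests/site.pp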
We don't have a cluster_software class yet, but this class will let us control which packages are installed on all systems.
$ sudo mkdir -p /etc/puppet/modules/cluster_software/manifests/
$ sudo vim /etc/puppet/modules/cluster_software/manifests/init.pp
class cluster_software () {
    package { ['vim-enhanced', 'htop', 'nfs-utils', 'ntp']:
        ensure => installed,
    }
}
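To see the class apply without waiting for the next scheduled run, a one-off agent run on any node should pull the catalog and install the packages (assuming the agent is already enrolled with the frontend):
$ sudo puppet agent --test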
Now what happens if our repos change or need to be updated? Puppet will probably fail. Unless, of course, we tell it where to get the repo information and make the packages depend on it. Thankfully, Puppet has a resource type for repos. First, let us ensure we always configure the repos /before/ we try to install a package by requiring that class.
$ sudo vim /etc/puppet/modules/cluster_software/manifests/init.pp
class cluster_software () {
    include cluster_software::repos
    package { ['vim-enhanced', 'htop', 'nfs-utils', 'ntp']:
        ensure  => installed,
        require => Class['cluster_software::repos'],
    }
}
Now we can create a subclass for these repo files.
$ sudo vim /etc/puppet/modules/cluster_software/manifests/repos.pp
class cluster_software::repos () {
    yumrepo { 'EPEL':
        name     => 'EPEL',
        descr    => 'Extra Packages for Enterprise Linux 6 - $basearch',
        baseurl  => 'http://http.cluster.domain/epel/6/$basearch',
        enabled  => '1',
        gpgcheck => '1',
        gpgkey   => 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6',
    }
    yumrepo { 'PuppetProducts':
        name     => 'PuppetProducts',
        descr    => 'Puppet Labs Products El 6 - $basearch',
        baseurl  => 'http://yum.puppetlabs.com/el/6/products/$basearch',
        enabled  => '1',
        gpgcheck => '1',
        gpgkey   => 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs',
    }
    yumrepo { 'PuppetDeps':
        name     => 'PuppetDeps',
        descr    => 'Puppet Labs Dependencies El 6 - $basearch',
        baseurl  => 'http://yum.puppetlabs.com/el/6/dependencies/$basearch',
        enabled  => '1',
        gpgcheck => '1',
        gpgkey   => 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs',
    }
    yumrepo { 'SL':
        name     => 'SL',
        descr    => 'Scientific Linux $releasever - $basearch',
        baseurl  => 'http://http.cluster.domain/scientificlinux/$releasever/$basearch/os/',
        enabled  => '1',
        gpgcheck => '1',
        gpgkey   => 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl6 file:///etc/pki/rpm-gpg/RPM-GPG-KEY-cern',
    }
    yumrepo { 'SL-Security':
        name     => 'SLSecurity',
        descr    => 'Scientific Linux $releasever - $basearch - security updates',
        baseurl  => 'http://http.cluster.domain/scientificlinux/$releasever/$basearch/updates/security/',
        enabled  => '1',
        gpgcheck => '1',
        gpgkey   => 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl6 file:///etc/pki/rpm-gpg/RPM-GPG-KEY-cern',
    }
    yumrepo { 'SL-Fastbugs':
        name     => 'SLFastbugs',
        descr    => 'Scientific Linux $releasever - $basearch - fastbug updates',
        baseurl  => 'http://http.cluster.domain/scientificlinux/$releasever/$basearch/updates/fastbugs/',
        enabled  => '1',
        gpgcheck => '1',
        gpgkey   => 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl file:///etc/pki/rpm-gpg/RPM-GPG-KEY-sl6 file:///etc/pki/rpm-gpg/RPM-GPG-KEY-cern',
    }
}
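After an agent run, the repo definitions should show up under /etc/yum.repos.d/ on each node; a quick way to confirm them is:
$ sudo puppet agent --test
$ yum repolist enabled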
We have the software, but it doesn't do much good if the users can't get to it. Most larger installations will have LDAP, NIS, or 389 Directory Server instances that manage the user information. For clusters with a smaller user base, we can manage the users with Puppet. Now there are a LOT more options that can be passed, but this will get a base configuration working. You will still need to set a password, and how you do that is up to you: add it as a Puppet attribute, or do it manually. Whatever fits your needs.
$ sudo mkdir -p /etc/puppet/modules/cluster_users/manifests
$ sudo vim /etc/puppet/modules/cluster_users/manifests/init.pp
class cluster_users () {
    mount {'/home':
        ensure  => mounted,
        atboot  => true,
        fstype  => 'nfs',
        options => 'defaults',
        device  => '10.10.10.10:/home',
    }
    user {'stack':
        ensure     => present,
        managehome => true,
        require    => Mount['/home'],
    }
}
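The user type takes many more attributes if you want tighter control. A sketch with placeholder values (the uid, gid, shell, and password hash below are illustrative, not taken from this cluster):
    user {'stack':
        ensure     => present,
        uid        => '1100',                  # placeholder; pick from your own uid scheme
        gid        => '1100',                  # placeholder primary group
        shell      => '/bin/bash',
        managehome => true,
        password   => '$6$somesalt$somehash',  # pre-hashed, e.g. from grub-crypt --sha-512 on SL6
        require    => Mount['/home'],
    }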
Now that we have this working, we need to add class {'cluster_users':} to the node definitions in /etc/puppet/manifests/site.pp.
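With both classes in place, the node block ends up looking something like this:
node /^node\d+\.cluster\.domain$/ {
    class {'cluster_software':}
    class {'cluster_users':}
}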
TODO: For the moment, I am managing the users manually on the frontend because I obviously can't mount /home when I am exporting it. I then take the information from the frontend and put it in Puppet to keep it the same across the nodes. This should really be cleaned up and made to run a bit more cleanly.
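One possible cleanup, untested here: guard the mount with a conditional on the hostname fact so the frontend (which exports /home) skips it. This assumes the frontend's short hostname is literally 'frontend'; adjust to match yours.
class cluster_users () {
    # Skip the NFS mount on the machine that exports /home.
    # Assumption: the frontend's short hostname is 'frontend'.
    if $::hostname != 'frontend' {
        mount {'/home':
            ensure  => mounted,
            atboot  => true,
            fstype  => 'nfs',
            options => 'defaults',
            device  => '10.10.10.10:/home',
        }
    }
}
The require => Mount['/home'] on the user resource would need the same guard, since the mount will not exist in the frontend's catalog.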