The Definitive Slaughter Guide

URL: http://www.steve.org.uk/Software/slaughter/guide/


Steve Kemp

This document describes the Slaughter system-administration & configuration tool.

The source for this document is stored on github, and corrections/fixups are most welcome.

Chapter 1. Introduction

This document describes Slaughter, the system-administration, configuration management, and automation tool.

Slaughter is available from its homepage: http://www.steve.org.uk/Software/slaughter/

Slaughter is a tool which is used to automate the configuration, maintenance, and management of a large number of servers.

Slaughter is implemented in perl, a language which has traditionally been well suited to system-administration tasks; thanks to CPAN a wide range of utilities and libraries is available to you.

Although Slaughter is a modern tool, written from scratch, it was significantly inspired by CFEngine. (The supplied primitives are very similar, but the client-server model was replaced by the more flexible system of transports.)

Slaughter is very small, both in terms of the number of lines of code and the concepts needed to understand and use it. (These are important considerations when you're evaluating a new configuration-management tool, as you might need to experiment with several systems before picking the most suitable.)

However, despite being simple to use, slaughter is flexible enough to allow you to carry out a wide range of maintenance tasks upon a large number of systems.

Slaughter is a client-pull application, which means that each machine with slaughter installed upon it is expected to schedule itself. There is no single server in charge of scheduling, control, or mediation, which makes deployment very straightforward. (It is true that a central "thing" is required, so that clients can pull their policies/files/modules from it; however this need not be a traditional server: it could just be a remote github repository, or an HTTP server.)

In terms of requirements the dependencies of slaughter are very minimal; you merely require a working Perl environment, and a small number of very commonly available modules.

When the slaughter command is invoked several things will happen in quick succession:

  • The parsing of the global configuration file, if present. (See Appendix A)

  • The parsing of the command line arguments, if any. (See Appendix B)

  • The discovery of information about the current host. (See Appendix D)

  • The downloading of the policy file(s) from the central server.

  • The creation of an executable script, using the policies obtained in the previous step.

  • The execution of the generated script.

  • Cleanup of temporary downloads, and the generated script.

The end result is that policies stored on a single central host are executed upon the local machine.

It is assumed you'll schedule slaughter via cron, to ensure it runs regularly.
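For example, a system-wide cron entry might look like the following sketch. (The transport, prefix, and schedule here are purely illustrative; substitute your own server's details.)

```
# /etc/cron.d/slaughter: run the slaughter client once an hour.
# --delay=60 makes each client sleep a random amount of time (up to
# the value given) before starting, spreading load on the server.
0 * * * * root slaughter --transport=http --prefix=http://host.example.com/slaughter/ --delay=60
```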

With slaughter you fetch policies from a central location, and these are executed on each system you manage. The policies allow you to carry out configuration changes, etc.

The following example installs and configures molly-guard upon managed systems. molly-guard is a simple package which prompts you to enter the current hostname before allowing the commands "shutdown" and "reboot" to complete; it can save you from rebooting the wrong server.

Example 1-1. Configuring molly-guard.

#
# Ensure we don't reboot the wrong server(s).
#

#
# Install the package.
#
if ( !PackageInstalled( Package => "molly-guard" ) )
{
    InstallPackage( Package => "molly-guard" );
}

#
# Enable the package.
#
if ( -e "/etc/molly-guard/rc" )
{
    AppendIfMissing( File => "/etc/molly-guard/rc",
                     Line => "ALWAYS_QUERY_HOSTNAME=true" );
}

The remainder of this documentation will explain how this works, and how to configure Slaughter on your systems. This example was merely designed to express concisely how you might control systems with Slaughter.

Chapter 2. Installing slaughter

The preferred mechanism for installing slaughter is via the Debian GNU/Linux packages provided by the author. If you're not running a Debian GNU/Linux host these will of course not work.

Installing upon a non-Debian system should be as simple as fetching the latest release from the homepage, unpacking the archive, and running "make install".

Assuming you're running the Squeeze release of Debian GNU/Linux you can install by running the following commands:

Example 2-1. Installing via apt-get.

echo "deb http://packages.steve.org.uk/slaughter/squeeze/ ./" >> /etc/apt/sources.list.d/slaughter.list
apt-get update
apt-get install slaughter2-client

At the time of this document's creation the most recent release was 2.6. Installing it can be carried out via:

Example 2-2. Installing from the most recent release.

$ wget http://www.steve.org.uk/Software/slaughter/slaughter-2.6.tar.gz
$ tar zxf slaughter-2.6.tar.gz
$ cd slaughter-2.6
$ sudo make install

Chapter 3. Slaughter Concepts

Although Slaughter is pretty simple to understand you will find that it uses some terms that have special meaning. These are listed here briefly, with further explanations to follow:

  • Policies - the list of instructions to apply to each managed host.

  • Meta-Information - information gathered at run-time about the current host.

  • Modules - abstractions of common code; these are entirely optional.

  • Transports - the mechanisms by which files/policies/modules are made available to clients.

The whole point of a server-automation tool is that you'll write your list of configuration steps, recipes, or actions in one place and then they will be applied over a wide range of hosts.

In slaughter the list of configuration steps is known as a policy, and a policy is comprised of distinct actions that are applied to each host. These policies are written in 100% pure perl code - with a range of appropriate primitives supplied to allow you to carry out your tasks.

When the slaughter client runs it gathers information about the current host before it begins to fetch and execute the remote policies.

The meta-information includes such things as the IP addresses the host has, the hostname, the free RAM, etc. The information obtained is made available to your policies, and is documented in Appendix D.

Because the policies you write are going to be pure perl it might be useful to abstract some of the nuts and bolts (the actual individual steps) into a simple module. Slaughter supports this, and there is special functionality to allow you to fetch these modules from the central server.

For example you might find you wish to configure Apache and that might mean enabling some modules, disabling others, and creating a new "site" for it to serve. You could easily write the code to do this in your policies, but it would make slightly more sense to create an "Apache Module" to cover the common cases, and then invoke that in your policies.

The Slaughter distribution does contain a simple Apache module, and you can see a couple more in the live repository I host.

As has been previously mentioned slaughter is a client-pull system. This means that when slaughter runs it will attempt to connect to a central server to download the policies which should be applied to the local system.

Because there is no "slaughter-server" we've had to create a system by which these downloads can be carried out. We came up with a flexible approach which means you can configure the slaughter client to download the policies over one of several available mechanisms:

  • Via HTTP-fetches, from a web-server.

  • Via a rsync fetch from a remote rsync server.

  • Via the cloning/checkout of a remote revision control system such as git, subversion, or mercurial.

The transport is nothing more than the means by which the remote policies are fetched. So if you're fetching them via HTTP-fetches then you're using the HTTP-transport, etc.

Chapter 4. Getting started

This chapter will cover the setup of a host to contain a simple policy, and walk through configuring the first client to download and execute that policy.

As briefly discussed in Section 3.4 there are many options when it comes to fetching the policies from the central server.

To keep things simple, and to demonstrate more than one approach our server setup will consist of configuring two transports: HTTP & git.

With these two quick examples we'll have discussed the server-setup enough for the moment, although we'll come back to it later. For the moment it is sufficient to realise what we're doing:

  • We'll configure two different ways of hosting the file policies/default.policy.

  • We'll show that we don't need to be running anything complex on the server-side. (Specifically we don't need any custom daemons, and nor do we need to have slaughter itself installed upon the master-host.)

  • We'll implicitly hint that you can store your policies in a Github repository if you wish, which gives you high-availability and revision control.

Slaughter mandates a particular layout for the policies, files, and modules on the central server:

  • Any policies must be beneath the top-level directory policies/

  • Any files must be beneath the top-level directory files/. (Fileserving is introduced in Chapter 6.)

  • Any modules must be beneath the top-level directory modules/. (Modules are introduced in Chapter 7.)

For the moment you can skip over this section, but it is worth remembering in the future that the slaughter client will request policies/default.policy from the root of the server, rather than merely fetching default.policy.

To keep things simple we're going to assume that your central server is called host.example.com, and that you already have a webserver installed. We'll further assume that your document-root is /var/www.

With that in mind we need to create a directory to hold our policies, and then create a file with the first policy inside it.

Example 4-1. Setting up a minimal policy-server

# cd /var/www/
# mkdir -p slaughter/policies/
# echo 'print "I am alive\n";' > slaughter/policies/default.policy

To save time I've already created a git repository holding a sample policy. If you have git installed you can fetch this with the following command:

Example 4-2. Fetching the minimal Git policy example.

git clone https://github.com/skx/slaughter-example.git

You'll see that it has the same layout as the previous example. There is a top-level directory policies/, and inside that is the file default.policy.

If you were to host the git repository upon your own machine we'd assume it too was accessible via the hostname host.example.com, with a path of /slaughter-example.git. If that isn't the case you'll need to update your path(s) appropriately.

Once you've got an appropriate server configured (Section 4.1) we can actually begin to use the slaughter client itself. (See Chapter 2 if you've not yet installed the client.)

Because it is anticipated that you'll be making system-wide changes with your slaughter policies, it is worth noting explicitly that slaughter will refuse to run, except in minimal ways, unless you're root.

To launch the slaughter client we merely need to invoke slaughter with two pieces of information:

  • The transport to use, which in our case will first be "git" and then be "http".

  • The location from which to fetch the initial policy, policies/default.policy.

In the previous section we briefly discussed the setup of a HTTP-server, and we also documented the location of a git repository which contains a sample policy. We'll use the git transport first because it doesn't require you to perform any action - just install the slaughter software and the git command.

We'll launch the client like this:

Example 4-3. Running slaughter against a git repository.

# slaughter --transport=git --prefix=https://github.com/skx/slaughter-example.git
This is perl code and I am alive!
#

We've specified the location of the remote tree (which could contain the top-level directories files/, modules/, and policies/, though in this case we've only got the latter). We've also specified the transport to use; that led to a temporary directory being created and the remote repository being cloned into it. Once that happened the generated script was executed, which led to the output of the message "This is perl code and I am alive!".

Using the HTTP location is almost the same, except that the prefix is specified as the URL of the location where the policies live, and the transport is updated to match:

Example 4-4. Running slaughter against an HTTP server.

# slaughter --transport=http --prefix=https://example.com/slaughter/
..
I am alive!
#

In the HTTP example the same series of steps occurred: the remote policy was downloaded into a temporary directory, wrapped up to become an executable script, and then executed.

The behaviour of the client can be altered by adding command-line flags (documented in Appendix B). Adding the two flags "--no-delete" and "--no-execute" will prevent the generated script from being executed or removed, allowing you to examine it:

Example 4-5. Keeping the generated script around.

# slaughter --transport=http --prefix=https://example.com/slaughter/ \
 --no-execute --no-delete --verbose
..
Policy written to: /tmp/I6mY_AgUxv
Not launching script due to --no-execute.
#

Chapter 5. Writing Policies

We've briefly shown how to write a basic policy, which just used the standard print function from perl to output a message. This isn't terribly interesting, and although Perl has traditionally been used for system administration tasks we can do better.

Slaughter augments the perl environment with a number of primitives to help you carry out tasks on the machines it runs upon. These primitives are the main reason why you'd use slaughter, along with the information gathered about the current host at run-time (see Appendix D).

The supplied primitives allow you to do various things, such as fetching a file from the master-server, editing existing files, running commands, and so on.

Using the slaughter primitives is as simple as including them in your policies. For example the following code does what you'd expect (add the line "ALL: 1.2.3.4" to the file /etc/hosts.allow, unless it is already present):

Example 5-1. Simple Primitive Example.

#
# Append an administrative IP address to /etc/hosts.allow.
#
# This will override any blacklisting denyhosts tries to perform.
#
AppendIfMissing( File => "/etc/hosts.allow",
                 Line => "ALL: 1.2.3.4" );

To use slaughter effectively you need to combine your logic, in perl, with the primitives supplied, to control your machines.

The complete list of supplied primitives is included in Appendix C.

We've demonstrated previously (Chapter 4) that when the slaughter client launches it fetches and executes the contents of the file policies/default.policy. It is expected that you won't actually write any of your code in there, although you certainly can if you wish; instead we assume the initial file will just pull in your real policies.

The following example should make that clear:

Example 5-2. Including policies

#
# This is policies/default.policy
#
#
# Ensure slaughter runs on each node once an hour.
#
FetchPolicy "slaughter.policy";
#
# If we have MySQL then we should have backups of the contents.
#
FetchPolicy "mysql.policy";
#
# ... more
#
#
# Fetch the host-specific policy, if present.
#
FetchPolicy "hosts/$fqdn.policy";

This policy references several distinct files, each of which will be fetched via the transport. In the example above we would expect our master-server to look like this:

Example 5-3. Including policies

$ cd policies/
$ ls
default.policy hosts/ mysql.policy slaughter.policy

You'll notice that the final inclusion uses the variable $fqdn to refer to the current host-name. If the fetching of this policy fails it will be silently ignored. (See Appendix D for more on defined variables).

The FetchPolicy directive will only work in the default.policy file. Policies are not recursively processed for inclusion.

Chapter 6. Serving Files

There are going to be many times when you write policies which will want to download files from the central-server, and this is supported via the FetchFile primitive.

The FetchFile primitive is used like so:

Example 6-1. Fetching a file from the central server

#
# Fetch the master denyhosts.conf file from the server
#
if ( FetchFile( Source => "/etc/denyhosts.conf",
 Dest => "/etc/denyhosts.conf",
 Owner => "root",
 Group => "root",
 Mode => "644",
 Expand => "false") )
{
 RunCommand( Cmd => "/etc/init.d/denyhosts restart" );
}

Here the file has been fetched from the central server, from the path files/etc/denyhosts.conf - relative to the top of the transport-prefix.

(For example if your prefix was http://host.example.com/slaughter, then the file would be fetched from http://host.example.com/slaughter/files/etc/denyhosts.conf.)

The files/ prefix is mandatory for file-fetching, although the actual organization of files beneath it is up to you; it is recommended that the source path match the destination path for simplicity.
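For example, with the recommended source-matches-destination layout, the document-root from Example 4-1 might contain something like this (the file names here are purely illustrative):

```
$ find /var/www/slaughter -type f
/var/www/slaughter/policies/default.policy
/var/www/slaughter/files/etc/denyhosts.conf
/var/www/slaughter/files/etc/motd
```

With that layout a FetchFile of Source => "/etc/denyhosts.conf" resolves to files/etc/denyhosts.conf beneath the transport prefix, as in Example 6-1.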

When you invoke the FetchFile primitive you can set "Expand => True", which will cause the file being retrieved to be template-expanded.

The template expansion happens via the Perl Text::Template module, and this allows you to set the contents of the file with ease.
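As a sketch, assuming Text::Template's default delimiters of '{' and '}', a template-expanded source file might embed run-time variables like this. (Which variables are in scope during expansion is an assumption here; the FileCopy example in Section 9.2 shows the authoritative usage.)

```
Welcome to { $fqdn }.

This host is managed by slaughter; local changes to this
file will be overwritten.
```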

This facility is demonstrated in the FileCopy example, located in Section 9.2

Slaughter has built-in support for fetching per-host files, this allows you to easily serve different files to different clients, without having to make that conditional work part of your policy.

If a FetchFile primitive is invoked to fetch /files/foo it is actually expanded into a request for each of the following files (with the first successful fetch terminating the search):

  • foo.$fqdn - Where "fqdn" is the fully-qualified domain name of the client.

  • foo.$hostname - Where "hostname" is the unqualified hostname.

  • foo.$role - Where "role" is the role assigned to this client.

  • foo.$os - Where "os" is the operating system code-name.

  • foo.$arch - Where "arch" is the architecture of the client, e.g. i386.

  • foo

These variables, with the exception of $role, are all set automatically at run-time, and are documented in Appendix D.

$role is special because it will only exist if you set it for a host, via either the command-line, or the configuration file.
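For example, on a client named www1.example.com, with role "webserver", operating system "linux", and architecture i386 (all of these values are purely illustrative and host-specific; see Appendix D), fetching /etc/motd would cause the transport to try, in order:

```
files/etc/motd.www1.example.com
files/etc/motd.www1
files/etc/motd.webserver
files/etc/motd.linux
files/etc/motd.i386
files/etc/motd
```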

Chapter 7. Organising code via modules

Your slaughter policy files will contain the actions to be applied on your hosts, and because the primitives are fairly low-level those files might become large.

For example configuring an Apache server might involve:

  • Installing Apache.

  • Enabling various Apache-modules, such as mod_rewrite, mod_rpaf, & mod_expires.

  • Creating a "site" and enabling that.

Each of these steps could be automated as in the following example:

Example 7-1. Configuring Apache with slaughter

#
# If apache isn't installed, then get it
#
if ( ! -x "/etc/init.d/apache2" )
{
 InstallPackage( Package => "apache2-mpm-worker" );
}
#
# Restart apache?
#
my $restart = 0;
#
# If mod_rewrite isn't installed, enable it.
#
if ( ! -e "/etc/apache2/mods-enabled/rewrite.load" )
{
 RunCommand( Cmd => "a2enmod rewrite" );
 $restart = 1;
}
#
# ...
#
if ( $restart )
{
 RunCommand( Cmd => "/etc/init.d/apache2 restart" );
}

The previous section demonstrated how you might manage Apache "manually", and although this is possible there will be a lot of Apache-specific code in your policies if you do that.

The alternative is to write a simple Perl module, which can be used to contain the Apache-specific code. If you do that you can simplify your policies by hiding the individual steps, as in this example:

Example 7-2. Configuring Apache with a module

#
# We have the Apache module loaded, so:
#
my $a = Apache->new();
$a->enable_module( "rewrite" );
$a->enable_module( "rpaf" );
#
# ..
#

If you wish to use a module like this you need to do three things:

  • Write the module. (There is an Apache module included in the slaughter distribution.)

  • Save it in your slaughter repository, beneath the top-level directory modules/

  • Load the module in your default.policy file, using the FetchModule directive.
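Assuming FetchModule takes a quoted file name, just as FetchPolicy does in Example 5-2 (the module and policy names below are illustrative), your default.policy might then contain:

```perl
#
# This is policies/default.policy
#

#
# Load the shared Apache module from modules/Apache.pm,
# then fetch the policies which make use of it.
#
FetchModule "Apache.pm";

FetchPolicy "webserver.policy";
```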

There is a sample Github repository which demonstrates this, and you can see it tested like so:

Example 7-3. Using slaughter modules

# slaughter --transport=git --prefix=https://github.com/skx/slaughter-module-example
Hello, world

The FetchModule directive will only work in the default.policy file.

Chapter 8. The implementation details

This chapter briefly discusses the internal implementation of the Slaughter tool. These details may prove interesting, but unless you wish to extend Slaughter you shouldn't need to know them.

Ignoring the transports, and the meta-information gathered at run-time, the core of the code is the implementation of the primitives.

Each of the primitives is written as a standard Perl subroutine, exported to the main namespace via some module trickery. The routines are exported whenever you load the Slaughter module, which means you can use them outside of a policy script if you so wished:

Example 8-1. Using slaughter-primitives outside Slaughter

#!/usr/bin/perl -w
use strict;
use warnings;
use Slaughter;
RunCommand( Cmd => "/usr/bin/uptime" );

Internally the Slaughter.pm module does very little; it merely loads the real implementations of the primitives from the platform-specific libraries. For example, on a Debian GNU/Linux host the code "use Slaughter;" effectively becomes:

use Slaughter::API::generic;
use Slaughter::API::linux;
use Slaughter::API::Local::linux;

The Slaughter::API::Local namespace won't exist by default, so errors in loading it are ignored; however it provides a great starting point for adding any local primitives you might wish to implement.

Adding new primitives is documented in the HACKING file, distributed with Slaughter, but in brief you have two choices:

  • Create the Slaughter::API::Local::linux module, to make your primitives available in-house.

  • Update the Slaughter::API::generic module and submit it for inclusion in future releases of Slaughter.
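As a sketch of the first approach, a local-primitive module might look like the following. (The exact export mechanism used by the standard API modules is an assumption here, and CompanyBanner is a hypothetical primitive name; see the HACKING file for the authoritative details.)

```perl
package Slaughter::API::Local::linux;

use strict;
use warnings;

#
#  Export our primitives into the caller's namespace, as the
#  standard API modules do. (Assumed to be plain Exporter-style.)
#
use parent 'Exporter';
our @EXPORT = qw( CompanyBanner );

#
#  A trivial site-specific primitive: write our standard banner.
#  "CompanyBanner" is purely illustrative.
#
sub CompanyBanner
{
    my (%params) = @_;
    my $file = $params{'File'} || '/etc/issue';

    open( my $handle, '>', $file ) or return 0;
    print $handle "Managed by slaughter - do not edit by hand.\n";
    close($handle);
    return 1;
}

1;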

The transports are implemented using a plugin-like system. Each transport is implemented via a module beneath the Slaughter::Transport namespace.

Transport modules are dynamically discovered, created, and invoked at run-time based upon their module names. Thanks to a standard API, adding a new one is very straightforward: you merely need to give your module the appropriate name, and implement the following API methods:

Table 8-1. Transport Methods

  • new() - The constructor; called only once.

  • isAvailable() - Return 1 if this transport is available. (Your transport might rely upon non-standard modules, for example.)

  • error() - Return a suitable error message if the isAvailable method fails.

  • name() - Return the name of this transport.

  • fetchContents() - Fetch a file/module/policy beneath the root of the transport prefix.

If your transport is based upon executing an external program, such as "git" or "rsync", there is a helper available to simplify the code considerably; see the existing git.pm and hg.pm files.

The information gathered at run-time also comes from a series of modules. There is a module prefix of Slaughter::Info beneath which platform-specific modules exist.

For example the following snippet will gather all the information available for a GNU/Linux host:

use Slaughter::Info::linux;
my $obj = Slaughter::Info::linux->new();
my $data = $obj->getInformation();
# use data here ..

You can augment this information by creating your own local module. When a platform-specific module is loaded an additional module is looked for too. For example if your system is a GNU/Linux host the following two modules will be loaded:

use Slaughter::Info::linux;
use Slaughter::Info::Local::linux;

Create the local module, and implement the getInformation method, and your extra data will be available for use in all your policy scripts.
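A minimal sketch of such a module might be as follows. (The constructor/getInformation shape mirrors the snippet above; whether a hash or hash-reference is expected, and the "datacenter" key itself, are assumptions for illustration.)

```perl
package Slaughter::Info::Local::linux;

use strict;
use warnings;

sub new
{
    my ($class) = @_;
    return bless {}, $class;
}

#
#  Return extra key/value pairs, to be merged into the
#  meta-information available to policy scripts.
#
sub getInformation
{
    my ($self) = @_;

    my %data;

    #  A hypothetical, site-specific variable.
    $data{'datacenter'} = "ams1";

    return \%data;
}

1;
```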

Chapter 9. Useful Links

This final chapter contains links to locations which might be of interest.

  • Primitive documentation - the list of primitives contains documentation for each of the available primitives supplied with Slaughter.

  • Variables documentation - the list of variables shows the various variables which are created when Slaughter gathers information about the system upon which it is running.

  • Basic Example - this example is similar to the one we introduced earlier, and just prints a message via the perl print function.

  • File-Copying Example - this example uses the FetchFile primitive to fetch the file /etc/motd from the remote source, and template-expand it.

  • Real-world Example - this collection of policies is significantly more complex than the previous two, because it is my personal configuration which controls about ten hosts.

There are several existing configuration management tools, most notably CFEngine and Puppet.

Because CFEngine and Puppet are pretty heavyweight there are also some simpler alternatives, like slaughter:

  • Ansible, which uses pure SSH to manage hosts.

  • Bcfg2, another mature and stable project.

  • Salt, configuration via YAML.

  • Fabric, a Python-based solution for writing ad-hoc scripts which manipulate hosts over SSH.

Appendix A. Slaughter configuration File

When the slaughter client is executed the global configuration file /etc/slaughter/slaughter.conf is parsed, if it is present.

The configuration file may include anything that may be specified upon the command-line; merely removing the leading "--" from an option should be sufficient to turn it into a valid configuration-file entry.

Note that some settings, such as "dump=1" are valid, but would prevent normal execution. Here is a valid configuration file:

Example A-1. Slaughter configuration file.

#
# The transport to use, in this case 'rsync'.
#
transport = rsync
#
# The location from which to fetch our policies/files/modules
#
prefix = rsync://www.steve.org.uk/slaughter
#
# The (optional) role for this client. You could setup "webserver"
# for some hosts, and "database" for others.
#
role = example
#
# Sleep up to a minute before launching.
#
delay = 60
#
# Don't be verbose.
#
verbose = 0

Note that it is not possible to specify an alternative configuration file, although that might change in the future.

Appendix B. Slaughter command-line flags

The slaughter client accepts several command line flags, some of which have already been used briefly. The following table lists all known flags, along with their meanings:

Table B-1. Command line flags

  • --delay - Sleep a random amount of time, up to the number given, before launching.

  • --dump - Show host meta-information. (See Appendix D.)

  • --help - Show some brief help information.

  • --manual - Show the manual for the client.

  • --no-delete - Don't delete the wrapped policies after they are created.

  • --no-execute - Don't execute the downloaded and wrapped policies.

  • --role - Specify a role for this host; this is only used for file-serving (see Section 6.2).

  • --transports - Output the list of available transports and then exit.

  • --verbose - Run verbosely.

Appendix C. Primitives

The following primitives are included with Slaughter, and are documented online:

  • Alert - Send a message, by email.

  • AppendIfMissing - Append a line to a file if not already present.

  • CommentLinesMatching - Comment out lines matching a particular pattern.

  • DeleteFilesMatching - Delete files which have names matching a given regexp.

  • DeleteOldFiles - Delete files older than the given number of days.

  • FetchFile - Fetch a file, via the currently-active transport.

  • FileMatches - Test to see if a file contains content that matches a given line/regexp.

  • FindBinary - Find a binary on a $PATH-like string.

  • InstallPackage - Install a package.

  • Mounts - Find active mount-points.

  • PackageInstalled - Test to see if a package is installed.

  • PercentageUsed - Determine how full a given filesystem is.

  • RemovePackage - Remove a package.

  • ReplaceRegexp - Replace lines in a file matching a given regexp.

  • RunCommand - Run an arbitrary command, via system().

  • SetPermissions - Set the file owner, group owner, and permission bits on a file/directory.

  • UserCreate - Create a new system user.

  • UserDetails - Get the details about the given login-account.

  • UserExists - Does the given login-account exist?

Appendix D. Meta-Information

The meta-information gathered at run-time may be accessed in one of two ways: via the global variables produced, or via the global $template hash-reference. The following code shows both means of accessing the current hostname:

Example D-1. Accessing meta-information in a policy script.

print $fqdn . "\n";
print $template->{'fqdn'} . "\n";

(In the example above the variable $fqdn holds the fully-qualified domain-name.)

The complete list of meta-information available on the current host may be dumped by executing "slaughter --dump":

Example D-2. Dumping all meta-information.

root@da-web4 ~ # slaughter --dump
 $VAR1 = {
 'load_average_5' => '0.04',
 'verbose' => 0,
 'arch' => 'amd64',
 'memfree' => '3736436',
 'hostname' => 'da-web4',
 'cpumodel' => 'QEMU Virtual CPU version 0.15.0',
 ..
 };

Dumping the variables is a good way to get a feel for what has been discovered upon your system(s), because the variables really will be host-specific.

The following is a list of the variables upon my current system, with a brief explanation of their meanings.

Note that some variables might not necessarily be displayed, for example softwareraid will only be defined if there is software-RAID detected upon the current host.

  • Hr01, Hr02, ... Hr22, Hr23 - the variable which matches the current hour of the day will have a value of 1; the others will be 0.

  • arch - the architecture of the current host: i386/amd64.

  • bits - the bit-count of the processor: 32/64.

  • cpu_count - the number of CPUs the system has.

  • cpumodel - the CPU name.

  • distribution - the distribution the system is running, as reported by 'lsb_release'.

  • domain - the domain portion of the system's hostname.

  • fqdn - the fully-qualified hostname of the system.

  • hostname - the short hostname of the machine.

  • ip6_1, ip6_2, .. ip6_N - the IPv6 addresses the host has, as limited by 'ip6_count'.

  • ip6_count - the count of IPv6 addresses the system has.

  • ip_1, ip_2, .. ip_N - the IPv4 addresses the host has, as limited by 'ip_count'.

  • ip_count - the count of IPv4 addresses the system has.

  • kernel - the version of the kernel that is being used.

  • kvm - 1 if the system is a KVM guest.

  • load_average - the three load-average figures, as reported by 'uptime'.

  • load_average_1 - the one-minute load average, as reported by 'uptime'.

  • load_average_15 - the fifteen-minute load average, as reported by 'uptime'.

  • load_average_5 - the five-minute load average, as reported by 'uptime'.

  • memfree - the free RAM on the host.

  • memtotal - the total RAM reported on the system.

  • monday, tuesday, ... saturday, sunday - the variable which matches the current day of the week will have a value of 1; the others will be 0.

  • os - the operating system, as reported by Perl.

  • path - the system PATH.

  • release - the code-name for the distribution, as reported by 'lsb_release'.

  • role - the role of this node, as specified by the user in the configuration file (see Appendix A) or on the command line (see Appendix B).

  • transport - the transport being used for this execution; rsync, http, etc.

  • verbose - 1 if slaughter was executed with the --verbose flag.

  • version - the version of the operating system running, as reported by 'lsb_release'.
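For example, a policy could use the day-of-week and hour-of-day variables to restrict an action to a maintenance window. (The upgrade command here is purely illustrative.)

```perl
#
#  Only run the (example) upgrade on Mondays, in the 04:00 hour.
#
if ( $monday && $Hr04 )
{
    RunCommand( Cmd => "apt-get update && apt-get -y upgrade" );
}
```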
