fiddyspence's blog

Dashboard and Active Directory

So, I decided to make the dashboard authenticate against Active Directory.  It took a surprising amount of time.

The first trick was to find the authoritative config format for the rubycas-server YAML, which takes a hash of authenticators (the file /etc/puppetlabs/console-auth/cas_client_config.yml is fairly straightforward by comparison):

# /etc/puppetlabs/rubycas-server/config.yml
authenticator:
  - class: CASServer::Authenticators::SQLEncrypted
    database:
      reconnect: true
      adapter: mysql
      database: console_auth
      username: console_auth
      password: 'xxxxxxxx'
    user_table: users
    username_column: username
  - class: CASServer::Authenticators::ActiveDirectoryLDAP
    ldap:
      port: 389
      base: dc=puppet-ad,dc=spence,dc=org,dc=uk,dc=local
      filter: (&(objectClass=person)(memberof=CN=puppetconsoleaccess,OU=Groups,DC=puppet-ad,DC=spence,DC=org,DC=uk,DC=local))
      auth_user: cn=ldapbind,cn=users,dc=puppet-ad,dc=spence,dc=org,dc=uk,dc=local
      auth_password: xxxxxxxx

The other trick is to make sure the filter actually works.  I think I somehow managed to hose the default filter (objectClass=person), which caused all sorts of aggro.  The debugging info from rubycas is awful - I resorted to running tcpdump -X -i eth0 tcp port 389 to see the messages coming back from Active Directory, and this authentication method doesn't cause any access-denied errors at all in the event log on a Windows 2003 DC (which is irritating).

The rubycas doesn’t really help you when the YAML is broken either - using irb helped me here to validate that at least the YAML is sane:

irb(main):001:0> require 'yaml'
=> true
irb(main):002:0> YAML.load_file '/etc/puppetlabs/rubycas-server/config.yml'
=> {"maximum_session_lifetime"=>172800, "<snip>

One thing I would have found useful is the option to configure access levels via Active Directory, rather than having to add an AD user and then configure the access level in the dashboard.

Don’t forget to edit the console-auth/cas_client_config.yml file:

# /etc/puppetlabs/console-auth/cas_client_config.yml
    default_role: read-only
    description: Local
    default_role: read-only
    description: Active Directory

So, to login, just use the bare username from AD.  (I had already added my AD account as an admin in the console, using the built-in admin account.)

More Travel Rantage

Dear reader, I write to you once more with tales of distress and misery in relation to getting around the world.

This episode's subject is none other than budget airlines.

I had the misfortune to travel easyjet from Naples recently, hot on the heels of a flight courtesy of my other least favourite airline, Ryanair.  I was amazed to discover that, between the two experiences, easyjet was actually the more unpleasant.  The situation was exacerbated by an inability of my fellow travellers to appreciate the following two things:

Firstly, actually taking the phrase 'cattle class' literally is an error.  Despite the airline doing its utmost to treat us like animals, bunching together, pushing and jostling and barking at each other doesn't help anybody, and is in fact counterproductive.

Secondly.  That bit of paper the nice people gave you when you checked in with a seat number on it.  The seat number actually means plop your arse on that particular seat.  What happens if you decide arbitrarily to put yourself somewhere else is that we play a deeply annoying game of human solitaire, where you or someone else has to move around the cabin looking for the empty seat.  Cue confusion and delay.  Seat rows on airplanes are, I personally believe, actually quite easy to understand.  If you look along a row of seats, somewhere above it will be a handy indicator (called, usefully, a 'row number') and an indication of which seat is which (this is usually a letter from the Roman alphabet, generally starting from 'A' at the left of the airplane as you face the front, working up to a later letter depending on what type of airplane you're on).  The combination of number and letter shows you where to sit.  You should sit there.

Funnily enough, nobody was more surprised than I to juxtapose the two airlines of misery and find that not having assigned seats appeared to make it less dire.  Admittedly my recent data sample is one flight on each airline, and I do not endorse Ryanair.  In fact, travelling with either provider is less preferable to me than crawling on my stomach to my destination upon a road paved with week-old used shit and glass-filled nappies.

Puppet Data Encryption

Some time ago, I wrote a backend for Hiera to provide another option for encrypting/decrypting data at rest.  It exists in the same kind of problem space as hiera-gpg, but with the additional downside that you don’t quite have the flexibility that you do with GPG keyrings.  It does have the advantage that it just uses Puppet SSL certs, so there aren’t any external requirements for e.g. GPG.  I always planned on having a specific SSL keypair to do the work so as not to compromise host keys.

That code is here: https://github.com/fiddyspence/hiera-puppetcert

It’s quite well integrated - there is a Puppet face to encrypt the YAML data that hiera will query, and the backend will decrypt that data so you get the privacy of your data in your VCS/backup systems etc.  What it lacked was flexibility in terms of the implementation.

What I really wanted was to be able to reuse the encryption routines in a portable way - a library I could call from different bits of Puppet - but this implementation sucked for that.  The aim was to be able to implement fact encryption reasonably sensibly, so sensitive data can get passed around and only be decrypted in memory, while being held reasonably safely at rest.

I realised last week, after implementing some mcollective magic to do config file hacking (http://ibroketheinternet.co.uk/blog/2012/12/01/mcollective-config-file-hackage/), that I could reimplement the functions as a Puppet::Util class (thanks to cprice404 for the inspiration), and use them wherever I wanted to.  The advantage of this is that you get to pluginsync the Util class, and Puppet will be able to load it.

[root@puppet certencryption]# tree
├── lib
│   └── puppet
│       └── util
│           └── certencryption.rb

Admittedly, the implementation of the actual encryption is also a bit hacky, but that’s relatively easy to change to something better (on the roadmap).
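The underlying idea, though, is simple enough: encrypt with the public half of an SSL keypair so data is safe at rest, and decrypt with the private half only when you need it.  Here's a rough standalone sketch in plain Ruby OpenSSL - an illustration of the principle, not the module's actual code (which would load a real, dedicated Puppet keypair from disk rather than generating one):

```ruby
require 'openssl'
require 'base64'

# Stand-in for a dedicated Puppet SSL keypair loaded from disk
key = OpenSSL::PKey::RSA.new(2048)

secret = 'db_password: s3cret'

# Encrypt with the public half - this ciphertext is what you'd be happy
# to see sitting in your VCS/backup systems
ciphertext = Base64.strict_encode64(key.public_key.public_encrypt(secret))

# Decrypt with the private half - only ever done in memory at lookup time
plaintext = key.private_decrypt(Base64.strict_decode64(ciphertext))
```

The trust mechanics come for free: anything that can read the cert can encrypt, but only the private key holder can get the plaintext back.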

So, I give you https://github.com/fiddyspence/puppet-certencryption or Puppet::Util::Certencryption as it’s better known.

Watch this space for the fact encryption code - should be along shortly.

Mcollective Config File Hackage

Technical news this week:  I was inspired by a fellow engineer to create a solution to edit config files out of band to the configuration management platform of choice (Puppet) using the orchestration tool of choice (mcollective).

The problem was updating the configuration file for Puppet when Puppet is set to not make any changes during a puppet run - i.e. there is a 'noop = true' line in the puppet.conf.  Using Puppet to fix this problem is a non-starter.

Thusly did I create two mcollective agents: a specific solution to the noop problem called puppetnoop (it reports on, enables or disables noop in the puppet.conf), and a general-purpose agent (puppetconf) that can change any setting in any section of the config file.  The implementation is built upon cprice404's ini_file Puppet Util class, which actually updates the configuration file.  Note that either agent could very easily be extended to programmatically update any file that conforms to the ini file format, by adding another argument indicating which file to edit (by default, both agents check Puppet['config'], which should be the puppet.conf your particular version of Puppet uses).
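The core operation either agent needs - set a key in a section of an ini-format file - is exactly what the ini_file library handles.  As a standalone illustration of what that amounts to (a hand-rolled sketch, not cprice404's actual code):

```ruby
# Set key = value in the given section of an ini-style file, preserving
# everything else. Replaces the key if present, appends it (and the section,
# if missing) otherwise.
def set_ini_setting(path, section, key, value)
  in_section = false
  done       = false
  out        = []

  File.readlines(path).each do |line|
    if (m = line.match(/^\s*\[(.+)\]/))
      # Crossing out of the target section without having seen the key: add it
      if in_section && !done
        out << "#{key} = #{value}\n"
        done = true
      end
      in_section = (m[1] == section)
    elsif in_section && !done && line =~ /^\s*#{Regexp.escape(key)}\s*=/
      line = "#{key} = #{value}\n"
      done = true
    end
    out << line
  end

  unless done
    out << "[#{section}]\n" unless in_section  # section missing entirely
    out << "#{key} = #{value}\n"
  end

  File.write(path, out.join)
end
```

So flipping noop off is just set_ini_setting(Puppet['config'], 'agent', 'noop', 'false') - which is more or less what the puppetnoop agent does, via the proper library.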

The only requirement for the agent to work is that the RUBYLIB of the mcollective agent needs to include the cprice404 ini_file utility libraries, which if you’re using Puppet should be auto distributed (on the Ubuntu agent I tested on I had to do some hacking to make sure this was the case).

The code can be found at puppetnoop on github

First in a Long Series of Travel Rants

I do quite a significant amount of travel for work.  This means lots of time spent on planes, in airports and hanging around in hotels.  This has given me a newfound depth of hatred for stuff I never knew annoyed me.   I have decided to write about some of these things.

I loathe the….

  • … hopelessly bewildered

People - IT IS NOT HARD.  I’m telling you.  Signs are usually informative, you should read them.  Note this does not mean standing 14 abreast across a corridor designed to accommodate 3 maybe 4 adults, thus stopping the rest of the herd from moving.  Oh - and get a move on.  You might be oblivious to the rest of the world actually trying to get stuff done and get places, but it is sadly impossible to not notice you.  Please, do not just fucking stand on an escalator and block the progress of those of us with places to be, who have seen the interior of Heathrow Terminal 5 no fewer than six times in the last three weeks and who just want to go and have a shower to rid themselves of the stench of sweaty tourist (yes dear, I do mean you).  You should try doing this on the London Tube and see how far you get.  Seriously - you should.  I more or less guarantee some sort of psychological or physical injury within the first hour of your attempt.  Ditto the magic carpet/travelator - it is designed to move you and, here’s the clincher, OTHERS faster, not to give your poor little legs a rest (which especially fills me with wonderment since you’ve been sat on your fat confused arse for the last 12 hours resting them!)

  • …having to queue pointlessly
We invented queueing where I come from.  It’s like a national sport.  Note the simile there.  It is merely an imitation of sport.  Actual sport tends to involve physical exertion, maybe an element of skill and more than likely some kind of scoring system leading to perhaps a victory.  Darts doesn’t really fit this definition, which I’m quite pleased about.  As far as I can work out, making me queue isn’t really a game that I can win - it’s a loss as soon as I start.  I resent queueing.  Especially at immigration.  Your immigration I can understand having to queue at: almost by definition there’s not a lot of physicality involved and there’s not a lot of skill involved (though some places I’ve been, you might wonder).  If you’ve got any sense where you come from, the last thing on your wish list to Santa this year (or any other for that matter) is a horde of bewildered zombies cluttering up the corridors in your shiny new airport, muttering and moaning “Ee, do you think the exit might be in the vague direction of the sign marked exit?  I’m not sure if they have the same English here as we do back home.  This is Scotland after all.”  Your immigration I do not massively begrudge queueing at.  Mine, I do.  I fucking live here.  I’ve got a fucking local passport.  I queue at immigration somewhere on average once a week (which if you think about it is about 26 times the amount of queueing someone who goes on holiday once a year might do).  I’m not a twat about being a card-holding member of some airline club, but when they advertise fast-track immigration at my home airport for gold or silver card-holding members of that club (and I have one of those) I fucking resent being told that this is only for non-UK passport holders.  You.  Fucking.  What?

I am pausing for breath for a while at this point, but rest assured I have plenty more rant left in me.  Stay tuned.

Reverse Wardriving

Sometimes funny ideas just occur to me.  One such hit me yesterday.  I was wandering around London taking a break from a customer visit, and I started thinking about wardriving and finding open wifi.  I have kind of wardriven myself - travelling around South-East Asia on a shoestring looking for bandwidth makes you do that sort of stuff.  Sometimes it transpires that folks wardriving look for open wifi just to see what's there.

It occurred to me that one could do the reverse.  You see, I am fundamentally lazy (it’s why I think I make a reasonable sysadmin - my goal is to always make myself replaceable so I can do other, more interesting things).  Thus today, I am going to take the credit for inventing Warcouching.

The principle here is that I'm interested in seeing which devices wandering past my flat will promiscuously associate themselves with a wholly private wireless VLAN broadcasting a commonly known SSID - watching the DHCP logs and having a look at what connects.

So, a teensy bit of ruby watches the messages file on the dhcp server, triggers an nmap scan of the IP address of the device that connected, and captures the log:

      if runcommand
        begin
          system("/bin/nmap -O -vvv #{$log[loopstart.to_int].split(' ')[7]} >> #{$mylog}")
        rescue => e
          puts e.message
        end
        runcommand = false
      end
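The split(' ')[7] in there relies on the field layout of a dhcpd DHCPACK syslog line.  As a sketch (the exact log format is an assumption based on my server - yours may differ):

```ruby
# Pulls the leased IP out of a dhcpd DHCPACK syslog line, e.g. (format assumed):
#   Nov 13 13:28:01 gateway dhcpd: DHCPACK on 192.168.1.54 to 00:11:22:33:44:55 via eth0
# Fields 0-6 are the timestamp, host, daemon and "DHCPACK on", so field 7
# (zero-indexed) is the address the passing device was just given.
def ack_address(line)
  fields = line.split(' ')
  fields[7] if fields[5] == 'DHCPACK'
end
```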

and thusly do we see in the logfile:

Starting Nmap 6.00 ( http://nmap.org ) at 2012-11-13 13:28 GMT
Initiating ARP Ping Scan at 13:28
Scanning [1 port]
Completed ARP Ping Scan at 13:28, 0.15s elapsed (1 total hosts)
Initiating Parallel DNS resolution of 1 host. at 13:28
Completed Parallel DNS resolution of 1 host. at 13:28, 6.50s elapsed
DNS resolution of 1 IPs took 6.50s. Mode: Async [#: 2, OK: 1, NX: 0, DR: 0, SF: 0, TR: 3, CN: 0]
Initiating SYN Stealth Scan at 13:28

Scanning dhcp-54.spence.org.uk.local ( [1000 ports]

Warcouching - watching passing strangers automagically.  It’ll be fashionable, I tell you.

Not Going Phishing

A short thought today.

My bank invites you on their homepage to send them a tweet in the event that you've got any problems.  I assume this is to help in the eventuality that you can't pick up the phone, or use their interwebnet banking service.

Their website has been a bit broken a couple of times over the last couple of days, and is broken again today - it displays HTML source rather than actually rendering the site.

My experiment with social media triggered the 'must tweet them' library in my brain, and I was about to, but then I thought the following things:

1/ I want to tell them their site is broken,
2/ I can’t be arsed to phone them, but I can’t use their website to tell them,
3/ I could tweet them, but then I’d be telling the world that I’m a <bankname> customer,
4/ I should expect abuse for telling the world where I keep all my money if I did this,
5/ I should probably go get a cup of coffee and wait for someone else to fix it.

I checked the twitter briefly, and I now know, should I ever want to steal money from 3 randoms on the interwebs, at least where to start trying to tickle their current accounts.

Banks - please don’t encourage people to advertise where they keep their money publicly on the interwebs - it’s probably not wise.

Raspberry Puppet

I got my Raspberry Pi this week in an unexpected fit of generosity on the part of RS Components.  It was particularly nice of them, on the other hand, considering I live in England, to send me what appears to be a French mains adapter.  For fear of going off on a tangent, we'll leave their shortcomings aside.

What was cool, however, was getting it up and running.  I followed the destructions at the Raspberry Pi quick start guide and fairly quickly had it going.  I plugged it into the TV via HDMI, and it booted straight away into a post-install config menu.  Not having a USB keyboard, I pottered off, found out its IP address and SSH'd into it to have a poke about.

The first thing I did was to expand the root filesystem to fill the whole 16GB SD card - the image it comes with, because I slapped it onto the card with dd, doesn’t fill the space.  Reboot.

What now then?

Well, for kicks, I thought I’d try and put Puppet on it - I wondered how it would do on ARM in a resource constrained environment.  I had a Puppet 3 master kicking around the place, so having an agent seemed ideal.

I figured running Puppet from source was going to be the easiest path to enlightenment, so I followed the destructions on the running Puppet from source documentation.  I installed ruby1.9.3 from apt.  I hacked together a basic configuration file, added the user and group and with high hopes kicked off an agent run:
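The basic configuration file amounted to something along these lines (the server name and paths here are illustrative - substitute your own):

```ini
# /etc/puppet/puppet.conf - minimal agent config (illustrative values)
[main]
vardir = /var/lib/puppet
logdir = /var/log/puppet
ssldir = /var/lib/puppet/ssl

[agent]
server = puppet.spence.org.uk.local
pluginsync = true
```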

root@raspberrypi:~# puppet agent -t
Info: Creating a new SSL key for raspberrypi.spence.org.uk.local
Info: Caching certificate for ca
Info: Creating a new SSL certificate request for
Info: Certificate Request fingerprint (SHA256):
Exiting; no certificate found and waitforcert is disabled

root@raspberrypi:~# puppet agent -t
Info: Caching certificate for raspberrypi.spence.org.uk.local
Info: Caching certificate_revocation_list for ca
Info: Retrieving plugin
Info: Caching catalog for raspberrypi.spence.org.uk.local
Info: Applying configuration version '1349353720'
Finished catalog run in 0.82 seconds
root@raspberrypi:~# puppet --version


Next - will it run a master under webrick…
Before I did this, I modified the memory split of the device to only have 32mb memory for Video, and I turned off the X server too.

root@raspberrypi:/# puppet master


root@raspberrypi:/var/log# puppet agent -t
Info: Retrieving plugin
Info: Caching catalog for raspberrypi.spence.org.uk.local
Info: Applying configuration version '1349425180'
this is a test
/Stage[main]//Node[default]/Notify[this is a test]/message: defined 'message' as 'this is a test'
Info: Creating state file /var/lib/puppet/state/state.yaml
Finished catalog run in 0.76 seconds

Hell yeah!

It’s not fast, by any means - I installed stdlib 3.0.1, and re-ran the agent.  Pluginsync took 1 minute 56 seconds to complete running an agent against the box locally….

I'm going to see whether I can get it to scale a bit better, given the memory constraints, by adding a webserver and a rack server.  Initial indications are that it runs OK actually - pluginsync performance is no better really, the limitation appears to be CPU - the box swapped a bit, but to no great extent.

Running another box against it as a puppet master - not that many resources, nothing fancy like an ENC or PuppetDB etc. (though I could point myself at a remote one) - I was pleasantly surprised to see it actually run pretty fast in terms of catalog compiles (including some templating and stuff), especially as I'm running in debug.

With about 5 file resources, 600 sysctl resources and a notify, and a complex-ish graph (File <| |> -> Sysctl <| |> -> Notify <| |>) a catalog compile takes:

Oct  5 12:18:49 raspberrypi puppet-master[25733]: Compiled catalog for debian-1.spence.org.uk.local in environment production in 28.14 seconds

Compared to a KVM 2.7.12 PE Puppet master, doing the same catalog compile it looks slow:

Oct  5 13:35:50 puppet puppet-master[11989]: Compiled catalog for debian-1.spence.org.uk.local in environment production in 2.12 seconds

but for a $25 appliance with 16GB of storage that can now bootstrap an entire datacenter, I think it’s awesome!

Cross Node Puppet Resources

One of the big topics that keeps coming up again and again in the configuration management space is how to deal with configurations that span multiple nodes. Typically these nodes are OS instances, but the requirement is almost always to update inter-dependent configurations (even when they're on the same host).

A typical example of this would be a service requiring an application instance such as a Tomcat container, a web front end to the application, maybe a database server and possibly a load balancer in front of it all.

This requirement manifests itself in needing to do orchestrated configurations of one or more nodes which may then need to be installed or updated in a particular sequence in order to correctly configure the service that they should offer.  In our example, probably get the databases going, then the Tomcat, then the web front end(s) and based on all that configuration configure the load balancer.  If a config changes for one part of the stack, then you will probably want changes to other parts to flow through your infrastructure in the right order and probably soon, too!

Consider the following Puppet code for a vhost in our application:

file { '/etc/httpd/conf.d/vhost_for_my_custom_domain.conf':
  ensure  => present,
  content => template('apachemodule/vhost.conf.erb'),
}
When that file changes on my webserver, I want the config on my load balancer to update.  I can probably use an exported resource within the Apache class that resource lives in, to pass the config to apply on my load balancer node later.  However, when my original node changes I don't want to sit around and wait for the load balancer to check in and update its configuration - that might be 30 minutes away - I want my configuration to update now dammit!
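The exported-resource half of that might look something like the following sketch - the haproxy balancermember type here is purely illustrative, standing in for whatever your load balancer config actually needs:

```puppet
# On each web node: export a resource describing this backend
@@haproxy::balancermember { $::fqdn:
  listening_service => 'myapp',
  server_names      => $::hostname,
  ipaddresses       => $::ipaddress,
  ports             => '80',
}

# On the load balancer node: collect everything the web nodes exported
Haproxy::Balancermember <<| listening_service == 'myapp' |>>
```

The collection only happens the next time the load balancer compiles a catalog, though - which is exactly the waiting-around problem described above.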

One way of doing this would be to wrap the change up in some external orchestration - my tool of choice would be to do some ordered mcollective magic - and run the nodes in order somehow (manually or scriptomagically).

 What I really want to achieve is a way of Puppet doing this for me transparently so I don’t need to worry about external orchestration. So using some resource tagging sauce I can now drive mcollective at the end of a Puppet run from the Puppet master.  The runs are filtered appropriately using the data the resources are tagged with so I don’t need to worry too much about having every single one of my nodes check in at once (thundering herd ahoy!).

The code is here: Puppet mcollective notify report processor

To use it, I need a working mcollective install - my target nodes need to have the mcollective server and Puppet on them. I also need to add some data tags to the resources to trigger the run from the report processor:

file { '/etc/httpd/conf.d/vhost_for_my_custom_domain.conf':
  ensure  => present,
  content => template('apachemodule/vhost.conf.erb'),
  tag     => ['mconotify--class--loadbalancer'],
}
If this resource changes (successfully, and not in noop mode), the report processor will trigger an mcollective run for all nodes in my configuration that have the Puppet class ‘loadbalancer’ applied to them. Magic!
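To make the tag format concrete: it's just 'mconotify', a target type (class or node) and a target name, joined by a delimiter ('--' being the default, and configurable).  A hypothetical sketch of the parsing - not the module's actual code:

```ruby
# Hypothetical sketch of picking apart a tag like 'mconotify--class--loadbalancer'
# or 'mconotify--node--web01'. The '--' delimiter is an assumption matching the
# configurable default; the real report processor reads it from its config.
DELIMITER = '--'

def parse_mconotify_tag(tag)
  prefix, type, target = tag.split(DELIMITER, 3)
  return nil unless prefix == 'mconotify' && %w[class node].include?(type) && target
  { type: type.to_sym, target: target }
end
```

Class targets become a class filter on the mco run; node targets get collected into a node identity filter.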

I need to configure the processor with some mcollective configuration so it can authenticate and send RPC messages appropriately but that only requires a sensibly formatted configuration file.  In this way we cut down the volume of external dependencies to get our orchestration going.
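That file is just a standard mcollective client config.  A minimal sketch (the connector, credentials and libdir are illustrative - match them to your own mcollective install; the /tmp/client.cfg path is the one the processor's debug output mentions):

```ini
# /tmp/client.cfg - minimal mcollective client config (illustrative values)
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = stomp.example.com
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = secret

securityprovider = psk
plugin.psk = abc123

libdir = /usr/libexec/mcollective
```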

I think this method is pretty cool - it uses pure Puppet, although I'd like to not have to use tags to provide the data to the report processor (that would mean a meta-parameter in its own right - which isn't a massive deal, but it would require a change to Puppet core).  Throw away your shell scripts!

Update 20121024:  I uploaded the module to the Puppet forge here - http://forge.puppetlabs.com/fiddyspence/mconotify

If I look at the debug output for a Puppet run containing tagged resources, I can see what the processor is doing:

Oct 3 16:41:09 puppet puppet-master[21564]: MCONOTIFY puppet.spence.org.uk.local: CONFIG:{:delimiter=>"--", :debug=>true, :mcoconfig=>"/tmp/client.cfg", :mcotimeout=>5}
Oct 3 16:41:09 puppet puppet-master[21564]: MCONOTIFY puppet.spence.org.uk.local: Added mconotify tag mconotify--class--loadbalancer mconotify--node--ibroketheinternet.co.uk mconotify--node--debian--1

Oct 3 16:41:09 puppet puppet-master[21564]: MCONOTIFY puppet.spence.org.uk.local: End of tag matching

Oct 3 16:41:09 puppet puppet-master[21564]: MCONOTIFY puppet.spence.org.uk.local: matched mconotify--class--loadbalancer to a class
Oct 3 16:41:09 puppet puppet-master[21564]: MCONOTIFY puppet.spence.org.uk.local: matched mconotify--node--ibroketheinternet.co.uk to a node
Oct 3 16:41:09 puppet puppet-master[21564]: MCONOTIFY puppet.spence.org.uk.local: matched mconotify--node--debian--1 to a node
Oct 3 16:41:09 puppet puppet-master[21564]: MCONOTIFY puppet.spence.org.uk.local: Filters: node 2 class 1
Oct 3 16:41:09 puppet puppet-master[21564]: MCONOTIFY puppet.spence.org.uk.local: Doing an mco run for ibroketheinternet.co.uk,debian
Oct 3 16:41:09 puppet puppet-master[21564]: MCONOTIFY puppet.spence.org.uk.local: /ibroketheinternet.co.uk|debian/
Oct 3 16:41:09 puppet puppet-master[21564]: MCONOTIFY puppet.spence.org.uk.local: Doing an mco run for loadbalancer

Old Fashioned Config Management

I was having a rummage around some old code that I wrote in a previous life the other day, and encountered my old chum 'wincheck'.  Most, if not all, of you will not have heard of wincheck, but I like to think it was pretty groovy in two quite separate ways.

Way the first - it checked common, definable misconfigurations and/or configuration items to validate whether they were right or not.  It was marginally pluggable, in that it read data files describing specific configuration instances (such as a Windows service, or a package) to validate whether each was installed/not installed or running/not running - and, more importantly, whether that was the right thing to be doing.

Looking at the code, it did some other cool stuff too.  Mostly it was WMI driven, so you could run your check against the local machine or some other arbitrary host on the network, assuming you had rights to read the WMI instances on the node.  The report was human readable: it output some HTML into a handy browser tab so you could consume it - with a small amount of tweaking it could have collated those reports, or run unattended and collected all sorts of useful information.

The way the second it was pretty groovy is a sad indictment of why people give up on tools or their development.  Wincheck was pretty rudimentary, though it could have (with more development effort) been really useful.  No doubt it might have evolved into a tool that as well as auditing and reporting could have remediated.  It might have ended up being rewritten in something better and more efficient than vbscript.

Sadly, wincheck serves, for me, as a reminder that even though something has already been invented, you don’t win prizes if people don’t either deign to acknowledge a tool’s existence, or aren’t aware of it.  In either of those two cases, when the organisation you are working for runs a clever ideas competition, the prize (it happened to be a laptop, if I recall correctly) for best cleverest idea (which happened to invent a configuration auditing tool) doesn’t go to one of the direct reports of the person giving the award, working on a project sponsored by that person.  Yeah - that still pisses me off.

Thus, for lack of interest, wincheck has been dormant in a tar file for 8 years, with the last commit dated 21st June 2004.  I might upload it to the github just for fun.