I love Cobbler.
Cobbler + Chef in my environment means that I can go from bare metal to an active cluster node in moments with little effort.
It is a powerful system for managing kickstart profiles, PXE boot, power control, DHCP, DNS, etc.
Below are some notes to help get you going with just the basic feature set. It is a system you can easily go nuts with to automate a lot of your infrastructure.
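To give a feel for the workflow, here's a sketch of the basic commands. The distro name, paths, MAC and IP are all made up, and flags vary a bit between Cobbler versions:

```shell
# Import a distro from mounted install media (import appends the arch
# to the name, so this creates the distro "centos6-x86_64")
cobbler import --name=centos6 --path=/mnt/centos6-iso
# Attach a kickstart to a profile built on that distro
cobbler profile add --name=webserver --distro=centos6-x86_64 \
    --kickstart=/var/lib/cobbler/kickstarts/webserver.ks
# Register a machine against the profile for PXE boot
cobbler system add --name=web01 --profile=webserver \
    --mac=AA:BB:CC:DD:EE:FF --ip-address=10.0.0.21
# Regenerate the PXE/DHCP/DNS config
cobbler sync
```

These need a running Cobbler server behind them, so treat them as illustration rather than copy-paste.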
Following up from my last post on creating a simple yum repository, here is how to set up a local CentOS mirror.
Here is some low hanging fruit to improve your RHEL environment and simplify your work… set up a simple Yum repository.
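The gist, sketched with example paths (your RPM directory and repo name will differ):

```shell
# Example paths only — a real repo would live somewhere like /srv/repo
REPO_DIR=/tmp/myrepo
mkdir -p "$REPO_DIR"
# ...copy your RPMs into $REPO_DIR...
# createrepo builds the repodata/ metadata yum reads; skipped if absent
if command -v createrepo >/dev/null; then createrepo "$REPO_DIR"; fi
# Client-side repo definition (normally /etc/yum.repos.d/local.repo)
cat > /tmp/local.repo <<EOF
[local]
name=Local repository
baseurl=file://$REPO_DIR
enabled=1
gpgcheck=0
EOF
```

To serve it over HTTP instead, export the directory from a web server and point baseurl at that URL.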
Over the last year, Google Analytics says I’ve been getting a lot of search hits indicating that some folks want to know how the C-series DRAC works.
It is easy enough to set up like any other IPMI/DRAC system.
First you’ll need to plug the IPMI/Management Ethernet port into your network (preferably an out-of-band (OOB) network separate from your production network). In the BIOS, make sure the management port is set to ‘Dedicated’; earlier units shipped with it set to ‘Shared’ by default, which disabled the dedicated IPMI port.
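Once the port is live, you can set the BMC's address from the OS with ipmitool. A sketch — LAN channel 1, the addresses, and the user/password are all assumptions; check your hardware docs:

```shell
# Give the BMC a static address on the OOB network (channel 1 assumed)
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 10.0.100.50
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 10.0.100.1
# Verify the settings took
ipmitool lan print 1
# Then, from another host on the OOB network (example credentials):
ipmitool -I lanplus -H 10.0.100.50 -U root -P changeme chassis power status
```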
We have a pretty normal single master MySQL setup.
Since we have a read-heavy application it makes sense: everyone writes to the master and reads from a large pool of read-only slaves.
But with more and more slaves it becomes hard to manage which nodes read from which slaves. It can get unmanageable pretty quickly when configuring the app servers.
If we lose a MySQL slave, we have to redirect all of those servers to a new one… which descends into a bunch of temporary app config or DNS changes that sometimes are not temporary :/
The stuff in this article isn’t my bit of magic, but it is what we have been using in one of our three datacenters for about a year now, and I am hoping to migrate the others to the scheme. My boss and an ex co-worker set it up and I think it is pretty nice.
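As a taste of the idea: one common way to put a single address in front of a read pool is a TCP load balancer like HAProxy. The fragment below is my own sketch with invented names and addresses, not necessarily the exact scheme described here:

```
listen mysql-ro
    bind 0.0.0.0:3306
    mode tcp
    balance roundrobin
    option mysql-check user haproxy_check
    server db-slave1 10.0.0.11:3306 check
    server db-slave2 10.0.0.12:3306 check
```

The app servers then point at the balancer's address, and losing a slave just means it drops out of the pool instead of triggering app config or DNS changes.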
I find myself running more and more Cassandra clusters, and when we were on Chef 0.9.8 I was being lazy and just cloning my Cassandra cookbook per cluster. Not exactly a way to scale the manageability of your config.
Now I’ve refactored the cookbook to allow me to manage multiple clusters by extracting the initial_token from a databag. Once we start implementing the new Environments feature in Chef 0.10 I’ll be able to simplify this further.
I’m debating having the cookbook auto-generate tokens and assign them as well as re-generate/nodetool move/re-balance when I’ve added another node with that cluster specified in the databag. That’s a big project and for now I’m too much of a control freak to automate that, but I’m thinking on it.
I’ve also made it so the cookbook auto-generates the cassandra-topology.properties for the PropertyFileSnitch based off of location info stored in the databag.
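For reference, the file the PropertyFileSnitch reads is just one IP=DC:rack mapping per line; the values below are invented:

```
# cassandra-topology.properties — <node IP>=<datacenter>:<rack>
10.0.0.1=DC1:RAC1
10.0.0.2=DC1:RAC2
10.0.1.1=DC2:RAC1
# Fallback for nodes not listed above
default=DC1:RAC1
```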
Finally upgraded to Chef 0.10.6 from 0.9.8.
Hot, sweet, environments and encrypted data bag action.
Except… well… the chef-client would occasionally die… quietly.
No log, no debug output, no exit codes, just poof… no more chef-client daemon.
(This is not the point where you guys tell me I should use cron or runit or daemontools or something to run chef, I’ve heard it)
The lovely folks at Opscode said that running on ruby 1.8.7 rather than ruby 1.9.2 was the culprit, and then drew my attention to the super-happy-awesome Opscode Chef Omnibus installer here (available as rpms, debs, tgz, etc.)
It installs (almost) everything you need into /opt and lets Chef run in its own ‘embedded’ ruby 1.9.2 environment, keeping my system ruby clean.
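At the time of writing the bootstrap looked roughly like this — check the Opscode install page for the current incantation before running anything:

```shell
# Fetches and runs the Omnibus bootstrap; installs under /opt/chef
curl -L https://www.opscode.com/chef/install.sh | sudo bash
# The bundled ruby lives alongside it, separate from the system ruby
/opt/chef/embedded/bin/ruby --version
```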
Let’s talk about Cassandra maintenance.
Nothing crazy here… these are just some notes I jotted down for folks I work with explaining a cronjob I put into production as well as providing the simple script. Thought some other people might benefit.
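The shape of the idea, as a hedged sketch (this is not the post's actual script; paths are examples). Using `-pr` repairs only the node's primary token range, so rotating the job across the cluster covers every range once without duplicate work:

```shell
#!/bin/bash
# Weekly repair job for one Cassandra node; stagger the cron schedule
# so each node in the cluster runs on a different night.
LOG=/tmp/cassandra-repair.log   # example path; production would use /var/log
echo "$(date) starting repair" >> "$LOG"
# Guarded so the sketch is harmless on hosts without Cassandra installed
if command -v nodetool >/dev/null; then
    nodetool repair -pr >> "$LOG" 2>&1
fi
echo "$(date) repair finished" >> "$LOG"
```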
So, memcached 1.4.11 lets you rebalance and reassign slab memory!
This is epic!
Info on why this is epic is here.
Info on the implementation is in the release notes.
From the release notes, please remember that the slab reassignment feature is in beta and is subject to change.
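To experiment with it, roughly (option names are from the 1.4.11 release notes; the daemon flags and slab class numbers here are just a sketch):

```shell
# Start with slab reassignment enabled and automatic page moving on
memcached -d -m 1024 -o slab_reassign,slab_automove
# Manually move a page of memory from slab class 1 to class 5
echo "slabs reassign 1 5" | nc localhost 11211
# Inspect per-class memory with the slabs stats
echo "stats slabs" | nc localhost 11211
```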
I just took a regular spec file for the project that I found elsewhere and modified it a little. I disabled the SASL stuff in my spec file since we don’t use it and I didn’t want to mess with building it.
EDIT: Actually, this article has been revised for less yak shaving. With the help of Dormando and Justin Lintz, I was able to shed some unneeded dependencies.
So here you go:
Thought I’d play a little with Hadoop 0.23 (a.k.a YARN, MR2, NextGen Hadoop) and dump my notes here.
Gotta keep my skillz sharp y’all so I don’t become irrelephant. (Yes, that just happened.)
Below I just set up a pseudo-distributed mode setup and run some examples on it, nothing crazy.
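For the pseudo-distributed setup, the YARN-specific bit that tends to trip people up is wiring the MapReduce shuffle into the NodeManager. A sketch of the relevant yarn-site.xml fragment — the property names are from the 0.23-era docs and may shift between releases:

```xml
<configuration>
  <!-- Run the MapReduce shuffle as a NodeManager auxiliary service -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
```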
I’m hoping to test and write more on how 0.23 differs from the main line 0.20.x, 1.0 and CDH3 releases, as well as to play with NameNode federation and some other paradigms like MPI, Hama and Spark.