Categories
computing linux sysadmin

Emergency Server Moves, Automation

So, I’m having a bad server day.

This is a Monday Morning email if ever there was one

It’s my fault, to a large extent. Earlier this year I discovered that my server had been sending daily emails complaining about a problem with one of the mirrored hard-drives, but these were going directly into Spam. I asked my hosting provider (Hetzner) to swap a new drive in, which they did, but the process of copying the old hard-drive mirror to the new one failed because of an error on the old hard drive. Not having the time to devote to fixing it properly, I bodged a fix and moved on with my life.

This is not a good email to receive either, and coming on top of the last one, it’s worse

Earlier this week, that problem developed into a fatal one. It’s getting worse, and so now I have a limited time to move everything off.

So now I own a brand new server. Continuing with my usual naming conventions, this is Archipelago.water.gkhs.net.

One of the reasons I didn’t jump to a new server earlier is that I wanted to improve how it was set up. Atoll – the server which is failing – was set up three years ago in a hurry, as I attempted to move stuff off of fjord – its predecessor – before the billing period was up. All of this was before I moved back into sysadmin/devops, and it’s a very traditionally set-up server – the kind I refer to at work as “Artisanal”: hand-crafted by craftsmen, unique and unrepeatable, because there’s no documentation as to *why* any config is the way it is. My mission at work is to automate that stuff out of existence, and I wanted to apply some of what I’ve learned in the last few years to my own systems.

Server herding over the last ten years has shifted away from big servers doing multiple jobs, and towards huge servers pretending to be many tiny servers doing single jobs. Initially I thought of setting up the new server as a virtual hosting server, running many small virtual machines. I’ve come to the conclusion this is impractical for my use-case, which is a server hosting many dozens of tiny websites and services. Dedicating a slice of resource to each does none of them any favours, and increases the admin burden rather than decreasing it. Instead, I’ve gone for a halfway-house solution of putting most of the separate services on Docker.

(Docker, for those who haven’t seen it, is basically chroot on steroids: an enclosed mini-OS that shares physical resources with the host, but only has access to the service and disk resources you specifically give it. For example, you can grant it a port to talk to MySQL on the host (or another Docker container), and a directory so it can maintain state between restarts.)
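To make that concrete, here’s the shape of the invocation – the container name, port and host path are illustrative, not my actual setup:

# Run a hypothetical web app container in the background: publish
# container port 80 as 8080 on the host, and mount a host directory
# into it so its state survives restarts.
docker run -d --name blog \
	-p 8080:80 \
	-v /srv/blog/html:/var/www/html \
	wordpress

Everything the container is allowed to touch is right there on the command line, which is most of the appeal.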

Rather than manually set up the server for Docker, and have many automated boxes inside a grand artisanally crafted shelf, I’ve decided to use Ansible to manage the main server setup (and a lot of the DNS, since most of my sites use Amazon’s easily scriptable Route53 DNS service). I’m still learning Docker, but I’m comfortable in Ansible, so I haven’t gone as far as to orchestrate the Docker setup itself with Ansible, just the install.
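For flavour, this is the shape of the thing – an illustrative ad-hoc command rather than a line from my actual playbooks, and the package name will vary by distribution:

# Install Docker on every host in the inventory, updating the apt
# cache first; -b escalates to root
ansible all -i hosts -b -m apt -a "name=docker.io state=present update_cache=yes"

The same module call, dropped into a playbook, becomes the documented, repeatable version of an otherwise-undocumented apt-get invocation – which is the whole point.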

All of this is kind of like getting everyone off of the capital of Sokovia before the rockets hit the right altitude. In this metaphor, the unbeatable forces of entropy will be playing the part of Ultron

Docker’s given me some fantastic advantages in just the 12 hours or so I’ve spent using it. I’ve long been a user of things like Virtualenv for Python, to separate out my language and library versions on a per-application basis, and the ability to do that with whole OS packages is very useful. It enables one of the holy grails of PHP development: hosting multiple sites with different versions of PHP at the same time. So WordPress can be on the brand new PHP7 (you’re using it now, in fact), while a phpBB install can remain on 5.6 until they get their act together (or I switch it to Discourse or something). For this traditional hosted kind of server, which does many things for many different people, it’s really useful.
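Sketched out – the image tags are real Docker Hub ones, but the names and paths are made up:

# Two PHP-FPM containers on different PHP versions, each bound to
# its own localhost port; the host webserver proxies each site to
# the right one.
docker run -d --name wordpress-php7 -p 127.0.0.1:9070:9000 \
	-v /srv/wordpress:/var/www/html php:7.0-fpm
docker run -d --name forums-php56 -p 127.0.0.1:9056:9000 \
	-v /srv/phpbb:/var/www/html php:5.6-fpm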

Taking Control (Photo by Ady Satria Herzegovina, used with permission)

All this is still an arse, though. Working through today, I’ve got WordPress, the forums and some of the static sites moved over, but still less than 20% of the functionality. I’m taking a break now (it’s midnight as I write this) for some gaming and then probably sleep. Maybe I can finish this tomorrow…

Categories
computing linux

Fear and go seek

The common refrain from people advancing the cause of government back-doors in encryption is that only people with something to hide need to encrypt their work. If you have nothing to hide, then you have nothing to fear.

Quite apart from that not being true in the general case, it’s bad in the specific case too, and this is why:

The following things use the same kinds of encryption that the government wants to put back doors into:

* Every website you put your credit card number into.
* Your online bank access.
* The communication between the website you put your credit card number into and your credit card company.
* Your connection to your email.
* The ability for home-workers to log in to their company’s network (VPN).

Companies are legally obliged to keep their company secrets secure; workers are contractually (and in some cases also legally) obliged to keep those secrets to the best of their ability.

These measures would grant the government the ability to read all of the above as they flow through the wire, and even if you believe the government should be allowed to do this, there’s a wider issue.

Governments have, so far, not demonstrated the ability to keep their own documents secure – and those documents would include the details of the back-doors into secure systems, at which point the widely-used standards we rely on to encrypt communication are blown open. Even if governments somehow managed to perfect their own security, the known existence of a back-door would encourage the large number of highly intelligent people with the required technical skills to try to find it, either out of intellectual curiosity, or in order to read your data. Basically, it means that the encryption we rely on every day – the advancements that make things like personal banking and shopping over the internet possible for disabled, busy or just lazy people – suddenly becomes a lot more risky.

Computer security’s taken a bashing in the last year. Several deep investigations into the publicly developed libraries that underpin a lot of internet security have resulted in a number of very public and news-friendly panics, and undermined confidence in them in general. (To which the response is: bugs being found – and fixed – is good. I’m happy that people smarter than me can see the code I rely on, and will publicly say if there’s a problem with it, rather than hope nobody notices.) But the fact that ISIS, Apple, Google and other reprobates can encrypt their data so that it can’t be recovered on sub-decade timescales is the same fact that means your credit card data can be stored safely, that your bank can offer your balance to your phone, and that companies – maybe yours – can let you VPN in and work from home once in a while. Crippling that is a very high price to pay.

Categories
Apple computing linux tv windows

Ripping TV Yarns

I’m in the process of ripping some boxsets of DVDs to Plex, and I thought I should probably document the process. The most obvious thing I’m not using here is Handbrake, which works really well for some people, but I am not one of them.

Physical to Digital

MakeMKV turns any DVD or Blu-ray I’ve thrown at it into an MKV file. The one thing it could do to make my life better would be custom tags on filenames, but the default {Directory}\{DVD Identifier}\title{nn}.mkv is good enough. {DVD Identifier} is annoyingly unspecific most of the time, and sometimes inconsistent even within discs of the same box set (the thing I’m currently ripping has both WW_S4_T5_D3 and WESTWING_S4_D6 in it, as discs 1 and 6 respectively), so the next stage is to make those directory names consistent. It doesn’t matter what they are, so long as when I “ls” the directory they come out in the right order. Then, I run this:

export COUNT=1                # Start at 1
find . -name '*.mkv' |        # Find all files ending .mkv
while read fle; do            # For each of those (as variable $fle)
	mv "$fle" "$(printf 'The_West_Wing-S04E%0.2d.mkv' "$COUNT")" # Rename to the next number in sequence
	COUNT=$((COUNT + 1))  # Add one to the filename count
done

Note: collapsed back into a single line, without the comments, that’s:

export COUNT=1; find . -name '*.mkv' | while read fle; do mv "$fle" "$(printf 'The_West_Wing-S04E%0.2d.mkv' "$COUNT")"; COUNT=$((COUNT + 1)); done

This gives me a directory of well-named MKV files.

Digital to MP4

Plex is happier with MP4-encoded videos than with MKV files, though, and they’re smaller without a noticeable (to me) drop in quality, so when I’ve got a few series of these built up, I’ll run this overnight:

for fle in mkv/*.mkv; do encode.sh "$fle"; done

Where encode.sh looks like this:

#!/bin/bash
file="$1"
# Re-encode to H.264 video and AC3 audio; "${file%.*}.mp4" swaps the
# extension, so Foobar.mkv comes out as Foobar.mp4.
ffmpeg -i "$file" -codec:v libx264 -profile:v high -preset ultrafast -crf 16 -minrate 30M -maxrate 30M -bufsize 15M -metadata:s:a:0 language=eng -c:a ac3 -b:a 384k -threads 2 "${file%.*}.mp4"

Which is a standard ffmpeg encode line, the only real weirdness being the ${file%.*}.mp4 bashism, which turns the $file variable from “Foobar.mkv” into “Foobar.mp4”. (The single % strips the shortest matching suffix – just the final extension – so “Foo.bar.mkv” becomes “Foo.bar.mp4”; it’s the greedy %% form that would give you “Foo.mp4”.)
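Straight from a shell, for the avoidance of doubt:

file="Foo.bar.mkv"
echo "${file%.*}"    # Foo.bar - % strips the shortest matching suffix
echo "${file%%.*}"   # Foo     - %% strips the longest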

MP4 to Mediacentre

Once that’s finished, I’ll get rid of the MKV files and send the MP4s into Plex. To ensure consistency of my filenames, and also to get any subtitle files I need, this is done using filebot, like this:

filebot -script fn:amc --output "/media/mediashare" --log-file amc.log --action move --conflict skip -non-strict --def music=y subtitles=en artwork=y --def "seriesFormat=TV Boxsets/{n}/{'S'+s}/{s00e00} - {t}" "animeFormat=Anime/{n}/{fn}" "movieFormat=Movies/{n} {y}/{fn}" "musicFormat=Music/{n}/{fn}" --def plex=localhost .

(Filebot, rename using the (included) automediacentre script. Output to directories below my media drive mount, log to amc.log, move (don’t copy) the files, if it already exists skip it. Don’t do strict checking, download music, search for subtitles, get series artwork, send TV shows to the “TV Boxsets” directory in {Series Name}/S{Series Number}/s{Series number}e{Episode Number} – {Episode Title} format. Anime should go somewhere else, Movies somewhere else, Music somewhere else, then notify plex on the local machine. Do this on the current directory)

Operating System Notes

None of this is OS-specific. Filebot, FFmpeg, Plex & MakeMKV are available – and work identically – on Windows, Mac & Linux. The various bash scripts could be adapted to PowerShell, but I’d instead recommend Babun, a repackaging of Cygwin with a far nicer interface and package-management system, which will give you the basic *nix command-line tools on your Windows machine. (Everything above up to “MP4 to Mediacentre” runs on my beast-sized Windows gaming rig, to avoid making the puny media centre CPU cry too much.)

Categories
computing linux Personal

Week 17 – There are lights in the ground, where the lights in the sky have fallen

Busy fortnight.

I’ve got a new address: Istic.Networks Ltd has a new trading address, since we’ve joined the Innovation Warehouse. Mostly, this is because I need to manage the work/life split a bit better, and having a transitional commute helps with this. Also, it makes my watch happier that I have to walk to work.

Yes, I wrote the phrase “It makes my watch happier”. I am a slave to the little red circle, and soon the green ones, I hope.

This weekend is Odyssey, and all the stuff that brings. I’m slightly worried about my first NPC role in the system – I’m head ref, this shouldn’t worry me – and the number of small crises that continue to plague us. They do every time, though, and I’m sure it’ll be fine.

Project-wise, I’ve laid down some solid work on both larp.me and PiracyInc over the last couple of weeks, but I’m looking forward to having some dedicated time for them next week. I’ve been full-time on my contracting gig for almost ten days straight, which has been a bit of a shock to my system, and the change of scene will be good for me.

Achievements? Well, my Windows-based Plex media-server developed a fatal flaw, which resulted in a reinstall. I’ve had it with Windows servers, however, so I’ve moved it over to Ubuntu. I still need to teach it a bit about boot systems and file library arrangements, but so far it’s running both better and faster than the Windows version. It was originally on Windows because that was the requirement for Plex’s iPlayer channel, but since that’s dodo’d, there’s not been a good reason for it to stay there.

It also means I can move my OpenVPN install off the Raspberry Pi and onto a slightly more solid footing.

Fatal flaw in the VPN, though: it will only support one device at a time. Not sure what I did there, but I can almost certainly fix it with the move to the new environment.
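(If it’s the usual cause – every device sharing the same client certificate – then OpenVPN drops the old session whenever a second client connects with the same common name, unless the server config says otherwise. A guess rather than a diagnosis:)

# In the OpenVPN server config: allow multiple simultaneous
# clients to connect using the same certificate/common name
duplicate-cn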

Failed to write any blog posts, though. There are five in the hopper, including “Technical Gadgets That Have Not Changed My Life”, “The Explanation of LARP” and “The intersection between ssh connections and aerial faith plates”, the latter of which is suffering from being a better title than article.

Categories
linux sysadmin

Week 15 – Outstanding in our field

I had a lovely Empire event. It had nice weather, nice people, and things that I wanted to happen happened, and happened well.

Most of what I’ve done other than that, though, has been dribs and drabs.

I’ve turned my dust-gathering Raspberry Pi into an OpenVPN server for the house, enabling me to get at my media server and my desktop’s files and folders from anywhere in the world, without opening those services up to the internet. This required a bit of playing around, because OpenVPN didn’t work through my fibre router, and putting the Virgin router into “Modem mode” and moving all router duties to a nicer Belkin box stopped my kettle talking to the wifi (also my Kindle and media server).

Some playing with radio settings and configuration later, and the Belkin box is running the roost, with everything connected to it. Of course, the public story on that is just that I can turn on my kettle from anywhere in the world, because that’s the obvious bit.

The other bit was implementing recaptcha on a quotes service. I’ve been running quotefiles for channels for eight years or so, but I’ve never found a service that didn’t suck. Rash and the other QDB lookalikes have ownership, maintenance and being-awful problems, which pushed me towards Chirpy, a Perl-based system written for Mozilla. I don’t generally work with Perl, which meant that Chirpy didn’t really work for me: when it crashed with obscure templating errors that repaired themselves in a few minutes, there was nothing I could do. Plus, as we drifted further from its 2007 last release date, and its 2010 last code-commit, I trusted it less.

So when it failed for the last time, as I upgraded the server it was on, I honestly couldn’t be bothered to go through the CPAN dance again, and hacked my own together in PHP. It doesn’t have the features, or tagging, or any of the other things we didn’t use. But it worked.

Well, nearly. When I checked the queue for quotes to approve, as I do every few weeks, I found a spambot had hit the form, so I’ve added very basic recaptcha support – which took 45 minutes, only because I can never spell captcha the same way twice in a file.

Categories
computing linux sysadmin

apt-get dist-upgrade

The release of a new Debian version is one of those Deep Thought moments. The great machinery has been churning and grinding for seven and a half million years to produce this single result, and it pops out and stands there for a while, while everyone looks at it to see who will bite first.

This weekend, I upgraded larp.me to use redis as a caching layer, more for practice than because it desperately needs the speed boost, but the php-redis package doesn’t exist in Wheezy, so it was time to upgrade to Jessie, the new version.

The server hosting larp.me is Atoll.water.gkhs.net, which is almost pure webhosting right now: a mix of stuff that nobody would notice if it died forever, and stuff people will send me messages about when it goes down.

First stage was to RTFM. Debian’s release upgrade process is deceptively simple, and I’ve successfully updated servers through four releases – that’s nearly ten years – with just an apt-get dist-upgrade, but one of the ways I’ve managed that is by reading the release notes to see where the icebergs might be.

Here, a few security updates (Root logins disabled by default), but nothing major for anything I use… go forth.
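The mechanical part of the upgrade itself is short – this is the standard procedure from the release notes, sketched rather than transcribed from my session:

# Point apt at the new release
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
# Update, do the minimal upgrade first, then the full one
apt-get update
apt-get upgrade
apt-get dist-upgrade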

The biggest problem I had was Apache, in fact. I’d missed that the 2.4 release Jessie upgrades to was a big one, and the largest issue was that the previous permissions system’s “allow from all” declarations are not legal under the new system. This, coupled with a few changes to SSL config, caused me mild panic. A simple read-through of the Apache 2.2 -> 2.4 upgrade guide soon set me right, though.
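For reference, the change that bit me is this one (per the upgrade guide):

# Apache 2.2 access control:
Order allow,deny
Allow from all

# The Apache 2.4 equivalent:
Require all granted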

The upgrade of PHP, to the numerically pleasing 5.6.7, seems not to have broken anything major.

The packaging of web apps, however, is still moderately fucked. Mediawiki’s stable version is going to be abandoned at some point during Jessie’s lifetime, which will stop security updates of it. Mediawiki’s upgrade process is horrible anyway, and the Debian package solution of complex mazes of symlinks which break every point release hasn’t helped keep people on it, but I think the security release abandonment is the final nail in the coffin of me using it at all.

At some point soon I’ll need to upgrade my other server to Jessie as well, a more complicated process, since while Atoll is almost entirely my stuff, Cenote hosts over 100 sites for a couple of dozen users, as well as things like a Mumble server. Time to schedule some downtime there, I think…

Categories
cevearn comics linux Python Shebang tv

Week Nine – it’s better than bad, it’s good

Quiet work week, so we’ll skip that. I decided that I’d had enough of print statements, and moved both Lifestream and Lampstand over to using Python logging instead for everything outputty. Lampstand also needs a pass to separate output into levels; right now everything’s at INFO.

Positive feedback on some creative writing I did recently – on tumblr, and in scraps elsewhere – has led me to want to carve out time to get the novel moving forward again. I need to suppress the urge to kill it with fire and start from scratch, but right now it’s plodding a bit.

Somewhere between Rest and Play lies Odyssey work this week. A good Story Team meeting at the weekend has set some flags out for the year, and indeed next, and then I spent a few hours putting together the Odyssey T-Shirt shop, to supplement our costume & props budget with mercenary goodness.

Somewhere over the last week I’ve also carved out 13 hours to watch the full first series of Daredevil on Netflix, which I enjoyed a lot, and should turn into another entry shortly…

Categories
Apple computing linux sysadmin windows

My Terribly Organised Life III:B – Technical Development

Code starts in a text editor. Your text editor might be a full IDE, custom-built for your language, a vim window with more commands than you can remember, or an emacs with more metakeys than you have fingers. Nowadays it might even be a window in a browser tab, though that always gives me flashbacks to deploying software by pasting lines into textareas in Zope. Either way, the lines I type go into a text editor, and currently that’s Sublime Text 3.

I used Eclipse, Netbeans, Aptana and other variants of the Java-based juggernaut for years, but partly because my main development languages are PHP and Python, they never really worked that well for me. My primary development OS is OS X, on my beloved MacBook Air, but I don’t want that to matter at all. I use Sublime Text because it has plugins to do most of the things I liked about IDEs (SublimeCodeIntel, some VCS plugins, and a small host of other things), and it works the same, and looks the same, across every OS I use day to day. I’ve even got my prefs and package lists syncing via Dropbox, so the plugins stay the same.

I work as a contractor for hire, most of the time, and I’m terminally addicted to new projects, so I’ve generally got upwards of a dozen different development projects active at any one time. Few of them use the same language/framework combination, and all of them need to be kept separate, talking to each other only in the prescribed ways. Moore’s law has made that a lot easier, with things like VirtualBox being able to run several environments at once, but getting them all consistently set up and easy to control was always a bit of an arse. Vagrant is my dev-box wrangler of choice right now. It could do a lot more, but mostly I use it to bring up and shut down development VMs as I need them, safe in the knowledge that I can reformat the environment with a single command, and – with most projects, after prep work I’ve already done – anyone can set up a fresh and working dev environment in a few minutes.

(In theory. In practice there’s always some “How up to date is your system” crap)
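The day-to-day loop is only a handful of commands (the provisioning details live in each project’s Vagrantfile):

vagrant up         # create and provision the VM
vagrant ssh        # shell into it
vagrant provision  # re-run the provisioner against a running VM
vagrant halt       # power it off
vagrant destroy    # throw it away entirely; 'up' rebuilds from scratch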

Plus, the command line history always looks like it’s instructions for some kind of evil gift-giving robot. Vagrant up! Vagrant Provision! Vagrant Reload! VAGRANT DESTROY!

It’s a year or so since I switched almost everything to Vagrant environments, but it’s only in the last few months that I’ve looked in more depth at using something other than shell scripts to provision them. I don’t really want to run a separate server for it – I’m not working at that kind of scale – so Ansible is currently my provisioning system of choice.

Ansible technically breaks my rule about development environments being platform-agnostic, since it’s fairly militantly anti-Windows as a host platform, but with Babun it works fine. (Babun is a Cygwin repackage, complete with a full package manager and zsh as a replacement for the awful Cygwin interactive shell. If you take away nothing else from this: never install plain Cygwin again.)

I’m fairly lucky in that all my clients have standardized on git as their VCS of choice, as it’s my choice too. Tower absolutely shatters my platform-independence rule, but it’s hands-down the best git GUI I’ve used, and its built-in git-flow support makes a lot of things easier. On Windows I’m using Atlassian SourceTree for the same job, which does a passable job. I’d still not recommend a git GUI unless you know how to drive the command line to some level, if only because the terminology gets weird, but at the same time I’ve really liked being able to work with CLI-phobic front-end developers who could still commit directly to the repo and make changes without needing a dev to rebuild.

For that, and not much else, I’ll recommend the GitHub client (in both Windows and Mac forms). It’s the easiest-to-use git client out there, but it manages that by hiding a lot of complexity rather than by only doing simple things. It will even work with non-GitHub repos, though it’s not terribly happy about the concept. It does have the massive advantage of being free, though.

For the full Rained On By The Cloud experience, the current primary deploy stack for the Skute backend involves pushes to GitHub branches automatically triggering CodeShip CI, which runs the test suite before deploying (assuming success, of course) to Heroku. The secondary stack is similar, but deploys with Ansible to AWS (for Reasons; at some point in the future I’ll no doubt write something deeper on how I’ve built the backend for Skute). Leaning heavily on the cloud is, in IT as much as in life, not entirely a good idea, but it’s a really good starting point, and redundancy is in place.

Heroku’s mostly been a good experience. We’ve run into some fun issues with their autodetection (they decided our Flask-based frontend service should be deployed as node.js, because the asset build system had left a package.json in the root), but the nodes have been rock-solid. Anyway, I’ve drifted into specifics.

Other dev utilities I couldn’t live without? PuTTY, on Windows, for all the normal reasons. ExpanDrive is a Windows/Mac utility for mounting sftp services as logical drives (or, indeed, S3 buckets, or a dozen other similar things). LiveReload automatically watches and recompiles CoffeeScript, SASS, LESS etc. when necessary. Sequel Pro is an OS X GUI for MySQL access… and Evernote is where checklists, and almost every other bit of writing that isn’t also code, go.

There’s probably more, but that’ll be another article now.

Categories
linux Ubuntu

Year of the Linux Desktop

So, I’m developing Piracy Inc in Ruby on Rails version 3. Because newness is obviously better, I’m also doing this with a fully integrated IDE, in the form of Netbeans. Currently, the debugger functionality of Netbeans doesn’t work under Windows (because it wants to compile a module, and there isn’t a compiler, and the precompiled versions don’t work, and getting a compiler working led to two hours of yak shaving that could have been better spent picking the lint out of my bellybutton).

So I decided to install Ubuntu, because there was a new version out, and because the server PInc will run on is a Linux server, so doing dev work on the same OS makes sense.

I have an ATI (now AMD) graphics card, a motherboard, and a complicated hard-drive setup. There’s a 1TB drive acting as a Windows/boot drive, a 30GB SSD for things I want really fast access to (it used to also be the boot drive, but broke and got sent back, so the current boot drive was put in as a temporary stopgap, and I haven’t reinstalled since the SSD got returned), a SiI 3132 RAID card with 2×1TB drives in RAID1, a USB hard drive for backups, and an HTC Desire mobile telephone.

So: Boot, SSD, Raid(Disk1,Disk2), USB, Phone.

Boot from CD. The partition manager’s scanning of all the above takes ages.

Resize boot drive to install things onto, big partition.

No swap. Back. Wait for rescan.

Big partition. Swap drive. Install.

Tea.

Reboot.

Grub cannot find the boot device. “grub rescue>” prompt.

GoogleGoogleGoogle How do I use grub rescue?

“The grub-rescue> mode is a more restricted subset of the grub> shell. Some commands are phrased differently here for easier use. Try help to start. “

Awesome. Except help doesn’t work, and the documentation doesn’t say how the command set differs. Root and boot don’t work either. Also:

Useful tip: try to load normal mode: insmod /boot/grub/normal.mod

Ah, so your hint to how to use grub rescue is to get to grub normal. Why does grub rescue exist, then?

I assume there is documentation for grub rescue somewhere, but I couldn’t find it. Eventually I realized I still needed to care about boot drives being in the first X cylinders of the hard drive, rebooted from rescue and reinstalled.

So, Ubuntu is installed across SSD and Boot, with churning stuff on Boot (/home, /var, /tmp, /swap) and more static stuff on the SSD. SSDs have more limited write-life, so this is Good Use Of Technology.

Support for my RAID card appears to be a little broken. By a little broken, I mean that each drive attached to the RAID card is currently available as a separate, mountable block device in the My Computer window, which is pretty much the exact opposite of anything I ever want to happen ever ever ever involving anything ever attached to a RAID card. I still don’t know how to fix that properly; I got dmraid to create another device presenting the pair as a single RAID volume, and mounted that. So long as I always mount the *right* block device called “RAID”, and don’t accidentally mount one of the component devices on its own, or write to it and invalidate the integrity of the set, I should be fine. It’s not as if they’ve all got the same name and icon, or anything.

No, wait.

My phone was running low on battery so I plugged it in.

Okay, graphics then. Right. I have an ATI graphics card with dual-head output: left side a Benq FP73G mounted landscape, right side a secondary Benq FP73G+ mounted portrait. Monitor Preferences would allow me to enable the second monitor in desktop-extension mode, but if I attempted to rotate it, X would spontaneously restart and lose everything I was doing. So I installed the AMD non-free graphics drivers, because I hate freedom and want my shit to work.

The drivers installed and I had to reboot. How quaint. Right, fine, kernel modules are complicated. Reboot.

Or, you know, not. Can’t mount /home. or /var. Nor /tmp or swap.

What?

Okay, so further research (involving command lines and fdisk) suggests that the drive referred to above as Boot is no longer /dev/sdd, but is instead /dev/sdf. Fine, some kind of updates happened. Change fstab, reboot.

X works, but it’s in mirrored display again, so I configure that to be dual head, and it tells me to reboot. It doesn’t mean reboot, it means restart X, so I press ctrl-alt-backspace, the time-honoured method of “get me out of here, X has gone crazy”. Apparently that doesn’t work any more either. Way to go. Ah well, give in, reboot. Take a recruiter call while it does so.

Can’t mount /home. or /var. Nor /tmp or swap.

What?

The drives are back to being /sdd again. Great.

Can you spot a running thread, readers? Do you see why the numbers are getting shuffled? Has the reason I included “an HTC Desire mobile telephone” in the list of hard drives started to make sense?

Yes, plugging my phone in and then rebooting causes the allocation of my hard-wired hard-drives to go haywire.

Awesome.

Eventually, I find out that the AMD drivers autodetect the refresh rate of my monitor wrongly, causing it to give up and go black. I find that my motherboard does strictly unnecessary things that cause SATA drives to be renumbered when USB drives are attached. I find that my RAID card is a “FakeRAID” card, the RAID equivalent of a winmodem, except with claimed support from Linux, and I should be grateful that it works at all.
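(The device-shuffling, at least, has a standard workaround I wish I’d remembered sooner: mount filesystems by UUID rather than by /dev/sdX name, so the numbering can do what it likes. A sketch:)

# List each filesystem's UUID...
sudo blkid
# ...then refer to them by UUID in /etc/fstab, e.g. (UUID made up):
# UUID=3f1dd4a2-0b6e-4d21-9c5a-8f2e7b1c9d44  /home  ext4  defaults  0  2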

And none of this is Ubuntu’s fault, really. It’s AMD’s stupid drivers, or the motherboard, or badly made cheap RAID cards. Add to that the later problems installing Ruby from source (because Rails 3 needs 1.9.2, which isn’t packaged) and having to edit a Ruby library file to get the debugger to run the right script, and I spent more time attempting to get all of the little component parts to work together than I could spend actually progressing towards actual Piracy Inc development work.

The most common FUD campaign slogan against free software is that it’s only “free” if you don’t value the time you spend faffing with it to get it to work, and that’s a lot less true than it used to be, but the time I spend faffing with the above kind of crap is still far, far too much.

So it’s not the year of the linux desktop now, either.

Categories
Ubuntu

Mass vhosting

My small server currently hosts a number of websites. Too many, really; I should get a bigger server. However, I long ago got bored of creating separate site files for every website I host, so I use MassVHost to make that go away. The same file runs on my dev servers, and it means that to create a new domain, all I do is point DNS at it (via hosts, a wildcard, or whatever) and create a directory with the same name as the site. So, for example, I create /var/www/hosts/unhelpfulclue.aqxs.net/htdocs and http://unhelpfulclue.aqxs.net/ automatically points there.

This is what that looks like:
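Roughly, and from memory – a minimal mod_vhost_alias setup that matches the directory layout above looks something like this, though the real file has a little more in it:

<VirtualHost *:80>
    ServerAlias *
    UseCanonicalName Off
    # %0 expands to the full hostname the request asked for
    VirtualDocumentRoot /var/www/hosts/%0/htdocs
</VirtualHost>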

(That file is in /etc/apache2/sites-available as “vhosting”, then enabled with a2ensite. This is all under Debian. You’ll also need the vhost_alias module installed and enabled – a2enmod vhost_alias – and working.)

One of the most common things you also need to do is automatically redirect people who go to “www.domain.tld” to “domain.tld”, or vice versa depending on your religion. In this world, the canonical name of the site is whatever the directory is called. The thing with the 404 errors and the EverythingIsCatchingOnFire stuff (spot the reference for five points) means that by default 404s go to a script which, in the event of a “this domain doesn’t exist” error, looks for an appropriate domain and sends you there:

(Meaning not only does http://piracyinc.com/ go to the right place, but http://www.piracyinc.info/ does too)