“We had no code and no art assets,” Blizzard 3D Art Director Brian Sousa confirmed to Ars Technica. The 2017 project’s entire art pipeline was “eyeballed,” Sousa said, with recovered concept artwork, sketches, and original boxes and manuals used as reference materials. Not all code was missing, as Blizzard has been issuing patches to the original game’s code base for nearly 20 years. Also, a member of the sound team thankfully had backups of the original sound and voice recordings, which are now reprocessed in higher-fidelity 44,100Hz format.

I’d heard years ago that the majority of the original StarCraft code had been lost, but I figured it was just a rumour. Sounds like the team behind StarCraft: Remastered had a big task in recreating the game in a way some of its biggest fans would appreciate.

External link: StarCraft Remastered devs unveil price, explain how much is being rebuilt

I’ve listened to a lot of 40K podcasts over the last couple of years. Over that time I’ve slowly winnowed my subscriptions down to just a handful.

  1. Forge The Narrative – my favourite 40K podcast of the last few years.
  2. Chapter Tactics – from Frontline Gaming, but distinct enough from their other shows to merit its own subscription
  3. Frontline Gaming – this is the main Frontline Gaming Podcast – the feed also includes Chapter Tactics and some other smaller shows
  4. Ashes of the Imperium – this one is new, but it’s by the team behind the very good Bad Dice AoS Podcast

My biggest gripe with most 40K podcasts tends to be length. Sorry, but unless you’re very, very compelling to listen to, I am not going to listen to a podcast episode which is 2-3 hours long (or more!). The podcasts above tend to clock in at around an hour to an hour and a half, which I find perfect for my listening habits.

Bonus: Podcasts I’m Evaluating:

8th Edition has brought about a few new podcasts, some of which I’m still deciding whether to keep in my subscriptions list.

Bonus 2: Some Age of Sigmar podcasts

For a while I found the quality of AoS podcasts to be generally higher than most 40K podcasts, with only a couple of exceptions. Sadly, my favourite AoS podcast — Heelanhammer — has recently gone on hiatus, so I’m not including it here.

  • Bad Dice
  • Facehammer – can be a bit sweary, so proceed with caution if that’s not your thing.

For various reasons I prefer to remove the www part from my personal-use domains. Setting up Caddy to serve the site from just domain.com is as simple as:

domain.com {
    root /path/to/site/files
    # other directives
}

But this set-up doesn’t provide any way to redirect from www to non-www, meaning anyone who types www.domain.com into the address bar is out of luck. So what to do? Well, Caddy provides a redir directive. Combine it with an additional site definition and a placeholder, like this:

# Original non-WWW site:
domain.com {
    root /path/to/site/files
    # other directives
}
# New, additional "site", for doing the redir
www.domain.com {
    redir https://domain.com{uri}
}
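
As a quick sanity check (a sketch, assuming the set-up above), you can confirm the redirect is actually being issued with curl:

$ curl -I http://www.domain.com/some-page
# Look for a 3xx status and a Location header pointing at https://domain.com/some-page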

An XML Sitemap can be useful for optimising your site with Google, particularly if you make use of their Webmaster Tools. Jekyll doesn’t come with one out-of-the-box, but it is easy to add one. There’s probably a plugin out there which will automate things, but I just used a normal Jekyll-generated file for mine, based on code found on Robert Birnie’s site.

The only modification I made was to exclude feed.xml from the sitemap. Because this is auto-generated by a plugin I couldn’t add any front-matter to a file to exclude it in the same way as other files.

Create a file called sitemap.xml in the root of your site, and paste the following code into it:

---
layout: null
sitemap:
  exclude: 'yes'
---
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  {% for post in site.posts %}
    {% unless post.published == false %}
    <url>
      <loc>{{ site.url }}{{ post.url }}</loc>
      {% if post.sitemap.lastmod %}
        <lastmod>{{ post.sitemap.lastmod | date: "%Y-%m-%d" }}</lastmod>
      {% elsif post.date %}
        <lastmod>{{ post.date | date_to_xmlschema }}</lastmod>
      {% else %}
        <lastmod>{{ site.time | date_to_xmlschema }}</lastmod>
      {% endif %}
      {% if post.sitemap.changefreq %}
        <changefreq>{{ post.sitemap.changefreq }}</changefreq>
      {% else %}
        <changefreq>monthly</changefreq>
      {% endif %}
      {% if post.sitemap.priority %}
        <priority>{{ post.sitemap.priority }}</priority>
      {% else %}
        <priority>0.5</priority>
      {% endif %}
    </url>
    {% endunless %}
  {% endfor %}
  {% for page in site.pages %}
    {% unless page.sitemap.exclude == "yes" or page.url == "/feed.xml" %}
    <url>
      <loc>{{ site.url }}{{ page.url | remove: "index.html" }}</loc>
      {% if page.sitemap.lastmod %}
        <lastmod>{{ page.sitemap.lastmod | date: "%Y-%m-%d" }}</lastmod>
      {% elsif page.date %}
        <lastmod>{{ page.date | date_to_xmlschema }}</lastmod>
      {% else %}
        <lastmod>{{ site.time | date_to_xmlschema }}</lastmod>
      {% endif %}
      {% if page.sitemap.changefreq %}
        <changefreq>{{ page.sitemap.changefreq }}</changefreq>
      {% else %}
        <changefreq>monthly</changefreq>
      {% endif %}
      {% if page.sitemap.priority %}
        <priority>{{ page.sitemap.priority }}</priority>
      {% else %}
        <priority>0.3</priority>
      {% endif %}
    </url>
    {% endunless %}
  {% endfor %}
</urlset>
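
Once the template is in place, a quick local check (a sketch, assuming you have xmllint installed) can confirm the generated file is well-formed XML:

$ jekyll build
$ xmllint --noout _site/sitemap.xml
# No output from xmllint means the sitemap parsed cleanly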

If you want finer control over what appears in the sitemap, you can use any of the following front-matter variables.

sitemap:
  lastmod: 2014-01-23
  priority: 0.7
  changefreq: 'monthly'
  exclude: 'yes'

As an example, I use this in my feed.json template to exclude the generated file from the sitemap:

sitemap:
  exclude: 'yes'

And this in my index/archive pages for a daily change frequency:

sitemap:
  changefreq: 'daily'

It’s super simple. Just include a push directive in your site definition. You can leave it at just that, and Caddy will use any Link HTTP headers to figure out what to push.
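
In other words, the bare-bones version looks something like this (a minimal sketch; example.com is a placeholder):

example.com {
    root /var/www/example
    # With no arguments, push uses any Link headers in responses to decide what to push
    push
}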

If you want more control, you can expand the directive and specify both the path and associated resources, like so:

example.com {
    root /var/www/example
    push / {
        /assets/css/site.min.css
        /assets/img/logo.png
        /assets/js/site.min.js
    }
}

What this block does is say “for every request with a base of / (i.e. every request), push the following three files.” You can customise the base path if you want to, and add more files if you need, but a block like the one above is what I’m using for this site.

You can find out full details in the Caddy Push documentation.

Lately I’ve been feeling a pull to return to my Warhammer 40,000 Flesh Tearers army, which I started around 4 years ago (and promptly only completed one unit of). I had an idea of a small strike-force that was basically just a load of Jump Pack Assault Squads, supported by Land Speeders (with some Death Company elements thrown in). It wouldn’t have been very “competitive”, but it would have been thematic and fun. I didn’t progress the idea very far, as the Blood Angels codex in 7th Edition was… very not good; it also took away the ability to field Assault Squads as a troops choice — rendering the entire idea invalid.

Now we’re in 8th Edition, I can build the army as I imagined it, using the new detachments in the rule book. By getting back to a small “passion project” of mine, I’m hoping I’ll be able to revive my motivation for hobby projects which has been worryingly low recently. Who knows — I might even add some Primaris Inceptors to the mix for some mobile firepower.

Shadowgate was a formative experience in my early youth. A brutally difficult NES RPG, it was the first time I played what was effectively a video game version of the Choose-Your-Own-Adventure books I’d been enjoying. When I say it was difficult, I mean it — it took me more than one sitting to get through the door at the very beginning of the game! I don’t think I ever managed to complete the game, despite my efforts.

I’d heard about the 2014 remake of the game, but never got round to playing it until a few days ago. The artwork is miles ahead of the original (obviously). It might not be to everyone’s taste – it’s very “concept art” in style in many places, which I found led to some aspects of a room being missed on first inspection. The story is pretty much the same, perhaps with a few tweaks. There’s a little more “world building” than the original, I think?

The biggest departure was the difficulty. Despite the game retaining many of the same “frustratingly non-obvious solution” mechanics of the original, I managed to complete it in one sitting. I only died twice! (stupid Goblin…) Granted, it did extend into the early hours of the next morning, and I have over 20 years more problem-solving experience than I did when playing the original, but still…

At the current Steam price of <£3 for the edition that comes with all sorts of extras, it still gets a “worth your time” recommendation if you’re nostalgic, or just fancy a new RPG. I’m not sure I’d spend much more than that, given how short it turned out to be, but who am I to tell you what to do with your money?

You can find Shadowgate on Steam here.

Nintendo have announced the (predicted) SNES version of their Classic Mini. I’ve already registered to be notified of the preorder. The list of 20 games included on the system has some of my favourite games of all time. There’s a previously unreleased Star Fox 2 too. Even if it hadn’t had 7 games I absolutely love, I’d have preordered based on how much fun we’ve had with last year’s NES version.

Hopefully it’s easier to get hold of one this time around.

External link: Nintendo announces the Nintendo Classic Mini: Super Nintendo Entertainment System

This blog is generated by Jekyll, running on the Caddy HTTP/2 web server, and hosted on the lowest-tier Digital Ocean “droplet” (virtual private server). Self-hosting isn’t for everyone, but if you’re the sort of person who wants complete control over your content and how it is delivered – and who might like to tinker every so often – then read on.

The basic steps to setting up are:

  1. Prepare the Droplet
  2. Install Caddy
  3. Set up Jekyll and your workflow

Thankfully for me, other people have already written up their own guides for each of these steps!

To create the droplet that will host your blog, you’ll need a Digital Ocean account. If you don’t have one already, sign-up using my referral link to get $10 in credit.


1. Prepare the Droplet

Create a new Ubuntu 16.04 droplet through the Digital Ocean dashboard, then follow this guide to initial server setup. This should give you a nice base to work with. One thing I like to add to this initial setup is Fail2Ban, which will automatically ban the IPs of connections trying to login with wrong SSH credentials (which will be anyone but you):

$ sudo apt-get update
$ sudo apt-get install fail2ban
# Fail2Ban should automatically start. Check it with the line below:
$ systemctl status fail2ban

One more thing you can do (not necessarily required, as you set up the ufw firewall on the server) is enable a Digital Ocean firewall from the dashboard, and limit connections to just ports 22, 80, and 443.
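
If you’d rather manage the same rules on the droplet itself with ufw (the initial server setup guide covers this, but roughly), it boils down to:

$ sudo ufw allow OpenSSH
$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp
$ sudo ufw enable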

2. Install Caddy

Installation of Caddy is covered by this guide. I followed the steps pretty much as-is, with only minor changes to match my setup (different username, etc.). The biggest difference in my setup was that I installed a couple of plugins as part of my Caddy installation. To do this, change the command in Step 1 to the following:

$ curl https://getcaddy.com | bash -s http.minify,tls.dns.cloudflare

This will install the Minify and Cloudflare plugins. Check out the Caddy home page for more plugins.
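
If you want to double-check the plugins made it into the binary, Caddy can list what it was built with (a quick sketch):

$ caddy -version
$ caddy -plugins | grep -iE "minify|cloudflare"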

I set my site to use the Auto-HTTPS feature of Caddy, which gives the site an SSL certificate via Let’s Encrypt. I also wanted to use Cloudflare in front of my site, which isn’t covered in the guide above. After a bit of trial-and-error, the steps I used are below. If you don’t plan to do this, skip to Step 3.

2.1 Using Caddy Auto-HTTPS with Cloudflare

First off, you need to set up some environment variables. To do this for the service you created using the guide above, run the following command:

$ sudo systemctl edit caddy

This will open up an editor for you to override or add to the main service file. In the editor, enter the following:

[Service]
Environment=CLOUDFLARE_EMAIL="<CloudFlare login>"
Environment=CLOUDFLARE_API_KEY="<your Cloudflare Global API key>"

Save the file and exit. Next, edit your Caddyfile:

$ sudo nano /etc/caddy/Caddyfile

Modify it to something similar to this:

example.com {
    root /var/www
    tls you@example.com {
        dns cloudflare
    }
}

Finally, in the Crypto section of your Cloudflare control panel, make sure to set the SSL mode to Full (Strict). If you don’t, you’ll end up with redirection errors.

You should be ready to start/restart Caddy:

$ sudo systemctl restart caddy
$ # Enter your password when prompted

All being well, your site should be available, with HTTPS enabled.
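
A quick way to check from your local machine (substitute your own domain for example.com):

$ curl -I https://example.com
# A 200 response over HTTPS, with no endless redirects, means Caddy and Cloudflare are playing nicely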

3. Set up Jekyll and your Workflow

I followed this guide to set up Jekyll on my Droplet and create the necessary Git components. If your local machine is OSX or Linux, the guide is all you need. If you’re running on Windows (like me) things are a little more difficult. I tried setting everything up using the Windows Subsystem for Linux, as in this guide, which is the route recommended by the official Jekyll site — but for some reason it didn’t work correctly.

I ended up having to install RubyInstaller and add the necessary DevKit as the last step of the installation. From there, it should just be a case of gem install jekyll bundler and creating the Jekyll site in the normal manner (follow the first part of the guide linked at the start of this section if you need to).
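
For reference, the commands after that point are the standard Jekyll ones, regardless of platform (a sketch; myblog is a placeholder name):

$ gem install jekyll bundler
$ jekyll new myblog
$ cd myblog
$ bundle exec jekyll serve
# Preview locally at http://127.0.0.1:4000 before pushing to the server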


Hopefully, if you’ve followed along this far, you should now have your own shiny new blog, hosted on your own server! Setting this up took me a single evening – not including the time I spent creating my own Jekyll layouts. But those are a topic for another time…

Lock Screen

Raise to Wake is a feature I’ve wanted for a while, so I love that. It sometimes seems a little over-sensitive, but I guess I’ll either get used to it, or it’ll be tweaked in a software update. The new behaviour of unlocking your phone without going to the Home Screen until you press the Home button seemed a bit unintuitive to me, so I’ve changed a setting under General > Accessibility > Home Button to remove the need for the press.

Notifications

Functionally, the new notifications are great, and will get better as more apps embrace the feature. Like others, I’m not a fan of the styling, which is very evocative of “Web 2.0”. Clear All is another minor feature I’ve wanted forever, so I’m glad that’s there; I just wish I hadn’t had to Google to discover it’s hidden behind a 3D Touch gesture. These hidden or unintuitive features and gestures are probably my biggest peeve with iOS 10 for now.

Related to the notification area, I don’t get why the “Today” widget area is duplicated here and to the left of the Home Screen. One or the other would’ve been better, at least in my opinion. Maybe it’s because I never used the old “Today” screen, but did use the old search screen which used to be to the left of the Home Screen…

Messages

Overall I like the update, but I’ve found some of the new features to be really unintuitive to use. The message styles (invisible ink, balloons, etc.) are hidden behind a 3D Touch of the send button – so if you don’t get it right you’ll find yourself accidentally sending the message before it’s finished. This is a very minor thing, but it does cause frustration. I also found the Digital Touch features confusing to use, and the associated gestures a bit hit-and-miss. “Playback” of these messages is also hit-and-miss: sometimes they play automatically, but most times they don’t.

This article from The Verge has a good rundown of the new features of iMessage and how they work.

Other

Being able to (finally) remove in-built apps is obviously something which has received some headlines. Surprisingly, I’ve removed fewer than I expected… I think it’s only Stocks, Tips, Find My Friends and Weather. I’ve actually found myself switching to a couple of the in-built apps.

Over the last couple of weeks, my iPhone 5S has been rebooting itself during the night. Once (last Saturday) it got stuck in a reboot loop on the Apple logo screen. Strangely, it seemed to be emitting some kind of tone every time it restarted… maybe that was my woken-at-3am brain imagining things, but I’m sure it also made a noise in the early hours of this morning when it rebooted.

The most annoying thing about this is that it’s only happening at night, while I’m asleep. I know it’s happening because my lock screen tells me so, and I can’t use Touch ID to unlock the phone. That, and the fact the display flashing up the stark white loading screen sometimes wakes me up. Throughout the day, everything appears fine. It’s really quite bizarre.

I’d reset the phone to factory settings, but there are a couple of security-related apps installed which would be a massive PITA to have to de-authorise and set up again.

Has anyone else experienced this?

Earlier on I was trying to find a way to “downgrade” a Google Apps account to a personal account. Well, I found a way. Kinda. Ok, not really – I slipped up and deleted my Google account.

I was a bit naive about what removing a Google Apps subscription entailed. In the absence of any clear documentation, I assumed (hoped, really) it would remove the baggage of Google Apps, leaving me with a normal Google personal account (especially as the account predated Apps). It didn’t actually remove Google Apps… but it did remove my access to pretty much every useful Google service. I was locked out of Drive/Docs, Browser Sync… everything I use on a regular basis.

It turns out that if you want to be rid of Google Apps, cancelling your subscription is only a partial measure. Whereas in most services “cancel subscription” means “I’m done, so remove all my stuff and let me go”, with Apps you have to cancel, and then take the non-obvious step of explicitly deleting your domain from the service.

At this point, my choice was: buy a new subscription to Apps, putting me back to square one (only now paying for it), or completely delete everything to do with the Apps account. So deletion it was.

Eventually I tracked down where in the mess that is the Apps admin area I could find the delete domain button, held my breath, and clicked.

Milliseconds later I was dumped out of Google Apps, and everything was gone. Everything. Even the stuff you’d forgotten about, like your Google+ profile, OAuth logins to other sites, logins on other devices, and accounts you forgot were merged, such as my YouTube account and subscriptions. My iPhone complained, WordPress complained, Feedly complained, Chrome complained, and so did many, many more! Years of settings, data, and integrations, gone in a button click.

Immediately I had a wave of regret, but also a slight sense of a weight being lifted. I no longer had to worry about the schizophrenic nature of my old account. If I wanted to try a new Google service, I didn’t have to wait for it to be Apps-enabled. Yes, a whole bunch of data was gone, but in a way, that was good. I would be starting over from scratch, without all the cruft that had accumulated over the many years.

So I guess it’s not that bad, really. Just a little inconvenient in the short-term. I’ve created a new account, relinked any complaining devices, and generally started rebuilding.

But please, Google, make the whole Apps/Account integration more user-friendly!

Note: I found this mini How-To while having a clean-up of my GitHub repositories. I figured it would be worth sharing on my blog. Hopefully it is of use to someone. Warning: bad ASCII art ahead!


The Problem

  1. I have my repository hosted on GitHub
  2. I have an internal Git server used for deployments
  3. I want to keep these synchronised using my normal workflow

Getting Started

Both methods I’ll describe need a “bare” version of the GitHub repository on your internal server. This worked best for me:

cd ~/projects/repo-sync-test/
scp -r .git user@internalserver:/path/to/sync.git

Here, I’m changing to my local working directory, then using scp to copy the .git folder to the internal server over ssh.
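
An alternative I didn’t use, but which achieves the same thing if the internal server can reach GitHub directly, is a bare clone made on the server itself (the URL below is a placeholder):

# run on the internal server
git clone --bare https://github.com/user/repo-sync-test.git /path/to/sync.git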

More information and examples can be found in the online Git Book:

4.2 Git on the Server – Getting Git on a Server

Once the internal server version of the repository is ready, we can begin!

The Easy, Safe, But Manual Method:

+---------+ +----------+ /------>
| GitHub  | | internal | -- deploy -->
+---------+ +----------+ \------>
^                     ^
|                     |
|     +---------+     |
\-----|   ME!   | ----/
      +---------+

This is the method I’ve used before, and it’s the least complex. It needs the least setup, but doesn’t sync the two repositories automatically. Essentially we are going to add a second Git remote to the local copy, and push to both servers in our workflow:

In your own local copy of the repository, checked out from GitHub, add a new remote a bit like this:

git remote add internal user@internalserver:/path/to/sync.git

This guide on help.github.com has a bit more information about adding Remotes.

You can change the remote name of “internal” to whatever you want. You could also rename the remote which points to GitHub (“origin”) to something else, so it’s clearer where it is pushing to:

git remote rename origin github

With your remotes ready, to keep the servers in sync you push to both of them, one after the other:

git push github master
git push internal master
  • Pros: Really simple
  • Cons: It’s a little more typing when pushing changes
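
A small optional tweak (not part of my original workflow, and assuming you renamed the GitHub remote to “github” as above): Git can attach more than one push URL to a single remote, so one push updates both servers. Note that the first --add replaces the default push URL, so both URLs need adding explicitly (the URLs below are placeholders):

git remote set-url --add --push github git@github.com:user/repo-sync-test.git
git remote set-url --add --push github user@internalserver:/path/to/sync.git
git push github master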

The Automated Way:

+---------+            +----------+ /------>
| GitHub  |   ======>  | internal | -- deploy -->
+---------+            +----------+ \------>
^
|
|              +---------+
\------------- |   ME!   |
               +---------+

The previous method is simple and reliable, but it doesn’t really scale that well. Wouldn’t it be nice if the internal server did the extra work?

The main thing to be aware of with this method is that you wouldn’t be able to push directly to your internal server – if you did, then the changes would be overwritten by the process I’ll describe.

Anyway:

One problem I had in setting this up initially is that the local repositories on my PC are cloned from GitHub over SSH, which would have required a lot more setup to allow the server to fetch from GitHub without any interaction. So what I did was remove the existing remote, and add a new one pointing to the HTTPS URL:

(on the internal server)
cd /path/to/repository.git
git remote rm origin
git remote add origin https://github.com/chrismcabz/repo-syncing-test.git
git fetch origin

You might not have to do this, but I did, so best to mention it!

At this point, you can test everything is working OK. Create or modify a file in your local copy, and push it to GitHub. On your internal server, do a git fetch origin to sync the change down to the server repository. Now, if you were to try a normal git merge origin at this point, it would fail, because we’re in a “bare” repository. And if we were to clone the server repository to another machine right now, it would still reflect the previous commit, because the server’s own branches haven’t been updated yet.

Instead, to see our changes reflected, we can use git reset (I’ve included example output messages):

git reset refs/remotes/origin/master

Unstaged changes after reset:
M LICENSE
M README.md
M testfile1.txt
M testfile2.txt
M testfile3.txt

Now if we were to clone the internal server’s repository, it would be fully up to date with the repository on GitHub. Great! But so far it’s still a manual process, so let’s add a cron task to remove the need for human intervention.

In my case, adding a new file to /etc/cron.d/, with the contents below was enough:

*/30 * * * * user cd /path/to/sync.git && git fetch origin && git reset refs/remotes/origin/master > /dev/null

What this does is tell cron that every 30 minutes it should run our command as the user user (substitute your own username). Stepping through the command, we’re asking it to:

  1. cd to our repository
  2. git fetch from GitHub
  3. git reset like we did in our test above, while sending the messages to /dev/null

That should be all we need to do! Our internal server will keep itself up-to-date with our GitHub repository automatically.
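
To satisfy yourself it’s working, wait for the next half-hour mark and check that the server repository picked up your latest commit (a quick sketch):

cd /path/to/sync.git
git log -1 --oneline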

  • Pros: It’s automated; only need to push changes to one server.
  • Cons: If someone mistakenly pushes to the internal server, their changes will be overwritten

Credits

With all the cool new stuff constantly being released these days, it can be very easy to end up with a large hobby backlog. When this happens it’s possible to get overwhelmed by your “to do list,” and it starts to become a mental drag; when that kicks in, your hobby no longer feels fun and instead feels like working a job you hate. Sometimes it’s best to declare something a lost cause and start over afresh.

I went through this very recently. My backlog had grown too big for me to see the end of it – especially at the glacial pace I paint! When I took stock of what was in the queue I had 2 full armies: a jump-heavy Flesh Tearers list, and a mechanised Tempestus Scions list. Not counting fun stuff like vehicles and characters, I had well over 100 models to prepare, assemble and paint… and those were just the army projects! Throw in various starter boxes for other games, and other sundry small projects, and the list was nearer 400.

Too. Damn. Many.

What to do? My initial plan was to freeze buying anything new until I’d whittled the backlog down to a more manageable level. Such a sensible plan might work for many a struggling hobbyist, but unfortunately, it was not the right plan for me. Despite several months of not buying any new figures1, I made zero impact on the pile of miniatures I had to work through. On top of that, I found myself losing all inspiration for certain projects. Some of that came down to gnawing insecurities about being able to achieve the vision I had in my head, the rest from indecision about what that vision even was any more. In the end there was just a pile of boxes and sprues making me feel terrible every time I thought about it. This was no longer a hobby, it was a chore. Something had to give, and it would be great if it wasn’t me.

In the tech world, there’s a popular approach to email management called Inbox Zero. The idea is to have your email inbox as empty as possible, so the amount of time your brain is occupied by email is as close to zero as possible. The intention is to reduce the distraction and stress caused by an overwhelmingly full inbox. Related to Inbox Zero, is Email Bankruptcy – the practice of deleting all email older than a certain date (often that day) due to being completely overwhelmed.

One day I realised I needed to declare something similar – Hobby Bankruptcy – or I was going to drive myself out of a hobby I’ve loved for over 20 years.

https://twitter.com/atChrisMc/status/585733864183767042

How was I going to do this? Throwing out hundreds2 of pounds’ worth of miniatures would be insane, especially if I changed my mind about something. Selling would take too long, and was subject to the fickleness of others. The simplest (non-destructive) solution won out: I took everything 40K/WHFB related, and stashed it in the loft. Out of sight; out of mind. Literally. The only survivors of the “purge” were source books and the limited edition 25th anniversary Crimson Fists miniature.

https://twitter.com/atChrisMc/status/585836777702875136

https://twitter.com/atChrisMc/status/585841570697609217

I can’t express just how much of a weight off doing this has been. I’m no longer under (self-imposed) pressure to work through a massive backlog I no longer have the enthusiasm for, and yet, if I rediscover that enthusiasm, I can pull individual kits from the loft to work on as and when I want to.

In the meantime though, I am free to start work on new projects3.

And yes, I do know I’m crazy.


  1. And growing increasingly anxious about not getting the cool new shinies. 
  2. OK, maybe it’s higher… 
  3. Obviously, any new projects will have much more strict rules around the number of models allowed in the queue at once. No more buying entire armies in one go! 

I’ve had my GMail address for several years now; I don’t really use it for anything more than legacy accounts, logins, or as a spam trap. For the most part it just sits there in the background, silently passing on any messages it receives to my “proper” account, which is email with a custom domain hosted on Fastmail.

Over the last 12-18 months, I’ve been receiving a slow-but-steady stream of mail clearly meant for someone else: newsletters mostly, but occasionally something personal, and the odd booking confirmation. At first I put these down to someone mistyping an email address now and then, or something to do with how GMail has fun with dots (“.”) in email addresses1. Whatever the cause, I would just delete them as soon as I realised they weren’t intended for me.

Over time though, it became apparent someone genuinely thought my GMail address was theirs. The nature of the emails became more personal, there was an increasing variety of individuals and organisations mailing the address, and increasingly it was with information you wouldn’t want to miss. I’m guessing from the nature of the mail that they are older, but that’s just a guess. The profile I’ve built up is (I’ve made some details vaguer than I know them to be, and excluded others):

  • They live in an area of North London
  • They are a member of a residents committee
  • They have an elderly/sick family member or friend they wanted to keep up to date on
  • They used to use Eurostar semi-regularly
  • They recently decided to get their garage converted

Where before I used to just delete immediately, now I have taken to responding to certain mails, to let senders know they have a wrong address – in the hope they can let the intended recipient know they’re giving out the wrong address. Beyond this, I don’t know what to do… it’s not like I can email them to say!


  1. If you didn’t know, you can place a dot anywhere in a GMail address, and it will still resolve to your address. Another tip is that you can “extend” your email address with a plus (“+”) and anything you like, which gives you potentially unlimited addresses for the price of one. For example, test+something@gmail.com will resolve to test@gmail.com. I would use it for potential “throwaway” addresses. 

Ads and websites which automatically redirect your iPhone to the App Store1 need to stop being a thing.

I’m seeing more and more instances of this user-hostile behaviour happening when I’m following a link on my phone. Usually it’s caused by an ad unit on the page, but now and again, it’s a site publisher who really, really, wants you to install their app.

Here’s the thing: if I wanted your app, I’d likely already have it installed. If I open a link to your website, I expect to (and am happy to) access your content there. Redirecting me to the App Store is a massive inconvenience and interruption; it takes me out of the app I was already using – often after I’ve already started reading your content – and puts me somewhere I wasn’t expecting to be. It breaks my concentration as my brain switches from reading your content to looking at the app download page. Assuming I still want to read your content after being treated like this, I now have to close the App Store, reopen the app I was just in, and hope I can pick up where I left off. The publishers who treat their users in this way seem to think I’ll:

  • Download the app, and wait for it to install
  • Create the usually mandatory account
  • Validate said account by switching to my email
  • Reopen the app, and try to find the content I’d clicked through to read in the first place
  • Read it (at last!)

Err, how about “no”? I was already reading your content. If you want to pimp your app to me, put a button or mention of it at the end of the article.

When this kidnapping of my attention is caused by an ad, I’ll sometimes go back to the site to finish reading, or I’ll go back to where I found the link, and send it to Pocket to read later instead (and without the ads to interrupt me). When it’s the publisher itself, chances are I’ll be annoyed enough I won’t return. You had your chance, and you chose to send me elsewhere instead. Either way, I sure as heck won’t install any app advertised using this method.

So can we please put a stop to this? It’s even worse than interrupting me to beg for an app review.


  1. This probably applies to Android and the Play Store as well, but I’m on an iPhone and so that’s where I have experience of this problem happening. 

I’ve written previously about how the archives of my blog were less full than they should be – that, between domain changes, server/CMS moves, and times when I simply didn’t care, there were potentially hundreds of posts missing from the early years in particular.

Back up your crap, people – including your blog.

For the last couple of years I’ve had an on-off project to restore as much of this personal history as possible. Every so often I’d go ferreting through old hard disks, or exploring the Internet Archive’s Wayback Machine for old content I could salvage. At first I had limited success, turning up only a handful of posts. Of those, I was fussy and only restored the “worthwhile” posts – usually longer posts about big events, or technical in nature.

This last weekend though, I revised my stance on this. If I was going to recreate my blogging history, I couldn’t – shouldn’t – just cherry-pick. I should include as much as I could possibly recover: the good, the bad, the plain inane. Anything less would feel a bit dishonest, and undermine the raison d’etre of the whole endeavour: saving the past.

The only exception would be posts which were so incomplete due to missing assets (images mainly) that any body text made no sense, or posts which were completely unintelligible out of context of the original blog – entries about downtime, for example. Also excluded were my personal pet peeve – posts “apologising” for the time between updates1!

A Brief Synopsis of the “How”:

To bring the past kicking and screaming into the present, I dove back into the Wayback Machine, going as far back on my first domain as I could. From there I worked as methodically as I could: working from the furthest back onwards, post-by-post. The basic process was:

  • Copy the post text and title to the WordPress new post screen
  • Adjust the post date to match the original
  • Where possible, match the original publishing time. Where this wasn’t available, approximate based on context (mentions of morning/afternoon/evening, number of other posts that day, etc)
  • Check any links in the post (see below)
  • Add any recovered assets – which was rare
  • Turn off WordPress social sharing
  • Publish

I started on the Friday afternoon, and manually “imported” around 50 posts in the first batch.

Turning off social sharing was done so I didn’t flood my Twitter followers with a whole load of links to the posts – some over a decade old. One thing I didn’t anticipate though, and which I had zero control over, was WordPress emailing the old posts to those who had subscribed to email notifications. It wasn’t until a friend IM’d me about her full inbox that I realised what was happening – so if you found your mail filled with notifications as a result of this exercise, I apologise!

To get around this, I ended up creating a new, private WordPress blog to perform the initial manual process, so I could later export a file to import into this blog.

Between Saturday, Sunday, and Monday evenings, I tracked down and copied over a further 125 or so posts. Due to the vagaries of the Wayback Machine, not every post could be recovered. Generally speaking, it was reliable in having a copy of the first page of an archive section, but no further pages. Sometimes I could access “permalink” pages for the other posts, but this was really hit-or-miss. A lot of the time the page the WBM had “saved” was a 404 page from one of my many blog reorganisations over the years, or in other cases, it would have maybe one post out of eight.

I made a rule not to change the original posts in any way – no fixing of typos, no correcting something I was wrong about. The only thing I would do, when appropriate, was mark where there was a missing asset with an “Editor’s Note” of some sort. The only content I did have to consider changing was links.

Dealing with Links

One thing I had to consider was what to do about links which might have changed or disappeared over time. When copying from the WBM, links had already been written to point to a (potentially non-existent) WBM archive page, but if the original still existed, I wanted to point to that instead. In the end I would have to check pretty much every link by hand – if the original existed, I would point to that page; if not, I would take a chance with the Wayback Machine. In some cases I had to consider what to do where the page existed, but had different content or purpose to the original. I dealt with these on a case-by-case basis.

For internal links, I pointed to an imported version, if it existed, or removed it if there was none and context allowed.

Wrapping Up

In total, I imported around 175 previously “lost” blog entries, covering 2002-2006, with the majority from 2005. These years have gone from having a handful of entries to having dozens. Overall, this has grown the archives by roughly 50% – a not insubstantial amount!

At some point I will go back and appropriately tag them all, but that’s a lower priority job for another time.

2007-2010 were years when my writing output dropped a lot, so while I will look for missing entries from this period, I don’t expect to find many at all.

Side Note: History Repeats

I discovered, in the process of doing all this, that I had gone through the same exercise before, roughly 10 years ago!

Over the last few days, I’ve been working on the archives of my old site; cleaning and recategorising them. Today, I have added them to the archives of Pixel Meadow.

These additions represent everything that was left of ChrisMcLeod.Net. Over the course of its life many changes occured and data was lost – so these additions don’t represent everything that I’ve written there over the years.

You would think I might have learned from this mistake back then, but obviously not! Fingers crossed it’s finally sunk in.


  1. Though only where they had no other content to the post. 

Once upon a time I viewed myself as only a developer. I didn’t like support, and tried to avoid it as much as possible – even though I knew it was in the customer service side where I would always learn the most in my day-to-day job. I put it down to the stubborn “programmer” in me! Then I moved into a role which was 90% support work, and I had an awakening of sorts: I really like support work. More than that, I loved working in support. I haven’t really talked much1 about this shift in mindset, so this post is part of an attempt to rectify that.

For 3 years, leading up to early 2014, I led a small support team within a major oil and gas company. We were tasked with looking after a complex health and safety-related web application which had users from all across the globe. The support team itself was spread out internationally, so I quickly had to get used to working remotely and communicating with both users and team-mates over email, IM, screen-sharing and other ways of coordinating as a distributed team.

A bit of extra background: I initially took on the support of this application by myself. The application was a virtual unknown within the IT structures of the client, and entirely unknown to my employer, which was tasked with assuming responsibility for it from the original developers. I had a very, very small window of opportunity to learn from the original developers, and precious little documentation to work with afterwards. On top of this, the main stakeholders of the application were steadfastly against the move to a new support team.

Things did not get off to a good start. The support needs had been grossly underestimated by all involved in the planning sessions which had led to me being hired. It was taking longer than I’d have liked to learn the application, and soon there were red flags being raised about the reduction in support performance. When my manager and I discussed what was going wrong and how to turn things around, the first thing we agreed to was expanding the team. Initially one more person was added, with the goal of adding a second once the first had been given enough knowledge to free up enough of my time to train them both in the more in-depth aspects of the application. I should point out that at this moment I was still learning a lot of nuances of the application myself! Eventually the core team grew to four.

Our users were very diverse, ranging from someone who only touched a computer when they had to, right through to very highly technical users who spent all day, every day within the application – and this was before taking into account regional and cultural differences – so I learned to adapt myself to each person I was talking to. Sometimes even a subtle shift in tone or language could help a user understand something they’ve been struggling with for hours (or days in some of the requests we got). My communication skills levelled up immensely over this time period!

Empathy was one of the other key skills I used every day, and in turn something I tried to instil in the rest of my team. I think it’s essential to a support role, and credit it as one of the reasons I ended up as successful as I was in this particular role. Many times I found insight into a problem by pausing the technical analysis until I’d asked the user why it was they were trying to achieve something, and even in some cases why what they saw as a problem was a problem to them (i.e. results vs. expectations). In my experience, the most important questions you could ask a user were not “What is the problem? What were the symptoms?” but rather “What is it you’re trying to do?” and “Why do you want to do it?”

Through a combination of hard work, honest, friendly engagement with the users (beyond just their support tickets), and a willingness to “go the extra mile”, we went from being the unknown, out-sourced support team “forced” on them, to trusted colleagues who were experts in the application and would always do right by the users – which is something I’m very proud of.

What I learned over all of this was that I am happiest when I am solving problems and helping others with something which is bothering them — and customer service work gave me the opportunity to combine both into one job. Put this together with the challenges which I overcame in the role, and this was one of the most rewarding and satisfying periods of my career so far. I still like to write code, but I no longer feel it is where my heart truly belongs.

Eventually, due to the shifting sands of Enterprise contracts and budgets, my team had to be disbanded, and the application handed over to a new team from a new supplier. I used the lessons I had learned in my own adoption of the application and its users to prepare the new team as best as I possibly could, training the incoming team of six with an intensive crash-course, as we only had a fixed amount of time. After all, their success or failure would be a reflection on how well I had understood my application and users. I’ve only had limited contact with the client since I left, but from what I understand, things have been good in an objective, results-driven manner, but my team and I have been missed for the extra attention we put into talking and listening to the user community.


  1. I don’t really talk much about work in general these days. Partly it’s the nature of the beast; at any given point I’m under a number of NDAs which reduce what I can say to nearly nothing worth writing about. 

My girlfriend and I were watching the first episode of the new series of Tabletop yesterday, which introduced us to a board game called Tokaido.


Almost immediately we both agreed it was an amazingly beautiful game. I could quite happily frame the board itself to display. The illustrator, Xavier Gueniffey-Durin, has done an amazing job.

The game play of Tokaido seemed to be that winning combination of simple to learn, but with enough depth to make it a challenge to master.

Maybe I should buy two copies – one to frame the artwork, and the other to play? Or one to give to my sister, as I think the art would be right up her street.
