I like to think of myself as a generally smart person. I have my weaknesses, but I’m usually pretty good at figuring something out – particularly if it’s tech related. Problem solving is one of my strong points.

So why, oh why, can I not figure out how to “downgrade” or migrate a Google Apps account to a “normal” Google account?

For background, I have a legacy Google Apps account, from when I used to run my own-domain email account through the service. I switched to Fastmail a couple of years ago, but by this point the Apps account was my “main” Google account – the one I was logged into all the time and thus had my data attached to.

I wanted to get rid of the Apps part of the account, as it causes some weird issues now and again, doesn’t work with all Google services, and I don’t use it for the intended purpose any more.

But it’s increasingly looking like this might not be possible. I can think of a number of enterprise-y reasons why not, but I can also think of a few use cases where it should be possible to at least allow it. I’ll keep hunting for now.

Note: I found this mini How-To while having a clean-up of my GitHub repositories. I figured it would be worth sharing on my blog. Hopefully it is of use to someone. Warning: bad ASCII art ahead!


The Problem

  1. I have my repository hosted on GitHub
  2. I have an internal Git server used for deployments
  3. I want to keep these synchronised using my normal workflow

Getting Started

Both methods I’ll describe need a “bare” version of the GitHub repository on your internal server. This worked best for me:

cd ~/projects/repo-sync-test/
scp -r .git user@internalserver:/path/to/sync.git

Here, I’m changing to my local working directory, then using scp to copy the .git folder to the internal server over ssh.
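
If you’d rather have Git build the bare copy for you, a bare clone of the working repository should produce the same result. A minimal sketch, reusing the paths above (the /tmp staging location is just for illustration):

(on your local machine)
git clone --bare ~/projects/repo-sync-test/ /tmp/sync.git
scp -r /tmp/sync.git user@internalserver:/path/to/sync.git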

More information and examples of this can be found in the online Git Book:

4.2 Git on the Server – Getting Git on a Server

Once the internal server version of the repository is ready, we can begin!

The Easy, Safe, But Manual Method:

+---------+ +----------+ /------>
| GitHub  | | internal | -- deploy -->
+---------+ +----------+ \------>
^                     ^
|                     |
|     +---------+     |
\-----|   ME!   | ----/
      +---------+

This one I have used before, and it is the least complex. It needs the least setup, but doesn’t sync the two repositories automatically. Essentially, we are going to add a second Git Remote to the local copy, and push to both servers as part of our workflow:

In your own local copy of the repository, checked out from GitHub, add a new remote a bit like this:

git remote add internal user@internalserver:/path/to/sync.git

This guide on help.github.com has a bit more information about adding Remotes.

You can change the remote name of “internal” to whatever you want. You could also rename the remote which points to GitHub (“origin”) to something else, so it’s clearer where it is pushing to:

git remote rename origin github

With your remotes ready, to keep the servers in sync you push to both of them, one after the other:

git push github master
git push internal master

  • Pros: Really simple
  • Cons: It’s a little more typing when pushing changes
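
As an aside, Git also allows a single remote to have more than one push URL, so a single push command can update both servers. A sketch, reusing the remote names above – the GitHub URL here is an assumption based on my repository, so substitute your own. Note that once you set any push URL, the GitHub one has to be re-added explicitly:

git remote set-url --add --push github git@github.com:chrismcabz/repo-syncing-test.git
git remote set-url --add --push github user@internalserver:/path/to/sync.git
git push github master

I haven’t made this part of my day-to-day workflow, so treat it as an option to experiment with rather than a recommendation.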

The Automated Way:

+---------+            +----------+ /------>
| GitHub  |   ======>  | internal | -- deploy -->
+---------+            +----------+ \------>
^
|
|              +---------+
\------------- |   ME!   |
               +---------+

The previous method is simple and reliable, but it doesn’t really scale that well. Wouldn’t it be nice if the internal server did the extra work?

The main thing to be aware of with this method is that you wouldn’t be able to push directly to your internal server – if you did, then the changes would be overwritten by the process I’ll describe.

Anyway:

One problem I had when setting this up initially is that the local repositories on my PC are cloned from GitHub over SSH, which would require a lot more setup to allow the server to fetch from GitHub without any interaction. So what I did was remove the existing remote, and add a new one pointing to the HTTPS URL:

(on the internal server)
cd /path/to/repository.git
git remote rm origin
git remote add origin https://github.com/chrismcabz/repo-syncing-test.git
git fetch origin

You might not have to do this, but I did, so best to mention it!

At this point, you can test everything is working OK. Create or modify a file in your local copy, and push it to GitHub. On your internal server, do a git fetch origin to sync the change down to the server repository. Note that if you were to try a normal git merge origin at this point, it would fail, because we’re in a “bare” repository – and if we were to clone the server repository to another machine now, it would still reflect the previous commit.
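
For reference, the whole round-trip test looks something like this (the file name is only for illustration):

(on your local machine)
echo "sync test" >> testfile1.txt
git commit -am "Testing sync"
git push github master

(on the internal server)
cd /path/to/sync.git
git fetch origin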

Instead, to see our changes reflected, we can use git reset (I’ve included example output messages):

git reset refs/remotes/origin/master

Unstaged changes after reset:
M LICENSE
M README.md
M testfile1.txt
M testfile2.txt
M testfile3.txt

Now if we were to clone the internal server’s repository, it would be fully up to date with the repository on GitHub. Great! But so far it’s still a manual process, so let’s add a cron task to remove the need for human intervention.

In my case, adding a new file to /etc/cron.d/, with the contents below was enough:

*/30 * * * * user cd /path/to/sync.git && git fetch origin && git reset refs/remotes/origin/master > /dev/null

What this does is tell cron that every 30 minutes it should run our command as the user “user”. Stepping through the command, we’re asking it to:

  1. cd to our repository
  2. git fetch from GitHub
  3. git reset like we did in our test above, while sending the messages to /dev/null

That should be all we need to do! Our internal server will keep itself up-to-date with our GitHub repository automatically.

  • Pros: It’s automated; you only need to push changes to one server.
  • Cons: If someone mistakenly pushes to the internal server, their changes will be overwritten.
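
As a final aside, if you were setting this up from scratch, a “mirror” clone is another common way to keep a read-only copy in sync: it maps the remote’s branches directly onto the server repository’s branches, so the reset step isn’t needed. A rough sketch along the lines of the setup above – I haven’t used it for this workflow, so verify before relying on it:

(on the internal server)
git clone --mirror https://github.com/chrismcabz/repo-syncing-test.git /path/to/sync.git

(and in /etc/cron.d, in place of the fetch-and-reset command)
*/30 * * * * user cd /path/to/sync.git && git remote update --prune > /dev/null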


I’m in the market for a new computer1, but I have no idea what way to go. I’ve been making do with older kit for the last few years, but all of it is pretty much at the end of its usable life.

I recently set up a new “office” area in the house, and the way I did it allows me to swap between my work-supplied laptop, and a computer of my own, just by plugging into the right monitor input and swapping a USB cable. This setup also allows my son to make use of the desk if he needs to.

Until recently, the computer I used most around the house was a 9 year old Dell Latitude laptop which I had made usable by putting an SSD into it, and building a lightweight Arch Linux installation. This was primarily because a laptop was all I had space for. Actually, I tell a lie – the “computer” I use most is my iPhone, but for times the iPhone can’t cut it (for whatever reason) I used the Dell2. While this arrangement worked, it showed its age, and it was fiddly at times.

I’ve had a 6 year old Mac Mini lying around for a while, doing nothing. It’s only barely more powerful than the Dell3, and the one time I had it plugged into the living room TV, it was just plain awkward to use. With the new office I was able to plug it in to a proper monitor/keyboard/mouse arrangement which made it more viable. So this past weekend I took the SSD from the Dell, put it in the Mac, and made that my “home computer.” It’s just fast enough to not induce rage when trying to do anything more taxing than surf the web and other light duties.

Now I’ve got a “proper” desk and space, I’ve been thinking I should look at getting something which will last me another few years. The cheapest upgrade I could do is to spend ~£60 and double the RAM in the Mac Mini, going from 4GB to 8GB. I’m sure that would give a noticeable boost to OS X, but it doesn’t really change the fact the system is on borrowed time. It could buy me another 6-12 months, but at some point, likely soon, something is going to fail. The way I see it, my choices are:

  1. Buy a newer Mac, probably a laptop for flexibility (plus that’s where all their non-iOS/Watch innovation seems to be going).
  2. Buy a Windows laptop.
  3. Build a custom PC.

Of the choices, #3 is likely the most satisfying, and would have the most upgrade potential further down the line, though I would be constrained later by choices I made now. It also has the potential to get very expensive; I priced up a high-end Mini-ITX system for a bit of fun, and it came to roughly £1000 before choosing a suitable graphics card. I could definitely price something up for less, and would probably have to, but any savings would need to be weighed against longevity of usable performance and upgradability. I am a little space constrained, so a massive tower is never going to be practical, but there are plenty of options between Mini-ITX and mATX nowadays.

A Windows laptop feels like it would be a cop-out, and there’s not much out there I feel inspired enough to part with my money for. There’s a couple of nice laptops I’ve seen4, but none I feel would last as long as I’d like them to.

Getting a new Mac has been the direction I’ve been leaning towards for a while, but I’ve always struggled to justify it vs. other spending priorities – especially when you factor in how fast Apple iterate their hardware, the lack of after-sale upgradability, and how you’re always hoping to “time it right”. That said, as an iPhone/iPad owner there’s a lot of upside to getting a Mac, for example: close integration through Handoff/Continuity (granted, which I can’t currently use with the Mini), and iCloud Photo Library. I guess I could set up something more “cross-platform” for the photo library, using Dropbox, but I found Apple’s solution to be that little bit nicer to work with.

So the gist of this much-longer-than-I-planned stream of consciousness is that I need to start thinking about replacing the old and almost-busted computer kit I have with something new. I don’t know what that will be yet, and I’d hoped getting my thoughts out would help me focus my mind on what I want to do.

No such luck though. Any ideas?


  1. Anyone who knows me probably knows I’ve actually been talking about it for ~4 years. 
  2. And what of my iPad? I mainly just use it for Hearthstone and Games Workshop rulebooks. Since iOS 8 (I think), my iPad has taken a huge hit in performance, and just isn’t as capable as it once felt. 
  3. On paper, at least. In practice it was severely hamstrung by the old-school HDD and running OS X. 
  4. My work laptop is quite nice; it’s a Dell Ultrabook, thin, light, and performant enough. But the consumer pricing is higher than I’d value it at. 

With all the cool new stuff constantly being released recently, it can be very easy to end up with a large hobby backlog. When this happens it’s possible to get overwhelmed by your “to do list,” and it starts to become a mental drag; when this kicks in, your hobby no longer feels fun and instead feels like working a job you hate. Sometimes it’s best to declare something a lost cause and start over afresh.

I went through this very recently. My backlog had grown too big for me to see the end of it – especially at the glacial pace I paint! When I took stock of what was in the queue I had 2 full armies: a jump-heavy Flesh Tearers list, and a mechanised Tempestus Scions list. Not counting fun stuff like vehicles and characters, I had well over 100 models to prepare, assemble, and paint… and those were just the army projects! Throw in various starter boxes for other games, and other sundry small projects, and the list was nearer 400.

Too. Damn. Many.

What to do? My initial plan was to freeze buying anything new until I’d whittled the backlog down to a more manageable level. Such a sensible plan might work for many a struggling hobbyist, but unfortunately, it was not the right plan for me. Despite several months of not buying any new figures1, I made zero impact on the pile of miniatures I had to work through. On top of that, I found myself losing all inspiration for certain projects. Some of that came down to gnawing insecurities about being able to achieve the vision I had in my head, some from indecision about what that vision even was any more. In the end there was just a pile of boxes and sprues making me feel terrible every time I thought about it. This was no longer a hobby, it was a chore. Something had to give, and it would be great if it wasn’t me.

In the tech world, there’s a popular approach to email management called Inbox Zero. The idea is to keep your email inbox as empty as possible, so the amount of time your brain is occupied by email is as close to zero as possible. The intention is to reduce the distraction and stress caused by an overwhelmingly full inbox. Related to Inbox Zero is Email Bankruptcy – the practice of deleting all email older than a certain date (often that day) due to being completely overwhelmed.

One day I realised I needed to declare something similar – Hobby Bankruptcy – or I was going to drive myself out of a hobby I’ve loved for over 20 years.

https://twitter.com/atChrisMc/status/585733864183767042

How was I going to do this? Throwing out hundreds2 of pounds of miniatures would be insane, especially if I changed my mind about something. Selling would take too long, and was subject to the fickleness of others. The simplest (non-destructive) solution won out: I took everything 40K/WHFB related, and stashed it in the loft. Out of sight; out of mind. Literally. The only survivors of the “purge” were source books and the limited edition 25th anniversary Crimson Fists miniature.

https://twitter.com/atChrisMc/status/585836777702875136

https://twitter.com/atChrisMc/status/585841570697609217

I can’t express just how much of a weight off this has been. I’m no longer under (self-imposed) pressure to work through a massive backlog I no longer have the enthusiasm for, and yet, if I rediscover that enthusiasm, I can pull individual kits from the loft to work on as and when I want to.

In the meantime though, I am free to start work on new projects3.

And yes, I do know I’m crazy.


  1. And growing increasingly anxious about not getting the cool new shinies. 
  2. OK, maybe it’s higher… 
  3. Obviously, any new projects will have much more strict rules around the number of models allowed in the queue at once. No more buying entire armies in one go! 

Quoting:


Twin 1: What are you eating?
Me: Cheesecake.
Twin 2: Does it have peanuts in it?
Me: No.
Twin 1: Does it have chocolate in it?
Me: Yes.
Twin 1: Can I try some?
Twin 2: Can I try some?
Me: No.
Me: This is my reward for putting up with your crazy behaviour today. So no.

Typing is finally more comfortable, thanks to using 2 cork stoppers to elevate the back of the laptop about an inch.

Laptop cork legs

Typing on this thing (a Toshiba R500) has been abysmal for the 4 years I’ve had this laptop. The keys are slidey, mushy, inconsistent, and generally just a mess of bad design and ergonomics. Tilting the laptop at least makes it more comfortable. Thankfully it’s being replaced soon, but boy do I wish I’d thought of this a lot sooner!

phishing

From the misspelt “From”, to the poor grammar and different typography of the phishing “hook” (the “please confirm your account” bit)… it’s like they’re not even trying anymore. I did notice it’s only the sign-in button which is a phishing link; all the others are legitimate Amazon URLs – which is probably how it got past the spam filter.

Reminder: never click any links in an email asking you to verify your account so that something bad won’t happen.

I’ve had my GMail address for several years now; I don’t really use it for anything more than legacy accounts, logins, or as a spam trap. For the most part it just sits there in the background, silently passing on any messages it receives to my “proper” account, which is email with a custom domain hosted on Fastmail.

Over the last 12-18 months, I’ve been receiving a slow-but-steady stream of mail clearly meant for someone else: newsletters mostly, but occasionally something personal, and the odd booking confirmation. At first I put these down to someone mistyping an email address now and then, or something to do with how GMail has fun with dots (“.”) in email addresses1. Whatever the cause, I would just delete them as soon as I realised they weren’t intended for me.

Over time though, it became apparent someone genuinely thought my GMail address was theirs. The nature of the emails became more personal, there was an increasing variety of individuals and organisations mailing the address, and more and more of it was information you wouldn’t want to miss. I suspect from the nature of the mail that they are older, but that’s just a guess. The profile I’ve built up is as follows (I’ve made some details vaguer than I know them to be, and excluded others):

  • They live in an area of North London
  • They are a member of a residents committee
  • They have an elderly/sick family member or friend they want to stay up to date on
  • They used to use Eurostar semi-regularly
  • They recently decided to get their garage converted

Where before I used to just delete immediately, I have now taken to responding to certain mails, to let senders know they have the wrong address – in the hope they can let the intended recipient know they’re giving out the wrong address. Beyond this, I don’t know what to do… it’s not like I can email them to say!


  1. If you didn’t know, you can place a dot anywhere in a GMail address, and it will still resolve to your address. Another tip is that you can “extend” your email address with a plus (“+”) and anything you like, which gives you potentially unlimited addresses for the price of one. For example, test+something@gmail.com will resolve to test@gmail.com. I would use this for potential “throwaway” addresses. 

Ads and websites which automatically redirect your iPhone to the App Store1 need to stop being a thing.

I’m seeing more and more instances of this user-hostile behaviour happening when I’m following a link on my phone. Usually it’s caused by an ad unit on the page, but now and again, it’s a site publisher who really, really, wants you to install their app.

Here’s the thing: if I wanted your app, I’d likely already have it installed. If I open a link to your website, I expect to (and am happy to) access your content there. Redirecting me to the App Store is a massive inconvenience and interruption; it takes me out of the app I was already using – often after I’ve already started reading your content – and puts me somewhere I wasn’t expecting to be. It breaks my concentration as my brain switches from reading your content to looking at the app download page. Assuming I still want to read your content after being treated like this, I now have to close the App Store, reopen the app I was just in, and hope I can pick up where I left off. The publishers who treat their users in this way seem to think I’ll:

  • Download the app, and wait for it to install
  • Create the usually mandatory account
  • Validate said account by switching to my email
  • Reopen the app, and try to find the content I’d clicked through to read in the first place
  • Read it (at last!)

Err, how about “no”? I was already reading your content. If you want to pimp your app to me, put a button or mention of it at the end of the article.

When this kidnapping of my attention is caused by an ad, I’ll sometimes go back to the site to finish reading, or I’ll go back to where I found the link, and send it to Pocket to read later instead (and without the ads to interrupt me). When it’s the publisher itself, chances are I’ll be annoyed enough I won’t return. You had your chance, and you chose to send me elsewhere instead. Either way, I sure as heck won’t install any app advertised using this method.

So can we please put a stop to this? It’s even worse than interrupting me to beg for an app review.


  1. This probably applies to Android and the Play Store as well, but I’m on an iPhone and so that’s where I have experience of this problem happening. 

I’ve written previously about how the archives of my blog were less full than they should be – that, between domain changes, server/CMS moves, and times when I simply didn’t care, there were potentially hundreds of posts missing from the early years in particular.

Back up your crap, people – including your blog.

For the last couple of years I’ve had an on-off project to restore as much of this personal history as possible. Every so often I’d go ferreting through old hard disks, or exploring the Internet Archive’s Wayback Machine for old content I could salvage. At first I had limited success, turning up only a handful of posts. Of those, I was fussy and only restored the “worthwhile” posts – usually longer posts about big events, or technical in nature.

This last weekend though, I revised my stance on this. If I was going to recreate my blogging history, I couldn’t – shouldn’t – just cherry-pick. I should include as much as I could possibly recover: the good, the bad, the plain inane. Anything less would feel a bit dishonest, and undermine the raison d’être of the whole endeavour: saving the past.

The only exception would be posts which were so incomplete due to missing assets (images mainly) that any body text made no sense, or posts which were completely unintelligible out of context of the original blog – entries about downtime, for example. Also excluded were my personal pet peeve – posts “apologising” for the time between updates1!

A Brief Synopsis of the “How”:

To bring the past kicking and screaming into the present, I dove back into the Wayback Machine, going as far back on my first domain as I could. From there I worked methodically: starting from the furthest back and moving forward, post-by-post. The basic process was:

  • Copy the post text and title to the WordPress new post screen
  • Adjust the post date to match the original
  • Where possible, match the original publishing time. Where this wasn’t available, approximate based on context (mentions of morning/afternoon/evening, number of other posts that day, etc)
  • Check any links in the post (see below)
  • Add any recovered assets – which was rare
  • Turn off WordPress social sharing
  • Publish

I started on the Friday afternoon, and manually “imported” around 50 posts in the first batch.

Turning off social sharing was done so I didn’t flood my Twitter followers with a whole load of links to the posts – some over a decade old. One thing I didn’t anticipate though, and which I had zero control over, was WordPress emailing the old posts to those who had subscribed to email notifications. It wasn’t until a friend IM’d me about her full inbox that I realised what was happening – so if you found your mail filled with notifications as a result of this exercise, I apologise!

To get around this, I ended up creating a new, private WordPress blog to perform the initial manual process, so I could later export a file to import into this blog.

Between Saturday, Sunday, and Monday evenings, I tracked down and copied over a further 125 or so posts. Due to the vagaries of the Wayback Machine, not every post could be recovered. Generally speaking, it was reliable in having a copy of the first page of an archive section, but no further pages. Sometimes I could access “permalink” pages for the other posts, but this was really hit-or-miss. A lot of the time the page the WBM had “saved” was a 404 page from one of my many blog reorganisations over the years, or in other cases, it would have maybe one post out of eight.

I made a rule not to change the original posts in any way – no fixing typos, no correcting something I was wrong about. The only thing I would do, where appropriate, was mark a missing asset with an “Editor’s Note” of some sort. The only content I did have to consider changing was links.

Dealing with Links

One thing I had to consider was what to do about links which might have changed or disappeared over time. When copying from the WBM, links had already been written to point to a (potentially non-existent) WBM archive page, but if the original still existed, I wanted to point to that instead. In the end I would have to check pretty much every link by hand – if the original existed, I would point to that page; if not, I would take a chance with the Wayback Machine. In some cases I had to consider what to do where the page existed, but had different content or purpose to the original. I dealt with these on a case-by-case basis.

For internal links, I pointed to an imported version, if it existed, or removed it if there was none and context allowed.

Wrapping Up

In total, I imported around 175 previously “lost” blog entries, covering 2002-2006, with the majority from 2005. These years have gone from having a handful of entries to having dozens. Overall, this has grown the archives by roughly 50% – a not insubstantial amount!

At some point I will go back and appropriately tag them all, but that’s a lower priority job for another time.

2007-2010 were years when my writing output dropped a lot, so while I will look for missing entries from this period, I don’t expect to find many at all.

Side Note: History Repeats

I discovered, in the process of doing all this, that I had gone through the same exercise before, roughly 10 years ago!

Over the last few days, I’ve been working on the archives of my old site; cleaning and recategorising them. Today, I have added them to the archives of Pixel Meadow.

These additions represent everything that was left of ChrisMcLeod.Net. Over the course of its life many changes occurred and data was lost – so these additions don’t represent everything that I’ve written there over the years.

You would think I might have learned from this mistake back then, but obviously not! Fingers crossed it’s finally sunk in.


  1. Though only where the post had no other content.