I caught the first trailer for the new Marvel’s Inhumans TV series. The whole thing looked so stiff, awkward, and sterile. I’m not sure what I was expecting, but based purely on the trailer I have zero desire to watch the show. Marvel’s movies and Netflix series (mostly) manage to feel somewhat anchored to the “real world” despite how fantastical the plot or setup might be… but Inhumans had none of that quality on show.
“We had no code and no art assets,” Blizzard 3D Art Director Brian Sousa confirmed to Ars Technica. The 2017 project’s entire art pipeline was “eyeballed,” Sousa said, with recovered concept artwork, sketches, and original boxes and manuals used as reference materials. Not all code was missing, as Blizzard has been issuing patches to the original game’s code base for nearly 20 years. Also, a member of the sound team thankfully had backups of the original sound and voice recordings, which are now reprocessed in higher-fidelity 44,100Hz format.
I’d heard years ago that the majority of the original StarCraft code had been lost, but I figured it was just a rumour. Sounds like the team behind StarCraft: Remastered had a big task on their hands to recreate the game in a way some of its biggest fans would appreciate.
External link: StarCraft Remastered devs unveil price, explain how much is being rebuilt
There’s some really good learning to be had here, even if the videos themselves are “old.”
External link: RailsCasts Pro episodes are now free!
I’ve listened to a lot of 40K podcasts over the last couple of years. Over that time I’ve slowly winnowed my subscriptions down to just a handful.
- Forge The Narrative – my favourite 40K podcast of the last few years.
- Chapter Tactics – from Frontline Gaming, but distinct enough from their other shows to merit its own subscription
- Frontline Gaming – this is the main Frontline Gaming Podcast – the feed also includes Chapter Tactics and some other smaller shows
- Ashes of the Imperium – this one is new, but it’s by the team behind the very good Bad Dice AoS Podcast
My biggest gripe with most 40K podcasts tends to be length. Sorry, but unless you’re very, very compelling to listen to, I am not going to listen to a podcast episode which is 2-3 hours long (or more!). The podcasts above tend to clock in at around an hour to an hour and a half, which I find perfect for my listening habits.
Bonus: Podcasts I’m Evaluating:
8th Edition has brought about a few new podcasts, and I’m still deciding whether some of them will stay in my subscriptions list.
Bonus 2: Some Age of Sigmar podcasts
For a while I found the quality of AoS podcasts to be generally higher than that of most 40K podcasts, with only a couple of exceptions. Sadly, my favourite AoS podcast — Heelanhammer — has recently gone on hiatus, so I’m not including it here.
- Bad Dice
- Facehammer – can be a bit sweary, so consider yourself warned if that’s not your thing.
For various reasons I prefer to remove the www part from my personal-use domains. Setting up Caddy to serve the site from just domain.com is as simple as:
domain.com {
root /path/to/site/files
# other directives
}
But this set-up doesn’t provide any way to redirect from www to non-www, meaning anyone who types www.domain.com into the address bar is out of luck. So what to do? Well, Caddy provides a redir directive. Combine it with a new site directive and a placeholder like this:
# Original non-WWW site:
domain.com {
root /path/to/site/files
# other directives
}
# New, additional "site", for doing the redir
www.domain.com {
redir domain.com{uri}
}
Having just spent faaaar too long getting a sample Liquid code block not to be parsed by Jekyll, I thought I’d better make a note of this, for my own benefit:
When posting Liquid code, make use of the raw tag. Which I can’t seem to post an example of using, because it creates some sort of Inception effect or something…
An XML Sitemap can be useful for optimising your site with Google, particularly if you make use of their Webmaster Tools. Jekyll doesn’t come with one out-of-the-box, but it is easy to add one. There’s probably a plugin out there which will automate things, but I just used a normal Jekyll-generated file for mine, based on code found on Robert Birnie’s site.
The only modification I made was to exclude feed.xml from the sitemap. Because this is auto-generated by a plugin, I couldn’t add any front-matter to a file to exclude it in the same way as other files.
Create a file called sitemap.xml in the root of your site, and paste the following code into it:
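The full template lives on Robert Birnie’s site, so grab it from there; what follows is a trimmed-down sketch of the same idea (it leaves out some of the optional tags his version handles, so double-check against the original before relying on it):

---
layout: null
sitemap:
  exclude: 'yes'
---
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
{% for post in site.posts %}
  <url>
    <loc>{{ site.url }}{{ post.url }}</loc>
    {% if post.sitemap.lastmod %}<lastmod>{{ post.sitemap.lastmod | date: "%Y-%m-%d" }}</lastmod>{% else %}<lastmod>{{ post.date | date_to_xmlschema }}</lastmod>{% endif %}
    {% if post.sitemap.changefreq %}<changefreq>{{ post.sitemap.changefreq }}</changefreq>{% else %}<changefreq>monthly</changefreq>{% endif %}
    {% if post.sitemap.priority %}<priority>{{ post.sitemap.priority }}</priority>{% else %}<priority>0.5</priority>{% endif %}
  </url>
{% endfor %}
{% for page in site.pages %}
  {% unless page.sitemap.exclude == "yes" %}
  <url>
    <loc>{{ site.url }}{{ page.url | remove: "index.html" }}</loc>
    {% if page.sitemap.changefreq %}<changefreq>{{ page.sitemap.changefreq }}</changefreq>{% else %}<changefreq>monthly</changefreq>{% endif %}
    {% if page.sitemap.priority %}<priority>{{ page.sitemap.priority }}</priority>{% else %}<priority>0.3</priority>{% endif %}
  </url>
  {% endunless %}
{% endfor %}
</urlset>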
If you want fine control over what appears in the sitemap, you can use any of the following front-matter variables:
sitemap:
  lastmod: 2014-01-23
  priority: 0.7
  changefreq: 'monthly'
  exclude: 'yes'
As an example, I use this in my feed.json template to exclude the generated file from the sitemap:
sitemap:
  exclude: 'yes'
And this in my index/archive pages for a daily change frequency:
sitemap:
  changefreq: 'daily'
It’s super simple. Just include a push directive in your site definition. You can leave it at just that, and Caddy will use any Link HTTP headers to figure out what to push.
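To give an idea of what that means: a response header along these lines is enough for Caddy to push the stylesheet alongside the page (the path here is just an illustration):

Link: </assets/css/site.min.css>; rel=preload; as=style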
If you want more control, you can expand the directive and specify both the path and associated resources, like so:
example.com {
root /var/www/example
push / {
/assets/css/site.min.css
/assets/img/logo.png
/assets/js/site.min.js
}
}
What this block does is say “for every request with a base of / (i.e. every request), Push the following 3 files.” You can customise the base path if you want to, and add more files if you need, but a block like the one above is what I’m using for this site.
You can find full details in the Caddy push documentation.
Lately I’ve been feeling a pull to return to my Warhammer 40,000 Flesh Tearers army, which I started around 4 years ago (and promptly only completed one unit of). I had an idea of a small strike-force that was basically just a load of Jump Pack Assault Squads, supported by Land Speeders (with some Death Company elements thrown in). It wouldn’t have been very “competitive,” but it would have been thematic and fun. I didn’t progress the idea very far, as the Blood Angels codex in 7th Edition was… very not good; it also took away the ability to field Assault Squads as a troops choice — rendering the entire idea invalid.
Now we’re in 8th Edition, I can build the army as I imagined it, using the new detachments in the rule book. By getting back to a small “passion project” of mine, I’m hoping I’ll be able to revive my motivation for hobby projects which has been worryingly low recently. Who knows — I might even add some Primaris Inceptors to the mix for some mobile firepower.
Apple have released the first public beta for the next version of iOS. I’ll probably hold off installing it on my iPhone for the time being, but I’m tempted to throw it onto my iPad Pro, to get some of those sweet new features I’ve seen some people raving about.
External link: Apple Beta Programme
If you step back and think about it, Games Workshop produce a staggering number of new products not only per year, but per month. It’s something I don’t think they get enough credit for.
New models across multiple game systems and ranges. New boxed games. New source material for those games. New paints and other “hobby products.” New novels, novellas, short stories, and audio dramas. A new issue of White Dwarf. Not all of these categories will get something every month, but many will get several.
That’s impressive, no matter how you feel about GW.
Shadowgate was a formative experience in my early youth. A brutally difficult NES RPG, it was my first taste of what was effectively a video game version of the Choose-Your-Own-Adventure books I’d been enjoying. When I say it was difficult, I mean it — it took me more than one sitting to get through the door at the very beginning of the game! I don’t think I ever managed to complete the game, despite my efforts.
I’d heard about the 2014 remake of the game, but never got round to playing it until a few days ago. The artwork is miles ahead of the original (obviously). It might not be to everyone’s tastes – it’s very “concept art” in style in many places, which I found led to me missing some aspects of a room on first inspection. The story is pretty much the same, perhaps with a few tweaks. There’s a little more “world building” than the original, I think?
The biggest departure was the difficulty. Despite the game retaining many of the same “frustratingly non-obvious solution” mechanics of the original, I managed to complete it in one sitting. I only died twice! (stupid Goblin…) Granted, it did extend into the early hours of the next morning, and I have over 20 years more problem-solving experience than I did when playing the original, but still…
At the current Steam price of <£3 for the edition that comes with all sorts of extras, it still gets a “worth your time” recommendation if you’re nostalgic, or just fancy a new RPG. I’m not sure I’d spend much more than that, given how short it turned out to be, but who am I to tell you what to do with your money?
You can find Shadowgate on Steam, here
Nintendo have announced the (predicted) SNES version of their Classic Mini. I’ve already registered to be notified of the preorder. The list of 20 games included on the system has some of my favourite games of all time. There’s a previously unreleased Star Fox 2 too. Even if it hadn’t had 7 games I absolutely love, I’d have preordered based on how much fun we’ve had with last year’s NES version.
Hopefully it’s easier to get hold of one this time around.
External link: Nintendo announces the Nintendo Classic Mini: Super Nintendo Entertainment System
This blog is generated by Jekyll, running on the Caddy HTTP/2 server, and hosted on the lowest-tier Digital Ocean “droplet” (virtual private server). Self-hosting isn’t for everyone, but if you’re the sort of person who wants complete control over your content and how it is delivered – and who might like to tinker every so often – then read on.
The basic steps to setting up are:
- Prepare the Droplet
- Install Caddy
- Set up Jekyll and your workflow
Thankfully for me, other people have already written up their own guides for each of these steps!
To create the droplet that will host your blog, you’ll need a Digital Ocean account. If you don’t have one already, sign up using my referral link to get $10 in credit.
1. Prepare the Droplet
Create a new Ubuntu 16.04 droplet through the Digital Ocean dashboard, then follow this guide to initial server setup. This should give you a nice base to work with. One thing I like to add to this initial setup is Fail2Ban, which will automatically ban the IPs of connections trying to log in with the wrong SSH credentials (which will be anyone but you):
$ sudo apt-get update
$ sudo apt-get install fail2ban
# Fail2Ban should automatically start. Check it with the line below:
$ systemctl status fail2ban
One more thing you can do (not necessarily required, as you set up the ufw firewall on the server) is enable a Digital Ocean firewall from the dashboard, and limit connections to just ports 22, 80, and 443.
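For reference, the equivalent rules on the droplet itself with ufw look roughly like this (assuming SSH is on its default port):
$ sudo ufw allow 22/tcp    # SSH
$ sudo ufw allow 80/tcp    # HTTP
$ sudo ufw allow 443/tcp   # HTTPS
$ sudo ufw enable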
2. Install Caddy
Installation of Caddy is covered by this guide. I followed the steps pretty much as-is, with only minor changes to match my setup (different username, etc). The biggest difference in my setup was that I installed a couple of plugins as part of my Caddy installation. To do this, change the command in Step 1 to the following:
$ curl https://getcaddy.com | bash -s http.minify,tls.dns.cloudflare
This will install the Minify and Cloudflare plugins. Check out the Caddy home page for more plugins.
I set my site to use the Auto-HTTPS feature of Caddy, which gives the site an SSL certificate via Let’s Encrypt. I also wanted to use Cloudflare in front of my site, which isn’t covered in the guide above. After a bit of trial-and-error, the steps I used are below. If you don’t plan to do this, skip to Step 3.
2.1 Using Caddy Auto-HTTPS with Cloudflare
First off, you need to set up some environment variables. To do this for the service you created using the guide above, run the following command:
$ sudo systemctl edit caddy
This will open up an editor for you to override or add to the main service file. In the editor, enter the following:
[Service]
Environment=CLOUDFLARE_EMAIL="<CloudFlare login>"
Environment=CLOUDFLARE_API_KEY="<your Cloudflare CA API key>"
Save the file and exit. Next, edit your Caddyfile:
$ sudo nano /etc/caddy/Caddyfile
Modify it to something similar to this:
example.com {
root /var/www
tls you@example.com {
dns cloudflare
}
}
Finally, in the Crypto section of your Cloudflare control panel, make sure to set the SSL type to Strict. If you don’t, you’ll end up with redirection errors.
You should be ready to start/restart Caddy:
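With the systemd service from the installation guide, that should be something like this (a sketch, assuming the service is named caddy as in that guide):
$ sudo systemctl daemon-reload
$ sudo systemctl restart caddy
$ systemctl status caddy    # check it came up cleanly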
All being well, your site should be available, with HTTPS enabled.
3. Set up Jekyll and your Workflow
I followed this guide to set up Jekyll on my Droplet, and create the necessary Git components. If your local machine is OSX or Linux, the guide is all you need. If you’re running on Windows (like me), things are a little more difficult. I tried setting everything up using the Windows Subsystem for Linux, like in this guide, which is the route recommended by the official Jekyll site — but for some reason it didn’t work correctly.
I ended up having to install RubyInstaller and add the necessary DevKit as the last step of the installation. From there, it should just be a case of gem install jekyll bundler and creating the Jekyll site in the normal manner (follow the first part of the guide linked at the start of this section if you need to).
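For completeness, “the normal manner” boils down to something like this (my-blog is just a placeholder name):
$ gem install jekyll bundler
$ jekyll new my-blog
$ cd my-blog
$ bundle exec jekyll serve    # preview locally at http://localhost:4000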
Hopefully, if you’ve followed along this far, you should now have your own shiny new blog, hosted on your own server! Setting this up took me a single evening – not including the time I spent creating my own Jekyll layouts. But those are a topic for another time…
I’ve not written much here since the start of the year. I’d started off with such good intentions. This isn’t one of those “sorry I haven’t been posting” blog posts, so don’t worry. I don’t apologise for it… it is what it is.
What’s happened is my brain has been mainly filled with three things the last few months: work, politics, and hobby. I don’t want to write about politics, not really, though it’s definitely something I could provide a running commentary on (this might please my Twitter followers most, if I were to move politics over to here). Work… well, it’s work – I’ve been doing my best to keep it at the office, as it’s been very intense of late. There’s been a few very interesting technical challenges I could write about, but I can’t go into some of the specifics necessary, due to the nature of what I do. Plus, normally by the time I get home I don’t want to sit in front of a computer again. So that’s left hobby, and I have a whole other blog for that… although I’ve been concentrating on the practical side of the hobby for once, so haven’t been writing much there either.
So in short, we’re in one of my regular “blogging takes a backseat” phases. You should be used to them by now! I do still feel “the pull” to write, and regularly feel like I should be writing here more (as opposed to venting on Twitter); it’s just not happening, for a variety of reasons.
C’est la vie.
To paraphrase Good Ol’ JR: “business will eventually pick up.”
Good to know I’ve been “doing it right” all along!
[Source: Daring Fireball]
The algorithm-driven Instagram feed was rolled out a while ago, but it’s only recently I’ve noticed much of a difference. Unfortunately the difference, particularly in the last couple of weeks, has been increasingly negative. So much so I really wish there was a way to opt-out!
Basically it comes down to this: I’m not seeing what I want to see at the time I want to see it, which often leads me to just close the app after scrolling down a little bit. So as a way of “increasing engagement” it utterly fails.
A trivial example: I follow WWE on Instagram. Every Monday and Tuesday night, they post 6-12 photos from the goings on at Monday Night Raw, and Smackdown Live. Every Tuesday/Wednesday morning, I would like to open up Instagram and be able to scroll through to see what happened. This used to work, but some time in the last few weeks it changed so these photos show up randomly in my feed over the next 2-4 days – after I’ve already got the information from other sources, and definitely past the point I want the photos to show up at the top. The photos never show in chronological order, and never show as a batch of more than 1-2 at a time.
For the accounts I follow who aren’t “brands” (i.e. friends, shared interest accounts, etc), often it’s the people I like or comment on the least who appear near the top, and often the most trite, uninteresting photos they’ve posted. Why show me the video of a friend’s baby’s adorable first laugh, another friend’s stunning macro photography, or a popular post from an interest account, when 4 out of 6 of the photos at the top of my feed are #MondayMotivation meme nonsense? With the other 2 being drinks/food from someone’s night out 3 days ago?
Is it just me? I don’t think so, but maybe it’s just particularly bad on my feed? What’re your experiences with Instagram lately?
Hunter Walk with a neat idea for dealing with Twitter trolls I’ve not seen suggested anywhere else:
Basically the concept that when an @name is inserted into the tweet, it becomes targeted, the difference between just expressing an opinion about a person and the desire for that person to see the opinion. For example imagine these two tweets:
“Hunter Walk is an asshole” vs “@hunterwalk is an asshole”
The former doesn’t appear in my mentions. The latter does. I never see the first unless I’m actively searching for my name on Twitter. The latter does regardless of my desire to interact with the sender. Accordingly, once an @name is included, the standard for harassment should be lower, because intent can be assumed.
Source: Twitter Trolls Should Lose Ability To Include @Names in Tweets | Hunter Walk
So after the preamble, which should give you a frame of reference for what I’m aiming to do in this mini-series of posts about improving my online privacy and security, this short post will talk about the first steps I’m taking to tighten everything up. As this is all at the very beginning of my learning journey, all of these might change in the future. If they do, I will update the post and add a comment below.
In this post I look at two of the fundamentals of privacy on the web: the web browser and search engine. I’m mainly looking at the desktop for now, rather than mobile, mainly because it’s simpler to focus on one thing while I wrap my head around this stuff!
A Change of Browser
I’ve been using Chrome for years, after it usurped Firefox as the “fast, alternative” browser for Windows. These days, Chrome has become seriously bloated – it’s routinely consuming multiple gigabytes of RAM on my desktop. It may (usually) be fast despite that, but it slows down the rest of the computer. What’s more, it’s so deeply wired into Google’s ecosystem that it’s arguably as much a data hoover for Google as it is a browser.
So I was in the market for a new browser to begin with, and I was looking into alternatives like Chromium or Opera. But once I started diving into things a bit more, pretty much every recommendation for privacy-minded software recommended good-old Firefox, so that’s what I’ve gone with. I followed the configuration guide at PrivacyTools.io, as well as:
- Turn on Do Not Track
- Set Firefox to never remember my browsing/download/search/form history
- Never accept third-party cookies
- Only keep cookies until I close the browser
- Never remember logins for sites
- Turn off Firefox Health Report, Telemetry, and Crash Reporter
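If you prefer to capture those settings in a user.js file rather than clicking through the preferences UI, the relevant prefs are roughly the following – the pref names are my best recollection of 2017-era Firefox, so treat this as a sketch rather than gospel:

// user.js – rough equivalents of the settings listed above
user_pref("privacy.donottrackheader.enabled", true);          // Do Not Track
user_pref("places.history.enabled", false);                   // don't remember browsing history
user_pref("network.cookie.cookieBehavior", 1);                // block third-party cookies
user_pref("network.cookie.lifetimePolicy", 2);                // keep cookies only until the browser closes
user_pref("signon.rememberSignons", false);                   // never remember logins
user_pref("toolkit.telemetry.enabled", false);                // Telemetry off
user_pref("datareporting.healthreport.uploadEnabled", false); // Health Report off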
Extensions
Most of the extensions I had installed in Chrome were privacy-minded anyway, so were equally applicable to Firefox. Some additions came recommended. At the moment I am using the following:
- uBlock Origin
- HTTPS Everywhere
- Self-Destructing Cookies (possibly redundant due to browser settings)
- Decentraleyes
- Privacy Badger (possibly redundant)
- Random Agent Spoofer
- 1Password
- Google Search Link Fix (just in case… see the search engine section)
Mobile
The situation on mobile (in my case, iOS) is a bit less clear. For now I’m not using the Chrome iOS app, reverting to Safari with the addition of a content blocker.
Downsides
The biggest issue with the above setup is it removes a few conveniences: pinned tabs aren’t remembered between browser sessions; you have to log in to websites every time you visit; you have to retrace your steps to find a page in the future if you don’t bookmark it at the time… that sort of thing. I might do a little tuning on this, relaxing the settings a little, but overall I think this might be one of those things that I need to live with.
A Change of Search Engine
Apart from a brief flirtation with DuckDuckGo a few years back, I’ve always used Google as my search engine. It’s consistently been the most reliable, fastest, and all-round best at what it does.
Even so, I’ve never been 100% happy with the fact that Google collects just about every data point they can, that it’s all wrapped up in your Google account, linked to everything you do in their other services, and made available for advertisement targeting (amongst who knows how many other things). As someone who’s had a Gmail account since they were invite-only, I know Google has a fucktonne of data on me already; the genie is well and truly out of the bottle in that regard.
That doesn’t mean I have to keep giving them more data, though. Sure, they’ll get the odd bit here and there when I use YouTube, or from the odd email that hits my old, pretty much unused Gmail account, but that’s really it – if I change my search engine to somewhere else.
The obvious thing to do would be to revert back to DuckDuckGo, as I already have experience of it, and it’s accurate enough… but I wanted to try something different for the moment, while I’m still in the learning phase of this little project.
I tried all the recommendations at PrivacyTools.io. Searx generally gave me terrible results, but is an interesting idea; Qwant gave me some decent web results, but the included News results were mostly irrelevant, and I couldn’t find a way to turn these off. StartPage had been recommended in other places too, and overall was the best performing of the bunch – possibly not surprising, as it’s effectively a proxy for Google search, so seems like a win-win in this case. For now, I’ve set it as the default search engine in Firefox.
Mobile
For searches on my iPhone, I’ve set the default search engine to DuckDuckGo, as it’s the best of those available.
In 2017 I’m trying to be a bit more privacy- and security-minded when using the web (on all devices). I’ve been increasingly interested in these areas for a few years, especially since the Snowden revelations, and recent events like the IP Bill, aka the “Snoopers’ Charter,” in the UK have pushed me further towards them. Over the next few weeks I’m going to look into (and try to document here) various things I can do to increase my security, decrease the amount of information applications and services can collect on me, and generally “take back control” of my online privacy.
I work in the tech industry, I’m fairly conscious about this stuff, and understand a few of the elements and technologies, but it’s really a very basic understanding. What I do know might be out of date. At this stage it might be too little too late… right now I don’t really know.
Upfront: I fully recognise that if the police/MI5/NSA/FSB/whoever really wanted my data, nothing I could do would be able to stop them.
Also upfront: even with that in mind, whatever I put in place won’t be considered “perfect.” What I’m looking to do is balance convenience, practicality, and security. If something is too difficult or fiddly to use, it will end up not being used.
Thinking specifically about the IP Bill, far too many agencies for my liking will have complete, unfettered access to what I get up to on the internet. Beyond that one example, the amount of web ad trackers we have to contend with nowadays is snowballing, as are the services amassing data to pay for those “free” apps we enjoy.
While it might be that none of these data collectors have nefarious purposes in mind (if you’re trusting), data security breaches are becoming bigger and more frequent. Data being stored is likely to leak or be stolen at some point, so the best you can hope for is to limit the amount of potentially harmful data¹ being held.
On a lighter note, here’s a great spoof from Cassetteboy about the IP Bill.
So all this is a bit of a long-winded preamble to saying: look out for future posts where I talk about what I have learned, how I’m applying it, any recommendations I have, and how you can do the same. The first post, on some of the basics and links to reading materials, will be coming today/tomorrow. In the meantime, are there any tips or good sources you’ve come across? Feel free to share in the comments.
1. Insert definition of what you would consider “harmful data if leaked” ↩
A nice, slightly tongue-in-cheek look at how easy it is to fall off the blogging wagon after making a resolution to “blog more” in the New Year. This rings true for me in 2016!
Source: Jon Galloway – How to Talk Yourself out of your New Year’s Blogging Resolution… One Day At A Time