A powerful account of Coraline Ada Ehmke’s terrible treatment at GitHub. Please take some time to read it.

I think back on the lack of options I was given in response to my mental health situation and I see a complete lack of empathy. I reflect on the weekly one-on-ones that I had with my manager prior to my review and the positive feedback I was getting, in contrast to the surprise annual review. I consider the journal entries that I made and all the effort I put in on following the PIP and demonstrating my commitment to improving, only to be judged negatively for the most petty reasons. I think about how I opened up to my manager about my trauma and was accused of trying to manipulate her feelings. I remember coming back from burying my grandmother and being told that I was fired.

GitHub has made some very public commitments to turning its culture around, but I feel now that these statements are just PR. I am increasingly of the opinion that in hiring me and other prominent activists, they were attempting to use our names and reputations to convince the world that they took diversity, inclusivity, and social justice issues seriously. And I feel naive for having fallen for it.

This isn’t the first time GitHub have been called out for a toxic internal culture. Perhaps ironically, from where I’m sitting it looks like the “corporate” controls implemented after the previous fallout, combined with cultural issues and inexperience in how those controls are meant to be applied, are what led to this horrible experience.

I’ve done staff management in companies hundreds of times the size of GitHub; I recognise every tool and process mentioned in Coraline’s account, and have been part of them numerous times. Every instance in this retelling reads as a perversion of how they are meant to be applied. Tools designed to help and protect everyone involved in the process were turned against the party with the least power.

After my last post, I came across some discussion about implementing JSON Feed, and whether using a template is a good way to implement it in a site. The consensus seems to be “No,” with one of the authors of the spec weighing in. For the most part, I do agree – a template is more likely to break under some edge case than a proper serializer – but until there is more support for the spec I see it as a pragmatic short-term solution. So proceed at your own risk for now!

I can’t take credit for this – I found the code below (from vallieres) after a quick search for adding feed.json to Jekyll without plugins. The only thing I’ve added is the sitemap front-matter, which will exclude the output file from our sitemap.xml.

---
layout: null
sitemap:
  exclude: 'yes'
---
{
    "version": "https://jsonfeed.org/version/1",
    "title": "{{ site.title | xml_escape }}",
    "home_page_url": "{{ "/" | absolute_url }}",
    "feed_url": "{{ "/feed.json" | absolute_url }}",
    "description": {{ site.description | jsonify }},
    "icon": "{{ "/apple-touch-icon.png" | absolute_url }}",
    "favicon": "{{ "/favicon.ico" | absolute_url }}",
    "expired": false,
    {% if site.author %}
    "author": {% if site.author.name %} {
        "name": "{{ site.author.name }}",
        "url": {% if site.author.url %}"{{ site.author.url }}"{% else %}null{% endif %},
        "avatar": {% if site.author.avatar %}"{{ site.author.avatar }}"{% else %}null{% endif %}
    },{% else %}"{{ site.author }}",{% endif %}
    {% endif %}
"items": [
    {% for post in site.posts limit:36 %}
        {
            "id": "{{ post.url | absolute_url | sha1 }}",
            "title": {{ post.title | jsonify }},
            "summary": {{ post.seo_description | jsonify }},
            "content_text": {{ post.content | strip_html | strip_newlines | jsonify }},
            "content_html": {{ post.content | strip_newlines | jsonify }},
            "url": "{{ post.url | absolute_url }}",
            {% if post.image.size > 1 %}"image": {{ post.image | jsonify }},{% endif %}
            {% if post.link.size > 1 %}"external_url": "{{ post.link }}",{% endif %}
            {% if post.banner.size > 1 %}"banner_image": "{{ post.banner }}",{% endif %}
            {% if post.tags.size > 1 %}"tags": {{ post.tags | jsonify }},{% endif %}
            {% if post.enclosure.size > 1 %}"attachments": [ {
              "url": "{{ post.enclosure }}",
              "mime_type": "{{ post.enclosure_type }}",
              "size_in_bytes": "{{ post.enclosure_length }}"
            } ],{% endif %}
            "date_published": "{{ post.date | date_to_xmlschema }}",
            "date_modified": "{{ post.date | date_to_xmlschema }}",
            {% if post.author %}
                "author": {% if post.author.name %} {
                "name": "{{ post.author.name }}",
                "url": {% if post.author.url %}"{{ post.author.url }}"{% else %}null{% endif %},
                "avatar": {% if post.author.avatar %}"{{ post.author.avatar }}"{% else %}null{% endif %}
                }
                {% else %}"{{ post.author }}"{% endif %}
            {% else %}
                "author": {% if site.author.name %} {
                "name": "{{ site.author.name }}",
                "url": {% if site.author.url %}"{{ site.author.url }}"{% else %}null{% endif %},
                "avatar": {% if site.author.avatar %}"{{ site.author.avatar }}"{% else %}null{% endif %}
                }
                {% else %}
                "{{ site.author }}"
                {% endif %}
            {% endif %}
        }{% if forloop.last == false %},{% endif %}
    {% endfor %}
    ]
}
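
Once Jekyll has built the site, it’s worth sanity-checking that the template actually produces valid JSON – a single stray comma is enough to break the feed. A quick way to do that (assuming jq is installed and the build output is in the default _site directory):

$ jq . _site/feed.json
# Pretty-prints the feed if it parses, or reports the offending line if it doesn't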

Lately I’ve found myself enjoying the YouTube channel Outside Xbox (and their sister channel, Outside Xtra). Normally what happens is I’ll put a YouTube video on the TV as “background noise” and it ends up playing videos from the same channel for a few hours; in this case I ended up going down a rabbit hole of their Hitman videos. Whether it was the normal playthroughs, the challenge modes, or the “3 ways to play”, I was hooked and wanted to play the game for myself. But I refuse to pay full price for a game that’s been out for a while, so it sat on my Steam Wishlist until last week, when it was discounted by 60% on the evening before the Steam Summer Sale.

I finally got round to playing Hitman today, and so far I’d say it’s got the potential to be the first game to really hold my attention since Metal Gear Solid 5. I loved Hitman 2, and kept tabs on the ups and downs of the series after that, so having another game in the series feel as good as that one did is certainly welcome.

So far I’ve only completed the prologue missions (though I did replay the very first mission several times), so these are very early impressions. Fingers crossed the game holds up as I get further through it!

I caught the first trailer for the new Marvel’s Inhumans TV series. The whole thing looked so stiff, awkward, and sterile. I’m not sure what I was expecting, but based purely on the trailer I have zero desire to watch the show. Marvel’s movies and Netflix series (mostly) manage to feel somewhat anchored to the “real world” despite how fantastical the plot or setup might be… but Inhumans had none of that quality on show.

“We had no code and no art assets,” Blizzard 3D Art Director Brian Sousa confirmed to Ars Technica. The 2017 project’s entire art pipeline was “eyeballed,” Sousa said, with recovered concept artwork, sketches, and original boxes and manuals used as reference materials. Not all code was missing, as Blizzard has been issuing patches to the original game’s code base for nearly 20 years. Also, a member of the sound team thankfully had backups of the original sound and voice recordings, which are now reprocessed in higher-fidelity 44,100Hz format.

I’d heard years ago that the majority of the original StarCraft code had been lost, but I figured it was just a rumour. Sounds like the team behind StarCraft: Remastered had a big task to recreate the game in a way some of its biggest fans would appreciate.

External link: StarCraft Remastered devs unveil price, explain how much is being rebuilt

I’ve listened to a lot of 40K podcasts over the last couple of years. Over that time I’ve slowly winnowed my subscriptions down to just a handful.

  1. Forge The Narrative – my favourite 40K podcast of the last few years.
  2. Chapter Tactics – from Frontline Gaming, but distinct enough from their other shows to merit its own subscription.
  3. Frontline Gaming – this is the main Frontline Gaming podcast; the feed also includes Chapter Tactics and some other smaller shows.
  4. Ashes of the Imperium – this one is new, but it’s by the team behind the very good Bad Dice AoS podcast.

My biggest gripe with most 40K podcasts tends to be length. Sorry, but unless you’re very, very compelling to listen to, I am not going to listen to a podcast episode which is 2-3 hours long (or more!). The podcasts above tend to clock in at around an hour to an hour and a half, which I find to be perfect for my listening habits.

Bonus: Podcasts I’m Evaluating:

8th Edition has brought about a few new podcasts, some of which I’m still deciding whether to keep in my subscriptions list.

Bonus 2: Some Age of Sigmar podcasts

For a while I found the quality of the AoS podcasts to be generally higher than most 40K podcasts, with only a couple of exceptions. Sadly, my favourite AoS podcast — Heelanhammer — has recently gone on hiatus, so I’m not including it here.

  • Bad Dice
  • Facehammer – can be a bit sweary, so be warned if that’s not your thing.

For various reasons I prefer to remove the www part from my personal-use domains. Setting up Caddy to serve the site from just domain.com is as simple as:

domain.com {
    root /path/to/site/files
    # other directives
}

But this set-up doesn’t provide any way to redirect from www to non-www, meaning anyone who types www.domain.com into the address bar is out of luck. So what to do? Well, Caddy provides a redir directive. Combine it with an additional site block and the {uri} placeholder, like this:

# Original non-WWW site:
domain.com {
    root /path/to/site/files
    # other directives
}
# New, additional "site", for doing the redir
www.domain.com {
    redir https://domain.com{uri}
}

Having just spent faaaar too long getting a sample Liquid code block not to be parsed by Jekyll, I thought I’d better make a note of this, for my own benefit:

When posting Liquid code, make use of the raw tag. Which I can’t seem to post an example of using, because it creates some sort of Inception effect or something…
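
For what it’s worth, the shape of it is easy enough to spell out as plain text (which is exactly the bit Jekyll keeps wanting to parse): wrap the snippet you want displayed in a raw/endraw pair, something like this:

{% raw %}
{{ site.title }} and any other Liquid you want shown verbatim, unparsed
{% endraw %}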

An XML Sitemap can be useful for optimising your site with Google, particularly if you make use of their Webmaster Tools. Jekyll doesn’t come with one out-of-the-box, but it is easy to add one. There’s probably a plugin out there which will automate things, but I just used a normal Jekyll-generated file for mine, based on code found on Robert Birnie’s site.

The only modification I made was to exclude feed.xml from the sitemap. Because that file is auto-generated by a plugin, I couldn’t add front-matter to it to exclude it in the same way as the other files.

Create a file called sitemap.xml in the root of your site, and paste the following code into it:

---
layout: null
sitemap:
  exclude: 'yes'
---
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  {% for post in site.posts %}
    {% unless post.published == false %}
    <url>
      <loc>{{ site.url }}{{ post.url }}</loc>
      {% if post.sitemap.lastmod %}
        <lastmod>{{ post.sitemap.lastmod | date: "%Y-%m-%d" }}</lastmod>
      {% elsif post.date %}
        <lastmod>{{ post.date | date_to_xmlschema }}</lastmod>
      {% else %}
        <lastmod>{{ site.time | date_to_xmlschema }}</lastmod>
      {% endif %}
      {% if post.sitemap.changefreq %}
        <changefreq>{{ post.sitemap.changefreq }}</changefreq>
      {% else %}
        <changefreq>monthly</changefreq>
      {% endif %}
      {% if post.sitemap.priority %}
        <priority>{{ post.sitemap.priority }}</priority>
      {% else %}
        <priority>0.5</priority>
      {% endif %}
    </url>
    {% endunless %}
  {% endfor %}
  {% for page in site.pages %}
    {% unless page.sitemap.exclude == "yes" or page.url == "/feed.xml" %}
    <url>
      <loc>{{ site.url }}{{ page.url | remove: "index.html" }}</loc>
      {% if page.sitemap.lastmod %}
        <lastmod>{{ page.sitemap.lastmod | date: "%Y-%m-%d" }}</lastmod>
      {% elsif page.date %}
        <lastmod>{{ page.date | date_to_xmlschema }}</lastmod>
      {% else %}
        <lastmod>{{ site.time | date_to_xmlschema }}</lastmod>
      {% endif %}
      {% if page.sitemap.changefreq %}
        <changefreq>{{ page.sitemap.changefreq }}</changefreq>
      {% else %}
        <changefreq>monthly</changefreq>
      {% endif %}
      {% if page.sitemap.priority %}
        <priority>{{ page.sitemap.priority }}</priority>
      {% else %}
        <priority>0.3</priority>
      {% endif %}
    </url>
    {% endunless %}
  {% endfor %}
</urlset>
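
As with the JSON feed, it’s worth checking that the generated file is well-formed after a build. A quick sketch, assuming xmllint is available and the default _site output directory:

$ xmllint --noout _site/sitemap.xml
# No output means the sitemap parsed cleanly; any errors are reported with line numbers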

If you want fine-grained control over what appears in the sitemap, you can use any of the following front-matter variables:

sitemap:
  lastmod: 2014-01-23
  priority: 0.7
  changefreq: 'monthly'
  exclude: 'yes'

As an example, I use this in my feed.json template to exclude the generated file from the sitemap:

sitemap:
  exclude: 'yes'

And this in my index/archive pages for a daily change frequency:

sitemap:
  changefreq: 'daily'

It’s super simple. Just include a push directive in your site definition. You can leave it at just that, and Caddy will use any Link HTTP headers in the response to work out what to push.
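
For reference, the kind of header it looks for is the standard preload form of the Link header – something along these lines (the file name here just mirrors the example below) would tell it to push the stylesheet:

Link: </assets/css/site.min.css>; rel=preload; as=style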

If you want more control, you can expand the directive and specify both the path and associated resources, like so:

example.com {
    root /var/www/example
    push / {
        /assets/css/site.min.css
        /assets/img/logo.png
        /assets/js/site.min.js
    }
}

What this block does is say “for every request with a base of / (i.e. every request), push the following 3 files.” You can customise the base path if you want to, and add more files if you need to, but a block like the one above is what I’m using for this site.

You can find full details in the Caddy Push documentation.

Lately I’ve been feeling a pull to return to my Warhammer 40,000 Flesh Tearers army, which I started around 4 years ago (and promptly only completed one unit of). I had an idea of a small strike-force that was basically just a load of Jump Pack Assault Squads, supported by Land Speeders (with some Death Company elements thrown in). It wouldn’t have been very “competitive”, but it would have been thematic and fun. I didn’t progress the idea very far, as the Blood Angels codex in 7th Edition was… very not good; it also took away the ability to field Assault Squads as a troops choice — rendering the entire idea invalid.

Now we’re in 8th Edition, I can build the army as I imagined it, using the new detachments in the rule book. By getting back to a small “passion project” of mine, I’m hoping I’ll be able to revive my motivation for hobby projects which has been worryingly low recently. Who knows — I might even add some Primaris Inceptors to the mix for some mobile firepower.

If you step back and think about it, Games Workshop produce a staggering number of new products, not just per year but per month. It’s something I don’t think they get enough credit for.

New models across multiple game systems and ranges. New boxed games. New source material for those games. New paints and other “hobby products.” New novels, novellas, short stories, and audio dramas. A new issue of White Dwarf. Not all of these categories will get something every month, but many will get several.

That’s impressive, no matter how you feel about GW.

Shadowgate was a formative experience in my early youth. A brutally difficult NES RPG, it was the first time I played what was effectively a video game version of the Choose-Your-Own-Adventure books I’d been enjoying. When I say it was difficult, I mean it — it took me more than one sitting to get through the door at the very beginning of the game! I don’t think I ever managed to complete the game, despite my efforts.

I’d heard about the 2014 remake of the game, but never got round to playing it until a few days ago. The artwork is miles ahead of the original (obviously), though it might not be to everyone’s tastes – it’s very “concept art” in style in many places. I found this meant many aspects of a room were easy to miss on first inspection. The story is pretty much the same, perhaps with a few tweaks. There’s a little more “world building” than the original, I think?

The biggest departure was the difficulty. Despite the game retaining many of the same “frustratingly non-obvious solution” mechanics of the original, I managed to complete it in one sitting. I only died twice! (Stupid Goblin…) Granted, it did extend into the early hours of the next morning, and I have over 20 years’ more problem-solving experience than I did when playing the original, but still…

At the current Steam price of <£3 for the edition that comes with all sorts of extras, it still gets a “worth your time” recommendation if you’re nostalgic, or just fancy a new RPG. I’m not sure I’d spend much more than that, given how short it turned out to be, but who am I to tell you what to do with your money?

You can find Shadowgate on Steam here.

Nintendo have announced the (predicted) SNES version of their Classic Mini. I’ve already registered to be notified of the preorder. The list of 20 games included on the system has some of my favourite games of all time, and there’s a previously unreleased Star Fox 2 too. Even if it didn’t include 7 games I absolutely love, I’d have preordered based on how much fun we’ve had with last year’s NES version.

Hopefully it’s easier to get hold of one this time around.

External link: Nintendo announces the Nintendo Classic Mini: Super Nintendo Entertainment System

This blog is generated by Jekyll, running on the Caddy HTTP/2 web server, and hosted on the lowest-tier Digital Ocean “droplet” (virtual private server). Self-hosting isn’t for everyone, but if you’re the sort of person who wants complete control over your content and how it is delivered – and who might like to tinker every so often – then read on.

The basic steps to setting up are:

  1. Prepare the Droplet
  2. Install Caddy
  3. Set up Jekyll and your workflow

Thankfully for me, other people have already written up their own guides for each of these steps!

To create the droplet that will host your blog, you’ll need a Digital Ocean account. If you don’t have one already, sign up using my referral link to get $10 in credit.


1. Prepare the Droplet

Create a new Ubuntu 16.04 droplet through the Digital Ocean dashboard, then follow this guide to initial server setup. This should give you a nice base to work with. One thing I like to add to this initial setup is Fail2Ban, which will automatically ban the IPs of connections trying to log in with the wrong SSH credentials (which will be anyone but you):

$ sudo apt-get update
$ sudo apt-get install fail2ban
# Fail2Ban should automatically start. Check it with the line below:
$ systemctl status fail2ban

One more thing you can do (not necessarily required, as you set up the ufw firewall on the server) is enable a Digital Ocean firewall from the dashboard, and limit connections to just ports 22, 80, and 443.
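
If you stick with ufw on the droplet itself, the equivalent rules are only a few commands (a quick sketch, assuming the OpenSSH application profile set up by the initial server guide):

$ sudo ufw allow OpenSSH
$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp
$ sudo ufw enable
# Double-check what's open with:
$ sudo ufw status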

2. Install Caddy

Installation of Caddy is covered by this guide. I followed the steps pretty much as-is, with only minor changes to match my setup (different username, etc.). The biggest difference in my setup was that I installed a couple of plugins as part of my Caddy installation. To do this, change the command in Step 1 to the following:

$ curl https://getcaddy.com | bash -s http.minify,tls.dns.cloudflare

This will install the Minify and Cloudflare plugins. Check out the Caddy home page for more plugins.

I set my site to use the Auto-HTTPS feature of Caddy, which gives the site an SSL certificate via Let’s Encrypt. I also wanted to use Cloudflare in front of my site, which isn’t covered in the guide above. After a bit of trial and error, the steps I used are below. If you don’t plan to do this, skip to Step 3.

2.1 Using Caddy Auto-HTTPS with Cloudflare

First off, you need to set up some environment variables. To do this for the service you created using the guide above, run the following command:

$ sudo systemctl edit caddy

This will open up an editor for you to override or add to the main service file. In the editor, enter the following:

[Service]
Environment=CLOUDFLARE_EMAIL="<CloudFlare login>"
Environment=CLOUDFLARE_API_KEY="<your Cloudflare Global API key>"

Save the file and exit. Next, edit your Caddyfile:

$ sudo nano /etc/caddy/Caddyfile

Modify it to something similar to this:

example.com {
    root /var/www
    tls you@example.com {
        dns cloudflare
    }
}

Finally, in the Crypto section of your Cloudflare control panel, make sure to set the SSL mode to Full (Strict). If you don’t, you’ll end up with redirection errors.

You should be ready to start/restart Caddy:

$ sudo systemctl restart caddy
# Enter your password when prompted

All being well, your site should be available, with HTTPS enabled.

3. Set up Jekyll and your Workflow

I followed this guide to set up Jekyll on my Droplet and create the necessary Git components. If your local machine runs macOS or Linux, the guide is all you need. If you’re running Windows (like me), things are a little more difficult. I tried setting everything up using the Windows Subsystem for Linux, as in this guide, which is the route recommended by the official Jekyll site — but for some reason it didn’t work correctly.

I ended up installing RubyInstaller and adding the necessary DevKit as the last step of the installation. From there, it should just be a case of gem install jekyll bundler and creating the Jekyll site in the normal manner (follow the first part of the guide linked at the start of this section if you need to).
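
For reference, the whole local setup boils down to something like this (the site name is just a placeholder):

$ gem install jekyll bundler
$ jekyll new myblog
$ cd myblog
$ bundle exec jekyll serve
# Preview at http://127.0.0.1:4000 before pushing to the Droplet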


Hopefully, if you’ve followed along this far, you should now have your own shiny new blog, hosted on your own server! Setting this up took me a single evening – not including the time I spent creating my own Jekyll layouts. But those are a topic for another time…

I’ve not written much here since the start of the year. I’d started off with such good intentions. This isn’t one of those “sorry I haven’t been posting” blog posts, so don’t worry. I don’t apologise for it… it is what it is.

What’s happened is that my brain has been mainly filled with three things over the last few months: work, politics, and hobby. I don’t want to write about politics, not really, though it’s definitely something I could provide a running commentary on (moving politics over to here might please my Twitter followers most). Work… well, it’s work – I’ve been doing my best to keep it at the office, as it’s been very intense of late. There have been a few very interesting technical challenges I could write about, but I can’t go into the necessary specifics, due to the nature of what I do. Plus, by the time I get home I usually don’t want to sit in front of a computer again. So that’s left hobby, and I have a whole other blog for that… although I’ve been concentrating on the practical side of the hobby for once, so I haven’t been writing much there either.

So in short, we’re in one of my regular “blogging takes a backseat” phases. You should be used to them by now! I do still feel “the pull” to write, and regularly feel like I should be writing here more (as opposed to venting on Twitter); it’s just not happening, for a variety of reasons.

C’est la vie.

To paraphrase Good Ol’ JR: “business will eventually pick up.”

The algorithm-driven Instagram feed was rolled out a while ago, but it’s only recently that I’ve noticed much of a difference. Unfortunately the difference, particularly in the last couple of weeks, has been increasingly negative. So much so that I really wish there was a way to opt out!

Basically it comes down to this: I’m not seeing what I want to see at the time I want to see it, which often leads me to just close the app after scrolling for a little bit. So as a way of “increasing engagement” it utterly fails.

A trivial example: I follow WWE on Instagram. Every Monday and Tuesday night, they post 6-12 photos from the goings-on at Monday Night Raw and Smackdown Live. Every Tuesday/Wednesday morning, I would like to open up Instagram and be able to scroll through to see what happened. This used to work, but some time in the last few weeks it changed so that these photos show up randomly in my feed over the next 2-4 days – after I’ve already got the information from other sources, and definitely past the point I want the photos to show up at the top. The photos never show in chronological order, and never show as a batch of more than 1-2 at a time.

For the accounts I follow who aren’t “brands” (i.e. friends, shared interest accounts, etc), often it’s the people I like or comment on the least who appear near the top, and often the most trite, uninteresting photos they’ve posted. Why show me the video of a friend’s baby’s adorable first laugh, another friend’s stunning macro photography, or a popular post from an interest account, when 4 out of 6 of the photos at the top of my feed are meme nonsense? With the other 2 being drinks/food from someone’s night out 3 days ago?

Is it just me? I don’t think so, but maybe it’s just particularly bad on my feed? What’re your experiences with Instagram lately?

Hunter Walk with a neat idea for dealing with Twitter trolls I’ve not seen suggested anywhere else:

Basically the concept that when an @name is inserted into the tweet, it becomes targeted, the difference between just expressing an opinion about a person and the desire for that person to see the opinion. For example imagine these two tweets:

“Hunter Walk is an asshole” vs “@hunterwalk is an asshole”

The former doesn’t appear in my mentions. The latter does. I never see the first unless I’m actively searching for my name on Twitter. The latter does regardless of my desire to interact with the sender. Accordingly, once an @name is included, the standard for harassment should be lower, because intent can be assumed.

Source: Twitter Trolls Should Lose Ability To Include @Names in Tweets | Hunter Walk

So after the preamble, which should give you a frame of reference for what I’m aiming to do in this mini-series of posts about improving my online privacy and security, this short post will talk about the first steps I’m taking to tighten everything up. As this is all at the very beginning of my learning journey, all of these might change in the future. If they do, I will update the post and add a comment below.

In this post I look at two of the fundamentals of privacy on the web: the web browser and the search engine. I’m looking at the desktop for now, rather than mobile, mainly because it’s simpler to focus on one thing while I wrap my head around this stuff!

A Change of Browser

I’ve been using Chrome for years, ever since it usurped Firefox as the “fast, alternative” browser for Windows. These days, Chrome has become seriously bloated – it’s routinely consuming multiple gigabytes of RAM on my desktop. It may (usually) be fast despite that, but it slows down the rest of the computer. What’s more, it’s so deeply wired into Google’s ecosystem that it’s arguably as much a data hoover for Google as it is a browser.

So I was in the market for a new browser to begin with, and I was looking into alternatives like Chromium or Opera. But once I started diving into things a bit more, pretty much every recommendation for privacy-minded software pointed at good old Firefox, so that’s what I’ve gone with. I followed the configuration guide at PrivacyTools.io, as well as making the following changes (there’s a rough user.js equivalent after the list):

  • Turned on Do Not Track
  • Set Firefox to never remember my browsing/download/search/form history
  • Set it to never accept third-party cookies
  • Set cookies to be kept only until I close the browser
  • Set it to never remember logins for sites
  • Turned off Firefox Health Report, Telemetry, and Crash Reporter
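
If you’d rather manage these in a user.js file than click through the preferences, they roughly correspond to the prefs below. Treat this as a sketch only – pref names do change between Firefox versions, so double-check each one against about:config before relying on it:

// Send the Do Not Track header
user_pref("privacy.donottrackheader.enabled", true);
// Don't record browsing or form history
user_pref("places.history.enabled", false);
user_pref("browser.formfill.enable", false);
// Block third-party cookies; keep the rest only until the browser closes
user_pref("network.cookie.cookieBehavior", 1);
user_pref("network.cookie.lifetimePolicy", 2);
// Never offer to remember logins
user_pref("signon.rememberSignons", false);
// Health Report and Telemetry off
user_pref("datareporting.healthreport.uploadEnabled", false);
user_pref("toolkit.telemetry.enabled", false);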

Extensions

Most of the extensions I had installed in Chrome were privacy-minded anyway, so were equally applicable to Firefox. Some additions came recommended. At the moment I am using the following:

Mobile

The situation on mobile (in my case, iOS) is a bit less clear. For now I’m not using the Chrome iOS app, reverting to Safari with the addition of a content blocker.

Downsides

The biggest issue with the above setup is that it removes a few conveniences: pinned tabs are no longer remembered between browser sessions; I have to log in to websites every time I visit; and if I don’t bookmark a page at the time, I have to retrace my steps to find it again later… that sort of thing. I might do a little tuning on this, relaxing the settings a little, but overall I think this might be one of those things I need to live with.

A Change of Search Engine

Apart from a brief flirtation with DuckDuckGo a few years back, I’ve always used Google as my search engine. It’s consistently been the most reliable, the fastest, and all-round best at what it does.

Even so, I’ve never been 100% happy with the fact that Google collects just about every data point they can, that it’s all wrapped up in your Google account, linked to everything you do in their other services, and made available for advertisement targeting (amongst who knows how many other things). As someone who’s had a Gmail account since they were invite-only, I know Google has a fucktonne of data on me already; the genie is well and truly out of the bottle in that regard.

That doesn’t mean I have to keep giving them more data, though. Sure, they’ll get the odd bit here and there when I use YouTube, or the odd email that hits my old, pretty much unused Gmail account, but that’s really it – provided I change my search engine to something else.

The obvious thing to do would be to revert back to DuckDuckGo, as I already have experience of it, and it’s accurate enough… but I wanted to try something different for the moment, while I’m still in the learning phase of this little project.

I tried all the recommendations at PrivacyTools.io. Searx generally gave me terrible results, but it’s an interesting idea; Qwant gave me some decent web results, but the included News results were mostly irrelevant, and I couldn’t find a way to turn them off. StartPage had been recommended in other places too, and overall it was the best performer of the bunch – perhaps not surprising, as it’s effectively a proxy for Google search, which makes it a win-win in this case. For now, I’ve set it as the default search engine in Firefox.

Mobile

For searches on my iPhone, I’ve set the default search engine to DuckDuckGo, as it’s the best of those available.

In 2017 I’m trying to be a bit more privacy- and security-minded when using the web (on all devices). I’ve been increasingly interested in these areas for a few years, especially since the Snowden revelations, and recent events like the IP Bill, aka the “Snoopers’ Charter”, in the UK have pushed me further towards them. Over the next few weeks I’m going to look into (and try to document here) various things I can do to increase my security, decrease the amount of information applications and services can collect on me, and generally “take back control” of my online privacy.

I work in the tech industry, I’m fairly conscious about this stuff, and I understand a few of the elements and technologies – but it’s really a very basic understanding, and what I do know might be out of date. At this stage it might be too little, too late… right now I don’t really know.

Upfront: I fully recognise that if the police/MI5/NSA/FSB/whoever really wanted my data, nothing I could do would be able to stop them.


Also upfront: even with that in mind, whatever I put in place won’t be considered “perfect.” What I’m looking to do is balance convenience, practicality, and security. If something is too difficult or fiddly to use, it will end up not being used.

Thinking specifically about the IP Bill, far too many agencies for my liking will have complete, unfettered access to what I get up to on the internet. Beyond that one example, the amount of web ad trackers we have to contend with nowadays is snowballing, as are the services amassing data to pay for those “free” apps we enjoy.

While it might be that none of these data collectors have nefarious purposes in mind (if you’re trusting), data security breaches are becoming bigger and more frequent. Data being stored is likely to leak or be stolen at some point, so the best you can hope for is to limit the amount of potentially harmful data1 being held.

On a lighter note, here’s a great spoof from Cassetteboy about the IP Bill.

So all this is a bit of a long-winded preamble to saying: look out for future posts where I talk about what I have learned, how I’m applying it, any recommendations I have, and how you can do the same. The first post, covering some of the basics and links to reading materials, will be coming today/tomorrow. In the meantime, are there any tips or good sources you’ve come across? Feel free to share in the comments.


  1. Insert definition of what you would consider “harmful data if leaked”