@rustlang has piqued my interest. Gonna have to explore it some time soon.
I use the browser developer tools’ network tab to help determine the size of a page (usually accessed through the F12 key). Disable the cache before reloading, and most browsers will tell you the combined size of every request which makes up the page, and the amount of data sent over the network. You can also (imperfectly) test the page under different network speed conditions if you want.
For example, Firefox says the post I’m replying to comes in at ~796KB, including all resources (uncached). 299KB of that is your header image, and 38KB is the HTML itself. My entire home page was 1.5MB, until I turned off embedding Tweets and Instagram widgets a moment ago. Now it’s around 492KB (but only text). It just goes to show you how much those external resources can pump the size up!
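If you just want a rough number from the command line, curl can report how many bytes a single response transferred – though unlike the network tab it won’t follow and total up images, scripts, and other embedded resources. A minimal sketch (the URL is a placeholder):

    # Print the transferred size of one response, in bytes.
    # This measures only the HTML document itself, not its subresources.
    curl -so /dev/null -w '%{size_download} bytes\n' https://example.com/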
Page weight and excessive resources are things I’ve tried to stay conscious of when developing my site. Previous iterations have been better at this than the current one – which is pretty far from “heavy” – but I’m still hoping to trim things back further.
“A collection of color palettes used in order to easily colorize new creative coding projects. New palettes will be continuously added, and all existing ones are subject to change.”
Thanks to Joe for the tip-off.
3 hours on some weird PHP/MySQL auto-increment bug is not how I wanted to spend my evening.
I mentioned over on Micro.blog that I’d managed to get a workable edit/commit/push workflow on my iPad. Naturally, I’m now considering a keyboard to complete the setup.
Does anyone have any recommendations for a good keyboard to use with a 9.7” iPad Pro?
The last time I had a similar setup, I used an Apple Wireless Keyboard (in an Origami case/stand) as the Bluetooth keyboard + case combos available at the time all felt horrible to type on. I do have a spare ultra-compact mechanical keyboard I could try with a USB adapter, but thinking more about it, that’s probably not going to work well when I don’t have a table to work on.
So, if anyone has any suggestions, I’d love to hear them!
“It’s hard to tell when Apple is listening. They speak concisely, infrequently, and only when they’re ready, saying absolutely nothing in the meantime, even when we’re all screaming about a product line as if it’s on fire. They make great progress, but often with courageous losses that never get reversed, so an extended silence because we’re stuck with a change forever is indistinguishable from an extended silence because the fix isn’t ready yet.”
I’ve started using my new(ish) test site to build my new IndieWeb WordPress theme. It’s very early in the process, and I’m getting the markup in place first, before I go anywhere near a stylesheet – so it all looks very 1996 right now.
The lessons learned on “K” will be put into use with this theme, along with several ideas I’ve picked up along the way. I’ve already reused some of the more useful bits of K to give me a head-start, so I know that stuff like the feed and post microformats should be pretty robust (if not yet 100% complete). The main improvement I want to make over K is flexibility – i.e. it shouldn’t be usable only by me, or lock me/the user into a particular setup.
If there is anything you would like to see in an IndieWeb WordPress theme, or any other suggestions, please file an issue in the repository. I can’t promise I’ll implement it, but at this stage, the chances are high 😉
I’ve taken the decision to switch my site away from the custom theme called “K” I was building, and for now I’m using the excellent Autonomie by Matthias Pfefferle instead[1]. Development of K had already slowed to a standstill, and realistically, I’m not going to go back to it anytime soon. It feels a little like a failure, a little like giving up, but I think it’s ultimately the right thing to do.
I made a lot of mistakes while building K, which overcomplicated things, made development more difficult, and ultimately led me into a dead-end. I thought “for simplicity” that I would use the Bootstrap frontend framework, as it would give me a robust foundation to build on. It did, but I had to bend and twist WordPress in increasingly hacky ways to get the output to “play nice.” A large chunk of the K codebase was being taken up by code solely tasked with massaging the output of WordPress to add the right Bootstrap classes or container markup. It felt increasingly fragile and hacky, and it was a bad sign. K would work for my setup, but I couldn’t ensure it would work for everyone.
Bootstrap added other complications: to properly manage my CSS “overrides” I had to create a build system that would compile a whole lot of SASS files together. When I started K, the CSS was stripped down to the bare minimum needed, and came to a few KB. After a while I ended up including the whole Bootstrap framework, just to make the build process easier.
There were no “options” to speak of, so it couldn’t be tailored to suit someone else using the WordPress customiser. I didn’t even want to think about Gutenberg support.
Microformats always felt like whack-a-mole. I’d get them working, then make adjustments somewhere else, and promptly see things break again. I put this down to my need to make so many adjustments to the WordPress output – inadvertent issues kept creeping in.
Then there were the visual design choices. K grew out of the simple design I employed when I was writing 1-2 short posts a month, in the traditional format. In that scenario it worked fine. Once I started to use the various post kinds, things became more problematic. Now I was posting several posts per day, most of which weren’t in a traditional blog format. The home page became cluttered; it started to remind me of a badly thought-out notification area, rather than a well designed blog. The archive page was a disaster and I had no good ideas on how to fix it. There were a thousand other little niggles.
None of this is meant as a knock against Bootstrap, or WordPress, or build systems, or any other tool or technology I used to get to this point. K failed because of my decisions, rather than deficiencies in the tools. There were things I liked about K… it used zero JavaScript, and I did my best to stop unnecessary plugin resources from loading. Markup was as minimal as I could make it (within the constraints of what I could remove from WordPress output, microformats, and what Bootstrap needed). It worked well across browsers and devices (thanks Bootstrap!), and the accent colours were fun. I learned a lot about the excellent Post Kinds plugin during the development process. I’m filing it away as a failed experiment, which I’ll learn from and apply the lessons to the IndieWeb WordPress theme I’m still intending to create. One that will hopefully work for more people than just me.
I put the source code to K on GitHub right back near the start of the project. If you want to take a look, steal any code, rework it into something usable – feel more than welcome to!
[1] As a result, a couple of things that were set up specifically for K are broken. I’ll look into fixing those over the next few days.
I’m trying to pick up a project I first worked on a year ago, and it’s a) fascinating, and b) mystifying. I think that because I didn’t expect to leave it for so long, I’ve not documented how I’d got to the stage I was at, or what my intentions were. Was I still in the exploratory stage, or had I picked a direction? Did I understand what I was trying to do, or had I got things wrong?
Based on what I’m looking at right now, I’m tempted to start over.
I need to take a break from the merry-go-round of mf2/parser compatibility. I excitedly thought it was fixed. But it wasn’t. I’ve made some further changes, and it might be fixed, but there’s a good chance it’s still broken in some obscure way… IndieWebify.me refuses to recognise my Like and Bookmark posts properly, even though every other parser I throw at example URLs comes back fine? Last I checked, IndieNews still refuses to return anything but “error: no link found.” Update: something I did today must have fixed this… I fixed a typo in an earlier post, and suddenly it was on IndieNews 🤷‍♂️
It’s getting a bit stressful, to be honest, and that means it’s time to move on to another task before it burns me out on the whole project. I’ll come back to it again in a while, and hopefully have a breakthrough and iron out the kinks.
In better news, I do have some custom gallery markup up and running, and the h-card in the sidebar is now a fully-fledged widget. Baby steps…
(Skip to the end for the TL;DR summary)
After an evening of debugging and rewriting sections of the HTML in “K”, I think I’ve fixed the markup and parsing issues I mentioned yesterday.
It turns out that X-Ray, the parsing engine used by IndieNews, Aperture, and probably others, was only finding the sidebar h-card in my markup. The rest of the content was being ignored. I’m not entirely sure why this is, to be honest, but it gave me a place to start.
Working from the (admittedly shaky) basis that if the parser was only going to find one mf2 entity on the page, I’d want it to be the main h-feed or h-entry, I started moving around some blocks of HTML and a few classes, and stripped out a few likely redundant pieces of HTML.
This… worked! The feed would show up in the X-Ray output instead of the h-card, and wasn’t all that different in the Pin13 parser compared to yesterday’s results. But it was far from ideal. The authorship information on every feed entry was screwed up; I’d made a change yesterday so only one full h-card was on the page (the sidebar) and followed the recommendation to mark up the h-entry author details with u-author instead. Now came the conundrum: do I add back in a dedicated h-card to every h-entry, and by doing so re-break some of the other parsers looking for a single “representative” h-card? I tried adding them back in, just to see what happened. X-Ray was still fixed, but IndieWebify.me complained about it, and the IndieWeb Webring still couldn’t work out who I was.
I could have left it here. X-Ray was the main target, IndieWebify might not have liked it but could at least still see some details, and IndieWeb Webring was a “nice to have” in a way. But truth be told, it would have nagged at me. What if these “minor” issues were the proverbial canary? I want to achieve the widest possible compatibility now, to reduce potential issues at a later date.
It was around about this point that I remembered that an h-feed itself could have its own embedded h-card, which could potentially solve the issue. After moving my ‘h-feed’ class to the body element, instead of the main element I’d been using up to then (so now it would use the sidebar h-card to represent the feed), it more or less did solve the issue.
It threw me at first that X-Ray didn’t list a separate h-card item like Pin13 did, but instead used the feed h-card for the authorship of every nested h-entry. Removing the now redundant author h-card from the entries stopped IndieWebify from grousing about these multiples. Oh, and here’s my new profile page on the IndieWeb Webring. Even my test microformat-based feed in Aperture/Monocle started displaying posts almost immediately after applying the change.
So, TL;DR: I moved my main h-card inside the h-feed, instead of it being a distinct entity on the page. By doing so I fixed pretty much all of the microformat parsing issues I was experiencing, which means “K” has taken a big leap forward… and I can stop pulling my hair out.
Shared to IndieNews (maybe) and IndieWeb.xyz.
I’ve been chipping away at several things over the last two weeks, mostly focussing on markup, presentation, and theme file organisation. I want to get these finalised before I look at theme customisation options. If you’ve visited the home page, you might have noticed the display of certain post types has been evolving, as I search for a pleasing balance between information, appearance, and not overwhelming a visitor with a wall of text. I don’t think I’m quite there yet, so expect a few more iterations. My current thinking is to treat the home page a bit like an “activity feed,” where action-type posts such as Likes are displayed in a summarised manner to give more emphasis to the written posts.
Of course, if you’re subscribed to the site RSS or JSON feeds in a reader of some description, you’ve probably not seen any difference!
The most challenging issue I’m facing is marking up posts and other page elements to be compliant with the specs for h-entry, h-card, and the various post kinds such as Like, Bookmark, Reply, Repost, and so on.
Every time I think I have the markup nailed down, something comes along to show me it’s broken in some way. I liked a post on Aaron’s site earlier, and instead of showing as the like I intended, it became a regular webmention showing my avatar as a photo, as I’ve clearly messed up the h-card and u-like-of markup in the last round of edits. So sorry to Aaron for mistakenly filling his responses with my face! The Pin13 parser shows the right elements as being present, but IndieWebify.me and Webmention.io both fail to pick them up. I’m guessing it’s an issue with how I’ve nested things, and/or some stray classes from previous experiments that I’ve not tidied up? I’ll try to get some time to look into it more tonight.
For other – minor – examples, the IndieWeb Webring also refuses to pick up my representative h-card, even though IndieWebify.me tells me I have this set up correctly. Aperture doesn’t seem to pick up anything other than my h-card when I use the microformats feed instead of RSS or JSON.
If the markup isn’t right then IndieWeb features are unlikely to work correctly – so fixing this is key for an “IndieWeb integrated” theme.
As an aside, and while I’m on the subject of frustrations, I’m having a hell of a time with the Webmentions plugin. Most of the time it feels like they just don’t get sent, as I frequently have to manually ping sites (such as with the earlier like post). There’s a chance this is related to the above markup issues; if the receiving site can’t parse the post that mentioned it, it might just throw the mention away? That feels like a bit of a stretch though.
I need to come up with a better way of testing these things, rather than “just give it a try on here and see if it’s worked or not…”
But anyway, “K” is progressing, even if it sometimes feels like one step forward/two steps back. I’d hoped to have a proper “release” ready for some time in February, but at the moment I think March or April are more likely. I’m only getting an hour or two a week to tinker at the moment, and I know I’m going to be busier with other things in February.
Syndicated to Indieweb.xyz and IndieNews (hopefully!)
Updated to add – IndieNews still doesn’t like my site. “Error: no_link_found”, every time.
Note: I found this mini How-To while having a clean-up of my GitHub repositories. I figured it would be worth sharing on my blog. Hopefully it is of use to someone. Warning: bad ASCII art ahead!
The Problem
- I have my repository hosted on GitHub
- I have an internal Git server used for deployments
- I want to keep these synchronised using my normal workflow
Getting Started
Both methods I’ll describe need a “bare” version of the GitHub repository on your internal server. This worked best for me:
    cd ~/projects/repo-sync-test/
    scp -r .git user@internalserver:/path/to/sync.git
Here, I’m changing to my local working directory, then using scp to copy the .git folder to the internal server over ssh.
More information and examples of this can be found in the online Git Book:
4.2 Git on the Server – Getting Git on a Server
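Incidentally, the approach described in that chapter is much the same thing done with a bare clone instead of copying the .git folder by hand. A rough equivalent, using the same placeholder paths as above:

    # Make a bare copy of the local repository...
    cd ~/projects/
    git clone --bare repo-sync-test repo-sync-test.git

    # ...then copy it to the internal server over ssh
    scp -r repo-sync-test.git user@internalserver:/path/to/sync.git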
Once the internal server version of the repository is ready, we can begin!
The Easy, Safe, But Manual Method:
    +---------+      +----------+      /------>
    | GitHub  |      | internal | -- deploy -->
    +---------+      +----------+      \------>
         ^                 ^
         |                 |
         |   +---------+   |
         \---|   ME!   |---/
             +---------+
This one I have used before, and it’s the least complex. It needs the least setup, but doesn’t sync the two repositories automatically. Essentially we are going to add a second Git remote to the local copy, and push to both servers as part of our workflow:
In your own local copy of the repository, checked out from GitHub, add a new remote a bit like this:
git remote add internal user@internalserver:/path/to/sync.git
This guide on help.github.com has a bit more information about adding Remotes.
You can change the remote name of “internal” to whatever you want. You could also rename the remote which points to GitHub (“origin”) to something else, so it’s clearer where it is pushing to:
git remote rename origin github
With your remotes ready, to keep the servers in sync you push to both of them, one after the other:
    git push github master
    git push internal master
- Pros: Really simple
- Cons: It’s a little more typing when pushing changes
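On the extra typing: Git can also attach more than one push URL to a single remote, so a plain git push sends to both servers in one go. A minimal sketch, assuming the GitHub remote is still called origin and using placeholder URLs:

    # Add both servers as push URLs on "origin". Note the first
    # --add --push replaces the default push URL, so the GitHub URL
    # has to be added back explicitly alongside the internal server.
    git remote set-url --add --push origin git@github.com:user/repo-sync-test.git
    git remote set-url --add --push origin user@internalserver:/path/to/sync.git

    # Confirm both push URLs are listed, then one push updates both
    git remote -v
    git push origin master

The trade-off is that a failed push to one of the two can leave them out of step, so the explicit two-push version above is arguably easier to reason about.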
The Automated Way:
    +---------+             +----------+      /------>
    | GitHub  |  ========>  | internal | -- deploy -->
    +---------+             +----------+      \------>
         ^
         |
         |        +---------+
         \--------|   ME!   |
                  +---------+
The previous method is simple and reliable, but it doesn’t really scale that well. Wouldn’t it be nice if the internal server did the extra work?
The main thing to be aware of with this method is that you wouldn’t be able to push directly to your internal server – if you did, then the changes would be overwritten by the process I’ll describe.
Anyway:
One problem I had in setting this up initially is that the local repositories on my PC are cloned from GitHub over SSH, which would require a lot more setup to allow the server to fetch from GitHub without any interaction. So what I did was remove the existing remote, and add a new one pointing to the HTTPS URL:
    # on the internal server
    cd /path/to/repository.git
    git remote rm origin
    git remote add origin https://github.com/chrismcabz/repo-syncing-test.git
    git fetch origin
You might not have to do this, but I did, so best to mention it!
At this point, you can test everything is working OK. Create or modify a file in your local copy, and push it to GitHub. On your internal server, do a git fetch origin to sync the change down to the server repository. Now, if you were to try and do a normal git merge origin at this point, it would fail, because we’re in a “bare” repository. If we were to clone the server repository to another machine, it would still reflect the previous commit.
Instead, to see our changes reflected, we can use git reset (I’ve included example output messages):
    git reset refs/remotes/origin/master
    Unstaged changes after reset:
    M       LICENSE
    M       README.md
    M       testfile1.txt
    M       testfile2.txt
    M       testfile3.txt
Now if we were to clone the internal server’s repository, it would be fully up to date with the repository on GitHub. Great! But so far it’s still a manual process, so let’s add a cron task to remove the need for human intervention.
In my case, adding a new file to /etc/cron.d/, with the contents below, was enough:
*/30 * * * * user cd /path/to/sync.git && git fetch origin && git reset refs/remotes/origin/master > /dev/null
What this does is tell cron that every 30 minutes it should run our command as the user “user”. Stepping through the command, we’re asking to:
- cd to our repository
- git fetch from GitHub
- git reset like we did in our test above, while sending the messages to /dev/null
That should be all we need to do! Our internal server will keep itself up-to-date with our GitHub repository automatically.
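Before relying on the schedule, it may be worth running the same command once by hand as the user named in the cron entry, so any path or permission problems show up immediately rather than disappearing into cron’s logs. Something like this (the user and paths are placeholders, as in the cron line above):

    # Run the sync once manually as the cron user; errors print to the terminal
    sudo -u user sh -c 'cd /path/to/sync.git && git fetch origin && git reset refs/remotes/origin/master'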
- Pros: It’s automated; only need to push changes to one server.
- Cons: If someone mistakenly pushes to the internal server, their changes will be overwritten
Credits
- Automatic synchronization of 2 git repositories – where the bulk of the Automated Way was adapted from.
- Adding A Remote
- 4.2 Git on the Server – Getting Git on a Server