
Site design standards

2252 words.

This site may look simple on the surface, but I put a lot of thought into it. I hold myself to a long list of requirements concerning accessibility, compatibility, privacy, security, and machine-friendliness.

Note: all references to "pixels" (px) in this section refer to CSS pixels.

Accessibility statement

I’ve made every effort to make this site as accessible as possible. More information about this site’s accessibility-related work is in a dedicated post.

Conformance status

The Web Content Accessibility Guidelines (WCAG) defines requirements for designers and developers to improve accessibility for people with disabilities. It defines three levels of conformance: Level A, Level AA, and Level AAA. I’ve made sure this site is fully conformant with WCAG 2.2 level AA.

Fully conformant means that the content fully conforms to the accessibility standard without any exceptions.

Additional accessibility considerations

Additionally, I strive to conform to WCAG 2.2 level AAA wherever applicable. I comply with all AAA criteria except for the following:

SC 2.4.9 Link Purpose (Link Only)
I try to follow this criterion, but it’s a work in progress; let me know if any link names can be improved! Link purpose is always clear from the surrounding context.
SC 3.1.5 Reading Level
The required reading ability often exceeds the lower secondary education level, especially on more technical articles.
SC 3.1.6 Pronunciation
I do not yet provide any pronunciation information.

I have only tested WCAG compliance in mainstream browser engines (Blink, Gecko, WebKit). Full details on how I meet every WCAG success criterion are on a separate page: Details on WCAG 2.2 conformance.

I also go further than WCAG in many aspects:

  • Rather than follow SC 2.5.5’s requirement of a minimum tap-target size of 44-by-44 pixels, I follow Google’s stricter guidelines. These mandate that targets are at least 48-by-48 pixels, with no overlap against any other target in a 56-by-56 pixel range. I try to follow this guideline for any interactive element that isn’t a hyperlink surrounded by body text.

  • I ensure at least one such 56-by-56 px non-interactive region exists on the page, for users with hand tremors or anyone who wants to tap the screen without clicking something.

  • With the exception of in-text borders, I only set custom colors in response to the prefers-color-scheme: dark media query. These custom colors pass APCA contrast requirements, all falling close to the ideal lightness contrast (Lc) of 90. They are also autism- and overstimulation-friendly: the yellow links are significantly de-saturated to reduce harshness.

  • I ensure that the page works on extremely narrow viewports without triggering two-dimensional scrolling. It should work at widths well below 200 CSS pixels.
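The tap-target spacing rule above can be checked mechanically. Here’s a minimal sketch of the geometry in Python; the function names and rectangle convention are my own illustrations, not part of any real audit tool:

```python
# Sketch of a tap-target audit per the guideline above: every interactive
# target should span at least 48x48 CSS px, and the 56x56 px zone centered
# on each target should not overlap any other target's zone.

def meets_size(w: int, h: int, minimum: int = 48) -> bool:
    """A target must be at least 48x48 CSS pixels."""
    return w >= minimum and h >= minimum

def zones_overlap(a, b, zone: int = 56) -> bool:
    """Check whether the 56x56 px exclusion zones centered on two
    targets (given as x, y, w, h rectangles) overlap."""
    def zone_rect(t):
        x, y, w, h = t
        cx, cy = x + w / 2, y + h / 2
        return (cx - zone / 2, cy - zone / 2, cx + zone / 2, cy + zone / 2)
    ax1, ay1, ax2, ay2 = zone_rect(a)
    bx1, by1, bx2, by2 = zone_rect(b)
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

# Two 48x48 targets 60 px apart (center to center) keep their zones clear:
print(zones_overlap((0, 0, 48, 48), (60, 0, 48, 48)))  # False
```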

Assessment and evaluation

I test each WCAG success criterion myself using the mainstream browser engines (Blink, Gecko, WebKit). I test using multiple screen readers: Orca (primary, with Firefox and Epiphany), NVDA (with Firefox and Chromium), Windows Narrator (with Microsoft Edge), Apple VoiceOver (with desktop and mobile Safari), and Android TalkBack (with Chromium).

I also accept user feedback. Users are free to contact me through any means linked on my About page.

Finally, I supplement manual testing with the following automated tools:

WAVE reports no errors. axe cannot determine certain contrast results, but otherwise reports no errors. IBM Equal Access reports no errors, only some items that need manual review.

I regularly run axe-core, the IBM Equal Access Accessibility Checker, the Nu HTML Checker (local build, latest commit), and webhint on every page in my sitemap. After filtering out false-positives (and reporting them upstream), I receive no errors.
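As an illustrative sketch of how such a checker loop begins, here’s how page URLs could be pulled out of a sitemap before being fed to each tool. The sitemap content below is a made-up stub, not my real sitemap:

```python
# Extract every page URL from a sitemap (sitemaps.org schema) so each
# one can be handed to the accessibility checkers in turn.
import xml.etree.ElementTree as ET

SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text: str) -> list[str]:
    """Return the <loc> URL of every <url> entry in a sitemap."""
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.findall(".//sm:loc", SITEMAP_NS)]

sitemap = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/posts/</loc></url>
</urlset>"""
print(sitemap_urls(sitemap))  # ['https://example.com/', 'https://example.com/posts/']
```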

Due to issue 1008 in IBM Equal Access Checker, I remove all instances of content-visibility from my site’s CSS before running achecker from the command line.
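That workaround could be scripted along these lines. This is a simplified sketch (the regex and example stylesheet are my own assumptions), not the exact command I run:

```python
# Strip every content-visibility declaration from a stylesheet before
# running achecker, per the workaround above. The regex assumes the
# property name never appears inside comments or strings.
import re

def strip_content_visibility(css: str) -> str:
    """Remove content-visibility declarations, leaving other rules intact."""
    return re.sub(r"content-visibility\s*:\s*[^;}]+;?", "", css)

css = "article { content-visibility: auto; margin: 0; }"
print(strip_content_visibility(css))  # article {  margin: 0; }
```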

Compatibility statement

The website is built on well-structured, semantic, polyglot XHTML5 (including WAI-ARIA and DPUB-ARIA extensions where appropriate), enhanced with CSS for styling. The website does not rely on modern development practices such as CSS Grid, Flexbox, SVG 2, Web fonts, or JavaScript; this should improve support in older browsers such as Internet Explorer 11. No extra plugins or libraries should be required to view the website.

This site sticks to Web standards. I regularly run a local build of the Nu HTML Checker, xmllint, and html-proofer on every page in my sitemap, and see no errors. I do filter out false positives from the Nu HTML Checker and report them upstream when I can.

I also perform cross-browser testing for both HTML and XHTML versions of my pages. I test with, but do not necessarily endorse, a large variety of browsers:

Mainstream engines
I maintain excellent compatibility with mainstream engines: Blink (Chromium, Edge, QtWebEngine), WebKit (Safari, Epiphany), and Gecko (Firefox).
Tor Browser
My Tor hidden service also works well with the Tor Browser, with the exception of a page containing an <audio> element. The <audio> element can’t play in the Tor Browser due to a bug involving NoScript and Firefox’s handling of the sandbox CSP directive. To work around the issue, I include a link to download the audio.
Mainstream engine forks
Pale Moon and recent versions of K-Meleon use Goanna, a single-threaded fork of Firefox’s Gecko engine. Ultralight is a proprietary, source-available fork of WebKit focused on lightweight embedded webviews. My site should work in both engines without any noticeable issues.
Alternative engines
I test compatibility with current alternative engines: the SerenityOS browser, Servo, NetSurf, Kristall, and litehtml. I have excellent compatibility with litehtml and Servo. The site is usable in NetSurf and the SerenityOS browser. Of these engines, only Servo supports <details>. The SerenityOS browser doesn’t support ECDSA certificates, but the Tildeverse mirror works fine. The SerenityOS browser also has some issues displaying my SVG avatar, and does not attempt to use the PNG fallback.
Textual browsers
The site works well with textual browsers. Lynx and Links2 are first-class citizens for which all features work as intended. I also test in felinks (an ELinks fork), edbrowse, and w3m. w3m doesn’t support soft hyphens, but the site is otherwise usable in it. I maintain compatibility with these browsers by making CSS a strictly-optional progressive enhancement and using semantic markup. In all textual browsers, the aforementioned incomplete <details> handling applies.
Abandoned engines
I occasionally test abandoned engines, sometimes with a TLS-terminating proxy if necessary. These engines include Tkhtml, KHTML, Dillo (note 1), Internet Explorer (note 2) (with and without compatibility mode), Netscape Navigator, old Presto-based Opera versions (note 3), and outdated versions of current browsers. The aforementioned issue with <details> applies to all of these engines. I use Linux, so testing in browsers like Internet Explorer depends on my access to a Windows machine. Besides the <details> issue, the site works perfectly well in Internet Explorer 11 and Opera Presto. The site has layout issues but remains usable in Tkhtml, KHTML, and Netscape.

I strive to maintain compatibility to the following degrees:

  • Works without major issues in mainstream engines, the Tor Browser, Goanna, and Ultralight.
  • Fully operable in textual browsers, litehtml, and NetSurf. Some issues (e.g. missing <details>) might make the experience unpleasant, but no major functionality will be disabled.
  • Baseline functionality in abandoned engines, Dillo, and the SerenityOS browser. Some ancillary features may not work (e.g. forms for Webmentions and search), but you’ll still be able to browse and read.

Some engines I have not yet tested, but hope to try in the future:


Machine-friendliness

I think making a site machine-friendly is a great alternative perspective to traditional SEO, the latter of which I think tends to incentivise low-quality content and makes searching difficult. It’s a big part of what I’ve decided to call “agent optimization”.

This site is parser-friendly. It uses well-formed polyglot (X)HTML5 markup containing microdata, microformats2, and legacy microformats. Microformats are useful for IndieWeb compatibility; microdata is useful for various forms of content-extraction (such as “reading mode” implementations) and search engines. I’ve also sprinkled in some Creative Commons vocabulary using RDFa syntax.

I make Atom feeds available for articles and notes, and have a combined Atom feed for both. These feeds are enhanced with OStatus and ActivityStreams XML namespaces.

All HTML pages have an XHTML5 counterpart, currently identical except for the Content-Type HTTP header. To see this counterpart, add “index.xhtml” to the end of a URL, or request a page with an Accept header containing application/xhtml+xml but not text/html. All pages parse correctly in every XHTML parser I could try.
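That negotiation rule can be expressed as a tiny predicate. A minimal sketch (this mirrors the rule as stated; real content negotiation would also weigh q-values, and my server’s actual logic may differ):

```python
# Serve the XHTML5 variant only when the Accept header lists
# application/xhtml+xml but not text/html, per the rule above.

def serve_xhtml(accept: str) -> bool:
    """Return True when the XHTML5 counterpart should be served."""
    types = [part.split(";")[0].strip() for part in accept.split(",")]
    return "application/xhtml+xml" in types and "text/html" not in types

print(serve_xhtml("application/xhtml+xml"))                  # True
print(serve_xhtml("text/html,application/xhtml+xml;q=0.9"))  # False
```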

Reading mode compatibility

The aforementioned metadata (microdata, microformats) has improved reading-mode compatibility.

This site should fully support the Readability algorithm, which is used by Firefox and Vivaldi and is the basis of one of the multiple distillers used by Brave. Readability is the only article-distillation algorithm I actively try to support.

This site happens to fully support Apple’s Reader Mode and Azure Immersive Reader (AIR), the latter of which powers Microsoft Edge’s reading mode. Unfortunately, AIR applies a stylesheet atop the extracted article that makes figures difficult to read: it centers text in figures, including pre-formatted blocks. I filed an issue on AIR’s feedback forum, but that forum was subsequently deleted.

This site works well in the Diffbot article extractor. Diffbot powers a variety of services, including Instapaper.

This site does not fare well in Chromium’s DOM Distiller, whose distillation techniques are flawed. Regions with high link densities, such as citations, get filtered out. DOM Distiller also drops footnotes, and sometimes cuts off final DPUB-ARIA sections (acknowledgements, conclusions).

Static IndieWeb

One of my goals for this site was to see just how far I could take IndieWeb concepts on a fully static site with ancillary services to handle dynamism. Apart from the search-results page, this site is static on the back-end (all pages are statically-generated). All pages, including the search-results page, are fully static on the front-end (no JS).

The IndieMark page lists all the ways you can “IndieWeb-ify” your site.

Features I have already implemented

  • IndieAuth compatibility, using the external IndieLogin service.

  • Microformats: representative h-card, in-text h-card and h-cite when referencing works, h-feed.

  • Sending and receiving Webmentions. I receive Webmentions with webmentiond, and send them from my own computer using Pushl.

  • Displaying Webmentions: linkbacks, IndieWeb “likes” (not silo likes), and comments. I based their appearance on Tumblr’s display of interactions.

  • Backfeeding content from silos: I’m only interested in backfed content containing discussion, not “reactions” or “likes”. Powered by Bridgy.
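Underlying the Webmention-sending step above is endpoint discovery: per the W3C Webmention spec, a receiver advertises its endpoint via an HTTP Link header or a <link>/<a> element with rel="webmention". A minimal sketch of the HTML half of discovery (illustrative only; I use Pushl rather than this code):

```python
# Find the first rel="webmention" endpoint advertised in a page's HTML,
# the in-markup half of Webmention endpoint discovery.
from html.parser import HTMLParser

class EndpointFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is not None or tag not in ("link", "a"):
            return
        a = dict(attrs)
        rels = (a.get("rel") or "").split()
        if "webmention" in rels and "href" in a:
            self.endpoint = a["href"]

finder = EndpointFinder()
finder.feed('<link rel="webmention" href="https://example.com/webmention">')
print(finder.endpoint)  # https://example.com/webmention
```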

Features I am not interested in

  • Authoring tools, either through a protocol (e.g. MicroPub) or a dynamic webpage: I prefer writing posts in my $EDITOR and deploying with git push, letting a CI job build and deploy the site with make deploy-prod. This allows me to participate with the social Web using the same workflow I use for writing code, avoiding the need to adopt and learn new tools.

  • Full silo independence: I want to treat my site as a “filtered” view of me, one that I keep searchable and public. On other silos I might shitpost or post short-lived, disposable content. These aren’t private, but I want them to remain less prominent. I POSSE content to other places, but I don’t exclusively use POSSE.

  • Sharing my likes, favorites, reposts: I find these a bit too shallow to share here. I prefer “bookmarks”, where I can give an editorialized description of the content I wish to share along with any relevant tags. I’ll keep simple likes and reposts to silos.

  • Rich reply-contexts: I’d rather have users click a link to visit the reply and use quoted text to respond to specific snippets, similar to interleaved-style email quoting. Most of my replies are to Fediverse posts; on the Fediverse, people are often (understandably!) averse to scraping and archiving content. For that reason, I only show a tiny excerpt of content, and I ask for permission to POSSE replies to unlisted posts by #nobot accounts.

Features I am interested in

  • WebSub. I had some issues with Superfeedr; I think I’ll resort to running my own single-user hub.

  • Automatic POSSE to the Fediverse (would be difficult with reply-contexts, and Bridgy doesn’t support non-Mastodon features like Markdown).

  • Taxonomies (tags).

Low-priority features I have some interest in

  • RSVPs: I don’t attend many events, let alone events for which I would broadcast my attendance. A page for this would be pretty empty.

  • Event posts: same reason as above.

  • Running my own IndieAuth authorization endpoint to replace the external IndieLogin service.

  • Some sort of daemon to replace Bridgy. It sounds like a large undertaking because I’d have to implement it from scratch: Bridgy is written in Python, but I want every service on my server to be a statically-linked native executable.


Privacy

This site is privacy-respecting. Its CSP blocks all scripts, third-party content, and other problematic features. I describe how I go out of my way to reduce the information you can transmit on this site in my privacy policy.
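A script-blocking policy takes roughly this shape. This is an illustrative example only, not the site’s actual header (those are documented in the privacy policy):

```
Content-Security-Policy: default-src 'none'; img-src 'self';
  style-src 'self'; frame-ancestors 'none'; base-uri 'none';
  form-action 'self'
```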


Notes

  1. Although there’s been no official announcement of Dillo’s demise, development has been inactive for a while. The official site, including its repository, is down; I mirrored the Dillo repository.

  2. Internet Explorer’s engine isn’t abandoned, but the consumer version I have access to is. 

  3. Opera Presto isn’t really abandoned: Opera Mini’s “Extreme” mode still uses a server-side Presto rendering engine. That being said, I do test with the outdated desktop Presto engine in a sandboxed environment.


