<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://echolevel.co.uk/feed.xml" rel="self" type="application/atom+xml" /><link href="https://echolevel.co.uk/" rel="alternate" type="text/html" /><updated>2026-03-02T23:03:05+00:00</updated><id>https://echolevel.co.uk/feed.xml</id><title type="html">echolevel</title><subtitle>Brendan O&apos;Callaghan Ratliff - composer, sound engineer and game developer specialising in realtime dynamic music systems. Audio Director at Cardboard Sword.</subtitle><entry><title type="html">Metron SDK</title><link href="https://echolevel.co.uk/blog/2026-02-28-metronsdk-ue5-audio/" rel="alternate" type="text/html" title="Metron SDK" /><published>2026-02-28T00:00:00+00:00</published><updated>2026-02-28T00:00:00+00:00</updated><id>https://echolevel.co.uk/blog/MetronSDK-UE5-audio</id><content type="html" xml:base="https://echolevel.co.uk/blog/2026-02-28-metronsdk-ue5-audio/"><![CDATA[<p>Metron is a dynamic music system for games that I’ve been developing over the last few years as an in-house audio tool at Cardboard Sword. It comprises a tracker-style DAW editor for composing and exporting music assets, and a static SDK library for playing that music back at runtime in a games engine. Crucially, the music is synthesised in realtime rather than rendered to a ‘flat’ audio format like WAV or MP3 which means that all of the individual notes, expression controls and arrangement events are available to the game as rich data for bi-directional sync.</p>

<p>That means you can:</p>
<ul>
  <li>sync game events to the music’s beat, or fractions of a beat</li>
  <li>play/mute/modify individual instruments or sounds when game events occur</li>
  <li>bind visual state to a particular instrument’s characteristics continuously in realtime (e.g. material emissiveness follows the kick drum volume envelope, or colour value follows a filter sweep)</li>
  <li>smoothly transpose or tempo-adjust the music according to some narrative state</li>
  <li>swap one instrument or sound with another on the fly</li>
  <li>control a song’s mix state or arrangement position via player achievements</li>
</ul>

<p>and so on - there’s potential for incredibly finely controlled granular stuff as well as zooming out for hands-off, logic-driven dynamic music.</p>

<p><br /></p>
<figure class="post-video">
  <video controls="" playsinline="" preload="metadata" poster="/assets/video/metron_sdk_example.jpg">
    <source src="/assets/video/metron_sdk_example.mp4" type="video/mp4" />
    Your browser does not support the video tag.
  </video>

  <figcaption class="post-caption">
    A basic example of Metron Tracker to MetronUE workflow in Unreal Engine 5
  </figcaption>
</figure>

<h4 id="trackers-a-very-brief-history">Trackers (a very brief history)</h4>

<p>Why haven’t games always been doing this? Well, they used to! The first publicly available music tracker was The Ultimate Soundtracker written for the Commodore Amiga in the 1980s by Karsten Obarski, and an overwhelming majority of games for that platform used his .MOD format (or something similar) to exploit the Amiga’s then-revolutionary ability to play 4 simultaneous channels of decent quality sampledata in complex arrangements for almost zero CPU cost, thanks to the Amiga’s fast DMA and custom chipset. In the 90s as the format expanded on the PC, and as PCs began to be fitted with sound cards like the SoundBlaster 16, evolutions of the 4-channel .MOD format to 8 and 32-channel varieties with extended capabilities became popular as an alternative to MIDI or flat-rendered CD-ROM audio for games such as Crusader: No Remorse, Unreal Tournament, One Must Fall and Deus Ex.</p>

<p>Not all games took full advantage of the possibilities for using the rich data that tracked music replayer routines emitted as a by-product of playing the music, but where the choice was made to use these formats even into the early 00s, it was often due to one or more of these factors:</p>
<ul>
  <li>vastly less storage footprint than PCM/WAV, because whereas a tracker module stores e.g. a drum sample once and can play it a thousand times at no extra storage cost, a rendered song would effectively have to store it a thousand times sequentially on disk; a 5 minute song might be 30MB as an uncompressed, low-samplerate WAV but 3MB as a tracker module</li>
  <li>vastly less CPU usage to play and render in realtime versus then-nascent decompression algorithms for MP3 and similar</li>
  <li>lots of composers in the games industry were fluent in the tracker paradigm, especially if they’d been active in the Amiga/PC demoscene in Europe and various tracker scenes in the US</li>
  <li>it was often considered preferable to MIDI music which - while fondly remembered and also incredibly efficient for storage and CPU - targeted a wildly fragmented range of consumer-side hardware, all of which sounded different (lucky owners of a Roland MT-32 had a far superior Monkey Island experience to those who had to make do with whatever crappy General MIDI synth was on their unbranded Soundblaster clone). Lots of composers didn’t want to roll the dice on how their music would sound to the player, and sampled sound can be <em>anything</em>, not just a fixed preset in a General MIDI synth.</li>
</ul>

<p>(There’s a whole lot more that could be written about how in-house European and Japanese game developers wrote music ‘drivers’ for console platforms with dedicated sound synthesis chips, or combined the sampledata and MIDI approaches into proprietary systems on a per-game basis right up until the PlayStation 2 era, but this article’s getting too long already.)</p>

<p>That only gets us up to the turn of the century… These days, in the 2020s, games often ship with a 60GB+ footprint where even a gargantuan quantity of music assets is a drop in the ocean. They can also be heavily compressed with mature codecs like OGG and Bink then decompressed at runtime on modern multi-core platforms without hitching the game thread, and usually without a player ever noticing a dip in framerate. Most composers work in the studio with exceptionally high quality synths, sample libraries and often real orchestras to produce arrangements and mixes that would still be prohibitively expensive to recreate in realtime on end-user platforms (especially while those platforms are already going flat-out trying to run the game itself) so for a lot of games, especially big-budget franchises, the correct strategy is to have music rendered into flat stems so that mixes and arrangements can be adjusted dynamically.</p>

<p><a href="#_" id="lb-assets-img-metron-metronscreen00-png" class="lightbox" aria-label="Close image">
  <img src="/assets/img/metron/MetronScreen00.png" alt="Metron Tracker's pattern editor screen" />
</a></p>

<figure class="post-figure">
  <a href="#lb-assets-img-metron-metronscreen00-png" class="lightbox-trigger">
    <img src="/assets/img/metron/MetronScreen00.png" alt="Metron Tracker's pattern editor screen" />
  </a>

  
    <figcaption class="post-caption">
      Metron Tracker's pattern editor screen
    </figcaption>
  
</figure>

<h4 id="why-use-tracker-music-in-games-in-2026">Why use tracker music in games in 2026?</h4>

<p>Plenty of reasons! Musical aesthetics play a part: modern tracked music <em>can</em> sound indistinguishable from ‘traditional’ flat-rendered stems (indeed, Metron can cheerfully play back those assets like any other modern dynamic music system), but it can also lean into a whole range of experimental, eccentric or nostalgic vibes. But by far the most compelling reason is to exploit the data-richness of a music format which <em>is</em> its own project file; the file (or module, in oldschool parlance) is a bunch of sampledata plus a bunch of binary musical event data. There’s no one-way lossy encoding - the sounds carry with them all the instructions required for them to be played back correctly.</p>

<p>We released two games in 2026 - The Siege and The Sandfox, for PC, and Transmission: Shortwave on Meta Quest 3. Both used Metron, and both exploited the data-rich bidirectional sync it offers. In Sandfox (a stealth-based metroidvania) we had tension risers, enemy proximity heartbeats and combat state percussion layers being triggered and modulated by gameplay state, quantised to correct musical time within the song, and dynamically participating in the score in a musical way that felt natural and atmospheric while also telegraphing crucial information to the player. Proximity to an enemy controlled the filter cutoff on a ‘warning’ bass throb; enemy suspicion states dictated the volume and complexity of percussion polyrhythms; audible stings for pickups or telegraphing safe/dangerous areas were incorporated into the music itself - recognisable enough to be consistent, but subtly pitched and timed to keep them in the right musical key and correctly quantised to the beat.</p>

<p>Transmission: Shortwave, a retro-futuristic VR delivery courier driving game with an emphasis on low stakes and chillout vibes, leant even more into Metron’s potential as the ‘clock’ of a rhythm game by syncing colours, geometry scale, car indicators and horn, even skybox star twinkling to the 90s jungle/drum and bass soundtrack. We set it up in a fully automated, fire-and-forget way to start with, but later one of our designers hand-authored specific assets’ sync responders to complement the characteristics of each song - a breakbeat here, a Reese bass there, hi-hats all over the place - and the result is a rhythmic environment where human curation keeps things feeling good, <em>breathing</em> to the beat, rather than allowing everything to go haywire.</p>

<p>Other proof-of-concept systems that’ll make it into future games include gameplay states and achievements moving song transport in and out of ‘loop traps’, where various rules can dictate how the game conducts the music; gameplay systems like weather states actually being directed by the score (because music shouldn’t always be subordinate to gameplay!); pickup/weapon/impact sounds being instantiated as melodic sequences in the currently-playing song, transposed to remain musically coherent (actually we do that in Sandfox, but it could go further).</p>

<p><a href="#_" id="lb-assets-img-metron-metronscreen01-png" class="lightbox" aria-label="Close image">
  <img src="/assets/img/metron/MetronScreen01.png" alt="Metron Tracker's multisample instrument editor" />
</a></p>

<figure class="post-figure">
  <a href="#lb-assets-img-metron-metronscreen01-png" class="lightbox-trigger">
    <img src="/assets/img/metron/MetronScreen01.png" alt="Metron Tracker's multisample instrument editor" />
  </a>

  
    <figcaption class="post-caption">
      Metron Tracker's multisample instrument editor
    </figcaption>
  
</figure>

<h4 id="sowhat-is-metron">So…what <em>is</em> Metron?</h4>

<p>Initially Metron was a Blueprint in Unreal Engine 4. Actually, about a dozen enormous spaghetti-Blueprints. It proved the concept and taught me a lot about Unreal Engine and its then-new Audio Mixer, but ran very inefficiently - not least because it was doing heavy string-comparison operations per tick on the game thread in a system that’s really not designed for that kind of thing. I’m quite proud that it actually ran, though! The sample instrument was very rudimentary and a lot of the music was being generated by Unreal Synth Components - a few of which are provided as examples by Epic Games in the engine.</p>

<p>It was cool, and (usually) more fun than aggravating, but it was butting up against some hard constraints of UE’s Audio Mixer pipeline - constraints that exist for extremely good reason (to keep the pipeline efficient and reliable for the insane demands that modern games place upon it) but which meant that the way Metron’s sound generators, effects chains and mix stages had to be structured was far too fragile and wonky to ship.</p>

<p>Some day I’ll write up the iteration process in full, because it was a wild ride (I had to teach myself C++ and the joys of threading along the way), but here’s the short version:</p>
<ol>
  <li>A bunch of Blueprints, built-in synth components, effects presets; UMG/Slate editing interface - no custom code or engine modifications</li>
  <li>A bunch of Blueprints, some custom synth components and effects, some C++ to instantiate a better effects/mix buss structure on the fly</li>
  <li>A Metron plugin for UE4: one enormous custom synth component containing all the music sequencer logic; instanced synth components and sample players; a <em>proper</em> tracker interface in Dear ImGui allowing music to be edited <em>in game</em> via a semi-transparent overlay (pretty cool! But not <em>that</em> useful!)</li>
  <li>Some profound refactoring to bring <em>ALL</em> DSP work inside the synth component, giving me proper performance metrics and allowing me to segment buffer-block processing for extremely tight timing</li>
  <li>A new standalone Metron tracker in C++/Cmake, completely separate from Unreal Engine(!), with all aspects of the Metron music system modularised into e.g. core, I/O, UI and with build profiles split into ‘standalone’ and SDK</li>
  <li>… then a <em>new</em> Unreal Engine plugin that can wrap the SDK, import music files saved by the standalone editor, and integrate them easily into a game</li>
</ol>

<p><a href="#_" id="lb-assets-img-metron-metronscreen02-png" class="lightbox" aria-label="Close image">
  <img src="/assets/img/metron/MetronScreen02.png" alt="Metron Tracker's audio mixer section" />
</a></p>

<figure class="post-figure">
  <a href="#lb-assets-img-metron-metronscreen02-png" class="lightbox-trigger">
    <img src="/assets/img/metron/MetronScreen02.png" alt="Metron Tracker's audio mixer section" />
  </a>

  
    <figcaption class="post-caption">
      Metron Tracker's audio mixer section
    </figcaption>
  
</figure>

<p>Right now, Metron Music System exists in two parts: a standalone tracker DAW app for Win32, macOS and potentially Linux (I try to keep the codebase cross-platform); and an SDK consisting of static libraries for metron_core, metron_io and metron_log.</p>

<p>The standalone tracker is a 2.5MB executable that any composer with some experience of trackers (or none!) can use to write some music, save out that music, and send it to the game devs as a deliverable asset. This beats requiring a composer to download and install 50GB of Unreal Engine plus ~20GB of project, build it all, then run the entire game editor just to tweak some notes.</p>

<p>The SDK contains all the header includes necessary to write a wrapper for the game engine of your choice. I’ve made a wrapper for UE since that’s what we mostly use at Cardboard Sword. A wrapper for Unity or Godot should be straightforward too - as long as you can parse the Metron file format (a zip containing JSON data and PCM data) and pass that data to the core, then use the core to fill an audio callback buffer, you’ll be able to play the music and get all the realtime sync data you want in a threadsafe way.</p>

<p>Yeah, it <em>was</em> cool to have the tracker editor running in a Slate tab in UE, and as an overlay during PIE (Play In Editor) sessions and even in packaged builds, but iterating compositions in the standalone editor is a way better workflow, especially if you’re contracting the music out to freelancers.</p>

<p>So the very short version is that it’s a music middleware system, in-house to Cardboard Sword for the time being, but proven in two shipped titles and being actively developed.</p>

<p>Here’s yet another set of bullet points with some cool features of Metron:</p>
<ul>
  <li>The default instrument type is the SampleVoice - actually a multisampler with Kontakt-style key zones and velocity zones, FastTracker-style multistage volume/pitch/pan envelopes plus an assignable effect parameter envelope, internal phrase sequencers, plus the usual range of sample mangling pitch-and-time effects you’d expect from a tracker: sub-tick note cuts, microtiming, timestretching, retrigger, and so on.</li>
  <li>The custom synth engines are a hybrid 4-operator FM/PD synth; a Karplus-Strong string modelling synth; and a bit-accurate Atari YM2149 soundchip synth built on the ayumi AY chip emulator</li>
  <li>There’s a full suite of discrete DSP effects that I wrote from scratch to be as efficient as possible, but with enough flexibility to allow for comprehensive master buss chains, creative time/modulation chains and extensive realtime automation. They can be used in the same way as VST plugins in a variety of contexts and routing paths, but unlike most VST plugins they’re optimised for use in games where system resources are at a premium, and audio is not the sole priority like it is in a conventional DAW</li>
  <li>There are three fixed send effects (delay, reverb and chorus) that can be used by all instruments and group busses</li>
  <li>The group buss system allows an effects chain to be built on a group buss so that any channel’s output can be routed through it</li>
  <li>The master effects chain allows all channels, sends and group busses to be processed through e.g. compressors, EQs and limiters (or indeed anything you want)</li>
  <li>Polyphony is dictated by the channel count - since performance is critical, it’s important not to let polyphony run away from you, but this also means you can arbitrarily route an instrument to and from group busses by triggering it in different channels</li>
  <li>Obviously because this is all runtime sequence data and realtime DSP, transposing the pitch of a song without altering the tempo is essentially free in terms of CPU, and adjusting the tempo without altering the pitch is equally free.</li>
  <li>Aggressive optimisations throughout mean that, for example, <em>Transmission: Shortwave</em>’s soundtrack of reasonably complex 6 to 8 channel 90s jungle tunes with reverb, delay, chorus, compressor, limiter, EQ and some filters use an average of 8% of audio CPU budget on Meta Quest 3 hardware (ARM, Snapdragon XR2 Gen 2). Similar arrangements under Windows on a Ryzen 7 5800X are closer to 4%.</li>
</ul>
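<p>To sketch why that last transposition point holds for the sample-based voices (illustrative code, not Metron’s internals): a sample player already advances its read position by a fractional increment every output frame, so transposing only rescales a number that was being multiplied anyway, and a tempo change only rescales the sequencer tick length in samples without touching any per-voice pitch state:</p>

```cpp
#include <cmath>

// Transposition: scale the per-frame read increment by 2^(semitones/12).
// The resampling work happens regardless of the increment's value, so
// changing pitch costs nothing extra.
double transposedIncrement(double baseIncrement, double semitones) {
    return baseIncrement * std::pow(2.0, semitones / 12.0);
}

// Tempo: rescale the tick length in samples. No per-voice pitch state
// is involved, so the pitch of playing notes is unaffected.
double samplesPerTick(double sampleRate, double bpm, int ppqn) {
    return sampleRate * 60.0 / (bpm * ppqn);
}
```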

<h4 id="epilogue">Epilogue</h4>

<p>Thanks for reading! If you’ve been following Metron’s progress, you’ll know I’d like for Metron to be available to everyone in some way, at some point, but I’ve never made any promises or predictions because there are various things to figure out first (and there’s so rarely enough time to figure things out). I always appreciate the support and interest it gets when I post or talk about it, and even if I can’t immediately invite people to try it out and enjoy it, I hope that my occasional deep-dives on the development experience are of some use to others who are trying to do cool stuff in game audio generally, and Unreal Engine’s fantastic Audio Mixer in particular.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Metron is a dynamic music system for games that I’ve been developing over the last few years as an in-house audio tool at Cardboard Sword. It comprises a tracker-style DAW editor for composing and exporting music assets, and a static SDK library for playing that music back at runtime in a games engine. 
Crucially, the music is synthesised in realtime rather than rendered to a ‘flat’ audio format like WAV or MP3 which means that all of the individual notes, expression controls and arrangement events are available to the game as rich data for bi-directional sync.]]></summary></entry><entry><title type="html">ARM vs x86 memory ordering in Unreal Engine</title><link href="https://echolevel.co.uk/blog/2025-10-30-armvsx86-ue5-audio/" rel="alternate" type="text/html" title="ARM vs x86 memory ordering in Unreal Engine" /><published>2025-10-31T00:00:00+00:00</published><updated>2025-10-31T00:00:00+00:00</updated><id>https://echolevel.co.uk/blog/ARMvsx86-UE5-audio</id><content type="html" xml:base="https://echolevel.co.uk/blog/2025-10-30-armvsx86-ue5-audio/"><![CDATA[<p>I thought I’d write up the most infuriating bug I’ve had to deal with lately - how it manifested; the many things that could’ve caused it (but didn’t); what actually did cause it; and how I fixed it.</p>

<p>Metron, our realtime dynamic music middleware, is a portable C++ replayer core which is wrapped by a tracker-style DAW interface and can be used standalone or as a plugin for Unreal Engine. Writing Metron is effectively how I migrated my Java skills to C++ and it’s been a very rewarding experience on the whole. UE4’s Audio Mixer gave me a great sandbox in which to learn how to create DSP effects and synths, while breaking Metron out to raw C++ has in turn been a wild ride of learning to cope without all the safety nets UE gives you.</p>

<p>Learning realtime programming, you very quickly run into the various issues that certain well-established golden rules exist to prevent: to generalise, they mostly boil down to <em>“do not cause or allow anything to happen if you don’t know exactly how long it will take”</em>. With realtime audio, you’re not doing all your work 60 or 120 times per second like the graphics and gameplay folks are. You have to do it 48,000 times per second - and if you can’t, your game’s unplayable. People will forgive minor visual glitches a lot more willingly than stuttering audio. This means you have a tight time budget that you can calculate by estimating an idealised maximum time that your per-buffer DSP work can take before buffer underruns occur, e.g.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>(512 / 48000) * 1000 = 10.67ms
</code></pre></div></div>
<p>and then ensuring that you do all your work in slightly less than that time headroom - ideally a LOT less than that.</p>

<p>Keeping to this budget means you can’t have your core DSP loop sprinkled with calls to anything that happens in unbounded time, which is…actually quite a lot of stuff if you want your code to form part of any interactive application, game or VST plugin. So your core loop needs to be free of potentially latent or expensive things such as memory allocation (including vector/TArray resizing, which does memory allocation under the hood), UI updates, networking calls, input polling and so on; all those things need to be run on a different thread, and synchronisation between your realtime thread and your ‘everything else’ thread (let’s call it the UI thread) should be very carefully managed so that it a) doesn’t happen way more often than it absolutely needs to and b) doesn’t happen in such a way that causes - <em>or could potentially cause due to factors outside your control</em> - a buffer underrun. Those factors, by the way, include almost everything else that’s going on in your game or the user’s operating system.</p>

<p>Safe communication between RT (realtime audio) and UI threads is something I had to learn pretty early on; Unreal Engine has some good tools for this and I was able to replicate some of those behaviours in raw C++. The idea is for the producing thread to queue up events which the consuming thread can dequeue at a safe interval of its choosing, so that data races can be avoided. If the UI thread gets a user interaction that should affect the sound (i.e. a knob is turned which should control a sound’s amplitude), then an event is pushed into the queue so that it can be dequeued on the RT side when it’s safe to do so - usually at the buffer block processing boundary, just before the next buffer block’s DSP work is done. The audio callback is called, the queue is flushed and all lambdas/etc are fired, then the buffer loop begins so that per-sample work can be done.</p>
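<p>As a rough illustration (a sketch, not Metron’s actual implementation - the names here are invented), a fixed-capacity single-producer/single-consumer queue of commands, drained once per buffer block, looks something like this:</p>

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <functional>

// Sketch of a fixed-capacity SPSC command queue: the UI thread pushes
// commands, the audio thread drains them at the buffer-block boundary.
// Capacity is fixed up front so the realtime side never allocates.
class CommandQueue {
public:
    // UI thread: enqueue a parameter change. Returns false if full.
    bool push(std::function<void()> cmd) {
        const std::size_t head = mHead.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % kCapacity;
        if (next == mTail.load(std::memory_order_acquire))
            return false; // full - the caller decides whether to retry
        mSlots[head] = std::move(cmd);
        mHead.store(next, std::memory_order_release); // publish the slot
        return true;
    }

    // Audio thread: run all pending commands once per block, just
    // before the block's DSP work begins.
    void drain() {
        std::size_t tail = mTail.load(std::memory_order_relaxed);
        while (tail != mHead.load(std::memory_order_acquire)) {
            mSlots[tail](); // apply the change on the RT side
            tail = (tail + 1) % kCapacity;
            mTail.store(tail, std::memory_order_release);
        }
    }

private:
    static constexpr std::size_t kCapacity = 256;
    std::array<std::function<void()>, kCapacity> mSlots;
    std::atomic<std::size_t> mHead{0};
    std::atomic<std::size_t> mTail{0};
};
```

<p>One caveat with std::function in particular: a captured lambda can own heap memory, so a production version would recycle fixed-size, trivially-copyable command structs instead, removing any chance of allocation or deallocation sneaking onto the audio thread.</p>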

<p>There’s a similar need for extreme care when passing data back to the UI thread from the RT thread, and in an audio context that’s often going to be the sort of telemetry that a user needs <em>in addition</em> to the audio they can hear: dB levels of each track, so a level meter can be drawn on screen; current sequencer transport position and playback state; that sort of thing. If the data is a single integer, float or bool, you’ll often get away with using a std::atomic variable that can be safely written on one side and read on the other. There’s a range of memory order options to choose from. That’s foreshadowing…</p>

<p>Metron is a dynamic music system, so being able to communicate between UI and RT threads only once <em>per buffer</em> is inadequate; we need much tighter timing than that, so I segment the buffer into even smaller chunks according to musical sequencer timing (all of which is calculated on the audio thread). You could segment the buffer per-sample, i.e. run 512 processing loops in a 512-sample block with a dequeue from the UI thread between each one, but that would normally destroy your CPU budget. For a PPQN of 24 (which is low by modern DAW standards but adequate for Metron), even at high musical tempos we only need to segment each 512 or 1024 sample buffer a few times.</p>
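<p>The shape of that segmentation, as an illustrative sketch (names and defaults are mine, not Metron’s): derive the tick length in samples from the tempo and PPQN, then walk the buffer in tick-sized chunks, firing sequencer events only at the chunk boundaries:</p>

```cpp
#include <algorithm>
#include <cmath>

// Sketch of sub-buffer sequencer segmentation. At 24 PPQN a tick lasts
// sampleRate * 60 / (bpm * 24) samples; the block is rendered in chunks
// that end on tick boundaries, so events land sample-accurately without
// a dequeue happening on every single sample.
struct Sequencer {
    double sampleRate = 48000.0;
    double bpm = 170.0; // a typical jungle tempo
    int ppqn = 24;
    double samplesUntilTick = 0.0; // carried across blocks

    double samplesPerTick() const {
        return sampleRate * 60.0 / (bpm * ppqn);
    }

    template <typename OnTick, typename Render>
    void processBlock(int numFrames, OnTick onTick, Render render) {
        int frame = 0;
        while (frame < numFrames) {
            if (samplesUntilTick <= 0.0) {
                onTick(); // advance transport, trigger notes, flush events
                samplesUntilTick += samplesPerTick();
            }
            const int chunk = std::min(numFrames - frame,
                                       static_cast<int>(std::ceil(samplesUntilTick)));
            render(frame, chunk); // per-sample DSP for this chunk
            samplesUntilTick -= chunk;
            frame += chunk;
        }
    }
};
```

<p>At 170bpm and 24 PPQN a tick is around 706 samples, so a 512-sample block gets split at most once or twice - a few segments per buffer, as described above.</p>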

<p>So timing is tight and the music system is very responsive to UI-rate control input from the game thread. The SampleVoice instrument which is core to Metron and makes it an extremely flexible sample-based synth works by having an instance per tracker-channel (which is how polyphony caps are kept under control) and then swapping in new sampledata pointers every time a new note is encountered, regardless of the note’s instrument. If the last note was a kick drum and the new one is a snare drum, that’s fine: just swap the sampledata pointer and read what’s there. If the pointer is wrong, or has died because it wasn’t properly kept alive, we’ll soon know about it because there’ll be a crash (or a load of telltale logs).</p>

<p>Why, then, was I getting a bug where an instrument would go silent on perhaps one note in a dozen, or perhaps once every 30 seconds it would start at the wrong sample offset? And why would this happen <em>only</em> on ARM, never on x86? With no errors or asserts firing, no absence of PCM data, no resource pressure (songs would be using around 4% of audio CPU budget)? It was massively annoying and it took me about 6 months, on and off, to track down. You can get away with this sort of bug sometimes with pad sounds or lots of staccato lead notes, but this game has a jungle/drum and bass soundtrack full of meticulously sequenced breakbeat chops: if one of those goes wrong, the whole flow is ruined. So I couldn’t ignore it.</p>

<p>After trying loads of stuff that didn’t work, I ended up going down some esoteric rabbit holes: thread migration seemed like a likely candidate. The main box it ticked was that it’s naturally something that’ll vary between CPU architectures. Also, UE <em>does</em> migrate the audio thread around quite a bit on x86 as far as I can tell: there’s nothing sacrosanct about the audio thread in terms of absolute thread ID, and it can jump around according to &lt;waves hands at UE’s bazillion lines of mysterious core engine code&gt;. But this worked fine on x86. I wasted a lot of time trying to demand at init time that the audio thread be constrained to a single core, or just two, because I could see that my buffer callbacks were alternating between cores 3, 4 and 5. But the target platform - Meta Quest 3 - doesn’t respect those requests, so nope.</p>

<p>Anyway, I wasn’t a million miles away. It <em>was</em> an ARM vs x86 thing, but it was…atomic memory read/write order, just as foreshadowed! x86 has a strongly-ordered memory model (total store order): loads and stores become visible to other cores pretty much in program order, so sloppy memory order arguments rarely change what you actually observe. It lets you get away with being sloppy. And because I didn’t truly understand the implications of the memory order args on my atomic load()/store() calls, I was very sloppy. ARM’s memory model is weakly ordered and gives you no such safety net. If you don’t plan your atomic operations properly, you’ll get valid but unexpected results and <em>no errors will be thrown</em>, especially if your code is very defensively written.</p>

<p>So the bug was that the state of a sample voice’s PCM-buffer reader was subject to tearing which could (safely!) invalidate a note right after it was triggered (hence the dropped notes) and/or load a stale start-offset position to the note (hence the wonky breakbeat chops). The solution was to hold an atomic state struct per PCM-buffer reader that was trivially copyable POD (plain old data); and to use the <strong>correct</strong> std::memory_order args when updating its PCM pointer or ready flag. Pointers/flags/vars in the reader now can’t be touched out of sequence and can’t become stale. All my stolen notes, returned.</p>
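<p>Reduced to a sketch with made-up field names, the pattern is: the producer writes the whole state first and release-stores a flag last; the consumer acquire-loads the flag before touching anything else. The release/acquire pairing guarantees that if the flag is seen, every write that preceded it is visible too - on ARM as well as x86:</p>

```cpp
#include <atomic>
#include <cstdint>
#include <type_traits>

// Trivially-copyable POD snapshot of a PCM-buffer reader's state.
struct VoiceState {
    const float* pcm = nullptr; // sampledata to play
    uint32_t length = 0;        // length in frames
    uint32_t offset = 0;        // start offset for the new note
};
static_assert(std::is_trivially_copyable_v<VoiceState>);

struct VoiceSlot {
    VoiceState state;
    std::atomic<bool> ready{false};

    // Producer side (note trigger): write the fields *first*, then
    // release-store the flag. The release orders the plain writes
    // before the flag becomes visible to the other thread.
    void publish(const float* pcm, uint32_t length, uint32_t offset) {
        state = {pcm, length, offset};
        ready.store(true, std::memory_order_release);
    }

    // Consumer side (RT render): acquire-load the flag. If it reads
    // true, the matching release guarantees pcm/length/offset hold the
    // values written before the flag - never stale, never torn.
    bool consume(VoiceState& out) {
        if (!ready.load(std::memory_order_acquire))
            return false;
        out = state;
        ready.store(false, std::memory_order_relaxed);
        return true;
    }
};
```

<p>This assumes a single producer that only republishes once the flag has been cleared; with those rules in place, the ‘valid but unexpected’ reorderings that ARM is permitted to perform can’t bite.</p>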

<p>Honestly, I still couldn’t explain every memory_order option if my life depended on it, but at least I’ve learnt that if my writer/producer uses release and my reader/consumer uses acquire, I’ll probably be okay on ARM. And I’ll also be okay on x86, which doesn’t really care.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I thought I’d write up the most infuriating bug I’ve had to deal with lately - how it manifested; the many things that could’ve caused it (but didn’t); what actually did cause it; and how I fixed it.]]></summary></entry><entry><title type="html">Dev blog</title><link href="https://echolevel.co.uk/blog/2025-10-29-newblog/" rel="alternate" type="text/html" title="Dev blog" /><published>2025-10-30T00:00:00+00:00</published><updated>2025-10-30T00:00:00+00:00</updated><id>https://echolevel.co.uk/blog/newblog</id><content type="html" xml:base="https://echolevel.co.uk/blog/2025-10-29-newblog/"><![CDATA[<p>Every few years, usually when I’m sick and can’t concentrate on proper work, I redesign this website. My ideal website would be no website, but even in this day and age it’s sort of obligatory for someone in my lines of work to have a place to dump their CV and some links to stuff they’ve done. Either that or it’s the sunk cost fallacy of having had this domain for about 20 years and not wanting to let go. Running a close second would be the simplest possible website, just a few bits of HTML and CSS with no database, no javascript and no CMS backend to maintain and patch. This yearning reached its zenith when I was doing ‘proper’ web dev professionally - building full-stack custom platforms for clients while the web dev world’s horrifying reliance on brittle packaging systems and quickly evolving framework hype trains started to really snowball in the mid-2010s.</p>

<p>Up until now I was hosting my very simple static HTML+CSS site on Firebase, mostly because I couldn’t be bothered to figure out GitHub Pages’ Jekyll implementation but also because I didn’t need to update it at any frequency greater than…annually-ish. Perhaps even less. When you’re working on games industry projects for years at a time, and almost everything is under NDA, there’s often nothing you can say about them. The catalyst for this latest overhaul (other than catching a chest infection and wanting to faff about with Jekyll as a form of low stakes non-work therapy) was that I thought it might be nice to occasionally make dev blog posts based on things I’ve learnt, bugs I’ve solved, and rationales for design/engineering strategies that I’ve arrived at in my work. If someone chances upon it in a desperate search result while experiencing some niche problem that I already managed to solve, great!</p>

<p>As for GitHub Pages/Jekyll - I like it, I think? I initially resented having to install Ruby for a local dev environment but it’s fine; the whole Liquid templating/logic thing gives me nice Tumblr vibes (oldskool Tumblr, not modern day, we’ll-inject-ads-into-your-blog Tumblr), and it’s nice to work locally in markdown and publish static pages with a git push to a server for whose security someone else is responsible.</p>

<p>I’ve catered for some basics like <code class="language-plaintext highlighter-rouge">inline code</code> and code snippets with syntax highlighting:</p>

<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kt">bool</span> <span class="nf">getIsTrue</span><span class="p">(</span><span class="k">const</span> <span class="kt">bool</span> <span class="n">bVal</span><span class="p">)</span>
<span class="p">{</span>
    <span class="k">return</span> <span class="n">bVal</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>

<p>and that’s probably all I’ll need. Maybe the occasional image or YouTube video.</p>

<iframe width="560" height="423" src="https://www.youtube.com/embed/ietdIDxznKA?si=dwh2xNNkHGfVujct" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe>

<p>Like that.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Every few years, usually when I’m sick and can’t concentrate on proper work, I redesign this website. My ideal website would be no website, but even in this day and age it’s sort of obligatory for someone in my lines of work to have a place to dump their CV and some links to stuff they’ve done. Either that or it’s the sunk cost fallacy of having had this domain for about 20 years and not wanting to let go. Running a close second would be the simplest possible website, just a few bits of HTML and CSS with no database, no javascript and no CMS backend to maintain and patch. This yearning reached its zenith when I was doing ‘proper’ web dev professionally - building full-stack custom platforms for clients while the web dev world’s horrifying reliance on brittle packaging systems and quickly evolving framework hype trains started to really snowball in the mid-2010s.]]></summary></entry></feed>