2026-04-16

Maybe good coding practice will still be a thing after all

"Any fool can write code that a computer can understand. Good programmers write code that humans can understand."
Martin Fowler

Consider this (fictionalized) exchange about the abilities and limitations of LLM coding tools:

Enthusiast:
See, the model can write code for you!
Skeptic:
Well, it does seem to be correct, but the code quality isn't that great.
Enthusiast:
It doesn't matter, those "code quality" rules were for people, and now the machines will write all the code.

And that might be the case, but I see two potential barriers. First, even if Claude is not as frail as—say—me, it does have limitations. Second, we don't yet know what the unsubsidized cost of machine coding will look like. And sitting at the intersection of those concerns: while it is technologically possible to work around some of those limitations, the effort probably drives the cost up. Alternatively, if cost is your big concern you could try running locally, but the cost-capability trade-off could be punishing.

Okay, the model can comprehend spaghetti code, but how much?

My experiments with machine coding have been pretty limited: any given session starts with a few hundred lines of human-written, or machine-written-but-human-reviewed-and-tweaked, code. Then as the session proceeds, the machine generates a few dozen lines of new or changed code. The generated code is sometimes perfectly acceptable and sometimes not really up to production standards. But the interesting thing is that I can continue the session without manually fixing so-so code generated in the last round, and the machine can comprehend it just fine. Nice.

That's promising: the model is not confused by a small amount of spaghetti code. That said, I could handle that much too (and in my younger days routinely did). I fix messy code pretty early in my work not because I can't work with it up to a point, but because I know it's easier to fix in small batches, and doing so prevents me from forgetting to go back to some section or another.

But there is a limit to how much bad code a human can deal with, and we should expect there to be a limit for the models as well (though it could be a bigger limit). And that is where the size of the code base comes in. A lot of coding class assignments and similar toy problems can be done in a few hundred lines. A hobby project might be a few thousand lines. A small, focused tool might be a few tens of thousands of lines. A small production application (like my main project at work) might be in the low hundred-thousands. Any seriously large project will exceed a million lines.

How much poorly organized code can an LLM (even a big one) actually maintain in the long run? I don't think our current experience is much of a guide to that yet.

And yes, we see regular reports of big, capable models working in largish code bases, but the code they're starting with was (at least initially) laid down and organized by humans. And often humans with quite stringent standards at that. The model being able to work in that code base is a different thing from the model being able to work in one that isn't so carefully tended. Unless the models can write quality code, or they are subject to constant supervision by humans who can, projects they work on will succumb to entropy over time. And then we'll see.

Can you afford it?

It's widely reported that all the big LLM providers (OpenAI, Anthropic, etc.) are still subsidizing users with venture capital. No surprise, really; the lock-in-then-enshittify strategy has been the way to make serious money in tech for decades. But it means that the last couple of years of corporate experience with the economic viability of machine coding may not be representative of the situation after the LLM market settles down.

Does code-quality matter to the model?

The interesting case for the code quality situation comes up when two things are true: (a) the model can comprehend and work in "nice" code better than "messy" code1, and (b) the cost situation isn't a complete blowout in either direction. Under those conditions, there is a clear incentive to keep the code base in a less expensive state. Either the models will have to learn the lessons we teach to human beginners, or the human supervisors will need to steadily manage the chaos introduced by machines that haven't learned better for themselves. Honestly, the latter possibility sounds like a brutal and unrewarding job.

It's early days yet, just you wait

We've been hearing some variation on "Today is the worst it will ever be" since the whole idea roared into the mainstream a few years ago. And that's not wrong, but it doesn't tell us anything about where (or if) the tech will plateau.

It is certainly true that, as time passes (and money gets spent like it's going out of style), the state-of-the-art models are getting bigger, and that models of every size are getting more capable. Though I may play the grumpy contrarian at times, I am genuinely impressed. But if we're not looking at some kind of runaway self-improvement, then there will be both hard limits and probably a cost function that grows rapidly as one approaches those limits. Either could represent a barrier to the dominance of machine coding.

Or not. But I'm not taking the hype machine as a guide. Their incentives are all too plain for them to be trustworthy.

Variations

Up to this point I've been focusing on understanding the future shape of programming on the assumption that the models will be good enough to write all the code. In that model, economically motivated programming will be shaped mostly by economic factors: things like "How much do models cost compared to skilled humans?" and "How does the cost of getting the model to do a nice job compare to the cost of letting it spend time grinding through a huge and complex context?". But we can also throw some other assumptions against the wall and see what the resulting smears look like.

Models master the small picture, but not the wide view

We can imagine that the models never really get the hang of both understanding a big project on the scale of interacting modules and then writing code on a modest scale with that design in mind. That when they work in a big project without supervision they make a mess: violating architectural separations that were designed in, losing track of existing utility code and duplicating it locally, putting routines in less than optimal parts of the module tree, and so on.2 In a world like that we might need human supervisors even if we're handing most or all of the coding over to the machines. And the codebase needs to be comprehensible to those supervisors, so code quality will still be a thing that people care about.

The state-of-the-art is good enough, but it's expensive

In another scenario, the best models available can produce and maintain software as well as an expert team of programmers, but the cost per unit of code is higher than that of a typical human team supported by less able models. We might see the most valuable projects done almost entirely by machines, while more marginal projects use non-trivial human input. We'll probably have a smaller industry, and with different skills in demand: humans will need to drive and supervise models which are doing most of the grunt work.3 And here the code quality issue is driven by an understanding of who needs to comprehend each individual project: if it's only models, then we just pay what it costs; if the readers include humans, then we exercise the discipline. And man, if you have to downgrade a project from machine-only to machine-assisted, there is going to be a heck of a bill...

A word of caution

Even if machine-written code is objectively worse in some sense, it could crush human coding as an economic activity. There are multiple cases from the industrial revolution of craft industries being pushed almost completely out of existence by worse-but-cheaper machine-made goods. The small number of practitioners that stayed in business were serving either luxury markets or special use cases, and many of them came under increasing pressure as industry got better at the task. That is a thing that could happen to human coders, too.


1 I haven't seen anyone on-line addressing this question yet, and am just getting started on my own investigations of machine-assisted coding, so I don't even have a feel for it yet. But it feels right to me: with a messy code base you're going to need more context for the machine to address any given problem, and context means processing.

2 You know, like people do? Occasionally I assign people working on my project (including myself, of course) to look through the module they're working in, find any stray utility code, and see if it needs replacing with centralized tools or ought to be moved to make it more widely available. Likewise, I look closely at change sets that touch build files for evidence of potentially harmful added inter-connectedness. It's an ongoing effort because it's often easier to do the wrong thing than the right one.

3 My biggest worry in that kind of scenario is what the pipeline for training new human experts looks like. It's not clear that anyone knows yet, and there could easily be a lean time as existing experts retire and an insufficient number of new experts emerge to take their place.

2026-04-02

Side Quest: Server Fetch

I mentioned in a recent post that I embarked on a lengthy side-quest while configuring a JavaScript environment in Emacs for a hobby project. It might be time to explain what that was about.

Getting the quest

I picked up the quest when I went looking for a JavaScript LSP, and found that almost all the options require installation via npm. I do have a couple of npm-based things installed, but I've been feeling sour about it after some recent supply chain attacks against the packaging system, and have tried to work around needing it. I hasten to say that my worries weren't focused on a notion of npm as more of a security risk than other supply chains I use, but on the desultory and inexpert way I interact with it. With my other supply chains, I update regularly and follow sources of on-line news relevant to the ecosystems in play. Not so with npm. I worried that I would be late to the party on knowing that I had a problem and taking steps to fix it.

The task at hand

I wanted to install a JavaScript language server without using npm.

To start with, it's not in the least surprising that most JavaScript language servers are distributed via npm. I mean the protocol specifies JSON, so there is every reason to write them (in whole or in part) in JavaScript, and in that case npm is the default distribution mechanism. But, as I indicated above, I was hoping to avoid adding that window of vulnerability to my systems.

If at first you don't succeed...

So, LSP being a Microsoft-controlled protocol, one of the few central locations where a list of available servers is maintained is run by Microsoft, but there are others. In any case, I found only a few currently maintained options:

  • typescript-language-server
  • flow
  • quick-lint-js
  • biome-lsp (a part of the greater biome project)
The installation instructions for the first two both start with npm install. The third is available from my distribution's package repository and works, but there is a lot that it doesn't implement: it really is a linter that knows the language server protocol (and I don't want to diss that—being a server gives it flexibility in interaction modes—but it isn't all that I hoped for). That leaves biome. Actually, its installation instructions also start by calling npm, but further down the README.md it says "Biome doesn't require Node.js to function.", and you can find build-from-source instructions on the project homepage. Frankly, they're not very good, as they seem to assume that if you're asking the question you can guess your way through it, but I got it done.

Being my own worst enemy

The last hurdle on my quest was largely of my own making. You see, to run biome as a server you have to pass it the lsp-proxy command, which is emphatically not the spelling I would have chosen for that option if I'd been writing the tool. And for some reason, when I was configuring Emacs, I kept spelling it differently. I spent an increasingly frustrating half an hour making variations on the same mistake before noticing what I had been doing all along. Use the right spelling and it just works.

2026-03-23

A DeCSS shirt for the late 2020s

So, I see a lot of wittering and gnashing of teeth about online age verification laws, both in general and specifically as they apply to Linux, BSD, and other open source operating environments. I want to talk about some practical issues around what technology will have to emerge to make them "work" and how easily even moderately technologically aware people can, to be blunt, screw the laws over.

And I want to propose a new fashion that might, just, catch on in the next few years.

What is going on

A few jurisdictions (including Brazil and California) have passed legislation pertaining to online age reporting, and many other jurisdictions seem to be following suit.

Allegedly these are intended—as so many, many bad ideas have been in the past—to Protect The Children (tm).

They're not going to work any better than content labeling of music, the v-chip, or video game content ratings (just to name a few) did1. I recall a time when every stand-up comedian seemed to have a bit about how parents would have to get their seven-year-olds to program the v-chip, just like the kid was the one who set up the VCR. But hey, we have to do something, and this is something, so obviously we have to do this. Not that I'm depressed by how predictable all this is or anything.

Why it's weird for open source

There are a few things going on here. One is structural, one is philosophical, and under all that is brute technological fact.

Structural

While Windows, MacOS, IOS, ChromeOS, and Android2 are controlled by large corporate entities that decide what their customers get to install, Linux, BSD, and other open source operating systems are, in principle, fully under the control of the individual installing them. The weasel words are in there because few people build out their system from raw parts: they mostly use a distribution, which does have a central point of control (though many offer much more customization than you get from Apple or Microsoft).

This is not a fundamental issue; the kernel and/or the encrustation of supporting code could feasibly (I won't say "easily" because I'm not the one programming it) be altered to support the requirements of the laws. And those changes could be incorporated in upcoming distribution releases and make their way out to the mass of users.

But it's not like there is one place to go to try to enforce this decision. Or even ten places. Keep in mind that even if some major distribution (perhaps Ubuntu) were to comply, nothing stops a downstream re-packager (say Elementary) from removing, disabling, or defanging that support. More on that later.

By the way, there are literally scores of distributions originating on all the inhabited continents and from various points in Oceania.

Philosophical

As a generic term "open source" covers a lot of ground, but central to that nebulous mass broadly known as the open-source/free-software movement we find Creative Commons, the Open Source Initiative, and the Free Software Foundation, all of which are organizations with some money, rather more precisely specified definitions, and some very strong opinions on matters of software control and human flourishing. And they're not the only ones. In fact the space is just crawling with various NGOs that provide legal support, lobbying services, publicity, and (obviously) software packaging.

The whole "the government is telling you how to build your software" thing isn't going down well. You may expect resistance at many levels. Anyone else own a DeCSS shirt?

The foundational reality of Open Source

Programmers program. In one sense that's a tautology, but it has profound implications.

Before delving into what it means for this issue, let's just talk about what it means for organizational cyber security. My employer has recently gone through a series of IT security exercises in an effort to lock down all the possible cyber threats. And they have a problem: what programmers do on a day-to-day basis is indistinguishable from a large class of threats. We create new executables not known to the system and run them, often dozens of times a day. And that is unavoidable: you can't have the benefits of what programmers do without having the relative chaos of programmers at work.

Similarly, you can't have open source and still be confident that everyone is running the nannyware you insist on. Remember that I said a downstream distributor could strip out or neuter a reporting facility installed by an upstream provider? Well, in principle every single user is a downstream provider with that same capability. Worse, capable programmers can provide tools to enable less capable people to perform the necessary modifications. Indeed, Ageless Linux is already pushing back against early compliance efforts on the part of systemd (the dominant, but often derided, init system on major Linux distros).

Legal aside

I think the intent is that anyone modifying the software is the "Provider" that the government enforcers can go after, but if that's just a couple of techy parents who don't want their machine identifying their minor children to the wider internet, there is a "parent's rights" argument to hang a political and legal challenge on.

But ... talk to an actual lawyer in your actual jurisdiction with actual expertise in the legal system you actually might be picked on by before counting on that kind of thing. K?

Speculation on implementation and countermeasures

From ten kilometers' altitude, communication between a user's machine and a software store or other endpoint that might want to use an age signal can take one of two forms, and one of them is harder than the other for actually installed systems. You see, one machine has to initiate the conversation, and if that's the store's server, then many home and corporate firewalls will drop the packets on the floor.3 For that reason I suspect the industry will settle on a strategy where the user machine asks the server for a one-off token, hands that to a local age-reporting API which cryptographically mixes it with the answer, and the mixed data is then relayed back to the server for decoding. There are other things they could try, but they're all pretty fragile.

Anyway, on Linux the bit that builds the reply would either be built into the kernel itself or live in a kernel module, but either way a savvy user will be able to disable it. Then they just substitute a dummy system that respects the protocol but always returns a least-interesting answer to every query (Yeah, this user is of age. Trust me.).

What Ageless does is more than that: it removes the infrastructure and storage that could be used to respond, which is a good thing, but the above is enough to stop the system from making meaningful responses. And I'll bet a bottle of scotch that the dummy responder can be constructed with code that will fit on a t-shirt.
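For what it's worth, a t-shirt candidate might look something like this; since no spec exists, the query shape and the function name are pure invention:

```javascript
// Hypothetical t-shirt-sized dummy: honor the (not yet specified) query
// shape, but always give the least interesting answer.
function dummyAgeResponder(query) {
  return { token: query.token, ofAge: true }; // Yeah, of age. Trust me.
}
```

Two lines of working code, plus whatever ceremony the real protocol demands.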

Intent

I'll write the thing as soon as the spec is available (or grab someone else's if it's available, because I'm not precious about this sort of thing). And then I'll be printing shirts. And maybe hoodies, too. You never know.


1 Which is to say that (a) the kinds of parents who take the trouble to monitor their kids' media consumption will have another tool while other kinds will completely ignore it and (b) the kids will not only find ways around the tech, they'll use the system to advise them where the "good" content is.

2 Android is a little weird, because while Google (whatever name they're going by now) controls the system, many devices ship with manufacturer-customized versions. But it is still the case that there is a corporate entity for the government to go after.

3 And maybe report them to an intrusion detection system, but that's not really relevant here.

2026-03-22

JavaScript, PEBKAC, and LLMs

A couple of months ago there were, at last, some promising smoke signals from corporate suggesting that IT is getting around to really thinking about, just maybe, supplying us with something by way of security-minded LLM support.

Okay, so it was a little more than that. They actually bought licenses for a commercial product (something that comes with security guarantees backed by performance penalties) for supplying that kind of thing. For the managers.1 That was a bit disappointing because I couldn't explore that space on my employer's dime.

But it got me to start thinking seriously about trying out LLM-supported programming workflows, and I decided to apply it to some aspects of my hobby projects. After some thought I concluded that the models I was likely to run were going to be uninspiring on C++ projects. To maximize the utility of the tool, I needed a project where (a) the model would have seen a lot of appropriate training data and (b) I was not already highly conversant with either the domain or the tooling.

I decided to start writing some little games for the browser in JavaScript.

And, brimming over with excitement for my new project I ... spent about four weeks of spare time mucking about with tooling. Yeah.

I did eventually get Emacs configured to know about JavaScript (including an LSP, a major side-quest what with some self-imposed limitations, but that's another post) and chose one of the multiple LLM interfaces for the editor. There were a few hiccups along the way, but I got some advice from a couple of LLMs which really helped. Oddly, all the configuration work was making me feel optimistic about the project: the models weren't really handling much tedious boilerplate, but they were getting me past some ignorance-driven friction and probably (at least maybe) saving me time. So really, yeah!

Then I watched a couple of on-line tutorial videos on JavaScript animation and game-dev, created a new project repository and started random hacking about while asking the models a bunch of questions and having them try their hands at writing bits of code. In about a-week-and-a-half of spare time I had a canvas with four distinct objects moving around it, each under their own set of rules for speed, gravity, wrapping around edges and bouncing. The code quality was awful, but not because the models wrote it. The parts I wrote were just as bad because this was a "just get something working" exercise. Almost everything was in global scope and free functions poked into the depths of the object collections to manipulate internal state. Classic.

The next thing to do, of course, was impose some order on the chaos. To channel interconnectedness through designed mechanism, and achieve a degree of uniformity in implementation.

I broke everything that worked to separate some concerns here, name elements of the modeling there, and so on. After burning another week of spare time on it, this afternoon I had all the same data in the program in a much more orderly manner and the first frame drawing as expected. But nothing was changing, even though it looked like I had all the plumbing back in place. The application of some judicious console.log calls2 proved that the animate function was getting called, it was iterating through the objects and telling them to update themselves, and they were iterating through their update action lists and calling them. But the display was not updating.
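The call chain I was checking looks roughly like this; Obj2d's internals and the action-list shape are reconstructed from memory, not the actual project code, and a plain loop stands in for requestAnimationFrame:

```javascript
// Miniature reconstruction of the instrumented call chain (stand-in code).
class Obj2d {
  constructor(x, y) {
    this.pos = { x, y };
    this.actions = []; // per-frame update functions: (obj, dt) => void
  }
  update(dt) {
    console.log("Obj2d.update called"); // print-debugging checkpoint
    for (const action of this.actions) action(this, dt);
  }
}

const objects = [new Obj2d(0, 0)];
objects[0].actions.push((obj, dt) => { obj.pos.x += 10 * dt; }); // drift right

function animate(dt) {
  console.log("animate tick"); // checkpoint: the loop really is running
  for (const obj of objects) obj.update(dt);
  // in the browser: redraw the canvas, then requestAnimationFrame(animate)
}

for (let i = 0; i < 3; i++) animate(1 / 60); // simulate three ~60 fps frames
```

Every checkpoint fired and the positions really did change, which is exactly why the frozen display was so baffling: the fault had to be downstream of this whole chain.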

And also, I hadn't asked the LLM anything for days of real-time and hours of coding time, despite (if you believe Mrs. NoSwampCoolers) a steady stream of swearing at the code. I've been "a programmer" since sometime between 1982 and 1992, depending on what you want to credit with the title (1982 was first typing the classic 2-line "hello" infinite loop in BASIC; 1992 was my first production code). I have deeply ingrained habits for both learning new things and tackling problems with code bases. And "ask the AI" does not, currently, feature in those habits. Which is a me problem to the extent that "ask the AI" is better than my other approaches, but is still a thing to be overcome if a transition is to occur.

Anyway, I did eventually think to "ask the AI". I used my LLM interaction tool to pick four (quite short) files out of the project directory and asked why the actions called by Obj2d.update were failing to update the positions.

After telling me many things about the code that I knew (because I had designed them in), complaining about the bits of context I hadn't shown it because they might matter, noticing (out loud) that they didn't matter, and deciding (out loud) to move on, it first proclaimed that there wasn't any reason for the issue and then said, "but wait". Turns out the joker who wrote this code didn't provide setters for some concepts he's trying to set in the bowels of the Vec2d class. Which was the correct resolution of the issue.3 Facepalm. Score one for the model.
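The failure mode, reconstructed in miniature. Vec2d's real internals were different; this stand-in just shows the getter-without-setter trap and its fix:

```javascript
// Stand-in for the bug: a getter with no matching setter.
class BrokenVec2d {
  #x = 0;
  get x() { return this.#x; }
  // no `set x(...)`: assignments to .x have nothing to land on
}

// The fix: provide the setter so assignment actually reaches the field.
class FixedVec2d {
  #x = 0;
  get x() { return this.#x; }
  set x(value) { this.#x = value; }
}

const broken = new BrokenVec2d();
try {
  broken.x = 42; // silently ignored in sloppy mode, TypeError in strict mode
} catch (e) { /* either way, nothing was stored */ }
console.log(broken.x); // still 0

const fixed = new FixedVec2d();
fixed.x = 42;
console.log(fixed.x); // 42
```

The sloppy-mode branch is the one I hit: non-strict code just drops the assignment on the floor, which is why everything ran without a single error while the positions never moved.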


1 Whose main day-to-day complaint (at least in my unit) is that they don't get enough time to do technical work what with all the people, contracts, and paper work they're saddled with.

2 Yes, there is a debugger in the browser, and yes I've largely figured out how to use it. But my "just hack on things" machine has a small display which (added to the unfamiliarity of the language and debugger) made print-debugging feel more comfortable. Presumably I'll get over it as I go.

3 I don't think I've mentioned it on the blog before, but I really hate environments that silently fail. And Firefox will let you write v.x = new_value; when v has neither a public data member nor a setter named "x". Really. So the f'ing environment gets some of the blame here.

2026-02-28

Progress in the artificial "comprehension" of humor

A minor side-line in my on-going investigations of how well or poorly LLMs perform has been teasing them with jokes. Back when I started, they were consistently abysmal at explaining why jokes (even very simple ones) were funny, though they could fairly consistently categorize them into wordplay, dark humor, and similar bins. Some models would confidently assure me that I wrote the jokes wrong or that they didn't make sense.1

When the models started to get a little better (able to do a passable job on the easy ones) I added a couple of relatively subtle jokes to my list and those promptly stonkered the largest models I had access to.

Until today.

Aside

My 64 GB RAM Framework laptop (which I had been using to run mid-sized models locally) was stolen last fall and I haven't replaced it. I was unwilling to send my questions to the AI companies lest they train to the test, so I didn't have access to large or mid-sized models for a while. Then ollama started offering cloud services. I think it is much less likely that queries made through that channel are making their way back to the model vendors (ollama says they don't, and I suspect the massive social cost of getting caught would deter them even if they were inclined to cheat), so I started trying the big models on their servers from time to time.

Meanwhile, back on the ranch

The big open-weights GPT (gpt-oss:120b) is a really impressive model—I've been using it for a lot of chat tasks I might previously have lobbed at ChatGPT—but it still failed at both my "trick" joke questions. In fact it gave almost the same wrong answers as phi4, gemma3, and so on. Maybe the result of an effectively common training set?

On the other hand, its answers to the technical questions I ask models were much more like those the big (300B+ parameter) leading-edge models were giving six months ago than the ones mid-sized models (30-70B parameters) were giving while I still had the Framework. So I concluded that progress was being made on several fronts, including coding2 and what I call "deep-search"3, but not necessarily on making connections for which its training set had few examples.

Today, after doing my regular Saturday chores, I updated my ollama and looked at the recent models. Hmmm ... I'd seen a YouTube video about this gfm model, and it made a variety of brags that would be impressive if true. So I tried it. It aced one of my favorite, easyish-but-out-of-the-mainstream coding questions. Not screaming fast, but fast enough that running the model and looking through the output would have been faster than solving it by hand, had I not worked it out in advance to use as a key for the test.

I got ambitious and asked it about the jokes.

One of the prompts includes a couple of hints: it notes that the joke was current "in mid to late 2011" and asks for an explanation of the physics behind the humor. For a human not familiar with the episode that generated the joke, it would probably require some web searches to answer, but I suspect most educated people would get there. Gfm-5 is the first model I've put this to that nailed it.

The other prompt is a little more blind. The model has to recognize the relationship between the scenario in the joke and a much more common, but generally not humorous, scenario in fiction. Then it has to examine the change in the joke's version and work out why people do a double take and then laugh or groan. The answer I got from the model was not great, but it was the first LLM answer to ID the underlying story fragment, ID the crucial change, and note that the change represents an unexpected or twist ending. Close enough for government work.

Wow. Just wow.


1 Why do LLMs hate Moby Pickle and Smokey the Grape, anyway? Parke Godwin may have been dead for ten years, but he still had the power to baffle GPT-4 with children's riddles.

2 Generally something that is vaguely like a common example problem but differs in significant ways. My prompt for writing a wavefront model (.obj and .mtl files) is enough like the usual example (a cube) that many models start hallucinating a cube halfway through.

3 That is, digging into a topic and giving me a top-level explainer such as you might get from an academic colleague in a different department who knows you are smart but not familiar with the domain.

2026-02-26

Modifier precedence in English

Languages often build more complex ideas by combining symbols for simpler ideas. How this works in any particular language is governed by some set of rules or another. It's pretty typical to divide the rules into (at least) two groups: some tell you what symbols can go where (grammar), and others tell you what it means when you put them there (semantics).

To clarify the difference, let's look at some example rules from each family across a set of natural and synthetic languages. Some examples of syntactic rules are:

English
Prepositions are generally followed by objects or object phrases
Algebra
An equal-sign, other equivalence symbol, or inequality has an expression on each side either explicitly or implicitly
C
A declaration consists of one or more identifiers (with optional initializers) and information about their type1
Notably, these are all about the grouping (and sometimes order) of language symbols (words and punctuation) in the text. By contrast, semantic rules are about the meaning of combinations of symbols.
English
Appending "ly" to many nouns converts them into associated adjectives2
Algebra
Compound expressions are reduced by respecting grouping symbols to identify sub-expression, followed by applying exponentiation, then applying multiplicative operations, and finally applying additive operations
C
A declaration gives the identifier(s) meaning within the program and instructs the compiler on how it (they) can be used (the type information).

The fun part of this is that none of it is forced on us. Both sets of rules are devised by people for people reasons. In "natural" languages this comes about slowly and often organically for reasons that I certainly don't understand. Talk to a linguist. In programming languages some person or small group of people sat down and consciously decided them (though after the first couple of decades there came to be some broad consensus understanding to build upon).3

An advantage of someone making a deliberate decision shows up when you have complicated rules. The originator can write down an authoritative description of the method and that's that. For instance, the c-declaration int (*normalized_comp)(unsigned, const char *, const char *) may be pretty complex,4 but by looking up the procedure in The C Programming Language, the standard document, or some website, we can know with certainty that "normalized_comp" is a pointer to a function taking three arguments (one unsigned integer, and two pointers to const characters) and returning an integer value.5
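To see the declaration in action, here is a sketch in code. Everything beyond the declaration itself is my own invention: "limited_comp" and its strncmp-based body are one guess at a function that fits the signature, not anything the declaration guarantees.

```c
#include <string.h>

/* The declaration from the text: normalized_comp is a pointer to a
   function taking an unsigned and two pointers to const char, and
   returning an int. */
int (*normalized_comp)(unsigned, const char *, const char *);

/* A hypothetical function matching that signature (my invention):
   compare at most n characters, strncmp-style, then normalize the
   result to -1, 0, or +1. */
static int limited_comp(unsigned n, const char *a, const char *b)
{
    int r = strncmp(a, b, n);
    return (r > 0) - (r < 0);
}
```

With that in place, normalized_comp = limited_comp; compiles, and a call through the pointer behaves just like calling the function directly. Note that the string-buffer and normalized-return behavior here is the idiom layer, read into the types rather than stated by them.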

My beef today is about the rules for expressing frequency in English. In particular, we can use the "ly" suffix formation discussed above to modify a time-period into a frequency. Monthly. Daily.

Fine.

But we also have access to some prefixes that modify the number of things: "bi" and "semi" for two and one-half are common in this use.

Alas, there is no authoritative author's document to tell us if "biweekly" should be interpreted as "twice weekly" or as "every two weeks". I'm fairly sure it's the former, but...


1 I'm going to ignore the wrinkle in which multiple identifiers in a single declaration can have different types when some of them are pointers. Those of you who need to know, know. And for the rest of you it doesn't add anything to the discussion.

2 I'm also going to largely ignore the irregularities of English. There are other ways to make adjectives but, again, it doesn't add anything to the discussion.

3 Algebraic symbols and the order of operations are an intermediate case. They came to be through an organic process of push-n-pull in a community, but it was a small community and the candidates were generally formed deliberately by one or a few participants. Fun stuff.

4 This kind of thing is hard enough that a typical course in c includes a bunch of exercises in how to read these things, but in case you fall out of practice there is a tool (cdecl) just to help you out.

5 Experienced c-programmers will likely intuit still another layer of meaning, guessing that the pointer arguments are probably meant to point to character buffers (that is, strings) rather than single characters, and that the return value probably takes on values in the range -1 to +1 à la strcmp. The name of that layer is "idiom", and as in natural languages, real fluency requires it. Our hypothetical experienced programmer might also have a guess about the initial argument (unsigned, so probably a size, so probably the max number of characters to compare...), but that is not so well established in the idiom.

2026-02-02

Hey, ya wanna help?

Here's the thing about smart phones: you cannot reasonably prop them between your shoulder and your ear. Not only will you get an instant muscle cramp (and probably scoliosis within minutes if you persist), but the thing won't actually stay there. In that respect they really, deeply suck.

But whatever. Price you pay for the benefits of the form factor. Or whatever.

That said, this has a consequence: if someone calls and (a) you don't feel you can skip it, (b) you still need to have both hands for something (anything) other than the phone, and (c) you don't currently have your buds in then you must, in short order:

  1. answer the call
  2. switch to speaker
  3. prop the phone somewhere

Presumably the people who write the UI for these things have this experience, too.

But recently with my phone, when I tap to answer, the UI goes through some flashy, battery-draining, nonsensical animation which results in the hang-up control landing right where the change-the-audio button was a moment before. I have no words.

Random musings

If you found yourself in the same room as whatever self-satisfied twit is responsible for foisting "liquid glass" on us and asked the two nearest other iPhone users if they wanted to help administer a swirly, what do you think the odds would be?

I put them over 2/3, personally.