2026-03-22

JavaScript, PEBKAC, and LLMs

A couple of months ago there were, at last, some promising smoke signals from corporate suggesting that IT is getting around to really thinking about, just maybe, supplying us with something by way of security-minded LLM support.

Okay, so it was a little more than that. They actually bought licenses for a commercial product (something that comes with security guarantees backed by performance penalties) for supplying that kind of thing. For the managers.1 That was a bit disappointing because I couldn't explore that space on my employer's dime.

But it got me to start thinking seriously about trying out LLM-supported programming workflows, and I decided to apply it to some aspects of my hobby projects. After some thought I concluded that the models I was likely to run were going to be uninspiring on C++ projects. To maximize the utility of the tool, I needed a project where (a) the model would have seen a lot of appropriate training data and (b) I was not already highly conversant with either the domain or the tooling.

I decided to start writing some little games for the browser in JavaScript.

And, brimming over with excitement for my new project I ... spent about four weeks of spare time mucking about with tooling. Yeah.

I did eventually get Emacs configured to know about JavaScript (including an LSP, a major side-quest what with some self-imposed limitations, but that's another post) and chose one of the multiple LLM interfaces for the editor. There were a few hiccups along the way, but I got some advice from a couple of LLMs which really helped. Oddly, all the configuration work was making me feel optimistic about the project: the models weren't really handling much tedious boilerplate, but they were getting me past some ignorance-driven friction and probably (at least maybe) saving me time. So really yeah!

Then I watched a couple of on-line tutorial videos on JavaScript animation and game-dev, created a new project repository and started random hacking about while asking the models a bunch of questions and having them try their hands at writing bits of code. In about a-week-and-a-half of spare time I had a canvas with four distinct objects moving around it, each under their own set of rules for speed, gravity, wrapping around edges and bouncing. The code quality was awful, but not because the models wrote it. The parts I wrote were just as bad because this was a "just get something working" exercise. Almost everything was in global scope and free functions poked into the depths of the object collections to manipulate internal state. Classic.
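For a flavor of what those per-object edge rules boil down to, here is a toy sketch. The names, the helpers, and the 800×600 canvas size are all made up for illustration; they are not the project's actual code.

```javascript
// Two common edge rules for 2D objects on a canvas: wrap around, or bounce.
const W = 800, H = 600; // hypothetical canvas dimensions

function wrap(obj) {
  // Leave one edge, reappear at the opposite one. The double-modulo
  // keeps negative coordinates in range, since % can return negatives.
  obj.x = ((obj.x % W) + W) % W;
  obj.y = ((obj.y % H) + H) % H;
}

function bounce(obj) {
  // Reverse velocity at an edge and clamp the position back inside.
  if (obj.x < 0 || obj.x > W) {
    obj.vx = -obj.vx;
    obj.x = Math.min(Math.max(obj.x, 0), W);
  }
  if (obj.y < 0 || obj.y > H) {
    obj.vy = -obj.vy;
    obj.y = Math.min(Math.max(obj.y, 0), H);
  }
}

const a = { x: 810, y: -5 };
wrap(a); // a.x becomes 10, a.y becomes 595

const b = { x: 820, y: 300, vx: 5, vy: 2 };
bounce(b); // b.x clamps to 800, b.vx flips to -5
```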

The next thing to do, of course, was impose some order on the chaos. To channel interconnectedness through designed mechanism, and achieve a degree of uniformity in implementation.

I broke everything that worked to separate some concerns here, name elements of the modeling there, and so on. And after burning another week of spare time on it, this afternoon I had all the same data in the program in a much more orderly manner and the first frame drawing as expected. But nothing was changing, even though it looked like I had all the plumbing back in place. The application of some judicious console.log calls2 proved that the animate function was getting called, it was iterating through the objects and telling them to update themselves, and they were iterating through their update action lists and calling the actions. But the display was not updating.
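The loop shape I had rebuilt looked roughly like this. The names (Obj2d, animate, the actions list) are stand-ins for the real thing, and I've swapped the browser's requestAnimationFrame for a single simulated frame so the sketch is self-contained:

```javascript
// Each object carries a list of per-frame actions that mutate it.
class Obj2d {
  constructor() {
    this.pos = { x: 0, y: 0 };
    this.actions = []; // functions of (object, dt), one per rule
  }
  update(dt) {
    for (const act of this.actions) act(this, dt);
  }
}

const world = [new Obj2d()];
// A hypothetical "constant horizontal speed" rule: 10 units/second.
world[0].actions.push((o, dt) => { o.pos.x += 10 * dt; });

function animate(dt) {
  for (const obj of world) obj.update(dt);
  // draw(world); // then redraw; in the browser, requestAnimationFrame(animate)
}

animate(0.016); // one simulated ~16 ms frame; pos.x advances by ~0.16
```

All of this was demonstrably running in my version, which is exactly why the frozen display was so baffling.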

And also, I hadn't asked the LLM anything for days of real-time and hours of coding time despite (if you believe Mrs. NoSwampCoolers) a steady stream of swearing at the code. I've been "a programmer" since sometime between 1982 and 1992, depending on what you want to credit with the title (1982 was first typing the classic 2-line "hello" infinite-loop in BASIC; 1992 was my first production code). I have deeply ingrained habits for both learning new things and tackling problems with code-bases. And "ask the AI" does not, currently, feature in those habits. Which is a me problem to the extent that "ask the AI" is better than my other approaches, but is still a thing to be overcome if a transition is to occur.

Anyway, I did eventually think to "ask the AI". I used my LLM interaction tool to pick four (quite short) files out of the project directory and asked why the actions called by Obj2d.update were failing to update the positions.

After telling me many things about the code that I knew (because I had designed them in), complaining about the bits of context I hadn't shown it because they might matter, noticing (out loud) that they didn't matter, and deciding (out loud) to move on, it first proclaimed that there wasn't any reason for the issue and then said, "but wait". Turns out the joker who wrote this code didn't provide setters for some properties he's trying to set in the bowels of the Vec2d class. Which was the correct resolution of the issue.3 Facepalm. Score one for the model.


1 Whose main day-to-day complaint (at least in my unit) is that they don't get enough time to do technical work what with all the people, contracts, and paperwork they're saddled with.

2 Yes, there is a debugger in the browser, and yes I've largely figured out how to use it. But my "just hack on things" machine has a small display which (added to the unfamiliarity of the language and debugger) made print-debugging feel more comfortable. Presumably I'll get over it as I go.

3 I don't think I've mentioned it on the blog before, but I really hate environments that fail silently. And Firefox will let you write v.x = new_value; when v has neither a public data member nor a setter named "x". Really. So the f'ing environment gets some of the blame here.