2026-02-28

Progress in the artificial "comprehension" of humor

A minor side-line in my on-going investigations of how well or poorly LLMs perform has been teasing them with jokes. Back when I started they were consistently abysmal at explaining why jokes (even very simple ones) were funny, though they could fairly consistently categorize them into wordplay, dark humor and similar bins. Some models would confidently assure me that I wrote the jokes wrong or that they didn't make sense.1

When the models started to get a little better (able to do a passable job on the easy ones), I added a couple of relatively subtle jokes to my list, and those promptly stonkered the largest models I had access to.

Until today.

Aside

My 64 GB RAM Framework laptop (which I had been using to run mid-sized models locally) was stolen last fall and I haven't replaced it. I was unwilling to send my questions to the AI companies lest they train to the test, so for a while I didn't have access to large or mid-sized models to try. Then ollama started offering cloud services. I think it is much less likely that queries made through that channel are making their way back to the model vendors (ollama says they don't, and I suspect the massive social cost of getting caught would deter them even if they were inclined to cheat), so I started trying the big models on their servers from time to time.

Meanwhile, back on the ranch

The big open-weights GPT (gpt-oss:120b) is a really impressive model—I've been using it for a lot of chat tasks I might previously have lobbed at ChatGPT—but it still failed at both my "trick" joke questions. In fact it gave almost the same wrong answers as phi4, gemma3 and so on. Maybe the result of an effectively common training set?

On the other hand, its answers to the technical questions I ask models were much more like those the big (300B+ parameters) leading edge models were giving six months ago than the ones mid-size models (30-70B parameters) were giving while I still had the Framework. So I concluded that progress was being made on several fronts including coding2 and what I call "deep-search"3 but not necessarily on making connections for which its training set had few examples.

Today, after doing my regular Saturday chores, I updated my ollama and looked at the recent models. Hmmm ... I'd seen a YouTube video about this gfm model in which they made a variety of brags that would be impressive if true. So I tried it. It aced one of my favorite, easyish-but-out-of-the-mainstream coding questions. Not screaming fast, but fast enough that running the model and looking through the output would be faster than my solving it by hand if I hadn't worked it out in advance to use as a key for the test.

I got ambitious and asked it about the jokes.

One of the prompts includes a couple of hints: it notes that the joke was current "in mid to late 2011" and asks for an explanation of the physics behind the humor. For a human not familiar with the episode that generated the joke it would probably require some web searches to answer, but I suspect most educated people would get there. Gfm-5 is the first model I've put this to that nailed it.

The other prompt is a little more blind. The model has to recognize the relationship between the scenario in the joke and a much more common, but generally not humorous, scenario in fiction. Then it has to examine the change in the joke's version and work out why people do a double take and then laugh or groan. The answer I got from the model was not great, but it was the first LLM answer to ID the underlying story fragment, ID the crucial change, and write that the change represents an unexpected or twist ending. Close enough for government work.

Wow. Just wow.


1 Why do LLMs hate Moby Pickle and Smokey the Grape, anyway? Parke Godwin may have been dead for ten years, but he still had the power to baffle GPT-4 with children's riddles.

2 Generally something that is vaguely like a common example problem but differs in significant ways. My prompt for writing a Wavefront model (.obj and .mtl files) is enough like the usual example (a cube) that many models start hallucinating a cube halfway through.
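For readers unfamiliar with the format: a Wavefront model is just plain text, which is why it makes a good prompt target. Here is a minimal sketch of the kind of non-cube output such a prompt asks for (a tetrahedron; the file and material names are my own made-up illustrations, not the actual test):

```python
def make_tetrahedron_obj(mtl_name="tetra.mtl", material="red_matte"):
    # Four vertices of a regular tetrahedron (a deliberately non-cube shape).
    verts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
    # Each face lists 1-based vertex indices, per the .obj convention.
    faces = [(1, 2, 3), (1, 3, 4), (1, 4, 2), (2, 4, 3)]
    lines = [f"mtllib {mtl_name}", f"usemtl {material}"]
    lines += [f"v {x} {y} {z}" for x, y, z in verts]
    lines += [f"f {a} {b} {c}" for a, b, c in faces]
    return "\n".join(lines) + "\n"

def make_mtl(material="red_matte"):
    # Kd is the diffuse color (RGB components in 0..1); a minimal valid material.
    return f"newmtl {material}\nKd 0.8 0.1 0.1\n"

obj_text = make_tetrahedron_obj()
mtl_text = make_mtl()
```

A model that has really parsed the prompt produces something shaped like this; one that is pattern-matching on the training set tends to drift back toward eight vertices and six quad faces—the cube.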

3 That is, digging into a topic and giving me a top-level explainer such as you might get from an academic colleague in a different department who knows you are smart but not familiar with the domain.
