2024-07-14

Color me impressed with up-to-date LLMs

I only recently started playing around with LLMs, and I started with somewhat dated models.[1] Yesterday I signed up for ChatGPT, and I can see why some former sceptics have fully bought in. The 4o model is impressive.

My main investigative tool was to pick things that I know something about and ask the model a question from those fields that was neither trivial nor really hard but required some nuance. I tried to select ideas with a range of popularity because I believe that the volume of writing on a subject may influence the "skill" the model exhibits. My subjects so far:

  1. Discuss the epistemological implications of Gödel's incompleteness theorems.
  2. Summarize Freeman Dyson's "Time Without End" paper. (If the model did well on that, I followed up with a question about which of the paper's conclusions had been overtaken by more recent developments in cosmology.)
  3. Summarize Lamb's "Anti-photon" opinion piece.
  4. Discuss the similarities and differences between the movies Ghost Dog and Leon: The Professional.
  5. Explain the continued strong culture of short-form composition in speculative genre fiction as compared to general fiction.
  6. Discuss the differences between classical and modern guitar.
  7. Prepare a packing list for a lightweight wilderness survival kit to carry on my day hikes in the desert southwest.

My highly unscientific observations suggest that items 1, 6, and 7 are the more popular topics and should provide more source material for the models. The movies question (4) is weird: each movie has been extensively discussed, so there is a lot of source material for each movie, but I'm not sure how much human-generated comparison text is out there. Then I would rank 2 and 5 as more common than 3.

I didn't get any major factual errors from ChatGPT-4o. Its main fault was that much of the writing was bland and characterless across all responses. Of course, the desirability of "character" is context dependent: an encyclopedia or other raw factual source should be pretty neutral, and that's what most of the responses sound like. Also, I didn't do any prompt engineering to elicit character: I just gave the model the question and let it go.

I do have two specific comments on ChatGPT's responses. First, its interpretation of Lamb's paper is different enough from mine that I wasn't particularly happy with it, but I wouldn't be surprised to learn that other trained and capable physicists subscribe to that interpretation. Second, the answers it gave for the movies really sounded like they were drawn almost entirely from reviews of each film in isolation:[2] Ghost Dog this; Leon that. Over and over again.


[1] My initial goal was to investigate the possibility of using an LLM as a purely local coding assistant (that effort continues). At work, our security guidelines imply that no internal code or design details should be released to the wider world or sent over an unsecured internet connection. Obviously just using Copilot from VS Code is out. But I started this investigation on a personal machine to find out whether the tooling would meet requirements before going in search of management buy-in. Alas, my "best" machine is totally unsuited for even late 3rd-generation models. I can run models up to about 1 billion parameters smoothly and easily. A 7B parameter model (zephyr) is too slow for use in a workflow (seconds per word), though it's fast enough to learn about the models. I have put a 22B parameter model on the machine, but it runs at minutes per word.

[2] Indeed, that observation is what prompted my comments above.
