2026-03-23

A DeCSS shirt for the late 2020s

So, I see a lot of wittering and gnashing of teeth about online age verification laws, both in general and specifically as they apply to Linux, BSD, and other open source operating environments. I want to talk about some practical issues around what technology will have to emerge to make them "work" and how easily even moderately technologically aware people can, to be blunt, screw the laws over.

And I want to propose a new fashion that might, just, catch on in the next few years.

What is going on

A few jurisdictions (including Brazil and California) have passed legislation pertaining to online age reporting, and many other jurisdictions seem to be following suit.

Allegedly these are intended—as so many, many bad ideas have been in the past—to Protect The Children (tm).

They're not going to work any better than content labeling of music, the v-chip, or video game content ratings (just to name a few) did1. I recall a time when every stand-up comedian seemed to have a bit about how parents would have to get their seven-year-olds to program the v-chip, just like the kid was the one who set up the VCR. But hey, we have to do something, and this is something, so obviously we have to do this. Not that I'm depressed by how predictable all this is or anything.

Why it's weird for open source

There are a few things going on here. One is structural, one is philosophical, and under all that is brute technological fact.

Structural

While Windows, macOS, iOS, ChromeOS, and Android2 are controlled by large corporate entities that decide what their customers get to install, Linux, BSD, and other open source operating systems are, in principle, fully under the control of the individual installing them. The weasel words are in there because few people build out their systems from raw parts: they mostly use a distribution, which does have a central point of control (though many offer much more customization than you get from Apple or Microsoft).

This is not a fundamental issue; the kernel and/or the encrustation of supporting code could feasibly (I won't say "easily" because I'm not the one programming it) be altered to support the requirements of the laws. And those changes could be incorporated in upcoming distribution releases and make their way out to the mass of users.

But it's not like there is one place to go to enforce this decision. Or even ten places. Keep in mind that even if some major distribution (perhaps Ubuntu) were to comply, nothing stops a downstream re-packager (say, Elementary) from removing, disabling, or defanging that support. More on that later.

By the way, there are literally scores of distributions originating on all the inhabited continents and from various points in Oceania.

Philosophical

As a generic term "open source" covers a lot of ground, but central to that nebulous mass broadly known as the open-source/free-software movement we find Creative Commons, the Open Source Initiative, and the Free Software Foundation, all of which are organizations with some money, rather more precisely specified definitions, and some very strong opinions on matters of software control and human flourishing. And they're not the only ones. In fact the space is just crawling with various NGOs that provide legal support, lobbying services, publicity, and (obviously) software packaging.

The whole "the government is telling you how to build your software" thing isn't going down well. You may expect resistance at many levels. Anyone else own a DeCSS shirt?

The foundational reality of Open Source

Programmers program. In one sense that's a tautology, but it has profound implications.

Before delving into what it means for this issue, let's just talk about what it means for organizational cyber security. My employer has recently gone through a series of IT security exercises in an effort to lock down all the possible cyber threats. And they have a problem: what programmers do on a day-to-day basis is indistinguishable from a large class of threats. We create new executables not known to the system and run them, often dozens of times a day. And that is unavoidable: you can't have the benefits of what programmers do without having the relative chaos of programmers at work.

Similarly, you can't have open source and still be confident that everyone is running the nannyware you insist on. Remember that I said a downstream distributor could strip out or neuter a reporting facility installed by an upstream provider? Well, in principle every single user is a downstream provider with that same capability. Worse, capable programmers can provide tools to enable less capable people to perform the necessary modifications. Indeed, Ageless Linux is already pushing back against early compliance efforts on the part of systemd (the dominant, but often derided, init system on major Linux distros).

Legal aside

I think the intent is that anyone modifying the software is the "Provider" that the government enforcers can go after, but if that's just a couple of techy parents who don't want their machine identifying their minor children to the wider internet, there is a "parents' rights" argument to hang a political and legal challenge on.

But ... talk to an actual lawyer in your actual jurisdiction with actual expertise on the legal system you actually might be picked on by before counting on that kind of thing. K?

Speculation on implementation and countermeasures

From a ten-kilometer altitude, communication between a user's machine and a software store or other endpoint that might want to use an age signal can take one of two forms, and one of them is harder than the other for actually installed systems. You see, one machine has to initiate the conversation, and if that's the store's server, then many home and corporate firewalls will drop the packets on the floor.3 For that reason I suspect the industry will settle on a strategy where the user machine asks the server for a one-off token, hands that to a local age-reporting API which cryptographically mixes it with the answer, and the mixed data is then relayed back to the server for decoding. There are other things they could try, but they're all pretty fragile.

Anyway, on Linux the bit that builds the reply would either be built into the kernel itself or live in a kernel module, but either way a savvy user will be able to disable it. Then they just substitute a dummy system that respects the protocol but always returns the least-interesting answer to every query (Yeah, this user is of age. Trust me.).

What Ageless does is more than that: it removes the infrastructure and storage that could be used to respond, which is a good thing, but the above is enough to prevent meaningful responses. And I'll bet a bottle of scotch that the dummy responder can be constructed with code that will fit on a t-shirt.

Intent

I'll write the thing as soon as the spec is available (or grab someone else's if it's available, because I'm not precious about this sort of thing). And then I'll be printing shirts. And maybe hoodies, too. You never know.


1 Which is to say that (a) the kinds of parents who take the trouble to monitor their kids' media consumption will have another tool while other kinds will completely ignore it and (b) the kids will not only find ways around the tech, they'll use the system to advise them where the "good" content is.

2 Android is a little weird, because while Google (whatever name they're going by now) controls the system, many devices ship with manufacturer-customized versions. But it is still the case that there is a corporate entity for the government to go after.

3 And maybe report them to an intrusion detection system, but that's not really relevant here.

2026-03-22

JavaScript, PEBKAC, and LLMs

A couple of months ago there were, at last, some promising smoke signals from corporate suggesting that IT is getting around to really thinking about, just maybe, supplying us with something by way of security-minded LLM support.

Okay, so it was a little more than that. They actually bought licenses for a commercial product (something that comes with security guarantees backed by performance penalties) for supplying that kind of thing. For the managers.1 That was a bit disappointing because I couldn't explore that space on my employer's dime.

But it got me to start thinking seriously about trying out LLM-supported programming workflows, and I decided to apply it to some aspects of my hobby projects. After some thought I concluded that the models I was likely to run were going to be uninspiring on C++ projects. To maximize the utility of the tool, I needed a project where (a) the model would have seen a lot of appropriate training data and (b) I was not already highly conversant with either the domain or the tooling.

I decided to start writing some little games for the browser in JavaScript.

And, brimming over with excitement for my new project I ... spent about four weeks of spare time mucking about with tooling. Yeah.

I did eventually get Emacs configured to know about JavaScript (including an LSP, a major side-quest what with some self-imposed limitations, but that's another post) and chose one of the multiple LLM interfaces for the editor. There were a few hiccups along the way, but I got some advice from a couple of LLMs which really helped. Oddly, all the configuration work was making me feel optimistic about the project: the models weren't really handling much tedious boilerplate, but they were getting me past some ignorance-driven friction and probably (at least maybe) saving me time. So really, yeah!

Then I watched a couple of on-line tutorial videos on JavaScript animation and game-dev, created a new project repository, and started randomly hacking about while asking the models a bunch of questions and having them try their hands at writing bits of code. In about a week and a half of spare time I had a canvas with four distinct objects moving around it, each under its own set of rules for speed, gravity, wrapping around edges, and bouncing. The code quality was awful, but not because the models wrote it. The parts I wrote were just as bad because this was a "just get something working" exercise. Almost everything was in global scope and free functions poked into the depths of the object collections to manipulate internal state. Classic.

The next thing to do, of course, was impose some order on the chaos. To channel interconnectedness through designed mechanism, and achieve a degree of uniformity in implementation.

I broke everything that worked to separate some concerns here, name elements of the modeling there, and so on. And after burning another week of spare time on it, this afternoon I had all the same data in the program in a much more orderly manner and the first frame drawing as expected. But nothing was changing, even though it looked like I had all the plumbing back in place. The application of some judicious console.log calls2 proved that the animate function was getting called, that it was iterating through the objects and telling them to update themselves, and that they were iterating through their update action lists and calling them. But the display was not updating.

And also, I hadn't asked the LLM anything for days of real time and hours of coding time, despite (if you believe Mrs. NoSwampCoolers) a steady stream of swearing at the code. I've been "a programmer" since sometime between 1982 and 1992, depending on what you want to credit with the title (1982 was first typing the classic 2-line "hello" infinite-loop in BASIC; 1992 was my first production code). I have deeply ingrained habits for both learning new things and tackling problems with code-bases. And "ask the AI" does not, currently, feature in those habits. Which is a me problem to the extent that "ask the AI" is better than my other approaches, but is still a thing to be overcome if a transition is to occur.

Anyway, I did eventually think to "ask the AI". I used my LLM interaction tool to pick four (quite short) files out of the project directory and asked why the actions called by Obj2d.update were failing to update the positions.

After telling me many things about the code that I knew (because I had designed them in), complaining about the bits of context I hadn't shown it in case they mattered, noticing (out loud) that they didn't matter, and deciding (out loud) to move on, it first proclaimed that there wasn't any reason for the issue and then said, "but wait". Turns out the joker who wrote this code didn't provide setters for some properties he was trying to set in the bowels of the Vec2d class. Which was the correct resolution of the issue.3 Facepalm. Score one for the model.


1 Whose main day-to-day complaint (at least in my unit) is that they don't get enough time to do technical work, what with all the people, contracts, and paperwork they're saddled with.

2 Yes, there is a debugger in the browser, and yes I've largely figured out how to use it. But my "just hack on things" machine has a small display which (added to the unfamiliarity of the language and debugger) made print-debugging feel more comfortable. Presumably I'll get over it as I go.

3 I don't think I've mentioned it on the blog before, but I really hate environments that silently fail. And Firefox will let you write v.x = new_value; when v has neither a public data member nor a setter named "x". Really. So the f'ing environment gets some of the blame here.

2026-02-28

Progress in the artificial "comprehension" of humor

A minor side-line in my on-going investigations of how well or poorly LLMs perform has been teasing them with jokes. Back when I started, they were consistently abysmal at explaining why jokes (even very simple ones) were funny, though they could fairly consistently categorize them into wordplay, dark humor, and similar bins. Some models would confidently assure me that I wrote the jokes wrong or that they didn't make sense.1

When the models started to get a little better (able to do a passable job on the easy ones) I added a couple of relatively subtle jokes to my list and those promptly stonkered the largest models I had access to.

Until today.

Aside

My 64 GB RAM Framework laptop (which I had been using to run mid-sized models locally) was stolen last fall and I haven't replaced it. I was unwilling to send my questions to the AI companies lest they train to the test, so I didn't have access to large or mid-sized models for a while. Then ollama started offering cloud services. I think it is much less likely that queries made through that channel are making their way back to the model vendors (ollama says they don't, and I suspect the massive social cost of getting caught would deter them even if they were inclined to cheat), so I started trying the big models on their servers from time to time.

Meanwhile, back on the ranch

The big open-weights GPT (gpt-oss:120b) is a really impressive model—I've been using it for a lot of chat tasks I might previously have lobbed at ChatGPT—but it still failed at both my "trick" joke questions. In fact it gave almost the same wrong answers as phi4, gemma3, and so on. Maybe the result of an effectively common training set?

On the other hand, its answers to the technical questions I ask models were much more like those the big (300B+ parameter) leading-edge models were giving six months ago than the ones mid-size models (30-70B parameters) were giving while I still had the Framework. So I concluded that progress was being made on several fronts, including coding2 and what I call "deep-search",3 but not necessarily on making connections for which its training set had few examples.

Today, after doing my regular Saturday chores, I updated my ollama and looked at the recent models. Hmmm ... I'd seen a youtube video about this gfm model in which they made a variety of brags that would be impressive if true. So I tried it. It aced one of my favorite, easyish-but-out-of-the-mainstream coding questions. Not screaming fast, but fast enough that running the model and looking through the output would have been faster than solving it by hand, if I hadn't already worked it out in advance to use as a key for the test.

I got ambitious and asked it about the jokes.

One of the prompts includes a couple of hints: it notes that the joke was current "in mid to late 2011" and asks for an explanation of the physics behind the humor. For a human not familiar with the episode that generated the joke it would probably require some web searches to answer, but I suspect most educated people would get there. Gfm-5 is the first model I've put this to that nailed it.

The other prompt is a little more blind. The model has to recognize the relationship between the scenario in the joke and a much more common, but generally not humorous, scenario in fiction. Then it has to examine the change in the joke's version and work out why people do a double take and then laugh or groan. The answer I got from the model was not great, but it was the first LLM answer to ID the underlying story fragment, ID the crucial change, and note that the change represents an unexpected or twist ending. Close enough for government work.

Wow. Just wow.


1 Why do LLMs hate Moby Pickle and Smokey the Grape, anyway? Parke Godwin may have been dead for ten years, but he still had the power to baffle GPT-4 with children's riddles.

2 Generally something that is vaguely like a common example problem but differs in significant ways. My prompt for writing a wavefront model (.obj and .mtl files) is enough like the usual example (a cube) that many models start hallucinating a cube half-way through.

3 That is, digging into a topic and giving me a top-level explainer such as you might get from an academic colleague in a different department who knows you are smart but not familiar with the domain.

2026-02-26

Modifier precedence in English

Languages often build more complex ideas by combining symbols for simpler ideas. How this works in any particular language is governed by some set of rules or another. It's pretty typical to divide the rules into (at least) two groups: some tell you what symbols can go where (grammar), and others tell you what it means when you put them there (semantics).

To clarify the difference, let's look at some example rules from each family across a set of natural and synthetic languages. Some examples of syntactic rules are:

English
Prepositions are generally followed by objects or object phrases
Algebra
An equal-sign, other equivalence symbol, or inequality has an expression on each side either explicitly or implicitly
C
A declaration consists of one or more identifiers (with optional initializers) and information about their type1
Notably these are all about the grouping (and sometimes order) of language symbols (words and punctuation) in the text. By contrast, semantic rules are about the meaning of combinations of symbols.
English
Appending "ly" to many nouns converts them into associated adjectives2
Algebra
Compound expressions are reduced by respecting grouping symbols to identify sub-expressions, followed by applying exponentiation, then applying multiplicative operations, and finally applying additive operations
C
A declaration gives the identifier(s) meaning within the program and instructs the compiler on how it (they) can be used (the type information).

The fun part of this is that none of it is forced on us. Both sets of rules are devised by people for people reasons. In "natural" languages this comes about slowly and often organically, for reasons that I certainly don't understand. Talk to a linguist. In programming languages some person or small group of people sat down and consciously decided them (though after the first couple of decades there came to be some broad consensus understanding to build upon).3

An advantage of someone making a deliberate decision shows up when you have complicated rules. The originator can write down an authoritative description of the method and that's that. For instance, the C declaration int (*normalized_comp)(unsigned, const char *, const char *) may be pretty complex,4 but by looking up the procedure in The C Programming Language, the standard document, or some website, we can know with certainty that "normalized_comp" is a pointer to a function taking three arguments (one unsigned integer, and two pointers to const characters) and returning an integer value.5

My beef today is about the rules for expressing frequency in English. In particular, we can use the "ly" suffix formation discussed above to modify a time-period into a frequency. Monthly. Daily.

Fine.

But we also have access to some prefixes that modify the number of things: "bi" and "semi" for two and one-half are common in this use.

Alas, there is no authoritative author's document to tell us if "biweekly" should be interpreted as "twice weekly" or as "every two weeks". I'm fairly sure it's the former, but...


1 I'm going to ignore the wrinkle in which multiple identifiers in a single declaration can have different types when some of them are pointers. Those of you who need to know, know. And for the rest of you it doesn't add anything to the discussion.

2 I'm also going to largely ignore the irregularities of English. There are other ways to make adjectives but, again, it doesn't add anything to the discussion.

3 The algebraic symbols and order of operations is an intermediate case. It came to be through an organic process of push-n-pull in a community, but it was a small community and the candidates were generally formed deliberately by one or a few participants. Fun stuff.

4 This kind of thing is hard enough that a typical course in C includes a bunch of exercises in how to read these things, but in case you fall out of practice there is a tool (cdecl) just to help you out.

5 Experienced C programmers will likely intuit still another layer of meaning: guessing that the pointer arguments are probably meant to point to character buffers (that is, strings) rather than single characters, and that the return value is probably negative, zero, or positive a la strcmp. The name of that layer is "idiom", and as in natural languages it is required for real fluency. Our hypothetical experienced programmer might also have a guess about the initial argument (unsigned, so probably a size, so probably the max number of characters to compare...), but that is not so well established in the idiom.

2026-02-02

Hey, ya wanna help?

Here's the thing about smart phones: you cannot reasonably prop them between your shoulder and your ear. Not only will you get an instant muscle cramp (and probably scoliosis within minutes if you persist), but the thing won't actually stay there. In that respect they really, deeply suck.

But whatever. Price you pay for the benefits of the form factor. Or whatever.

That said, this has a consequence: if someone calls and (a) you don't feel you can skip it, (b) you still need to have both hands for something (anything) other than the phone, and (c) you don't currently have your buds in then you must, in short order:

  1. answer the call
  2. switch to speaker
  3. prop the phone somewhere

Presumably the people who write the UI for these things have this experience, too.

But recently with my phone, when I tap to answer, the UI goes through some flashy, battery-draining, nonsensical animation which results in the hang-up control landing right where the change-the-audio button was a moment before. I have no words.

Random musings

If you found yourself in the same room as whatever self-satisfied twit is responsible for foisting "liquid glass" on us and asked the two nearest other iPhone users if they wanted to help administer a swirly, what do you think the odds would be?

I put them over 2/3, personally.

2026-01-31

Baby steps in declarative style

I had occasion, recently, to look at some code one of my younger colleagues had produced. The code works. It's clear. Future maintenance will be straightforward, if a little tedious. It provides a very useful and non-trivial feature. But it is really—extravagantly, even—wordy. The engineer who wrote it knew that: he left a comment to the effect that it was ugly but he didn't know a better way to accomplish the task.

I do know a better way, so this post is intended to put the solution out there for those who need it.

Case study

The feature is simple:

Detect what columns of a CSV file contain specific data by searching the header for pre-defined strings, and subsequently look up that specific data for use while processing the remaining rows.
With that, we can ingest the same data from files written by different producers that disagree about which extra columns should be present and what order any of them should be in. At least the header fields are consistent across all the writers.

To do this we need some storage that will hold the indices (or indicate that columns are not present, so we must support a not-applicable value), and we need some code that walks the list of header fields examining each string and setting the appropriate index (or skipping with a diagnostic if the header is unrecognized). Some of the indices are required to be set, but we check that after we've set all the ones we find.

A naive approach would be to (a) define a named variable for each header index we want to capture, and (b) use a big if-else if chain (with the terminal else handling the unknown-header case) for the "set the appropriate indices" bit.1 That is, something like this

//...
    int statusIdx = -1; // -1 for "not found"
    int thingIdx = -1;
    std::vector<std::string> headerFields; // Setup somehow
    for (size_t i = 0; i < headerFields.size(); ++i)
    {
        const std::string &headerText = headerFields[i];
        //...
        else if (headerText == "Status")
        {
            statusIdx = i;
        }
        else if (headerText == "Thing")
        {
            thingIdx = i;
        }
        //...
    }
As I said above, this works, but it's not ideal. Let's start with what is probably the smallest issue: it's long, with that big chain of conditionals living inside the loop over fields (possibly inside two loops if we've built the "iterate over lines" behavior into the same function). And because each index has its own named variable, running "extract function" on the code would result in an unreasonable argument list for the new routine.2

The bigger problem is that the association of the string names with the related indices only exists in the form of the comparison statements in that big chain of conditionals (that is, far from the variable declarations), so you end up scrolling around to understand what's going on. Good naming can help here, but...

To simplify our life, we make the connection between the desired header string and the index storage explicit. In its simplest form that looks like

std::map<std::string, int> indices {
    //...
    {"Status", -1},
    {"Thing", -1},
    //...
};
though we could elaborate this in various ways. Having done that, our index setting code becomes a loop3
for (size_t i=0; i<headerFields.size(); ++i)
{
    const auto it = indices.find(headerFields[i]);
    if (it == indices.end()) 
    {
        LOG_INFO("Skipping unknown CSV header field " + headerFields[i]);
        continue;
    }
    it->second = i;
}
which is easily extracted to a named function and new supported fields can be added by extending the indices map.

This is much better.

Mind you, it still isn't great and depending on how long the rest of the file processing loop is we may want to elaborate on the system a little. We only use the indices to look up fields in subsequent lines, so rather than writing dataField[indices.at("Thing")] every time, we might want to provide a lookup-field-in-line routine that hides the icky syntax. But that's just gravy at this point.

Elaborations

Extending this is easy, and up to a point it's a good idea. Though you can overdo it. Bring your common sense.

In the case study above, the first thing I would note is that our reader requires certain columns to be present or the data can't be used. So I want to associate that information with the rest of the data. And my typical approach to that would look something like this:

struct columnData 
{
    int index;
    const bool required;
};

std::map<std::string, columnData> indices = {
    //...
    {"Status", {-1, false}},
    {"Thing", {-1, true}},
    //...
};

int columnIndex(const std::string & key, const std::map<std::string, columnData> &headerData);
bool columnRequired(const std::string & key, const std::map<std::string, columnData> &headerData);
if I wanted to use free functions. Or, if I preferred a class, it might be
class HeaderInfo
{
    struct columnData
    {
        int index;
        const bool required;
    };

    std::map<std::string, columnData> indices;

public:
    HeaderInfo(std::initializer_list<std::pair<const std::string, columnData>> l) : indices(l) {}

    bool has(const std::string &key) const;
    int index(const std::string &key) const;
    bool required(const std::string &key) const;
};
with the initialization data provided in the read-CSV routine by way of that initializer list constructor.

Critically, I don't use two separate data stores that can get out of sync. All the information about our header handling is collected in one place where it can be edited without confusion when needed. In practice many of my data stores in this form go a long time between edits.

Nomenclature

I waffled a bit on the title for this post because I'm uncertain what other people would want to call this approach. There are several obvious candidates out there: "Data-driven design", "data oriented design", and "declarative".

In my mind, the first two are used for the idea of paying attention to access patterns and cache behavior in the way you lay data out in memory. They're a group of optimization techniques, which is not what we're about here.

I went with "declarative" in the title because I think it gets much closer to my intent here (the programmer says what they want and counts on the infrastructure to make it happen), but I have reservations in the sense that C++ has very little declarative nature: we have to engineer the relationship between declaration and behavior each and every time. In this case, that's pretty easy,4 but it can add up to a lot of code if you push the idea hard.

A tooling issue

In this case we want the indices to reset every time we call the "read a csv file" routine, so the data map is scoped to that routine, but I often use some variant of this pattern for static data, with the map placed in global storage. Which valgrind's memcheck tool doesn't appreciate.

You see, various containers in the C++ standard library (definitely including std::map) store some of their data outside the class proper. With any static data store you make this way, those allocations are created on the heap at runtime (though before main is invoked) and they persist until the program ends. But "dynamic allocation still extant at the time the program ends" is one of the things that memcheck detects as a memory leak.

In light of my on-going goal of (not to say obsession with) seeing completely clean reports from compilation, static analysis, leak checkers and so on, this is an annoyance.

You can work around this. Rather than std::map, use an array (either C-style or the C++ container) of structured data, and provide your own search abstractions. Rather than std::string, use string literals. And so on.5 Or just live with it; by keeping a list of "known okay" reports you can minimize the time wasted on this kind of false positive.


1 In a language with match or a more flexible switch-case construct you could use that instead of the if-else if chain, but C++ is not our friend on that front.

2 The "hard to refactor" bit could be solved by collecting the indices in a single named object struct indices { /*...*/ int statusIndex; int thingIndex; /*...*/}; but we're going to end up doing better than that.

3 Written here in a "modern" but pre-C++20 form because (a) I'm still using C++17 at work, so for me maps still don't have contains, and (b) if you did use contains you'd have to search the map twice in the (common!) case where the field is known.

4 When I tried a full implementation of the elaborated class (just because I wanted to get it right) it came to about 40 lines of code.

5 Or try living in the present: newer standards are supporting constexpr for much more of the standard library, including strings and containers. However that works under the hood, the allocations are not runtime heap entries, so memcheck should be happy.

2026-01-30

The limits of manipulations for "their own good"

It's OK, because Duggee has his gas-lighting badge!

It's an endless question for parents about their children, isn't it? How much pressure and distortion can I, legitimately, use to teach them things,1 to buy a little space, and so on? Some, I suppose, but it must be an ever moving target as the kiddo grows, develops, and just gets better at seeing through our BS. We try to keep in mind that the kiddo must one day stride forth to meet the world with her own skills, opinions, and point of view. We'd like that to go pretty well, so the scaffolding must be dismantled and some kind of model of good-person-in-a-hard-world needs to be offered.

I'll just get right on that.2

But, wait! There's more! My wife and I are smack in the middle of the sandwich, so it also applies to interactions with our elders. And the answer to that, too, will be an evolving thing. Right now it's just one set, but there is every reason to suspect the others will need support sooner or later. So that's a whole different take on the same kind of questions.


1 In my prior, professional life, it even had a name: "lies to children".

2 I wonder who in their right mind would sign off on our being parents in the first place?