2021-11-17

Some thoughts on automated switchboards

If you pick up the phone and call many businesses these days you find yourself talking to a computerized system before a human being. Such is the modern world, but a few thoughts keep coming back to me.

All the modern inconveniences
I'd prefer to stick with the old "press # for ..." system unless your voice navigation lets me respond before the prompt is finished, is consistent in recognizing many people's speech patterns, and is flexible enough that I don't have to choose exactly the right keyword for every branch of the tree. A few systems do this well, some do it tolerably, but a lot of them just aren't ready for prime time.
I wonder ...
A lot of these systems give you the "this call may be monitored" warning right up front, so it applies to your interactions with the computer. Does anyone ever listen to customer interactions with the computer? Do they hear when you're snarky, impatient or frustrated with the computer? Do they take any note of the sarcastic comments you make to people around you while you're on hold? To how much you hate the hold music they've chosen?
Correction
From a UI point of view, a "We're sorry, but due to higher than anticipated call volume wait times may be longer than usual." type of warning is a pretty good idea. But when you get it every time you call, you know they are lying to you. They really mean "due to anticipated call volume".1
DAG nabit!
Most of these things seem to be organized as trees, meaning there is only one set of choices that will get you to a particular leaf operation, but I feel they could more usefully be organized as directed graphs. Ideally acyclic, because users would presumably be very annoyed to be carried around a cycle.
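
Purely for my own amusement, here's a toy sketch of the distinction; it has nothing to do with any real switchboard software, and all the names are made up. In a tree each node has exactly one parent, while in a DAG the same leaf can be reached from more than one branch:

#include <string>
#include <vector>

// A menu node carries its prompt and the options it leads to.
struct MenuNode
{
    std::string prompt;
    std::vector<const MenuNode*> options;
};

int main()
{
    MenuNode billing  {"Connecting you to a billing agent...", {}};
    // Two different branches lead to the same leaf, which is what makes
    // this a DAG rather than a tree.
    MenuNode accounts {"For account questions...", {&billing}};
    MenuNode payments {"For payment questions...", {&billing}};
    MenuNode mainMenu {"Thank you for calling...", {&accounts, &payments}};
    return 0;
}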

1 The reason is clear enough: unless they have a driving focus on customer experience they don't want to pay any call center people to sit around doing nothing, so they try to optimize their staffing so that the hold queue never quite empties out. If their load consists of relatively few long calls this means long waits for people calling in. They are optimizing for their interests at the cost of their customers, and how to make such calls is a strategic choice for a business. But please don't lie to me: I notice and I care.

2021-11-06

Changing media landscapes and "smart" TVs

We have two smart (or at least occasionally clever) TVs in the house. Neither of them was a fancy model because we got the first one while I was still badly underpaid and we got the second one while I still had the habits of the badly underpaid.1 One of them simply has a fixed set of apps in its ROM; upgrades are not possible, but then neither are certain classes of persistent attacks that require the attacker to make local changes.2 The other, the older one as it happens, has a loadable app system, but the manufacturer has stopped updating the existing apps or providing new ones.

Our daughter watches various shows on her tablet and on our TVs. Because they are smart TVs we're able to, for instance, bring up things on Netflix. We haven't felt the need to restrict her screen-time yet because (a) her pre-school is a screen-free zone and (b) she spends as much or more of her time at home engaged in play with physical toys as she does on screens. We figure as long as she is becoming adept at the real world we won't fret about her interest in the virtual world (though we do curate what apps she can have on her tablet).

Anyway. The kiddo has some new favorite shows that are on a platform that is new to us. Putting it on our laptops and her tablet is easy enough, but we can't get it on either TV because it wasn't a thing when they froze the available content.

Which is a problem.

I'm thinking that in the future I'll get a dumb TV well supplied with ports so that I can plug in a computer of some kind (be it a streaming stick, a Raspberry Pi, or some kind of mini-ITX machine) and upgrade the "smarts" for as long as the display is useful. Unless some manufacturer is making smart TVs based on an open platform. Anyone know? For that matter, are there any streaming sticks built on open platforms?


1 Honestly I'm trying to hold onto certain of those habits even though our financial situation is rather improved. The trick is to figure out which of those habits are a good idea in general and which would be better replaced by buying a little further upscale.

2 If I had known this about it prior to buying I might have hesitated. But it was a Woot deal of the day just when we needed a larger TV, so the clock was ticking. It is, at least, a nice display.

2021-11-02

Holding back the schadenfreude

Today my news feed included yet another article about the terrible plight of hiring managers these days. The poor dears are being left high and dry by job seekers who don't care about their wasted time and don't feel the need to offer their victims even the common courtesy of a phone call or email. Or even a text message. Such unprofessional behavior! Who could credit it?

Or something like that.

I am aghast. In theory. But not in reality.

You see, I conducted professional job searches in 2004-5, 2007-8, 2012-13, and 2017-18, making somewhere between 250 and 300 applications in total. Consequently I have a reasonable statistical basis on which to make some observations of what "professional, business behavior" associated with hiring situations has actually turned into over the last decade and a half. Albeit based on my memories rather than good records. You've been warned.

First a few observations about the large scale context:

  • The vast majority of those applications were made via email or on-line form. I can't recall sending a single hard-copy packet in the last two job searches.
  • The prevalence of on-line forms over "send us these documents in a format we understand" grew steadily through the period and was totally dominant by the end.
  • It is obvious (occasionally even explicit) that many of these systems use some kind of automated filtering to save the hiring managers from needing to look at all the applications.1

Before we go on, I should mention that I've sat on hiring committees. I've sat in bed next to my loving spouse poring over a slush pile of packets sorting out those that should never have been sent from those that at least come close to having the background we asked for. The next week I was working over a subset of them to select a set of promising prospects for deeper investigation.2 I've called references. I've sat through two rounds of telephone interviews and hosted the in-person appearances. I've debated the pros and cons of the multiple interviewees who were almost but not quite the candidate we wanted, each with their own strengths and weaknesses.

I know how painful, labor intensive, and demoralizing it can be to be on the hiring end of these things.

That said, I want to focus on a particular feature of the hiring system we've developed in these glittering first decades of the twenty-first century: in many, many cases the system is designed solely for the convenience of the hiring manager at the cost of the job seeker. First I have to register with a portal to start the process. Including, of course username3, password4, and recovery questions. No, you won't be able to re-use any of that; every employer has their own portal. Then not only should I upload my carefully formatted resume (or CV), cover letter, and other documents but I also need to copy the data therein to an endless series of web forms.5 Then I get to answer a set of true/false or multiple choice questions to confirm that I have the minimum requirements for the job.6,7 Then I'm lucky even to get an automated email confirming that the system received my application8 and even more lucky to receive further communications if I don't get called for an interview.9

Total time screwing around in the web interface: 30-60 minutes. Exclusive of (a) finding the prospect, (b) any research on the company you care to do, and (c) any customization of your packet materials you care to do.

The "painful, labor intensive, and demoralizing" bit I experienced when trying to hire was never a patch on what it was like to be a job seeker.

And there are three key points here. First, no one on the other end gives a fig how well this system works for the job seeker. Your blood, sweat, or tears are of no consequence to them. Second, while people are looking for a "fit" they care more about being able to identify a reasonable fit than about finding the perfect (or even a great) fit if it means more work to do it. And finally, the last fifteen plus years have been spent normalizing the idea that one side of this negotiation is allowed to drop the other side without a word. Some folks just imagined that only one side would ever exercise the option.

Ghosting your partner in these interactions is the standard of professional business behavior in the hiring game. Because a significant fraction of hiring managers have been working hard to make it so for decades.

Now, it's just possible that the people interviewed for these articles are the compassionate few who insisted on maintaining polite, if automated, contact with applicants up to the point that the position closed with those hopefuls still outside the door.10 It could be, but I doubt it.

I started this post with the intention of writing a sly observation on the reversal of fortunes. But somewhere along the way it turned into a bit of a rant. It seems I may have a little residual anger. Something on the order of a coal-seam fire, perhaps. And I'm sure I'm not the only one.

In any case, schadenfreude is an ugly emotion and I try not to indulge, but sometimes it's very tempting.


1 And let's face it: those filters are utter trash. Natural language processing has come a notable way since the early 1990s but it is still better characterized as "artificial unintelligence" than AI. If you write it by hand you use word and phrase matching, and if you hand it over to an ML system you get a more finely weighted word and phrase matching system. Either of which sucks at seeing transferable skills and related experience. You can try to fix that with some kind of mapping system, but only if you can accurately anticipate what related experience and transferable skills will appear in your application pool. And, of course, only if you care to spend the not inconsiderable amount of money that turning such predictions into working code would require.

2 I'm positive, both as a job seeker and a hiring committee member, that good candidates get screened at this step.

3 But not that one, it's already taken.

4 But not that one, it has characters we don't allow. Or doesn't have a character from this class. Or doesn't have characters from enough classes. Or whatever.

5 Occasionally the system tries to pre-populate the forms for you by parsing the documents you uploaded. I presume this works OK if you upload in Word format (but why would you upload something like that in an editable format?!?) and use a popular template.

6 Always after you've done all the other work.

7 Which means these things are hard fails with no chance for intelligent consideration of the other factors. A friend who chaired the IT security committee at a major public university and gave talks at major security conferences was ruled out of a position in IT security because he didn't have a master's degree. Of course, you could lie to the computer, but how will the hiring manager feel about that?

8 I'd guess this was roughly 50% of systems in my earliest search reported here and had dropped to less than 25% by my latest one.

9 A few systems do make it possible for you to log back in to track the progress of your applications. For those you actually have to hold on to that username/password pair you made.

10 I even had a handful of personal emails telling me I hadn't been selected back in the noughties. One person took the time to write what looked like a personalized encouraging message. Nothing like that in the teens, however.

2021-10-25

How do I test a spreadsheet?

One of my hobby horses for up-and-coming science students has long been a recommendation that they learn to use a "real" analysis package fairly early on. Something that supports fitting things other than lines, knows about error intervals, and the like. A common counter-comment is that spreadsheets can do an awful lot and the students already know them. I don't disagree with the basic premise of those comments. Indeed I used to be very happy to show students who weren't going into science how to use a spreadsheet to perform their lab analysis.1 I also stand by my recommendation because the domain-specific support in spreadsheets runs out with roughly the tools that are also used by business majors and they scale poorly as programming environments (even the ones that are Turing complete).

Despite knowing this I've just run right up against that limit.

I've mentioned my daughter's honorary Grandma before. She's a friend who lives with us. She has a neurodegenerative disease and can't take care of herself, and my wife holds her powers of attorney (with me as the backup).2 Now the good news here is that Grandma has sufficient financial resources to pay for plenty of high quality care. She could afford a really good nursing facility, but when presented with the choice she opted to stay with us.3 Even when I had to move the family cross-country for a new job. So, that care takes the form of full time aides, and for more than a year and a half it's been a private pay arrangement instead of using an agency. Long story, but the result is that my wife is in effect a healthcare manager, HR department, and payroll department. I pitch in as much as I can, but the main burden falls on her.

Right now payroll is a highly manual thing. Hours are recorded on paper and collated at the end of each pay period, then individual spreadsheets are worked up taking into account various categories of overtime and PTO.4 Now, my wife passed the programming courses that were required of her in her engineering education and puts up with my rattling on about trade-offs and choices I encounter in my work, but she doesn't naturally reach for programming to solve her own problems.

I, however, do, and when I learned of the state of the payroll process last month I quickly banged out a spreadsheet template which I believe supports all our policies and can be printed as a clear and concise description of what is being paid how and why.

And we're not using it. Because it is untested.

Can I write a little suite of unit tests for a spreadsheet? If so, how?5

Can I, at least, create a few example cases that exercise all the parts? I suspect that this is my best bet.

Other options?


1 Though they mostly turned to their peers for this advice. Especially after the first night. Fine with me, of course; it gave me more time for other things, and once they start learning from one another the idea spreads.

2 There is a lawyer in my family, and they are as pedantic a bunch as physical scientists and computer nerds, so I've had it pounded into my little brain that the power of attorney is a document and a person exercising such a document is, contrary to popular usage, an attorney in fact. You didn't need to know that, but I needed to spread the pain.

3 She came to be living with us in the first place because we thought we were only hosting her through a period of rehab and recovery after a hospitalization and before we got the scary diagnosis.

4 Because Grandma can afford it and to hold good staff we offer higher wages and better benefits than are normal in this woefully underpaid line of work. More than half our carers are CNAs and the rest are well on their way to being skilled up to it. They are all tasked with skilled, often unpleasant, occasionally physically demanding, and very responsible work. Now in a private care situation like at our house they get a fair amount of downtime, but they also have to do without partners for moving the patient and the like. Yet somehow PTO isn't a thing in the industry.

5 It's a Google doc, on the off chance that someone knows the answer for select programs.

2021-10-03

Wishful thinking

I'd like—really, really like—to spend an entire day doing adult things for adult reasons.

That is all.

2021-09-30

"Objective" assessment by questionnaire

Some time ago we received a recommendation that our daughter get screened for autism. Now, with the generally overbooked state of medical specialists around here, and the pandemic and all, it's taken a year and a half to make any headway, but the process is underway at last.

Great.

Part of that process involved a questionnaire for the parents. We were asked to rate our child's willingness to perform various tasks without prompting on a 0-3 scale.

  0. The child is unable to perform the task
  1. The child is able to perform the task but rarely or never does it autonomously
  2. The child is able to perform the task and sometimes does it autonomously
  3. The child is able to perform the task and often or always does it autonomously

And there is an option to indicate that we have no actual experience with a task but are extrapolating from related tasks.

So far, so good.

But then we get to the list. Honestly it's not bad in general, but the pandemic has played merry havoc with some of its underlying assumptions.

Take "Does your child ring the doorbell or knock when visiting at a friend's house?" Presumably the question is about knowing and respecting social conventions, except that the pandemic has interfered with her ability to find little people to make friends with and prevented her from visiting those friends that she has. Is it time to guess? On what basis?

But that's not the worst of it. Another ran along the lines of "Does your child look up or respond when her name is called?" Well ... always when she's in a good mood, but never if she's really focused on what she's doing or suspects she's not going to like what you have to say (say, when she knows it's after bedtime). Is that a 2? But it doesn't really capture the nature of what's going on, does it?

Every time I have to fill out a questionnaire for medical stuff I run into these ambiguities, and even if there is a practitioner around to ask, no one knows how you're supposed to handle them.

I find myself assuming that it's down to trying to build nice objective measures out of the chaos that is human behavior. You can't do statistics on a bunch of anecdotes, so you have to convert all that disorder to some nice clean numbers. So, people gather a lot of experience with a phenomenon they want to study, make up some questions and stick the answers on a numeric scale, propose an analysis, test it on a sample of subjects identified by a less numeric mechanism, and check for correlation. If you get a strong enough signal you have a diagnostic mechanism. Tada!

In my fevered imagination that's the whole thing, but I'm pretty sure they try a little harder than that. I know that psychology people like to pepper their questionnaires with distractors, and I'm aware that some of the diagnostics are versioned which implies some kind of feedback and improvement loop. Alas I don't know anything about the details of the validation and improvement process.

Anyway, the problem with the bare process I outlined is that you can get a good correlation as long as there is some uniformity among the way subjects treat ambiguous questions—and anyone who's written assignment and exam questions knows that getting 80 or 90% of people to read a question a single way is easy but making it truly unambiguous is hard—but if you use the same list as a diagnostic you are immediately at the mercy of individual variation. Which offends my sense of order.

Sigh.

2021-09-13

Authorization is part of a safety mechanism, but not a goal.

My better half ran a news show tonight and the anchor was interviewing a policy wonk on the subject of Covid19 vaccinations for kids (both teens who are currently eligible under an emergency use authorization and younger kids). On the whole I thought the interviewee very good. Knowledgeable, reasonable in her balancing of competing desires and risk, and in touch with the effects of this pandemic on communities. Not that I was one hundred percent on board with her point of view but at least I could see where she was coming from.

Then right at the end of the segment, the guest triggered a pet peeve of mine.

You may imagine me turning green and bulging out into the looming shape of Semantics Hulk. Or something.

Anyway, this lady said something like "We want to make sure the vaccine is authorized and is safe and effective."

A fine sentiment, except that authorization is not—should not be—a goal in and of itself. The only reason that authorization is desirable is because it forms part of a process which is supposed to ensure the "safe and effective" part. Those are the (only!) real goals.

2021-09-03

Observation of the day

We all come into this world with our Pleistocene brains, ready to take on the information age.

2021-08-23

Objective knowledge is ... subjective?

I recently finished Jonathan Rauch's The Constitution of Knowledge: A Defense of Truth. I found most of the book pretty depressing: it relentlessly examines the ways in which trolls, propagandists, well meaning activists, and shameless bullshitters have been succeeding in attacking the foundation of collective certainty that is the legacy of the enlightenment and subsequent advances. It does make the effort to end on an optimistic note with an exhortation to action in the defense of objective truth.

I recommend it; but hang onto your sense of purpose in the world: it's a rough road.

But I want to point out an oddity in the author's conception of the world (one freely admitted in the text, by the way). One of several guide-stars in the text is what Mr. Rauch calls "the reality-based community" and the rules under which it operates (which he dubs "The Constitution of Knowledge").

We should stop to note that Mr. Rauch's conception of this community encompasses a pretty broad swath including not only scientists but also many other scholars, journalists, intelligence analysts, various members of evidence-based judicial systems, and some governmental and non-governmental policy wonks. Basically everyone who approaches the creation of knowledge using The Constitution of Knowledge as a foundation.

One of the key rules of the "knowledge" produced by these systems is that anyone else honestly and diligently following the same rules and using the same base of existing facts should come to the same conclusions. A condition you know you've reached when a strong consensus emerges in the community itself.1

But that leaves us in the epistemologically interesting position of having "objective" knowledge be the product of consensus (among a suitably trained set of investigators), which is at some level a subjective entity.2

The reason this doesn't bring the whole structure down in ruins is the allegation that the process is what generates the reliability. Having trained in a physical laboratory science I have recourse to highly repeatable experiments for much of the grounding of my discipline,3 but the idea that persuasion through open, earnest, and largely non-personal argumentation is the legitimate route to authority works more broadly than in the experimental sciences.


1 The author presents a number of examples.

2 Indeed, the author talks about the ways consensus forming can fail or be subverted.

3 Even in physics you get into places (like quantum foundations) where interpretational issues become important in the way we teach, relate, and apply the things we know.

2021-08-15

Covid update (2)

Not about the renewed threat from the delta variant. Not even about how depressingly unnecessary the renewed threat from delta is. More about how Covid has affected us.

Both of my regular readers probably recall that the whole family got the thing in late November or early December. Well, we never had the child tested, but she had symptoms similar to, but less severe than, her mother's.

General update:

  • Grandma was in the hospital for over a month. She was sent home in dreadful shape and still on a feeding tube. She rallied and transitioned to liquid, then chopped, then normal diet; but she remains on hospice and is still bed-bound (before Covid she needed increasing levels of assistance in walking and transfers but was decidedly not bed-bound). Her specialists say that Covid accelerated the progress of her underlying condition.
  • My wife and I both experienced bouts of unexpected fatigue throughout January and February. One advantage of working from home is that if you have an irresistible need to take a ninety minute nap at 2:30pm the only disruption is the need to make up the lost time that evening.
  • My wife seemed to recover well at first, but in March started to grow weaker. In April she was diagnosed with long Covid. One of the main features of the extended version of the disease is a swelling on the heart and lungs which puts pressure on them and reduces their efficiency. The treatment of choice at the moment is supplemental oxygen, avoiding any kind of intense exercise, and waiting for the swelling to go down. So we're lugging an O2 bottle around when we go out, and she's been scheduled for consultations with half a dozen assorted specialists. Best guess is that we're in this boat for another six months, give or take.
  • My on-going symptoms faded around the end of February. But my better half remains worried on my behalf to this day. So I asked about attending the long Covid clinic and got the expected answer (I'm not sick enough to justify any of their overbooked time), and have restricted my exercise regime to low intensity (walk, don't run; go steady, not hard, on the rower; very modest levels of moderate resistance training). I began to feel something like my old self again around late May. Mind you, the gray that appeared in my hair hasn't gone away like it did the first few times it showed up (always during a particularly stressful period in the last ten years or so). I tell myself that a little salt-n-pepper at the temples is "distinguished". Sometimes I even believe it.

2021-07-22

Reading from files, single responsibility principle, and testability

Say that your program needs to read some kind of structured data from a file. In an object-oriented language the obvious thing to do is encapsulate the parser and the storage of its results in a class. Perhaps something like:

class DataReader
{
public:
    DataReader(std::string filename)
    {
        std::ifstream in(filename);
        if (in.good()) { parse(in); }
    }
    //...
};

Indeed some of the legacy code my project calls had one of these as the main access point for a bunch of important functionality, and I've been asked to extend it and authorized to do some re-factoring along the way. Whoohoo!

But I can't afford to break existing code (or at least I have to know what I'm breaking and why...), which means I need some detailed tests. Yes, it doesn't have a test suite. I mentioned that it was legacy code, didn't I? And that is where the problems start, because how do you write a self-contained test for DataReader?

  • Actually creating a test file (or more likely several) breaks self-containment, means your test code needs to know where to look for the files, and lets tests fail for reasons that have nothing to do with DataReader. Yuck!
  • You could encode the test input as strings in your tests, create and write a temporary file, run the test on that file, then clean up the temporary file. That solves the self-containment and "where to find it" problems and doesn't clutter up the disk with a large number of similar test files. But it still breaks if there is a problem with the disk. More problematically, all the set-up and tear-down code is a place to write bugs, and interferes with the clarity of your test. Still pretty bad.
  • The third option is to refactor to make it clear that opening the file and dealing with the contents are separate "things" for the purposes of the Single Responsibility Principle. The class should take a stream, not a filename. Now, to avoid breaking existing code we can relax our pedantic insistence on separation of concerns by leaving the existing c'tor as a convenience but deferring most of the work to a new c'tor that takes a stream. Something like:

    class DataReader
    {
    public:
        DataReader(std::istream& in) { if (in.good()) { parse(in); } }
        DataReader(std::istream&& in) : DataReader(in) {}  // lets the convenience c'tor hand over a temporary stream
        DataReader(std::string filename) : DataReader(std::ifstream(filename)) {}
        //...
    };

    Then we use a std::istringstream in the test, eliding much of the setup and all of the tear-down. A sketch of such a test follows.
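
Here is the sort of thing I mean. It's only a sketch: recordCount() is a hypothetical accessor standing in for whatever the real class exposes, and the bare assert() stands in for your test framework's own checks.

#include <cassert>
#include <sstream>

void dataReaderParsesTwoRecords()
{
    // The "file" lives right here in the test: no disk, no paths, no cleanup.
    std::istringstream in("record one\nrecord two\n");
    DataReader reader(in);
    assert(reader.recordCount() == 2);
}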

I shouldn't have to ?*(<|#% register to do that!

The house we bought out here is not only bigger than any we've had before but it is bigger than we actually needed. There just wasn't anything smaller that met our needs on the market when we had to buy, so we had to take what was available.

The sheer size of the place has been a problem from a WiFi perspective.1 There has been signal and bandwidth everywhere, but in some corners both sometimes drop pretty low. One of those corners is the guest bedroom, which is where I work-from-home.

Well, complaints from my better half finally got me off my duff. I bought a couple of meshing extenders. Nice ones that nonetheless maintain the same SSID. The instructions included in the box tell me to download the app and use it to configure the widgets. I grumbled a bit, but fine. Except that once I have the app I learn that it requires me to set up an account with the manufacturer's web site.

Are you kidding me? Why would I want to do that?

It's a serious question. What makes them think I want to register?

In a word: No!

Now, it turns out these widgets support a WPS pairing mechanism.2

However, that wasn't mentioned in the material in the box and was pushed to the very bottom of the help web page: a tiny paragraph of plain text under screenfuls of colorful pictures and enumerated lists of steps supporting the two ways to register with them to get it done.

I can only assume that this is done in bad faith. It is a malicious act in the service of the company to the detriment (admittedly small) of their customers and it makes the world a tiny bit worse.

Worse, I think the printed docs included the no-registration instructions at some time (the controls for it are labeled on the diagram but never referred to). At some point a marketing jerk told the tech writers to make the documentation worse and the tech writers complied, but they didn't take the time to scrub all the traces of their lost efforts.


1 The house actually has some kind of wire-in-the-wall-and-RJ45-jacks system from Honeywell pre-installed, but we didn't get the printed docs from the previous owner and I have never figured it out. Naively it looks like some kind of star topology that assumes you're putting the control next to the main panel (i.e. in the laundry room). So we're living with wireless. Besides, there are relatively few ports and most of them are in inconvenient places.

2 Just as well for them. I'd have sent them back rather than register. And so should you. Fight this nonsense.

2021-07-10

Well, obviously...

We've had my in-laws here for the last week with our nephew for "Camp Cousins" to ensure that the young'uns know one another. Lots of activities around the house: arts, crafts, games, and even an introduction to Roblox development for the eight year old (which is why it had to be here and not in California where the in-laws live); and trips out to the zoo, aquarium, science museum, and even putt-putt.

I had shirts made. But they took longer than expected to come, so we didn't get them until Thursday and couldn't wear them to the zoo for the planned pictures. We rescheduled that for today (the last day). When the time came we headed out to the (new) planned location (where we could include the same mountains in the background as I had put on the shirts), only to find that it was starting to rain (out of season) even as the dust storm continued (very much in season).

Rain and dust storm. Of course.

2021-06-28

Oh, no! It's worse than that!

Me
[Rants for a while about a desired mechanism for solving a technical project-management issue at work]
Better-half
But there's no way to do that?
Me
Oh, no! It's worse than that! There are at least six ways!

Seriously, I'm spending too much time reading about `git-subwhatever`, build-system support for external dependencies, and even dependency management systems (though this really shouldn't require going to those lengths). And I'm really thinking about suggesting we do this the old fashioned way.1


1 Take the bits we want to factor out and share between projects, make each its own project that builds a library, and link the libraries in the overlying projects. Done and dusted. Except for the hassle of getting a new hire up and running; and versioning issues; and remembering where each bit actually exists when you need to work on it; and deciding on, setting up, and enforcing commit privileges for each library; and extra deployment complexity for software we ship. But at least those are all known problems.

2021-05-14

Well aren't you just Mr. Subtle today?

I've been getting some unsolicited political texts recently. Here's a sampling:1

Trump won't ask again: Will you join him on his new social site. Friend? You have 1hr to stand w/ Trump until link closes for good [minified link redacted] stop=end
Have you abandoned Trump, Friend?! He launched his new site & you haven't told him you'll join. You have 1hr to stand w/ Trump [minified link redacted] stop=end
Trump won't ask again: Will you join him on his new social site, Susan? You have 1hr to stand w/ Trump until link closes for good [minified link redacted] stop=end
We've texted 9x & you still haven't answered Trump: WIll you join his new site, Susan? 20min or he'll know you've sided w/ Dems: [minified link redacted] stop=end
It's Steve Scalise & I'm wondering if we lost you, Susan. True Patriots will steps up. Socialists will ignore this. Which are you? [minified link redacted] stop=end

And so on. In case you're wondering, my name isn't Susan nor does that name belong to anyone I've shared a mobile plan or a mailing address with, so the contact database they are using is suspect.

The main thing that stands out to me here is the "subtle" menace in much of the phrasing here. Oh, it's ambiguous enough to provide cover against legal action—your basic plausible deniability—but it is also clear in the natural reading. The semantics here are those of bullies and crooks.


1 Before last year's election I was getting a bunch from pro-democratic sources. Those were run of the mill electioneering. The partisan selection and nature of this spam changed recently.

2021-05-10

Can't I just make a picture of this 3D model?

It sounds like a simple request: take this set of object models and make a set of pictures so we can vdiff them. No problem, right?

On one level that is right: there are a lot of tools out there that can draw you a picture of a 3D model.

But there are some unstated requirements:

  • These represent the same object at different points in the evolution of a physical effect, so we want to image them all from the same point using the same viewport and the same lighting. Maybe even generate a movie, but that's a bonus.
  • There are a lot of these objects because they are generated by a scripted process, so we'd like to be able to initiate and control the rendering from the command line.
  • If our renderer uses a light model more sophisticated than pure ambient light (and it probably should, even if we don't need it to be particularly sophisticated), we want to control the lighting.
  • We'd actually like to add a simple legend to the scene, though this is a "may" item that can be dispensed with.

And now the problem is actually pretty hard, and there are many fewer tools that offer the facilities we need.

Let's take a closer look at the requirements.

Scene description

The first thing to know is that a plain object model (in wavefront (.obj) or stl format) isn't enough to define a picture. What direction do you see it from? How much of the frame does it fill? Which way is "up" in the frame? How is it lit (and for that matter, how do we model lighting)? In what color? And so on ad nauseam.

So we need some kind of additional information. We call the combination of the object model(s) and all that ancillary information a "scene description", and it is written in a scene description language or format.

There are a lot of scene description languages out there (in many cases used by a single tool), though a small number are reasonably common. From today's web browsing it looks like COLLADA, the RenderMan format (.rib), and blender's proprietary and undocumented .blend files are the big players.

Command-line interface

This operation is part of a fairly complex workflow that I expect to execute a few score to a few hundred times. It really should be scripted, but doesn't justify writing a custom tool unless I can just plumb together some existing bits. For that to be practical we need a renderer with a command-line interface (because this means that someone else has already done the bulk of the work).

In an ideal world I could write something like:

 renderer -i thingy_2.5s.obj \
        -c camera.txt -l lights.txt \
        --quality=3 --resolution=1920x1080

and get thingy_2.5s.png out. Then I could just loop over the available input files (re-using the scene description elements) and get my perfectly aligned set of images.

Where that leaves us

In a lot of ways the obvious choice is blender (notwithstanding that this is a hugely heavy tool for the job), but it wants the scene description in an undocumented format which I can't generate outside of the program. Strike the obvious solution.

Now both COLLADA and RenderMan are documented, but as far as I've learned so far neither one is modular in the sense that I can define the camera and lights in a separate file from the 3D model, and neither one lets me say "Use this wavefront file with rotation matrix M and translation T." in a single file. In principle I could rework my code that is generating .obj files to produce .rib instead, but that would mean the producing code would need to know about my choice of camera and lights, which is simply in the wrong domain for that code. Argh!

So, at least in my context, the answer to the title question seems to be "No, you can't.", which seems to be a bit of free real estate in the software utility market. Another task to add to my queue of projects that are not getting done due to the demands of my home life. Sigh.

Update 11-May: It turns out that RenderMan format does support separate inclusion in the form of "Entity files" and an ##Include directive. This represents a path forward, though it means supporting mesh output in a new format.

Update 14-May: The specification says:

Note that the Include keyword itself does not cause the inclusion of the specified file. As with all structural hints, the Include keyword serves only as a special hint for render management systems. As such, the Include keyword should only be used if render management facilities are known to exist.

Though it appears that I can provide the adequate level of "render management facilities" with a simple tool that identifies the keyword. Better still, the renderer I'm looking at (aqsis) is willing to read from standard input, so it becomes possible to write something like includer lights_and_cameras.rib | aqsis.
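
For my own notes, here's roughly the shape of that "includer" tool. It's only a sketch under my assumptions: that the hint appears at the start of a line as ##Include followed by a quoted filename, and that a single level of expansion is enough. I haven't checked either assumption against the spec's exact grammar.

#include <fstream>
#include <iostream>
#include <string>

// includer.cpp: expand ##Include hints inline and pass everything else
// through untouched, so the output can be piped straight into the renderer.
int main(int argc, char* argv[])
{
    if (argc != 2)
    {
        std::cerr << "usage: includer scene.rib\n";
        return 1;
    }

    std::ifstream scene(argv[1]);
    const std::string hint = "##Include";
    std::string line;
    while (std::getline(scene, line))
    {
        if (line.compare(0, hint.size(), hint) == 0)
        {
            // Pull out the quoted filename and splice that file in line by line.
            const auto first = line.find('"');
            const auto last = line.rfind('"');
            if (first != std::string::npos && last > first)
            {
                std::ifstream archive(line.substr(first + 1, last - first - 1));
                std::string archiveLine;
                while (std::getline(archive, archiveLine))
                {
                    std::cout << archiveLine << '\n';
                }
                continue;
            }
        }
        std::cout << line << '\n';
    }
    return 0;
}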

2021-05-02

Yep. Naming things is hard.

There are only two hard problems in computer science: cache invalidation and naming things.
Phil Karlton1

So I've been reading Uncle Bob's Clean Architecture as part of my continuing education. I'm in the bit where he's talking about SOLID in general and trying to explain the Single Responsibility Principle in particular. Uncle Bob admits that it isn't well named. Nice mea culpa. Still, it's not too bad a job. Especially if you compare it to the naming of the principle widely known as RAII.

Anyway, moving on to defining this thing, Uncle Bob gives the original statement of the principle as:

A module should have one, and only one, reason to change.

Which he immediately admits isn't particularly clear, so he then suggests the intermediate form:

A module should be responsible to one, and only one, user or stakeholder.

This is also less than perfect, as there will often be a group of related users or stakeholders who hold the module's specification collectively. To remedy that Uncle Bob tries:

A module should be responsible to one, and only one, actor.

Which is more pithy. Alas he needed a separate sentence to nail down the meaning of "actor" in this context.

It seems to me that "faction" might be a better word here. Admittedly there is a connotation of conflict, or at least competition, that comes with that choice. But, honestly, the different groups of stakeholders who rely on a medium to large scale software project are in competition for programmer hours if nothing else, and they often have to hash out conflicting interests in specifications as well.


1 Yes, I'm a big fan of the variation that includes off-by-one errors in the list of two things. But the original is insightful enough that Phil deserves plenty of recognition, and it is hard to find an authoritative origin for the expanded version.

2021-04-26

But what are the opinions?

I'm about to experiment with a piece of software (the meson build system). Now, I've read this and that and I've watched a couple of video presentations by the author, who acknowledges that the software is opinionated and gives a reasonably convincing argument for why that is a good (or at least appropriate) thing for a build system.

Fine. I'll buy it for the interim. But ... what are the opinions? What rules does the tool enforce that I might not be used to from other tools in the same space? Is there a list? If so I can't find it. I'm not asking because I want to use them as a reason to reject this thing; I've committed myself in my own mind to trying it. But I'd like to get into the required mindset. Only there doesn't seem to be a single place to look.

2021-03-15

Bad webmonkey, no Mt. Dew!

Given the possibility that the title could be taken as a pejorative, I should get a few things out of the way:

  • Web programming can be "real" programming. It is a cross-disciplinary task and there are so many subtleties and complexities that web programmers ought to be taken seriously.
  • Web programmers don't have the luxury of having most of their mistakes take place off camera the way some of us do.
  • In addition to the complexities of the programming environment they get to deal with UI (or UE, if you prefer) all the time. Poor things.

But ... those same bullet points explain why they get bagged on: their mistakes are highly visible, annoying, and they generally turn out to be something that some segment of the users consider to be "obvious". As in "How could anyone with half-a-brain not know that?!?" kind of obvious to some part of their audience. And they don't generally hear from the cohort who shares their ignorance.

The answer to that exasperated query is that it takes time, effort, and experience to become good at cross-disciplinary stuff. You need to know a few things in great depth, a larger number of things to moderate depth, and a little bit about a huge number of things. That takes time.

Now, twenty years ago web programming had a reputation for low barrier to entry. Is that still true? I don't hang around the right places to know what people say about it these days. But combining low barriers to entry with pretty high requirements for preparation is a recipe for disaster.

So, "What brought this on?" I hear you saying.

Well, not really because you're on the other side of the internet from me, I haven't hacked your microphone, and it's not clear that anyone actually reads this thing with any regularity. But I have a good imagination, so I hear you saying it anyway.

Setting my security questions and responses for a web site. Unfortunately,

  • The street I lived on when I was ten years old had a two word name (not counting "Road": its name was a phrase), which the person who programmed the form apparently thinks is a thing that doesn't happen.
  • I got my first job in a city in the US southwest named after a saint. In Spanish. So it, too, is two words. Again, the programmer apparently doesn't believe in such nonsense.

Now, young programmers in my end of the business are pointed at a few documents early on to give them a starting familiarity with some of the more trap-laden parts of our work. Things like What Every Computer Scientist Should Know About Floating-Point Arithmetic and What Every Programmer Should Know About Memory (PDF link). I would have thought that Falsehoods Programmers Believe About Names and Falsehoods Programmers Believe About Addresses would be reasonably obvious candidates for the same role among web programmers. In fact these kinds of documents make up a little genre of their own, and it's one worth taking a little time to explore. I promise you'll find one or two that you're guilty of.

Rather than hurling imprecations at web programmers in general, or even at the miscreant in this particular episode I'll just ask anyone going into web programming to find some resources for how to think about these things. Please.

2021-03-12

More on bureaucratic language

My wife and I got the Janssen (AKA Johnson&Johnson) vaccine today. No significant side effects at this point.

But I want to talk about the paperwork. The fact-sheet that I was required to acknowledge receipt of included language like

There is no U.S. Food and Drug Administration (FDA) approved vaccine to prevent COVID-19.
The Janssen COVID-19 Vaccine is an unapproved vaccine that may prevent COVID-19.

And several more variations on the same theme.

Now, on one hand the point is clear: the FDA has not completed its (lengthy) approval process and has just given permission to use these things because the pandemic represents an emergency situation.

But on the other hand the expectation is that hundreds of millions of Americans (not to mention billions of people in other parts of the world) will receive one or the other of these vaccines. So what is the difference between the current situation and approval? Nothing outside the bureaucracy's regulatory structure.

Not that I think there is anything malicious or even dishonest about this business. It's just that organizations create their own cognitive structures and then live in them even when they lose fidelity with the real world. In other cases that can be a problem.

2021-02-23

TDD practice project

Among the programming videos I've watched in the last few months there was an entire multi-day workshop given by Robert "Uncle Bob" Martin. He's an entertaining speaker. Of course, most of the presentation covered the topics he's been writing about for the last couple of decades, but the part that really caught my attention was his demonstration of the coding process for test-driven development.1 I'd like to take the idea for a test drive.

Now, I've been conscientiously adding testing to my personal process for years, and I've long included bug-report-as-test in my toolbox but I've never used the "write a failing test first" scheme that Uncle Bob demonstrated. I completely buy his claims that (a) you have to practice to learn how to do it and (b) you can't learn it on the job. I need an exercise.

Probably there are sample problems out there, but I've thought of one of my own: I'm going to write a bracket nesting validator. Moreover, I'm going to do it in stages as if I were refining an agile project. In all cases the program will accept an arbitrary number of files to process.2 (There's a sketch of the sort of first failing test I have in mind just after the list of stages.) The stages I intend are:3

  1. Work on a fixed set of bracket pairs ((), [], and {}) and report the filename of each file that has an error. The program will be silent by default on correct files. A report-on-correct switch is optional.
  2. Expand the error report to include the line and column number at which the error was detected.
  3. Expand the error report to include the class of error (close doesn't match open, close with no open, reach end-of-file with one or more open brackets). Bad match reports should also include the line and column number of the non-matching open bracket; end-of-file reports should include the line and column numbers of the unmatched open bracket.
  4. Allow the user to specify arbitrary character pairs to be treated as open/close pairs (including canceling or overriding the default pairs). Attempting to specify a single character more than once represents an error and a suitable diagnostic must be produced.
  5. Allow a single character (such as ' or ") to serve as both open and close for a pair; note that these pairs cannot nest without an intervening scope. That is, 'a'b'c' has two single-level pairs, not one pair inside another, but 'a"b'c'd"e' is three levels deep.
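
To make that concrete, here's roughly what I expect the very first failing test to look like. The names are placeholders and assert() is standing in for whatever test framework I settle on; isBalanced() doesn't exist yet, of course, which is the point.

#include <cassert>
#include <string>

// The function under test. It hasn't been written yet, so the very first
// "test run" fails, which counts as red in red-green-refactor.
bool isBalanced(const std::string& text);

void emptyInputIsBalanced()
{
    assert(isBalanced(""));
}

void lonelyCloseBracketIsNotBalanced()
{
    assert(!isBalanced(")"));
}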

I haven't even set up a repository yet and I'm already struggling with the conflict between my habitual way of working and the Process-with-a-capital-P I'm supposed to be exploring. On the white-board in my home office is a sketched out scheme for five data structures that will collectively support at least version (4) of the spec, which I started drawing automatically almost as soon as the idea occurred to me. But I think I'm supposed to let most of that "just happen", aren't I?

Argh! This is going to be, uhm...fun?


1 I've also adopted his "Yeah! I'm a programmer!" bit as a pick-me-up for those days when even the little victories seem few and far between.

2 I'm intending to do a command-line tool, but there is nothing in here that requires it. Feel free to get that list of files from a file-picker widget and report in GUI list of some kind.

3 The professor in me feels obliged to note that there are even more basic versions of this exercise available. Notably:

  1. Perform the matching on exactly one kind of pair. This can be done without a stack.
  2. Just count that the number of open characters equals the number of close characters without caring about order.

However, I'm not going to bother with these variants unless the "do as little as you can" process happens to pass through one of them along the way.

2021-02-21

A sad end for "the party of Lincoln"

I had written a longish screed--full of invective and disdain--here. But in the end that kind of thing just isn't helpful, so let's just go with the bare facts.

I've contacted my County Clerk's office to change my party affiliation, and I shall never again vote for a candidate running as a Republican. Not one. No matter how good the candidate.

I could still vote for a person running on a platform full of the better Republican ideas and ideals, if they could earn my respect. But only if they choose a different party association.

2021-02-18

Abstraction and debugging

Working on a new feature the last couple of days, and my naive initial implementation isn't working correctly. Trying to track down the problem took me into some code that we've written to interface with a legacy system we rely on. The code in question was written by a junior dev and while it is functional it has a number of raw loops that perform simple actions. We're talking about things like constructing a container of objects used by the legacy code from a container of objects used by our core code and vice versa.

These things can also be done using the algorithms library and a lambda.

So compare two ways of doing the same thing.1 Both assume the existence of a std::vector<CoreGadget> named input. First, using a raw loop:

std::vector<LegacyWidget> output;
for (size_t i=0; i<input.size(); ++i)
{
	const CoreGadget & gadget = input[i];
    const LegacyWidget widget(gadget.data(), gadget.info());
    output.push_back(widget);
}

Second using std::transform:

std::vector<LegacyWidget> output;
std::transform(input.begin(), input.end(), std::back_inserter(output),
	[](const CoreGadget & gadget)
    {
    	const LegacyWidget widget(gadget.data(), gadget.info());
    	return widget;
    } );

Now, highly respected speakers and bloggers have been encouraging the latter over the former for some time, but I've wondered if this was really a home run or just a convincing ground-ball single. Basically because the std::transform version requires you to understand a lot. Iterators may be more general than indexing, but they are a thing you have to know. Similarly lambdas may be a cleaner way of doing function pointers, but they have their own syntax and in many cases you have to understand captures.

Personally I enjoy writing code that uses the algorithm header and once I started doing that I quickly became comfortable and adept at reading it. But you code for your teammates (and future teammates) as much as for yourself and my junior devs seem a little hesitant at times. So it's nice to have a quality of life argument in favor of the modern way of doing it, and that's what I discovered recently.

Imagine stepping your debugger through a function that performs this transformation in search of a bug. Imagine it's the third (or fourth or fifth or whatever) time through and you already know the transformation works as intended. How do you skip it?

Of course you could just set another break point beyond the transformation and continue, but with the std::transform version you could also use your debugger's "step over" feature. I'm not familiar with a debugger that has "step over loop".


1 There is also the intermediate option of using an iterator-based loop, and the extra flourish of using an immediately invoked initializing lambda, but I don't think they change the trade-offs here.

2021-02-13

Little surprises #7: more Qt versus the preprocessor misery

Trying to be a good kid. Writing tests as I code. Testing the edge cases. "Hey, this should throw, does Qt have a test for that?" Yeah, it's QVERIFY_EXCEPTION_THROWN. Great, let's use that!

void suspiciousFunctionCallThrows()
{
    // Define badInput and otherParam;
    
    QVERIFY_EXCEPTION_THROWN(suspiciousFunction(badInput, otherParam), std::runtime_error);
}

It doesn't compile. Why not? Commas again, of course.

And it is conceptually easy to fix: you just create some blind wrapper than make the offending call without taking multiple argument.

void suspiciousFunctionCallThrows()
{
    // Define badInput and otherParam;
    
    const auto f = [&]{
        suspiciousFunction(badInput, otherParam);
    };

    QVERIFY_EXCEPTION_THROWN(f(), std::runtime_error);
}

Not exactly a featured stop on the "Look at our transparent tests" tour, is it?

2021-02-03

Maybe type aliases are part of the happy medium.

Tension between conflicting goals is as much a part of software as it is of life in general. Today I am thinking in particular of the tension between planning ahead and generality on one hand and KISS and YAGNI on the other. Plan too little and you either end up with multiple slightly incompatible implementations of parts of your design or you metaphorically paint yourself into a corner and have to redesign at a large scale. Plan too much and you both overcomplicate and waste time writing features you never use. Somewhere in the middle is a sweet spot that you aim for.

I've been working on reining in my tendency to overplan for some years, and I'm doing a lot better these days. Except for one particular case: making things type-generic. If I'm writing a class and ask "Hmm ... what type of underlying data should this use?" and don't find an immediately obvious answer my first reaction is to type template <typename T>.

Looked at naively this is a good trade-off: the mental cost of writing and reading a simple template that just serves to defer the choice of underlying type is barely more than that for the untemplated code, and it increases the generality of the code. What's not to like?

But there are hidden costs: increased compile times, latent bugs,1 and the need to choose between explicit declaration of desired instances and header-only code. And of course, header-only code makes the compile time issue worse. Now I'm aware of techniques like thin templates, but that negates my claim that the template isn't any harder to write or read than the untemplated option.

Today, while I was waiting on yet another overly long build, I had a long overdue insight. In most cases that template class starts something like:

template <typename T>
class Thing 
{
public:
   using part_t = T;
   //...

That is, I've borrowed the habit of naming types related to my classes using type aliases (a pattern seen in the standard library and elsewhere). But here is the crucial observation: I could write Thing without templates and still use the type alias, with an interim choice for the type:

class Thing 
{
public:
   using part_t = float; // Just pick something for now...
   //...

If I change my mind about the type, fixing it is a one-line edit.2 And if I find later that I really do need multiple choices, converting the class to a template is a straightforward refactoring step, but I don't pay the template costs until I actually make that call.
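
To make the one-line-edit claim concrete, here's a minimal sketch (Thing and part_t are as above; the addPart/largestPart members and the call site are invented for illustration). As long as everything is written against the alias, neither the member functions nor the callers change when the alias changes, or when Thing eventually grows a template parameter:

#include <algorithm>
#include <vector>

class Thing
{
public:
   using part_t = float; // The interim choice lives in exactly one place.

   void addPart(part_t p) { parts_.push_back(p); }

   part_t largestPart() const
   {
       // Written entirely in terms of part_t, so switching the alias to
       // double (or templating Thing later) leaves this function untouched.
       return parts_.empty() ? part_t{} : *std::max_element(parts_.begin(), parts_.end());
   }

private:
   std::vector<part_t> parts_;
};

int main()
{
   Thing t;
   t.addPart(2.5f);
   Thing::part_t biggest = t.largestPart(); // Call sites name the alias too.
   return biggest > 0 ? 0 : 1;
}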


1 Even latent compile-time bugs. I've had to fix compile-time bugs in code that had been in the repository for months because we finally instantiated it with a type where the bug surfaced.

2 As long as I am consistent in using the alias, but I'm trying to do that for readability anyway.

2021-01-14

Moore's law may have failed for individual CPUs, but progress is still astounding.

I bought a Raspberry Pi kit for a friend's kid the other day. He's sixteen and thinking of going into engineering, but he doesn't have any mentors around the house.

Given that I don't know much about his home situation except that he has a game box (which implies a display), I got one of the kits that includes a case, keyboard, mouse, pre-loaded SD card, and all the cables. Given that I'm a bit of a cheapskate, I looked back a couple of generations and got one based on a 3A+ instead of something more modern.

Frankly, however, I don't feel at all sorry for him because I'm flabbergasted by the raw power of this allegedly obsolete embedded board. It has four cores and each one is roughly as powerful as the machine I analyzed my dissertation data on. It has eight times as much RAM. It has only about sixty times the persistent storage because the cheap kit I chose comes with a "small" SD card. It has vastly better networking hardware.

There is only one metric where my old machine might have been better: back then I had spent a lot of my advisor's money on a 10,000 RPM fast-and-wide SCSI-2 drive with a large and smart cache (for the era, natch),1 so I think I had better achieved bandwidth to persistent storage.


1 This was an informed choice: the software I was going to run did a lot of logging and looking things up in disk-based databases as it processed large volumes of input into large volumes of output. It was generally IO-bound on the big shared servers available at the lab. As a result of not being IO-bound, my PII desktop actually processed files faster than the big HPs, even though I had given up about 10% of the available clock speed to get that disk system and still squeak in under budget.

2021-01-12

Having it both ways

My better half and I decided we needed to buy a cardio machine. The only issue was what kind to get. We've both used treadmills, ellipticals, bikes, and rowers of various kinds in the past (and I've used climbers), but the selection of a single best kind of machine is no easy task. To make matters worse, we have slightly different needs and goals. It comes down to my wanting a rower and her wanting a bike.

The good news is that hybrid machines exist. They have always been rare and in the past they've been pretty expensive, but when I started looking around I found that prices are way down. There are two models you can actually get for less than US$800, when the last time I looked for these things they mostly ran $1500-2000.

On the other hand, the market for bike/rower hybrids isn't a big one, and these cheaper models seem to have driven the higher-priced options right out of the market. Unfortunately they are (to judge by specs and reviews) not as nice: adequate but in no way luxurious. I would have spent more to get a nicer machine if the option were open, but given what was actually shipping we got an Avari A150-335 Conversion II.

Assembly was not hugely difficult, but the instructions don't quite match the product actually delivered.1 I haven't had time to really put it through its paces, but it does all the things it says on the tin.

The belt drive is a little noisier and not quite as smooth as some higher-end rowers I've used in the past, but it's tolerable. The magnetic resistance means it is very quiet once you get all the bits in the right place. The range of resistance seems OK (though the fact that there are only eight settings is one of the things lost from the earlier, more expensive options). The rail folds up fairly easily, though you need to make sure the seat is locked in place at least a little way from the console or it gets in the way.

The one oddball complaint I have at this point is that the range of rowing motion at the front is somewhat limited: I have short legs and long arms, and my reach on the recovery stroke comes right up to the cradle that holds the handle when you're not rowing. I don't think this is a show-stopper, but it came as a surprise.


1 This has been happening to me more and more frequently in the last few years. I get the feeling that the products are under continuous development and accumulate a steady stream of small changes to reduce costs and/or improve function, but the documentation doesn't always keep pace. Some manufacturers admit the changes up front with model names/numbers like Widget-Xtra-Special-5000/a15 where that last bit changes every few months. Others keep selling the "same" model so that you can only tell what you really have from the serial number. Sigh.

2021-01-06

Little things make a big difference

Of all the cool features I've read about that are in c++20, the one I find myself wishing for most often is the spaceship operator.

Sure, concepts sound great, and the improvements to constexpr are going to make a big difference, but I write more class comparators than code that will be helped by those things. And comparator code is almost all annoying boilerplate (which is why the spaceship operator is a thing, after all).
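
To make the boilerplate point concrete, here is a minimal sketch with an invented Version struct. Pre-c++20 you write (and maintain) six operators along the lines of the commented-out one; c++20 gets the whole set from a single defaulted spaceship:

#include <compare>

struct Version
{
    int major = 0;
    int minor = 0;
    int patch = 0;

    // Pre-c++20 boilerplate, times six (==, !=, <, <=, >, >=):
    // friend bool operator<(const Version& a, const Version& b)
    // {
    //     if (a.major != b.major) return a.major < b.major;
    //     if (a.minor != b.minor) return a.minor < b.minor;
    //     return a.patch < b.patch;
    // }

    // c++20: one defaulted member gives all six comparisons.
    auto operator<=>(const Version&) const = default;
};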

2021-01-05

As if you needed another reason to hate the preprocessor

C++ inherited from c a compilation model that discards nearly all information about types and symbols. This is a mixed blessing. On one hand it simplifies a number of things and made c portable to a wide variety of machines, including some with minimal resources. On the other hand it means debuggers need added support to do their work and that reflection is not natively supported (in c) or requires special work (as in RTTI in c++). Unfortunately there are some very nice things you can do in the testing domain if you have reflection that are quite difficult without it.

This can be worked around in a number of ways, but most approaches make heavy use of the preprocessor: using function-like macros as well as the built-in location macros (__FILE__, __LINE__, and their more modern counterparts). Indeed, your humble author once wrote a unit-testing framework for c and c++ in a combination of c-preprocessor and GNU make.1 More seriously, Qt's native signals/slots mechanism and test framework make use of a variety of macros (Q_DECLARE_METATYPE, QCOMPARE, and so on).

Enter c++ templates. In particular, templates taking more than one parameter. For example, say I'm using QTest and I want to test a computation that returns a three-element standard array of floats. I write something like QCOMPARE(actualArray, expectedArray);, compile (no problems), and run the tests, which results in a hard crash in the bowels of qmetatype.h.

Oh yeah, I needed to declare the type to the metatype system.2

No problem. Scroll to the top of the file and add Q_DECLARE_METATYPE(std::array<float, 3>);, but that won't even compile (or if you're using Qt Creator the linter will catch it for you). Do you see why?

Q_DECLARE_METATYPE is a macro. Its arguments are parsed as comma-separated plain text. Which means the preprocessor reads that line as having two arguments where only one is expected. Crap.
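
Spelled out (just a sketch of the argument-splitting step, not anything Qt actually reports):

// What I wrote: one type argument.
//     Q_DECLARE_METATYPE(std::array<float, 3>);
// What the preprocessor hands the macro: two arguments, split at the comma.
//     argument 1: std::array<float
//     argument 2: 3>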

To be sure, you can work around this issue. My favorite approach is to use a type alias like

using floatAry3 = std::array<float, 3>;
Q_DECLARE_METATYPE(floatAry3);

Keep in mind, however, that this approach has its own trap: Qt's metatype system won't let you declare the same type twice. The same underlying type, not the same name. Which means you have to settle on a single alias for each such type to avoid inadvertent duplication, but you can't use the underlying type name for any multiple-argument template types.
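
One way to live with that constraint (a sketch only; the header name and the one-alias-per-type convention are mine, not anything Qt requires) is to give the aliases and their registrations a single shared home:

// test_metatypes.h -- the one and only home for multi-argument template aliases.
#pragma once

#include <array>
#include <QMetaType>

using floatAry3 = std::array<float, 3>;
Q_DECLARE_METATYPE(floatAry3);

// Every test file that compares or queues a floatAry3 includes this header,
// so everyone uses the same alias and the type is never declared twice under
// different names.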

Anyway, the point is that you can't mix function-like macros with multi-argument templates without pain.


1 About all that can be said for it is I learned a lot and it worked well enough for me to use on some of my toy projects.

2 Not that you can tell from the crash, or even from the stack trace in the debugger. All the behind-the-scenes magic that goes into QTest hides the origin of the error. I assume that most every QTest user has spent a frustrating hour or so learning this the hard way.

The way I remember it...

Parent:
So it's to be torture?
The Albino:
[nods enthusiastically]
Parent:
I can cope with torture.
The Albino:
[shakes head enthusiastically]
Parent:
Don't believe me?
The Albino:
You survived grad school, so you must be very brave, but no one withstands The Toddler.

And later

Count Rugen:
As you know, the concept of the suction pump is centuries old. Really that's all The Toddler is except that instead of sucking water, she's sucking serenity, sanity and coolness.

2021-01-02

Honestly, I hadn't considered that

When considering becoming a first-time parent in one's forties, it is a good idea to list as many of the ways your life will change as possible and consider how you might feel about each of them.

I did my best when it was my turn, though I did not really appreciate the way the need for extra time to accomplish anything would follow a rule of maximum inconvenience; for instance, she climbs happily into the car when we have time and uses an effectively infinite arsenal of delaying tactics when we're late. Arrgh!

But I didn't consider that this decision meant there would be someone who insists on celebrating my birthdays rather than letting them pass quietly by with only a slight grump to mark the occasion. Sigh.