Author Archives: Robert Marsanyi

From a book* I’m reading, while being interrupted by the events of war in Iran driven by the US administration:

I was beginning to behave like a fatally wounded old animal that charges in all directions, bumps into every obstacle, falls and gets up, more and more furious, more and more weakened, crazed and intoxicated by the smell of its own blood.

* The Possibility of an Island, Michel Houellebecq

A better take on computers and kids

doc.searls.com/2026/04/13/the-kids-take-over-2/

How to empower kids in schools to really use the tools of now: AI, CNC, robotics, programming, all integrated into the curriculum instead of firewalled into a specialty class, starting in Kindergarten, like reading. The “coding” push of the last few years was always a dead end, focused on job skills and quickly superseded by AI. But the approach described here is about empowerment.

Contrast this with https://arstechnica.com/science/2026/04/to-teach-in-the-time-of-chatgpt-is-to-know-pain/, which documents a thoroughly frustrated take on how (higher-level) education is grappling with AI.  What are the institutions missing?

“Trumpism” v plain old authoritarianism

So I’m watching Ezra Klein interviewing Christopher Caldwell of the Claremont Institute about Trumpism.  Caldwell is describing Trumpism as being concerned with small-d democracy, bureaucracy (the Deep State), inequality (the Global Elites, the technocrats). He cites the Iran War as the point at which Trumpists throw up their hands in incredulity, having not expected this at all based on their understanding of Trump.

In science, we have theories about theories, all of which are questionable, but nevertheless … Here’s the most naive one, which most scientists themselves profess.  Scientists craft theories to fit a set of generally acknowledged facts.  There may be more than one competing theory. Theories are evaluated not only on how well they fit the facts, but on how good their predictive powers are: can a theory predict new facts?  Is it conceivable that we could find facts that contradict the theory, to the extent that we could prove it wrong, or at least incomplete?

There are at least two popular competing theories of Trumpism.  One is that described by Caldwell.  Another is that Trump is a classic authoritarian.  Caldwell is saying that people whom he classifies as Trumpists (Megyn Kelly, Joe Rogan, …) didn’t predict, and are surprised by, the Iran War.  On the other hand, theories of authoritarianism explicitly predict that tyrants will start wars of choice for a variety of reasons having to do with their hold on power.

There are useful things to know in Caldwell’s explanation of how and why Trump gained the power that he has gained.  But in some scientific sense, surely authoritarianism is a better explainer for Trump than Caldwell’s tortured description of Trumpism?

Avoiding supporting the community in which you live

https://www.hamiltonnolan.com/p/suicidal-bootlicking-as-a-method

A satisfying riposte to the panic over the millionaire income tax recently passed in WA.  A couple of thoughts:

1. What the article doesn’t describe is why these states act this way. Could it be because the legislatures of these states are captured by the oligarchs for whom they enact legislation, rather than by the regular people who are hurt by the results?  Why does this keep happening?

2. This process also explains why we can’t expect oligarchies to fix the climate, and other global catastrophes.  They will not be overly affected by the crumbling of the climate, because they can mitigate the effects for themselves and their families (perhaps forgoing the odd skiing vacation, but otherwise …).

War is Evil

War is evil. Murder is evil. All the careful parsing that’s going on about Netanyahu’s war of choice and our complicity in it is irrelevant to this axiom. Go and see The Choral, set in a small English town during WWI, if you think otherwise.

Unsafe at any speed

Useful ways of thinking about AI in the world: “AI as a Normal Technology”, Arvind Narayanan & Sayash Kapoor, https://knightcolumbia.org/content/ai-as-normal-technology.

AI risks and appropriate mitigations should follow well-understood patterns developed for existing technologies.  The hyperbolic risks invoked by critics around the imminent arrival of “superintelligence” misunderstand the nature of AI, because they discount the slow adoption and diffusion of the technology by individuals and companies, and consider only the rate at which the invention phase is proceeding.  The authors draw an analogy with electricity, which took 40 years to diffuse into industry and replace the boilers in factory basements. Also cf. Doctorow’s notion that AI isn’t good enough to do your job yet, but it is good enough for your boss to think that it can, and he will fire you anyway.

In proposing appropriate regulation when considering the safety of, e.g., social media publishers, the authors make an interesting analogy with car development, referencing Nader’s book Unsafe at Any Speed.  For much of the early development of cars, safety was considered the responsibility not of the manufacturer but of the driver, and the resulting market failure to improve safety was successfully addressed by regulatory intervention.

The paper mentions many existing forms of risk mitigation that may be adopted from existing technologies.  Notably, like Brin, the authors propose that more AI models are better than trying to limit them with a non-proliferation model: no single point of failure, and the usefulness of multiple AIs in mitigating other AIs’ mistakes.