“Trumpism” v plain old authoritarianism

So I’m watching Ezra Klein interviewing Christopher Caldwell of the Claremont Institute about Trumpism.  Caldwell describes Trumpism as being concerned with small-d democracy, bureaucracy (the Deep State), and inequality (the Global Elites, the technocrats). He cites the Iran War as the point at which Trumpists threw up their hands in incredulity, having not expected this at all based on their understanding of Trump.

In science, we have theories about theories, all of which are questionable, but nevertheless … Scientists craft theories to fit a set of generally acknowledged facts.  There may be more than one competing theory. Theories are evaluated not only on how well they fit the facts, but on how good their predictive powers are: can a theory predict new facts?  Is it conceivable that we can find facts that contradict the theory to the extent that we can prove it’s wrong, or at least incomplete?

There are at least two popular competing theories of Trumpism.  One is that described by Caldwell.  Another is that Trump is a classic authoritarian.  Caldwell is saying that people whom he classifies as Trumpists (Megyn Kelly, Joe Rogan, …) didn’t predict, and are surprised by, the Iran War.  On the other hand, theories of authoritarianism explicitly predict that tyrants will start wars of choice for a variety of reasons having to do with their hold on power.

There are useful things to know in Caldwell’s explanation of how and why Trump gained the power that he has gained.  But in some scientific sense, surely authoritarianism is a better explainer for Trump than Caldwell’s tortured description of Trumpism?

Avoiding supporting the community in which you live

https://www.hamiltonnolan.com/p/suicidal-bootlicking-as-a-method

A satisfying riposte to the panic over the millionaire income tax recently passed in WA.  A couple of thoughts:

1. What the article doesn’t describe is why these states act this way. Could it be that the legislatures of these states are captured by the oligarchs for whom they enact legislation, rather than by the regular people who are hurt by the results?  How does this happen?

2. This process also describes why we can’t expect oligarchies to fix the climate and other global catastrophes.  They will not be overly affected by the crumbling of the climate, because they can mitigate the effects for themselves and their families (perhaps they’d have to forgo the odd skiing vacation, but otherwise …).

War is Evil

War is evil. Murder is evil. All the careful parsing that’s going on about Netanyahu’s war of choice and our complicity in it is irrelevant to this axiom. Go and see The Choral, set in a small English town during WWI, if you think otherwise.

Unsafe at any speed

Useful ways of thinking about AI in the world: “AI as a Normal Technology”, Arvind Narayanan & Sayash Kapoor, https://knightcolumbia.org/content/ai-as-normal-technology.

AI risks and appropriate mitigations should follow well-understood patterns developed for existing technology.  The hyperbolic risks invoked by critics around the imminent arrival of “super intelligence” misunderstand the nature of AI, because they discount the slow adoption and diffusion of the technology by individuals and companies, and consider only the rate at which the invention phase is proceeding.  The authors draw an analogy with electricity, which took some 40 years to diffuse into industry and replace the boilers in factory basements. Also cf. Doctorow’s notion that AI isn’t good enough to do your job yet, but it is good enough for your boss to think that it can, and he will fire you anyway.

In proposing appropriate regulation when considering the safety of, e.g., social-media publishers, the authors make an interesting analogy with car development, referencing Nader’s book Unsafe at Any Speed: for much of the early development of cars, safety was considered the responsibility of the driver rather than the manufacturer, and the resulting market failure to improve safety was successfully addressed by regulatory intervention.

The paper mentions many existing forms of risk mitigation that may be adopted from existing technology.  Notably, like Brin, the authors propose that more AI models are better than trying to limit them under a non-proliferation model: no single point of failure, and many AIs are useful in mitigating other AIs’ mistakes.

Email, spam and AI

David Brin posits that the way to safety with generative AI is competition between AI engines.  I’ve noticed that the amount of AI-generated spam in my Inbox has increased geometrically in the last month, and spam filters aren’t keeping up.  Is anyone using AI for spam filters?  Are they just being outmatched?

Update: well, of course they are, just (maybe) not with my email provider. Tempted to run a filter locally.
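If I did run a filter locally, the classic starting point is a naive Bayes classifier over message text. This is a toy sketch, not tied to any real mail provider or dataset — the training messages and labels below are made up for illustration; a real setup would feed it messages pulled from the inbox (e.g. via IMAP) with hand-applied labels.

```python
import math
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens; a real filter would also use headers, URLs, etc."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    """Minimal two-class (spam/ham) naive Bayes text classifier."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.docs = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.docs[label] += 1
        self.counts[label].update(tokens(text))

    def classify(self, text):
        total_docs = sum(self.docs.values())
        vocab = len(self.counts["spam"] | self.counts["ham"])
        best, best_score = None, -math.inf
        for label in ("spam", "ham"):
            n = sum(self.counts[label].values())
            # log prior + sum of log likelihoods
            score = math.log(self.docs[label] / total_docs)
            for w in tokens(text):
                # Laplace smoothing so unseen words don't zero out the score
                score += math.log((self.counts[label][w] + 1) / (n + vocab))
            if score > best_score:
                best, best_score = label, score
        return best

# Hypothetical hand-labelled training messages
nb = NaiveBayes()
nb.train("win a free prize now, click here", "spam")
nb.train("limited offer, free money, act now", "spam")
nb.train("lunch tomorrow? see you at noon", "ham")
nb.train("meeting notes attached, see agenda", "ham")

print(nb.classify("claim your free prize now"))  # → spam
```

Real spam is adversarial (and, per the section above, increasingly AI-generated), so word counts alone won't keep up for long — but even a filter this simple catches the obvious stuff, and it runs entirely on your own machine.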

A call in the wilderness

To the developers at the New York Times: your app has become completely unusable on my iPad. It’s slow, then stops completely as I move through the news.  Your website, otoh, is still workable but slow.  FYI, the Guardian site works great.

I went to your support pages to see if I could report this, but there’s only a chat bot there, and some FAQs. So I’m posting here in the event your AI sweep tools harvest my complaint 🙂