Useful ways of thinking about AI in the world: “AI as a Normal Technology”, Arvind Narayanan & Sayash Kapoor, https://knightcolumbia.org/content/ai-as-normal-technology.
AI risks and appropriate mitigations should follow well-understood patterns developed for existing technologies. The hyperbolic risks invoked by critics around the imminent arrival of “superintelligence” misunderstand the nature of AI: they discount the slow adoption and diffusion of technology by individuals and companies, and consider only the rate at which the invention phase is proceeding. Analogy with electricity, which took roughly 40 years to diffuse into industry and replace the boilers in factory basements. Also cf. Doctorow’s observation that AI isn’t good enough to do your job yet, but it is good enough for your boss to think it can, so he will fire you anyway.
In proposing appropriate regulation when considering the safety of, e.g., social media publishers, the authors make an interesting analogy with car development, referencing Nader’s book “Unsafe at Any Speed”: for much of the early development of cars, safety was considered the responsibility of the driver rather than the manufacturer, and the resulting market failure to improve safety was successfully addressed by regulatory intervention.
The paper mentions many existing forms of risk mitigation that may be adapted from existing technology. Notably, like Brin, the authors argue that more AI models are better than trying to limit them via a non-proliferation model: no single point of failure, and many AIs are useful for mitigating other AIs’ mistakes.