Very clear article.
The New York Times show “The Weekly” did a story on the influence of YouTube in Brazil. While its conclusions are alarming and seemingly valid, they’re not the whole story of how a fascist government took over the country. Missing is an analysis of how the right-wing establishment decided that characters such as Bolsonaro were useful idiots they could manipulate to implement their own policies for their own benefit, and especially of the plot to jail left-wing leaders as scapegoats for the country’s corruption.
The discussion of the shortcomings of YouTube’s recommendation algorithms reminds me of Microsoft’s experiment with an automated chat bot, which quickly degenerated into hate speech. It’s clear that we have not yet incorporated basic ethics into our algorithms the way we have integrated the profit motive. Computer scientists need to be speaking with ethicists.
It’s important to remember that there are people behind these decisions, and those people need to take responsibility.
A: Tell him your plans.
Short of banning guns except for law enforcement and the military:
Guns may not be owned by private individuals. Guns may be purchased and registered by gun clubs which are licensed, audited associations of people who wish to use them. Guns may only be loaded and fired on the premises of the gun club. Membership lists of clubs are publicly available. In the event that a crime is carried out using a gun, all members of the club to which the gun is registered are criminally liable.
Peter Calthorpe is one of the guys involved with new urbanism from back in the 20th century. He recently did a TED Talk outlining his vision for how cities will be developed and redeveloped to accommodate 3 billion more people, and it’s pretty inspiring.
This is the kind of thing I can see spending a bunch of time working on.
Listening to Episode 1 of “Bellwether”, a podcast by Sam Greenspan (https://www.kickstarter.com/projects/bellwether/b-e-l-l-w-e-t-h-e-r-a-podcast-of-speculative-journalism) investigating the first pedestrian death from a collision with a self-driving car. In this case, it looks as though the car, having determined that a collision was imminent in the next few seconds, relinquished control to the driver without alerting her, expecting her to realize what was happening, figure out what to do, and apply emergency braking and swerve, all within four seconds or so.
He introduces the idea of a “moral crumple zone”. A traditional crumple zone, of course, is the part of the vehicle that is designed to deform in a crash to protect the occupants. The analogy is that, in this case, the designated part to fail is the human driver, and the entity being protected is the company that built the car. Delegating authority to the human at the last moment relieves the company of moral responsibility for the crash.
I’m realizing that this principle of deflecting responsibility away from the company has become pervasive in systems design. There’s a lot of engineering devoted to making sure the company can’t be held responsible for any malfunction, at the expense of the end user; DRM comes to mind.
This reminds me of Asimov’s Three Laws of Robotics from the 1950s, the first of which states:
“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
I’m realizing we are really at the point in systems design where the Three Laws need to be built in as system requirements. The fact that they’re optional is a design flaw.