The problem of AI ethics, and laws about AI

In the late 1990s, the UK Post Office deployed a new point-of-sale computer system, built for it by Fujitsu. Almost immediately, postmasters, who are self-employed and generally small independent retailers, started reporting that it was showing shortfalls of cash; the Post Office reacted by launching prosecutions for theft. Close to a thousand people were convicted over the next 15 years, many more were falsely accused and settled, and there were at least four suicides.

Meanwhile, Fujitsu and the Post Office knew that the system was full of bugs that could cause false shortfalls to appear, yet their staff went to court and testified that the system was working correctly and that theft was the only explanation. This has now, understandably, become a huge scandal.

I think about this case every time I hear about AI Ethics and every time people talk about regulating AI. Fujitsu was not building machine learning or LLMs – this was 1970s technology. But we don’t look at this scandal and say that we need Database Ethics, or that the solution is a SQL regulator. This was an institutional failure inside Fujitsu and inside the Post Office, and a failure of the court system to test the evidence properly. And, to be clear, the failure was not that there were bugs, but the refusal to acknowledge the bugs. Either way, to put it in the language that people now use to worry about AI: a computer, running indeterminate software that was hard to diagnose or understand, made decisions that ruined people’s lives – it decided that money was missing. The staff at the Post Office just went along with those decisions.

We don’t solve this problem with a SQL regulator, and the same point applies when we read that FTX had a spreadsheet with eight different balance…

Read the rest of the article here.