
Designing out the idiot. Many years ago I was given the job of creating a system to collect information from staff working out in the fields (literally, they did work in fields). My boss at the time instructed me to create something that ‘designed out the idiot’: basically, something that nobody could get wrong, accidentally or deliberately.
Being enthusiastic and eager to please, I approached the task with ‘ruthless efficiency’, another of the boss’s favourite sayings. Unfortunately, there was only one real idiot in this story: me!
My colleagues who worked out in the fields were highly experienced and skilful individuals, committed to their jobs. Designing a ‘system’ that essentially treated these people like idiots was never going to work, and was ultimately a deeply disrespectful thing to do.
Fortunately, my actions were largely harmless, as is the case with much of my administrative effort, I suspect. However, I did get reprimanded for my ‘ruthless efficiency’. That happened 25 years later, when I met one of the subjects of my system at a retirement party. Twenty-five years on, and he was still irritated!
Perfect Systems. The title of this post quotes from the 1934 T.S. Eliot poem, Choruses from “The Rock”. You’ll find it in Chorus Six, which has been interpreted as being about why the modern world doesn’t understand its past and tries to deny it. In the link above I do like the comment that the poem “captures the struggling, empty soul of modern work life”… and that was 1934! Anyway, here is the quote:
“They constantly try to escape
From the darkness outside and within
By dreaming of systems so perfect that no one will need to be good.”
I haven’t dabbled in poetry appreciation since I was in school, and I’m not about to start again. However, those few lines from Choruses from “The Rock” are useful for interpreting what I was trying to achieve with my ‘design out the idiot’ endeavours. In my naive way I was aiming for a ‘perfect system’ that removed the need for anyone involved to be consciously ‘good’. The ‘right’ thing would happen, without anyone having to think about it.
Unfortunately, I suspect my ‘designing out the idiot’ mindset biased the process, to the point where my system was a heavy-handed tool of checking and compliance, rather than something helpful for the people out in the field as well as those of us sitting in the warm and comfortable head office.
Deviant Librarians and Perfect Systems. Choruses from “The Rock” was written in 1934 but seems to have a great deal of relevance to current predictions that the world of work is moving rapidly towards large-scale automation and ‘perfect systems’. Last week this intrigued me: ‘Automated book-culling software drives librarians to create fake patrons to ‘check-out’ endangered titles’
Basically it is the story of two librarians at East Lake County Library in Central Florida who engaged in deviant behaviour by creating a fictitious person, Chuck Finley. Fictitious Chuck’s purpose was to borrow books so that they would appear to be ‘popular’ on the computerised system, and so avoid being ‘culled’ by the software algorithm that decided which books would sit on the library shelves (a perfect, human-free system). The librarians knew from their experience, relationships and understanding of their service users that some of the books at risk would come back into fashion, and they used ‘Chuck’ to help keep them on the shelves.
Unfortunately, their deviant behaviour was ‘rumbled’. They did what they did for the right reasons. However, creating a fake identity probably isn’t something that public servants should indulge in outside of a James Bond movie.
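To see why ‘Chuck’ was so effective, here is a minimal, purely hypothetical sketch of the kind of popularity rule such culling software might apply. I have no knowledge of the actual East Lake system; the threshold, time window and names below are invented for illustration only. The point is simply that a rule which only counts checkouts has no way of telling a real borrower from a fictitious one.

```python
# Purely illustrative sketch of a naive popularity-based culling rule.
# The real East Lake software is not public; the threshold, window and
# names below are invented assumptions, not the actual system.

from datetime import date, timedelta

MIN_RECENT_CHECKOUTS = 2              # assumed threshold
RECENT_WINDOW = timedelta(days=365)   # assumed "recent" window


def cull_candidates(catalogue, checkouts, today=None):
    """Return titles with too few recent checkouts.

    catalogue: iterable of title strings
    checkouts: dict mapping title -> list of datetime.date checkout dates
    """
    today = today or date.today()
    cutoff = today - RECENT_WINDOW
    culled = []
    for title in catalogue:
        recent = [d for d in checkouts.get(title, []) if d >= cutoff]
        if len(recent) < MIN_RECENT_CHECKOUTS:
            culled.append(title)
    return culled


# A fictitious 'Chuck Finley' only needs to borrow a title a couple of
# times a year for it to drop off the cull list: the rule counts
# checkouts and cannot tell a real borrower from an invented one.
```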
I don’t know much about how the East Lake Library book-culling software and algorithm was designed. Perhaps there was plenty of ‘user experience’ testing to understand the point of view of highly experienced librarians? Hopefully the coders and designers spent time in the libraries, observing real life and speaking with the librarians and the borrowers to understand the needs, wants and behaviours of both groups? There may even have been an extensive process of piloting and refining the system to reflect feedback from real-world experience?
Whatever approach they used, I do hope it wasn’t anywhere close to the ‘design out the idiot’ brief I was given over 25 years ago. With the growing push to automate so many systems, there is a lot at stake. Systems that focus initially on supporting humans might be a better option than jumping straight to ‘perfect systems where nobody need be good’.
Artificial Intelligence or Artificial Stupidity? One final thing to ponder. Whilst the story of the deviant librarians was getting an airing last week, Matt Wyatt (@ComplexWales) made the following comment to Ollie Minton (@drol007) (both good Twitter buddies), “Just remember, way before Artificial Intelligence turns up we’ll spend a long time living with Artificial Stupidity!”
- Automated systems can be, and have proven to be, a great help in everyday human activities.
- The mindset that influences the design of a system is crucial. I’d suggest not adopting a ‘design out the idiot’ view of people.
- Think about augmentation (complementing what already exists) rather than complete replacement. To quote Matt, you might spend too much time with Artificial Stupidity before you get to Artificial Intelligence, so just be aware of the consequences.
You may like this TED Talk on Security and the Internet of Things (https://www.ted.com/talks/avi_rubin_all_your_devices_can_be_hacked). It’s particularly interesting as part of the TED Radio Hour on Networks (http://www.npr.org/programs/ted-radio-hour/), as it follows a piece on driverless cars. Driverless cars reduce the human margin for error, but the consequence of hacking all of a sudden becomes huge. Driverless cars are “designing out the idiot” to create a system that’s free from error, but the immediate future seems to be following your point about augmentation via Advanced Driver Assistance Systems (https://en.wikipedia.org/wiki/Advanced_driver_assistance_systems). That’s until Big Brother takes over and gives me a lift home every day!
Great post Chris!
Dyfrig
I’ve been working with some quantum computing boffins, busy committing their lives to stuff nobody cares about, yet. A rather beautiful quote from one of them, who shares our Celtic roots, that both explains why I’m working with them and allays any fears you may have about AI:
“The most advanced thinking machine we have at the moment can comprehend the world, just about as well as a dog. Not a Border Collie mind you, more like a Pekingese, with hair over its eyes and no sense of smell, or personal hygiene. We’ve a long way to go! (Sighs)”
Ah thank you, that is reassuring.
We’ve a dog that lodges with us that fits in a similar category to the Pekingese, a Pug.
Not as hairy, but well up the scale on smelliness.
One additional worrying characteristic is an affinity for computer keyboards and anything electronic.
It is responsible for more lost work and dubious emails than you’d imagine (well, that’s my excuse).
Unlike the cat, which writes excellent emails when it can be bothered.
On that comparison, where do your computer boffins place AI in relation to cats?
Anything approaching cat intelligence would be a huge leap forward.
Thanks for the comment. I’ll send you a pic of the pug on Twitter.
Chris