25 May 2022

For the last three years the emphasis in my career has been safety. I was first exposed to the community of safety researchers and practitioners at Strange Loop in 2017 and now I’m coming a bit full circle. 

One of the important takeaways I have from deep diving into this field is that there's a lot of research done outside computer science that addresses challenges current software engineers struggle with, some of it done decades before these problems appeared in software. I've started to feel like I'm watching a bad disaster movie in which very smart people make reasonable but dangerous decisions without realizing it. I keep thinking "…wait, don't you know what Rasmussen said about this?" But of course, how would they know? How would they even know to look for such research?

The traditional concept of a computer science degree focuses on computation, because when CS started as a field the people who programmed computers were also the people who built them, designed their operating systems, or contributed to compilers. All work where a thorough understanding of math and engineering is essential.

But these days software people who do anything like that are in the minority. The bulk of software work today is about integrating computation into human-driven tasks, predicting and anticipating how people think, what they need, and how they react to new communication and work methods. Despite much chatter about the need for CS programs to better prepare students to work as software engineers, the gap between what professionals need to know in order to build successful systems and what schools teach is widening, not shrinking.

If the point of a CS education is to prepare people to work in software (and that assertion is debatable!) then a good computer science education needs to give people enough exposure to the domains that study how people think and behave and organize for them to know how to occasionally break down the silos.

We need to think of computer science as part of the liberal arts, in other words. This doesn't mean giving up the math and engineering bits; it's more a change in perspective. People associate the phrase "liberal arts" with college degrees that do not map to specific trades or clear career paths (and are therefore "worthless"), but liberal arts is about critical thinking, logic, picking apart complex problems, and ethics.

Here is a short, incomplete, and unordered list of research done in the social sciences that is directly relevant to problems we struggle with when building computer systems:

Jens Rasmussen (ergonomics): system incentives always push systems toward critical failure.

Helen Nissenbaum (ethics): increased computerization comes at the cost of accountability, as harm is split across systems that have different owners.

Lisanne Bainbridge (psychology): the expertise human operators need to supervise automation comes from operating the system, which the automation takes away from them.

Hal Arkes (psychology): experts are more likely to ignore decision rules in favor of probabilities, making them more error prone than amateurs.

Charles Perrow (sociology): systems that are tightly coupled and complex will experience uncontrolled, cascading failures so often that it becomes normal.

Sidney Dekker (ergonomics): organizations need psychological safety in order to resolve incidents quickly.

Garry Kasparov (chess): good process can beat both experts and supercomputers.

Karen Yeung (law): algorithms that optimize for personalization compromise the principle of the rule of law over time.

Berkeley J. Dietvorst (marketing science): modifiable algorithms overcome algorithm aversion and resistance better than improved accuracy.
