Most engineering failures can be attributed to a combination of causes: human (including ethical) failures, design flaws, materials failures, and extreme conditions. One can add pure accidents to the list as well, but with further analysis those accidents can often be attributed, at least in part, to the causes above. In teaching with engineering disasters, I have found it valuable to provide background information and concepts for each area, so I plan to add a page on this blog for each one. That way, when I (or you) find an interesting link or resource that sheds light on one of these causal areas, it will have a place to go. I will also try to include resources of this nature on my website on learning from disaster, so those of you who are educators can make use of them.
Engineering judgement and the role of “experts” in avoiding disaster
In studying cases of engineering failure (and disaster), we often wonder what could have been done to avoid the problem in the first place. Questions often arise, such as “Didn’t anyone notice that …” or “How could this have been designed without anyone realizing that it was a disaster waiting to happen?” As professionals, engineers are often called upon to assess, inspect, certify, or otherwise evaluate complex structures, systems, devices, and software, helping both designers and those charged with operations and maintenance to catch problems before they happen. These engineers are often identified as “experts” in a particular area, and their judgement is highly valued. But this brings up other important considerations. For example, exactly what makes a person an “expert”, and how can engineering judgement best be used to help avoid failure?
In considering these issues from an educational perspective, I recently encountered a very interesting paper which is available online: “Engineering judgement in reliability and safety and its limits: what can we learn from research in psychology” by Lorenzo Strigini of the Centre for Software Reliability at City University, Northampton Square, London. The paper can be downloaded at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.3931&rep=rep1&type=pdf. Although the paper dates from 1996, it still offers an excellent lesson for engineers today, approaching the question from a psychological perspective.
The author considers such issues as the ‘presumption’ of expertise, the role of experience and background knowledge in evaluating reliability, and the value of an informal judgement process versus a structured methodology. The latter is especially important when the “expert” is asked for their judgement in a situation where they encounter a possible fault with which they have little personal experience.
A number of psychological factors which are critically important in considering the nature of engineering failures (and how they managed to occur despite supposedly thorough analysis and inspection) are reviewed in this paper. Overconfidence is identified as a leading contributor to failure, as is the existence of various biases in the judgement of engineers. Examples of overconfidence abound; the space shuttle Challenger disaster, on which much has been written, is perhaps the best known.
An interesting psychological bias described by the author is “hindsight” bias — as in “hindsight is 20/20”. But hindsight can skew our impression of how a failure occurred. In the words of the author:
“When reviewing a sequence of events and decisions which ended in failure, we build a theory that predicts what we already know to have been the final outcome; then, the decisions which preceded it appear to have been wrong: we no longer recognise the dearth of information, or the ambiguity of the information available, at the time decisions were made.”
This is an important concept to keep in mind, especially for those of us using case studies to teach about learning from failure. While the causes of failure may seem obvious in hindsight, we need to put ourselves in the shoes of the engineers and others who designed the system, operated it, or were present when the disaster occurred. Doing so may provide us with new lessons for how to better avoid problems in the future.
The conclusions of the paper describe a number of ways to ensure that engineering judgement and the opinions of experts can best help us avoid failure: for example, using multiple experts (as a check) and using structured methods for analysis and failure prevention. Techniques such as Failure Modes, Effects and Criticality Analysis (FMECA) can be used to help remove some of the biases and overconfidence from the process, and are often taught in engineering design courses; a simplified example is sketched below. I will add some web links to information on these methods and their applications to this blog.
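To make the idea concrete, here is a minimal Python sketch of the kind of structured worksheet such methods produce. It uses a simplified FMEA-style risk priority number (severity × occurrence × detection) to rank failure modes for review; the components, failure modes, and ratings are hypothetical and chosen only for illustration, and a full FMECA would add a formal criticality analysis on top of this.

# A minimal sketch of an FMEA-style worksheet. All entries and 1-10 ratings
# below are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    mode: str
    severity: int    # 1 (negligible) to 10 (catastrophic)
    occurrence: int  # 1 (rare) to 10 (frequent)
    detection: int   # 1 (almost certain to be detected) to 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection

# Hypothetical worksheet entries
worksheet = [
    FailureMode("O-ring seal", "loss of resiliency at low temperature", 10, 4, 7),
    FailureMode("pressure sensor", "drift out of calibration", 6, 5, 3),
    FailureMode("relief valve", "stuck closed", 9, 2, 5),
]

# Rank failure modes so that review effort targets the highest-risk items first,
# rather than relying on an individual expert's informal sense of what matters.
for fm in sorted(worksheet, key=lambda f: f.rpn, reverse=True):
    print(f"{fm.component:15s} {fm.mode:40s} RPN={fm.rpn}")

The value of a worksheet like this is less in the arithmetic than in the discipline it imposes: every component and failure mode must be considered explicitly, which helps counter the overconfidence and bias the paper describes.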
The Gulf as an ecological ‘experiment’
There’s a nice article in the July 21st issue of “New Scientist” on how scientists (and engineers) can study the ecological disaster occurring in the Gulf as a result of the Deepwater Horizon oil spill, and use the data to develop better, more accurate models of the impact of oil on the aquatic environment. Better models, of course, will help engineers design better ways to cope with this sort of ecological disaster.
You can read the article at: http://www.newscientist.com/article/mg20727702.800-gulf-of-mexico-becomes-an-accidental-laboratory.html?page=1
Article on learning from disaster in New York Times (7/20/10)
“Taking Lessons from What Went Wrong” by William J. Broad is in the Science Times section of the NYT today. Especially in light of the Deepwater Horizon/BP oil blowout disaster in the Gulf of Mexico, it is a great time to reflect on how engineers can learn from failure (in order to limit the chances of future disasters!).
The article can be read at: http://www.nytimes.com/2010/07/20/science/20lesson.html?pagewanted=1&_r=1&sq= technology engineering&st=nyt&scp=1
A course in engineering disasters at SBU
A new undergraduate course in engineering disasters has been developed at Stony Brook University. It is offered in the Spring semester, for 3 credits. The bulletin description reads:
ESG 201-H: Learning from Disasters
The role of the engineer is to respond to a need by building or creating something along a certain set of guidelines (or specifications) which performs a given function. Just as importantly, that device, plan or creation should perform its function without fail. Everything, however, does eventually fail and, in some cases, fails with catastrophic results. Through discussion and analysis of engineering disasters from nuclear meltdowns to lost spacecraft to stock market crashes, this course will focus on how modern engineers learn from their mistakes in order to create designs that decrease the chance and severity of failure.
We are planning on offering the course in an online version, possibly in 2011. More will be posted here if that happens.
Learning from Engineering Disasters
I (Dr. Gary Halada) have developed a web site at Stony Brook University on using engineering disasters as a tool for learning about engineering, ethics, and the role of engineers in design. Have a look — it’s at http://www.matscieng.sunysb.edu/disaster/