A recent article from CNN about the major flooding disaster associated with Hurricane/Tropical Storm Harvey has a great discussion of how poor city planning, lack of appropriate zoning, and other factors made the city more prone to disastrous flooding (even after changes made following other flooding incidents in recent years). It is crucial for engineers and urban planners to learn from these events and take well-considered and appropriate action to mitigate the impact of future major storms. This is especially true for urban areas dealing with rapid population growth and infrastructure development.
The recent failure (July 26, 2017) of the Fireball amusement park ride, manufactured by KMG, was the result of excessive (and apparently undetected) corrosion.
For more information, including some photos of the failed joint and some interesting comments from workers, see this post.
Corrosion in all its forms is behind many failures, and is especially dangerous where inspection is difficult (for example, between assembled parts or hidden behind cover plates, as in the case of the I-95 Mianus River bridge collapse in Connecticut in 1983). Stress corrosion cracking and corrosion-assisted fatigue have also been responsible for many material failures.
The online course — ESG 201: Learning from Disaster — will be taught this semester (Spring 2017). Videos have been created with original content, interviews and laboratory analysis of the Titanic, the Hindenburg, and Long Island train disasters, including the Great Long Island Pickle Wreck (1926).
Please contact email@example.com for more information.
Starting in Fall 2016, the Stony Brook University course ESG 201: Learning from Engineering Disaster will be taught in a fully online format. The course will still fulfill the STAS requirement for the Stony Brook Curriculum. It will also be useful for all students who wish to learn about the role of engineers in analyzing and, hopefully, reducing the likelihood of engineering disasters. More details will be posted here shortly.
The Engineering for Change site has described some interesting new technologies to help support disaster preparedness and relief — from apps and web tools to exoskeletons and collapsible cell phone towers. Please have a look at their excellent post at: https://www.engineeringforchange.org/news/2014/10/24/the_next_generation_of_technology_for_disaster_preparedness_and_relief.html
After almost 2 years, I am reviving my Stony Brook University blog on how engineers learn from engineering failures and disasters, how both theory and case studies involving such failures can be used to enhance undergraduate courses and curricula, and related issues. We will also discuss topics such as risk, complexity and failure analysis, and relate these to some of today’s emerging technologies as they respond to society’s growing needs for energy, environmental protection and human health.
So not everything discussed here will be a disaster – our real focus is on developing solutions in an increasingly complex engineered landscape.
Please send your comments and thoughts, and please have a look at the older posts on this site.
– Gary Halada
Associate Professor, Department of Materials Science and Engineering, Stony Brook University, NY
With the increasing complexity of engineered systems (and their interactions with the environment in which they operate, not to mention the organizational and human factors which impact their operation), concepts for improving reliability are increasingly important. Designing for reliability also requires an understanding of the nature of complexity itself. Klaus Mainzer, in “Thinking in Complexity: The Complex Dynamics of Matter, Mind and Mankind” (a book I strongly recommend), defines complexity in terms of the resulting non-linear behavior of complex systems. He explains the non-linear dynamics of complex systems with fascinating examples, from the evolution of life and the emergence of intelligence to complexity in cultural and economic systems. I find his thoughts fit very well with concepts of complexity in engineered systems, and especially with how they fail.
Failure in complex systems often comes about due to a non-linear response to a load or an input (whether the input is something expected during normal operation or is due to an external event, such as a weather phenomenon or an accident). Engineers study how these non-linear responses happen, and how techniques for robust design of systems or incorporation of sensors and automated response systems can detect and correct a process or mechanism “going off the rails” before disaster can strike. In many cases, the non-linearity is due to an unseen or unintended interaction between components or processes. A relatively small loss in elasticity in an o-ring due to cold weather can lead to a rapid escape of burning gases which in turn leads to a catastrophic failure of a space shuttle, for example. I feel that failure is, in a sense, a way of recognizing the true complexity of a system. Of course, it would be far better to understand the complexity, the accompanying interactions, and the potential for non-linear response in an engineered system before a failure occurs.
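The o-ring example can be sketched as a toy model: a sealing-effectiveness function that degrades gently and roughly linearly with falling temperature above a critical threshold, but collapses non-linearly below it. To be clear, the function, the critical temperature, and all the numbers below are invented for illustration only; they are not data from any actual failure analysis.

```python
# Toy model (hypothetical parameters): a small, smooth change in an input
# can produce a disproportionate, non-linear change in system response
# once a threshold is crossed.

def seal_effectiveness(temp_c, critical_temp=5.0):
    """Fraction of sealing capability retained (1.0 = full seal).

    Above critical_temp, the material degrades gently and linearly
    with cold; below it, elasticity (and thus sealing) collapses
    rapidly -- a non-linear response to a linear change in input.
    """
    if temp_c >= critical_temp:
        # gentle, linear degradation as temperature falls toward the threshold
        return min(1.0, 0.9 + 0.005 * (temp_c - critical_temp))
    # non-linear collapse below the threshold
    deficit = critical_temp - temp_c
    return max(0.0, 0.9 / (1.0 + deficit ** 2))

# A 10-degree drop from 15 C to 5 C barely changes the seal;
# the same 10-degree drop from 5 C to -5 C nearly destroys it.
for t in [25, 15, 5, 2, 0, -5]:
    print(f"{t:>4} C -> effectiveness {seal_effectiveness(t):.2f}")
```

The point of the sketch is that a reliability analysis which extrapolates linearly from behavior in the normal operating range will badly underestimate risk near the threshold, which is exactly the kind of hidden non-linearity discussed above.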
I am a co-author on two articles appearing in Mechanical Engineering, the magazine of ASME, which address complexity and failure. You can find the first at: http://memagazine.asme.org/Articles/2011/December/Complexity_Consequence.cfm
The second should be appearing in the March issue.
Both will help to explain some of the issues which make reliability of complex systems both a critical and difficult goal for engineers.
In reading this article about a study justifying the closing of airports (and grounding of flights) in Europe last year after the eruption of a volcano in Iceland (to avoid problems due to the large amounts of particulate material dispersed into the atmosphere), I am reminded of the arguments concerning the large amounts of money invested in avoiding possible problems due to the “Y2K” issue in the late 1990s. The airport closings cost the airline industry several billion dollars, but resulted in no loss of aircraft due to the cloud from the volcanic eruption. Would any planes have crashed had this not been done? Who knows — but if past disasters have taught us anything, it is that we must be prepared to act based on the best possible knowledge of the impact of extreme conditions (or known faults, in the case of Y2K) before a failure occurs. If engineers (and policy makers) are successful, failure will be avoided. But that will always lead to arguments over whether the investment was worth it.
This also reminds me of arguments over investment in preventative medical care. One can never tell what the outcome would have been had these precautions not been taken. Yet, for financial and other reasons — including any possible negative impact of the remedy or precautions — decisions must be made based upon peer-reviewed scientific evidence, collected past experience, and the use of comprehensive computational calculations, modeling and simulations. This is an expensive prescription, but perhaps the best way to avoid some disasters.
The U.S. Department of Energy has published a number of reports related to the BP/Deepwater Horizon oil spill. These make great background information for educators, scientists and engineers, and anyone who would like to know more about this oil spill disaster. The website is: http://www.energy.gov/open/oilspilldata.htm
While some of the information may be a bit too detailed for general consumption (such as the “Flange Connector Spool Assembly Report”), the general timeline of the failure, details on the design of the blowout preventer (which failed) and subsequent technologies to contain the oil flow, as well as the footage of the oil flowing from the broken pipe, are fascinating.