Lessons from BP Texas City
In 2005, the BP Texas City Refinery explosion killed 15 workers and injured 180 more.
The Chemical Safety Board’s investigation found multiple procedural and human errors, along with control and safety system failures that contributed to the disaster.
Notably, no mechanical failures were identified. Pipes, vessels, and relief valves all held up. The root causes were hidden in electronics and human behavior.
This pattern is common in Oil & Gas (O&G) and petrochemical accidents. Mechanical failures are often easier to detect, even if specialized tools are needed.
By contrast, today’s facilities depend heavily on software, networks, and now artificial intelligence (AI). These virtual systems are invisible and cannot be examined under a microscope.
Forensic analysis in this environment requires expertise in digital controls, safety systems, and the standards that guide their design. Without this examination, any failure analysis is incomplete.
Understanding the Swiss Cheese Model
James Reason’s Swiss Cheese Model explains how complex accidents occur.
Accidents rarely result from a single failure. Instead, they come from multiple weaknesses that align.
Each safeguard in a safety system is like a slice of Swiss cheese. Holes represent weaknesses, failures, or limitations. A hazard passes through when the holes line up across several slices, causing catastrophe.
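To make the model concrete: if each slice’s “holes” are treated as an independent probability of failure on demand, the chance of a hazard passing every slice is the product of those probabilities. The sketch below uses purely illustrative numbers, not figures from any real facility.

```python
# Illustrative Swiss Cheese arithmetic (hypothetical numbers).
# Each value is the probability that a given layer fails on demand.
layer_failure_probs = {
    "basic process control": 1e-1,
    "operator intervention": 1e-1,
    "safety instrumented function": 1e-2,
    "relief valve": 1e-2,
}

# Assuming the layers fail independently, a hazard propagates
# only when every hole lines up: multiply the probabilities.
p_all_holes_align = 1.0
for layer, p in layer_failure_probs.items():
    p_all_holes_align *= p

print(f"P(hazard passes all layers) = {p_all_holes_align:.0e}")  # 1e-06
```

The independence assumption is the weak point of this arithmetic: shared software, shared networks, or a common AI model can correlate the holes across slices, and the comforting multiplication no longer holds.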
Rising Complexity in Control and Safety Systems
Over the last 20–30 years, system complexity has grown exponentially.
Large facilities now track hundreds of thousands of data points. Algorithms manage advanced functions such as multi-variable optimization and AI-driven process control.
These hidden digital layers must be included in failure reconstruction.
Facilities use Layers of Protection Analysis (LOPA) to identify safeguards. Typically, seven to nine layers exist, with at least four dependent on digital control and software. Increasingly, AI is part of these layers.
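A simplified LOPA-style calculation shows how those layers combine. The initiating-event frequency, PFD values, and tolerable-frequency target below are placeholders for illustration, not values from a real study.

```python
# Sketch of a LOPA-style frequency calculation (hypothetical values).
# Mitigated frequency = initiating event frequency x product of the
# PFDs of the independent protection layers (IPLs).

initiating_event_per_year = 0.1  # e.g., loss of level control

# Note how many of these layers are digital or software-dependent.
protection_layers = [
    ("BPCS alarm + operator response", 1e-1),  # software-dependent
    ("safety instrumented function",   1e-2),  # software-dependent
    ("high-level trip (hardwired)",    1e-1),
    ("relief valve",                   1e-2),
]

mitigated = initiating_event_per_year
for name, pfd in protection_layers:
    mitigated *= pfd

tolerable = 1e-5  # example tolerable frequency, per year
print(f"Mitigated event frequency: {mitigated:.1e}/yr "
      f"({'meets' if mitigated <= tolerable else 'exceeds'} "
      f"the {tolerable:.0e}/yr target)")
```

When software or AI sits inside several of those layers at once, the layers stop being independent, and the credited PFDs overstate the real protection.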
Where AI Meets Process Safety
AI’s role in process safety is an emerging frontier. Standards organizations are racing to adapt.
- IEC 61508 and IEC 61511 guide functional safety and safety instrumented systems.
- These standards predate AI, but they require all technologies, including AI, to meet Safety Integrity Level (SIL) requirements.
- Safety software must be deterministic, verifiable, and testable. AI’s “black box” nature makes compliance difficult.
- Technologies must also meet SIL targets for probability of failure on demand (PFD) and dangerous failure rates; AI systems struggle to demonstrate compliance under existing methods (see the sketch below).
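For low-demand safety functions, IEC 61508 ties each SIL to a band of average PFD. The classification itself is simple arithmetic, as this sketch shows; the example PFD value is made up.

```python
# Map a demonstrated average PFD to a SIL band per IEC 61508
# (low-demand mode). The bands are standard; the input is illustrative.
def sil_for_pfd(pfd_avg: float) -> str:
    if 1e-5 <= pfd_avg < 1e-4:
        return "SIL 4"
    if 1e-4 <= pfd_avg < 1e-3:
        return "SIL 3"
    if 1e-3 <= pfd_avg < 1e-2:
        return "SIL 2"
    if 1e-2 <= pfd_avg < 1e-1:
        return "SIL 1"
    return "no SIL claim supportable"

print(sil_for_pfd(3.5e-3))  # SIL 2
```

The hard part for AI is not this lookup; it is producing a defensible PFD figure at all, since the failure modes of a learned model cannot be enumerated the way hardware failure rates can.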
New Standards and Guidance
In 2024, ISO/IEC TR 5469 addressed AI in functional safety systems. It states:
- If AI is used inside a safety function, it must comply with safety rules.
- If it cannot, it must be downgraded to non-safety status, with a conventional safety layer placed on top (a pattern sketched below).
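That second arrangement, with AI demoted to an advisory role beneath a conventional, deterministic safety layer, can be sketched as follows. The setpoint names, limits, and AI stand-in are invented for illustration.

```python
# Sketch of the "conventional safety layer on top" pattern:
# an AI recommendation is accepted only within hard-coded,
# independently verified limits; the trip logic never consults the AI.
TRIP_PRESSURE_BARG = 42.0   # hypothetical hardwired trip setpoint
VALVE_LIMITS = (0.0, 0.8)   # hypothetical verified actuator bounds

def ai_recommended_valve_position(pressure_barg: float) -> float:
    """Stand-in for an AI/optimization layer (non-safety)."""
    return 0.5 + 0.01 * (pressure_barg - 30.0)

def safety_layer(pressure_barg: float) -> float:
    # Deterministic interlock: evaluated first, independent of the AI.
    if pressure_barg >= TRIP_PRESSURE_BARG:
        return 0.0  # trip: drive valve closed regardless of AI output

    # AI output is treated as advice and clamped to verified bounds.
    advice = ai_recommended_valve_position(pressure_barg)
    lo, hi = VALVE_LIMITS
    return min(max(advice, lo), hi)

print(safety_layer(35.0))  # normal operation: bounded AI advice
print(safety_layer(43.0))  # demand: deterministic trip overrides AI
```

The point is structural: the trip path is deterministic and testable, so it can carry the SIL claim, while the AI contributes only within bounds it cannot override.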
Despite this, some professionals continue integrating AI directly into safety functions. Eventually, an accident involving AI in a safety system will occur. Forensic experts must understand AI’s role, capabilities, and limits, along with how standards address its use.
AI as a Moving Slice of Swiss Cheese
When AI becomes a “slice” in the Swiss Cheese Model, analysis grows more complex.
Unlike fixed safeguards, AI’s “holes” shift in size and location.
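A toy simulation illustrates why a moving hole matters: hold the other layers fixed and let one layer’s failure probability wander, and the chance of alignment stops being a single, auditable number. The drift model here is invented purely for illustration.

```python
import random

random.seed(1)

# Fixed layers with constant failure probabilities (hypothetical).
fixed_layers = [1e-1, 1e-2]

# One "moving slice": its failure probability drifts over time,
# standing in for an AI layer whose behavior shifts with inputs,
# retraining, or data drift.
p_moving = 1e-2
for month in range(1, 7):
    p_moving = min(1.0, max(0.0, p_moving * random.uniform(0.5, 2.0)))
    p_align = p_moving
    for p in fixed_layers:
        p_align *= p
    print(f"month {month}: moving-slice PFD={p_moving:.1e}, "
          f"P(alignment)={p_align:.1e}")
```

A fixed safeguard can be proof-tested against a known PFD; a moving slice invalidates that test the moment conditions change.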
Human behavior adds similar challenges. People are unpredictable, not repeatable, and often unaware of why they acted a certain way. Many forget what they did entirely.
AI is designed to mimic human cognition. This means its failures resemble human unpredictability, but with the added complication that AI cannot be questioned or deposed.