The Incentive Trap: When “Zero Harm” Becomes Less Reporting

The oil and gas industry has spent decades refining its safety metrics, distinguishing between lagging indicators such as Lost Time Injuries (LTI), Medical Treatment Cases (MTC), and First Aid Cases (FAC) that measure what already went wrong, and leading indicators such as UCUA (unsafe condition/unsafe act) cards and near-miss reports that capture what could go wrong. But somewhere along this journey, we created an incentive structure that has transformed “Zero Harm” into “Less Reporting.” Operating companies now compete on decimal points of TRIR and celebrate consecutive days without LTIs, and what was meant to prioritize safety has instead prioritized appearing safe. I’ve witnessed incidents downplayed, reclassified, and buried under bureaucratic semantics. A fatality from chest pain after inhaling chemicals becomes a “non-work-related medical emergency,” because acknowledging it as a workplace incident destroys your target metrics. When your bonus structure, your contract renewals, and your corporate reputation hinge on these numbers, the system doesn’t reward safety, does it?
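
For context, TRIR (Total Recordable Incident Rate) is conventionally calculated by normalizing recordable incidents to 200,000 exposure hours, roughly 100 full-time workers for a year, which is why reclassifying even a single case visibly moves the headline number. The figures in the short sketch below are purely hypothetical.

```python
def trir(recordable_incidents: int, hours_worked: float) -> float:
    """Total Recordable Incident Rate, normalized to 200,000 hours
    (about 100 full-time workers for one year, per OSHA convention)."""
    return recordable_incidents * 200_000 / hours_worked

# Purely hypothetical figures: a contractor with ~2,500 workers on the books.
hours = 2_500 * 2_000                # roughly 5 million exposure hours in a year
print(trir(12, hours))               # 12 recordables            -> TRIR 0.48
print(trir(11, hours))               # reclassify just one away  -> TRIR 0.44
```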

The Middle Management Filter: When Leading Indicators Get Buried to Avoid Tough Conversations

This politicization extends beyond just the lagging indicators. On one vessel I’m familiar with, there were significant corrosion problems that should have triggered immediate action. Yet these issues never appeared in the daily UCUA cards that were supposed to capture such concerns. Middle management filtered them out, insisting these problems would be “managed differently” through other channels. The leading indicators that should have warned us were suppressed before they ever reached decision-makers, not because workers weren’t seeing the problems, but because reporting them created uncomfortable conversations about maintenance backlogs, budget constraints, and accountability. This is where the entire safety metric system breaks down—when the very data meant to prevent incidents gets filtered through political considerations about how things will look on reports, in meetings, and in annual reviews.

The AI Delusion: You Cannot Trend What Isn’t Reported

Now enter artificial intelligence, sold as the solution that will extract hidden patterns and predict incidents before they happen. Here’s the uncomfortable truth that senior HSE professionals already know but vendors won’t tell you: AI cannot extract meaningful trends from nonexistent data. If your UCUA cards are filtered by middle-management politics and your LTI data is massaged by incentive structures, feeding this into an algorithm will only teach it to predict which issues management wants to hide. No machine learning model can unearth secrets that experienced HSE professionals can’t see, because the fundamental problem isn’t analytical sophistication; it’s data integrity. I’ve sat in meetings where we discussed AI trending tools with senior HSE personnel, and the conclusion was clear: you cannot trend what isn’t honestly reported. AI is not a truth serum for organizational dysfunction, and pretending it can compensate for corrupted leading and lagging indicators is dangerous magical thinking.
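
To make that concrete with invented numbers: if the underlying hazard rate is rising while the share of events that actually gets written up keeps falling, the reported series trends downward, and any model fitted to those reports will faithfully learn the false improvement. A minimal sketch, assuming nothing more than a hypothetical near-miss series:

```python
# Minimal, purely hypothetical illustration: a rising true hazard rate combined
# with eroding willingness to report produces a reported trend that points the
# wrong way -- and the reports are all a trending model ever gets to see.
import numpy as np

months = np.arange(24)
true_rate = 20 + 0.5 * months                # near-misses actually occurring
reporting_rate = np.linspace(0.9, 0.3, 24)   # share of them that gets written up
reported = true_rate * reporting_rate        # what lands in the UCUA database

print(f"true trend:     {np.polyfit(months, true_rate, 1)[0]:+.2f} per month")
print(f"reported trend: {np.polyfit(months, reported, 1)[0]:+.2f} per month")
```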

The Cultural Shift: From Metrics to Psychological Safety

Fortunately, progressive companies are beginning to recognize this fundamental flaw and are shifting toward psychological safety: the belief that workers can speak up about safety issues without fear of punishment or humiliation. Forward-thinking oil and gas firms have started aligning with ISO 45003 and similar guidelines on psychological health and safety at work, pushing beyond compliance-driven approaches to foster a safety culture where workers feel empowered to report hazards honestly. Rather than incentivizing low incident rates through bonuses and contracts, these companies are creating environments where speaking up is celebrated rather than punished, where stopping work for safety concerns carries no career consequences, and where the goal shifts from looking safe on paper to actually being safe in practice. This cultural transformation acknowledges what the research has consistently shown: you cannot improve what you cannot measure honestly, and you cannot measure honestly when the measurement system itself punishes truth-telling. The industry is slowly learning that psychological safety isn’t soft management theory; it is the foundation on which accurate data collection, genuine learning from incidents, and meaningful safety improvements are built.

Practical AI

What AI can actually do, when deployed practically rather than as political cover, is fundamentally different. It can cross-reference your organization’s incident history with industry-wide databases to surface relevant lessons learned. It can review work packages and PTW (permit-to-work) submissions against historical incidents and flag: “Three companies had fatalities doing something similar to this; here’s what killed people, and here’s what you might be missing.” It can help incident investigation teams systematically analyze root causes and track whether corrective actions actually get implemented. This is practical AI supporting professionals with lessons learned, incident management, and permit-to-work processes, not predictive analytics pretending to forecast incidents from data that has been politically sanitized. Fix your incentive structures first, create an environment where people can report honestly without career consequences, and then AI becomes a powerful tool for connecting your workforce with relevant safety intelligence. Until then, you’re just teaching algorithms to be complicit in the same theater that’s making your operations less safe while your metrics look better.
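
As a rough illustration of the lookup side of this, the sketch below matches a new permit-to-work description against a small incident-history corpus using plain TF-IDF similarity. Everything here is invented for the example: the incident texts, the threshold, and the simplification that a handful of keywords is enough. A real system would need a vetted incident database, stronger text matching, and a human reviewer deciding what is actually relevant.

```python
# Minimal sketch (hypothetical data and threshold): surface historical incidents
# whose descriptions resemble a new permit-to-work, so reviewers see relevant
# lessons learned before the job starts. This is retrieval, not prediction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

historical_incidents = [
    "Fatality during confined space entry in ballast tank, atmosphere not retested after break",
    "Dropped object during crane lift over live process area, exclusion zone not enforced",
    "H2S exposure while breaking containment on sour service line, incorrect isolation",
]

new_permit = "Confined space entry into cargo tank for corrosion inspection with gas testing"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(historical_incidents + [new_permit])

# Last row is the new permit; compare it against every historical record.
scores = cosine_similarity(matrix[-1:], matrix[:-1]).ravel()

for incident, score in sorted(zip(historical_incidents, scores), key=lambda x: -x[1]):
    if score > 0.1:  # arbitrary illustrative threshold
        print(f"{score:.2f}  {incident}")
```

Even something this crude surfaces the confined-space fatality as the closest match to the new permit, which is the point: relevance ranking for human reviewers, not forecasting.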