Signed RAMS and an issued permit to work (PTW) are often treated as proof that risk is controlled. In practice, they’re proof that a decision was recorded at a moment in time — not that conditions stayed controlled once the job moved.
That gap persists because, under programme pressure, many RAMS/PTW processes are run to keep work moving. Success is measured in turnaround, signatures and an audit trail — not in whether conditions stay as authorised at the point of work.
The drift is structural. RAMS are typically authored away from the work, optimised for sign-off, then applied to live conditions that inevitably shift. When the job changes, organisations often have no fast, usable mechanism to capture and revalidate that change at the point of work. Adaptation happens informally, and escalation carries friction, delay, and politics.
This blog explains how the incentives, authority gradients, and supervision choices that make up normal site management systematically undermine the controls they are supposed to support. It also explains what it takes to design verification that holds under those conditions rather than depending on individuals to resist them.
Most management systems treat PTW and RAMS as gates: completed, signed, filed. The Construction (Design and Management) Regulations 2015 (CDM 2015, SI 2015/51) require something different.
The duties to plan, manage, monitor, and review extend through the work as conditions evolve, not only up to the moment of sign-off (HSE, 2015). The Health and Safety at Work etc. Act 1974 (HSWA 1974) sets the same expectation at the foundation: the duty to protect health and safety so far as is reasonably practicable is continuous.
If your management system treats the PTW or RAMS as a one-time gate, you are recording compliance with the start of that obligation, not meeting it.
HSE’s guidance on PTW systems makes the gap concrete. A permit controls the conditions for a task (HSE, n.d.-a). When the set-up drifts from those conditions and nobody acts on it, it stops being a control and becomes a record of a decision already taken.
That gap between authorisation and live control is where duty-holder liability tends to sit, and where incidents tend to occur.
Drift is not primarily a failure of supervision or an individual shortcut. It is a predictable consequence of how RAMS and PTW systems are structured, incentivised, and resourced in practice.
RAMS are typically authored before work begins, often by someone far removed from where the work is happening, such as a planner, contracts manager, or safety professional working from drawings, prior task data, or an earlier contract’s method statement.
The incentive is approval. The document needs to be comprehensive enough to pass review, signed by the right people, and submitted on time. Whether it is usable under actual site conditions tends not to be measured.
When RAMS are carried forward from previous tasks or adapted from a template, that gap may widen further. The practical consequence is a document that describes what the organisation intended rather than what the work requires in context — and it reaches workers at the stage when stopping to rewrite it would be most costly.
Under those conditions, workers do what human factors research consistently describes as local rationality. They resolve the immediate constraint using the resources available, within the system as it exists, without updating the document (Dekker, 2006).
The adaptation may be entirely reasonable, but from the management system’s perspective, it is an unrecorded deviation.
Sequencing clashes, plant routes, other trades in adjacent areas, and access constraints introduced by late design changes are typically treated as operational nuisances. The implicit assumption is that RAMS and PTW systems will absorb them.
In practice, they frequently cannot. RAMS describe task-level controls, but interface risks arise at the boundaries between tasks, trades, and time windows, where task-level documents have the least grip.
A PTW for hot work in one area does not automatically account for flammable material delivered to an adjacent bay by a different contractor under a separate method statement. Nobody’s RAMS covers the boundary between them.
When those coordination gaps materialise on the ground, the response is typically informal. A conversation, a workaround, a short delay, and no documentation. The set-up drifts from what was authorised, and the management record shows nothing changed.
When a project manager, client representative, or principal contractor signals, directly or by implication, that programme matters more than a stop point, supervisors face a structural conflict.
Escalating means delay. Delay means a conversation that is at best unwelcome and at worst career-limiting. The authority gradient tends to resolve in one direction, with the work proceeding and the supervisor absorbing the accountability for doing so.
This is how supervisors become programme shock absorbers. The function is not conscious or malicious. It is the predictable response to an organisational structure in which holding work has a visible, immediate cost, such as delay, friction, or a difficult conversation with a client-facing lead, and no visible immediate benefit. Escalation paths that carry a reputational cost for using them are not real escalation paths.
As James Reason’s organisational accident model makes clear, latent conditions in the management system, specifically the incentives, authority relationships, and resource allocations that constitute normal operations, create environments in which deviation becomes locally rational before it becomes visible as risk (Reason, 1990).
Verification collapses under supervision stretch in a specific and predictable way. It defaults towards what is auditable rather than what actually controls. Signatures on documents, permits in folders, and actions marked closed are all auditable. Exclusion zones holding under live traffic, isolations confirmed under shift change, and access routes clear when a second crew enters an adjacent area are rarely auditable in the same way.
Permit issuers are particularly susceptible to this pattern. Where permit offices are measured on turnaround and volume, specifically how quickly permits are processed and how few delays occur, the incentive becomes throughput rather than verification.
A permit issued promptly and without friction is the success metric, regardless of whether the set-up is later confirmed. Over time, the issuing function can become primarily an administrative gate rather than a control function, with permit conditions that are technically correct and routinely unverified.
When small adaptations succeed, such as a RAMS not updated because the change appeared minor or a start check abbreviated because the crew is experienced and the task is routine, the organisation gradually learns that the deviation is acceptable.
The successful outcome retrospectively confirms the judgement. Over time, the adaptation may become routine, and the system stops registering it as a deviation.
Sociologist Diane Vaughan documented this mechanism in the context of the Challenger disaster. She described incremental accommodation of conditions that fell outside the original design specification.
Each deviation was individually rationalised, and collectively they moved the operation further from its safety boundaries without any single decision being obviously wrong (Vaughan, 1996). The pattern recurs in construction and maintenance incident investigations.
The question is not whether to use review triggers, start checks, mid-task checks, and closure targets, but why they tend not to hold under the pressures that matter. Each of the following is paired with its characteristic failure mode, because understanding that failure is what makes the control something you can actually design, rather than just aspire to.
The control: Define the conditions that require a RAMS review or permit suspension, including changes in method, equipment, location, team, supervision, weather, ground conditions, trade conflicts, or near misses.
When it tends to fail: Changes are categorised as “minor” to avoid the friction of re-briefing and re-approval. The permit issuer may not be on site to make the determination. Revalidation has often been designed, through remote authorship, late approval turnarounds, and fragmented ownership, to be slow enough that “we’ll update it later” becomes the rational choice on the ground.
Why the failure is underestimated: The unreviewed adaptation usually works. The judgement that “it wasn’t really significant” is retrospectively confirmed. The threshold for what counts as a significant change gradually lowers, one accommodation at a time, without any single decision being recognisably wrong.
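One way to make this control hold is to remove the informal "minor change" judgement from the point of work altogether. The sketch below assumes a hypothetical digital permit system (the class names, trigger list, and `revalidate` workflow are illustrative, not a reference to any real product): any raised trigger suspends the permit, and only a named reviewer can reinstate it, so the decision is recorded rather than made quietly on the ground.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Trigger(Enum):
    """Conditions that invalidate the current RAMS/permit authorisation."""
    METHOD_CHANGE = auto()
    EQUIPMENT_CHANGE = auto()
    LOCATION_CHANGE = auto()
    TEAM_CHANGE = auto()
    SUPERVISION_CHANGE = auto()
    WEATHER_CHANGE = auto()
    GROUND_CONDITIONS = auto()
    TRADE_CONFLICT = auto()
    NEAR_MISS = auto()

@dataclass
class Permit:
    permit_id: str
    active: bool = True
    triggers_raised: list = field(default_factory=list)

    def raise_trigger(self, trigger: Trigger, note: str) -> None:
        # Any trigger suspends the permit. There is deliberately no
        # "minor change" branch that lets work continue unreviewed.
        self.triggers_raised.append((trigger, note))
        self.active = False

    def revalidate(self, reviewer: str) -> None:
        # Reinstating the permit requires a named reviewer, so the
        # judgement that a change was acceptable leaves a record.
        self.triggers_raised.clear()
        self.active = True

permit = Permit("PTW-0147")
permit.raise_trigger(Trigger.TRADE_CONFLICT, "second crew in adjacent bay")
assert not permit.active  # work cannot proceed until revalidated
permit.revalidate("shift supervisor")
assert permit.active
```

The design point is the asymmetry: raising a trigger is cheap and anonymous-to-the-metric, while waving a change through silently is impossible, which is the opposite of how the informal "it wasn't really significant" call works today.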
The control: A check at the start of each shift or task, comparing RAMS and permit conditions to what is actually set up where the work is happening.
When it tends to fail: Start checks are often conducted in the site cabin, before plant is in position and before the crew has assembled. They audit the document rather than the set-up.
They can become predictable, with the crew understanding what the right answers look like and the checker understanding what to expect. When mobilisation is already under way, stopping to verify feels practically impossible, and the check becomes a record of intent rather than a confirmation of conditions.
Why the failure is underestimated: A completed start-check record is identical whether the check was substantive or performative. The paperwork cannot distinguish between the two, and most audit systems do not look past the record.
The control: A check during longer tasks to confirm that conditions have not changed since the task began.
When it tends to fail: There is often no designed stop point in the task sequence. When supervision is stretched across multiple fronts, mid-task checks are the first thing to drop. When conditions do change, the pace of work creates momentum that is difficult to interrupt without visible cost to the delivery schedule.
Why the failure is underestimated: Mid-task deviations are rarely captured unless they produce an incident or near miss. Unreported workarounds become standard practice without appearing in any management record, making the gap between work as planned and work as done invisible to the system that is supposed to manage it.
The control: Close identified gaps within a defined timeframe.
When it tends to fail: Closure targets create administrative compliance pressure that can be entirely independent of genuine control improvement. Actions may be recorded as closed while the underlying condition persists. A photograph of a corrective measure is not the same as a confirmed change in conditions. The metric improves; the gap does not.
Why the failure is underestimated: Closure rate is the most visible metric and among the easiest to influence superficially. Auditing the gap between recorded closures and actual on-the-ground conditions is rarely resourced as a systematic function.
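The gap between recorded closure and confirmed conditions can be made visible by tracking them as two separate metrics rather than one. This is an illustrative sketch, not a real system: the class and field names are invented, and the thresholds a real organisation used would depend on its own action-tracking data.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CorrectiveAction:
    action_id: str
    recorded_closed: bool = False
    verified_by: Optional[str] = None  # named verifier at the point of work

    def close(self) -> None:
        # The administrative step: the action is marked closed.
        self.recorded_closed = True

    def verify(self, verifier: str) -> None:
        # The control step: someone confirms the condition on the ground.
        self.verified_by = verifier

def closure_rate(actions) -> float:
    """The visible metric: fraction of actions recorded as closed."""
    return sum(a.recorded_closed for a in actions) / len(actions)

def verified_rate(actions) -> float:
    """The metric that tracks conditions: fraction independently verified."""
    return sum(a.verified_by is not None for a in actions) / len(actions)

actions = [CorrectiveAction(f"A-{i}") for i in range(4)]
for a in actions:
    a.close()                         # every action recorded closed
actions[0].verify("site supervisor")  # only one confirmed in the field

print(closure_rate(actions))   # 1.0
print(verified_rate(actions))  # 0.25
```

Reporting the two numbers side by side is what surfaces the pattern the section describes: the closure metric reads 100% while the verified rate shows three of four conditions were never confirmed.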
Training supports control reliability, but only when it is integrated with work design, decision points, stop points, and verification. As a stand-alone measure, it is unlikely to address drift under the conditions described above.
The most important thing training can address is the escalation path — what conditions require stopping the work, and what happens when someone does. That question only has a useful answer when management’s response to a work hold is predictable, supportive, and does not carry a reputational cost.
Training that covers “how to raise a concern” without those organisational structures in place produces awareness without capability.
If you are reviewing how your teams are trained on permit conditions, stop points, and escalation paths, permit-to-work training is available online through Human Focus.
The structural conditions that produce drift (the same incentives, authority relationships, and supervision decisions examined above) also determine whether on-site verification is genuine or performative. Three questions tend to distinguish between the two.
If it is unclear who is specifically accountable for confirming that set-up matches the RAMS and permit conditions at the point of work, by name and with authority to hold the job, then verification depends on whoever is present using their own judgement.
Under delivery pressure, discretion resolves towards getting the work started. The question to ask is not “do we have a verification procedure?” but “who, specifically, would stop this job if the conditions were wrong, and would stopping it cost them anything?”
A related indicator: where supervisors consistently complete checks faster than the check could plausibly take at the point of work, the record is capturing process rather than conditions. The paperwork and the reality have separated, and the management system is measuring only the paperwork.
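Where check records carry start and completion timestamps, this indicator can be screened for automatically. The sketch below is a minimal illustration, assuming a hypothetical records export with `started_at`/`completed_at` fields; the five-minute threshold is invented and would need to be set from observed durations of genuine point-of-work checks.

```python
from datetime import datetime, timedelta

# Minimum plausible duration for a substantive point-of-work check.
# Illustrative value only; calibrate it against known genuine checks.
MIN_CHECK_DURATION = timedelta(minutes=5)

def flag_implausible_checks(records):
    """Return IDs of check records completed faster than a substantive
    check could plausibly take at the point of work."""
    flagged = []
    for rec in records:
        duration = rec["completed_at"] - rec["started_at"]
        if duration < MIN_CHECK_DURATION:
            flagged.append(rec["check_id"])
    return flagged

records = [
    {"check_id": "SC-101",
     "started_at": datetime(2024, 5, 1, 7, 0),
     "completed_at": datetime(2024, 5, 1, 7, 1)},   # 1 minute: flagged
    {"check_id": "SC-102",
     "started_at": datetime(2024, 5, 1, 7, 0),
     "completed_at": datetime(2024, 5, 1, 7, 12)},  # 12 minutes: passes
]
print(flag_implausible_checks(records))  # ['SC-101']
```

A flagged record is not proof the check was performative, but a supervisor who consistently appears in the flagged list is exactly the separation of paperwork from reality the paragraph above describes.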
Escalation paths that are organisationally costly to use, because they delay delivery, create friction with the client, or mark the person escalating as “difficult”, tend not to be used.
The indicator is not the existence of the path but the evidence of its use. How often has it been activated, and what was the management response each time? Where the answer is “rarely” or “there were consequences,” the path is nominal.
If the same gap appears in the same task on multiple occasions and the response is to close out the action again rather than revise the RAMS or work design, the learning loop has stalled. The common reasons are worth naming:
- Close-out metrics reward recording resolution rather than achieving it.
- Contractors are reluctant to log patterns that reflect on their own planning.
- The organisation with authority to revise the upstream conditions is often not the same organisation that identifies the pattern.
A signed RAMS records that someone, at a point in time, considered the plan acceptable. A signed PTW records that conditions were assessed to be met at that moment. Neither records what is happening at the point of work an hour later, after a delivery has altered the exclusion zone or a second crew has entered the area under a different task.
Controls hold when organisations are honest about that gap and design verification to close it, rather than to document that someone intended to. The problem is that, under the incentives and authority relationships that constitute normal site operations, verification systems tend to measure the wrong thing: agreement rather than control, processing rather than conditions.