How Complys checks subcontractor RAMS in 30 seconds (UK 2026 guide)
Reviewing subbie RAMS by hand can take 30 minutes per document. Here is how AI-powered review tools like Complys work, what they catch, what they miss, and what UK principal contractors should expect from automated RAMS checks in 2026.
The hidden cost of reviewing subbie RAMS by hand
If you are a principal contractor with five subbies on a project, you are reviewing twenty or thirty RAMS documents a month. At twenty minutes each, that is ten hours of your supervisor's time. Most months, more.
The work is also boring. Each RAMS reads like every other one. Generic phrases, copy-pasted hazards, the same legislation lists. Reviewers stop reading carefully after the third document of the morning and start scanning for red flags they recognise.
That is exactly when things slip through. The fragile-roof procedure that is not in the roofer's RAMS. The missing isolation step in the electrician's method statement. The asbestos survey reference in the demolition document that points to a 2018 survey that has long since expired.
This is the gap that automated RAMS review tools were built to fill.
Vet subcontractor RAMS in 30 seconds, not 30 minutes
Upload any subbie RAMS. Complys runs a structured review against UK legislation and your standards, returns a clear verdict (Pass / Borderline / Fail), lists every gap, and drafts a polite report you can send back to the subbie.
What an automated RAMS review actually does
An AI-powered RAMS review tool reads the document, compares it against a library of UK construction safety standards, and produces three outputs: a verdict, a list of findings, and a draft response you can send to the subcontractor.
It is not magic. It is the equivalent of a competent reviewer reading the document with a checklist of every possible failure point in front of them, working through it systematically, and never getting tired. The technology that makes this possible is the same large language model technology behind chatbot assistants, but trained or prompted around UK construction legislation, HSE guidance documents, and trade-specific standards.
The verdict
Most tools produce one of three verdicts:
- Pass - the document is comprehensive enough to use on site as-is. No critical gaps
- Borderline - usable on site but only after the subbie addresses listed issues. Most documents land here
- Fail - the document has critical gaps that block use. The subbie should rewrite
A good tool is conservative with the Pass verdict: if the findings list contains anything substantive, "Pass" is the wrong call and the verdict should be downgraded.
The findings
This is where the value sits. A typical findings list contains:
- Critical issues - work-stopping problems (no PPE specifications for the trade, no isolation procedure for electrical work, missing asbestos survey reference)
- Missing items - things the document does not cover but should (relevant legislation, specific control measures, named operatives)
- Category scoring - a 1-to-5 rating across categories like Hazard Coverage, Control Specificity, PPE Adequacy, Trade-Specific Standards
- Questions for the subbie - clarifying questions you can copy directly into an email back to the subcontractor
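If you want a mental model of what the tool hands back, here is a minimal sketch of that output as a data structure. The field names and severity labels are illustrative, not Complys's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    PASS = "Pass"
    BORDERLINE = "Borderline"
    FAIL = "Fail"

@dataclass
class Finding:
    severity: str      # "critical" or "missing" (illustrative labels)
    description: str   # e.g. "No isolation procedure for electrical work"

@dataclass
class ReviewResult:
    verdict: Verdict
    findings: list[Finding] = field(default_factory=list)
    category_scores: dict[str, int] = field(default_factory=dict)  # 1-to-5 per category
    questions_for_subbie: list[str] = field(default_factory=list)
```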
The report
Some tools (Complys included) produce a polite draft report you can send to the subbie, summarising the verdict and listing what they need to fix. This bridges the gap between a useful internal review and the actual conversation you need to have with the subcontractor.
What automated RAMS reviews catch well
Tools like Complys are particularly strong at catching the high-frequency failures that experienced reviewers also catch but in a fraction of the time. Specifically:
Generic content
The single most common RAMS failure is generic content. Hazards described in general terms ("be aware of working at height") rather than site-specific terms ("operatives will work from a 5m mobile tower scaffold on tarmac, with access controlled to the courtyard"). An automated review spots this in seconds because the absence of trade-specific or site-specific language is statistically obvious.
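To see why the absence of specific language is machine-detectable, consider a toy heuristic that counts markers of site-specific writing per hundred words. Real tools use language models rather than pattern lists; the patterns below are invented purely for illustration.

```python
import re

# Markers that show up in site-specific RAMS and rarely in generic ones:
# measurements, cited standards, clock times.
SPECIFICITY_PATTERNS = [
    r"\b\d+(?:\.\d+)?\s?(?:mm|m|kg|kv|v)\b",  # dimensions and ratings, e.g. "5m"
    r"\bEN\s?(?:ISO\s?)?\d{3,5}\b",           # cited standards, e.g. "EN ISO 20471"
    r"\b\d{1,2}:\d{2}\b",                     # shift or delivery times
]

def specificity_score(text: str) -> float:
    """Count site-specific markers per 100 words of text."""
    words = max(len(text.split()), 1)
    hits = sum(len(re.findall(p, text, re.IGNORECASE)) for p in SPECIFICITY_PATTERNS)
    return 100 * hits / words
```

A generic RAMS scores near zero on a heuristic like this; the tower-scaffold example above scores several times higher.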
Missing legislation
UK construction RAMS must reference the Construction (Design and Management) Regulations 2015, the Health and Safety at Work etc. Act 1974, and trade-specific regulations (Electricity at Work Regulations 1989 for electrical work, Work at Height Regulations 2005 for height work, Control of Asbestos Regulations 2012 for any pre-2000 building, and so on). Tools have these legislation references hardcoded and immediately flag any document that misses the right ones.
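A simplified sketch of how that kind of check works: a hardcoded map from trade to required legislation, then a scan for each reference. The trade list is abbreviated here for illustration.

```python
# Required legislation by trade (abbreviated for illustration).
REQUIRED_LEGISLATION = {
    "all": [
        "Construction (Design and Management) Regulations 2015",
        "Health and Safety at Work etc. Act 1974",
    ],
    "electrical": ["Electricity at Work Regulations 1989"],
    "roofing": ["Work at Height Regulations 2005"],
    "demolition": ["Control of Asbestos Regulations 2012"],
}

def missing_legislation(rams_text: str, trade: str) -> list[str]:
    """Return every required reference that does not appear in the document."""
    required = REQUIRED_LEGISLATION["all"] + REQUIRED_LEGISLATION.get(trade, [])
    return [ref for ref in required if ref.lower() not in rams_text.lower()]
```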
Outdated references
Many subbies copy-paste from a template they wrote in 2019. Their RAMS still references the 17th Edition Wiring Regulations (now 18th Edition Amendment 2), the 2015 Asbestos Survey Guidance (now superseded), or CDM 2015 transition wording that stopped being relevant years ago. Automated reviews spot these because they know what the current versions are.
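The same mechanism handles superseded references: a lookup of known-outdated citations and their current replacements. The entries below are illustrative and would need maintaining as guidance moves on.

```python
# Known-outdated references and their replacements (illustrative entries only).
SUPERSEDED = {
    "17th edition": "BS 7671 18th Edition (check the current amendment)",
    "16th edition": "BS 7671 18th Edition (check the current amendment)",
}

def outdated_references(rams_text: str) -> list[tuple[str, str]]:
    """Return (outdated reference, current replacement) pairs found in the text."""
    text = rams_text.lower()
    return [(old, new) for old, new in SUPERSEDED.items() if old in text]
```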
Vague controls
"The operative will take all reasonable care" is not a control. "Operatives will be briefed on the controlled access plan, will wear EN ISO 20471 Class 2 hi-vis at all times, and will not enter the courtyard without supervisor sign-off via the safe-area procedure" is a control. Automated tools recognise the difference and call it out.
Missing competence evidence
Naming "competent operatives" without card numbers and qualifications is one of the most common rejection points. Reviewers know to look for it. So do the tools.
What automated RAMS reviews do not catch
Honesty matters here. There are real limitations.
Site-specific judgment calls
An automated tool does not know your site. It does not know that the courtyard floods at high tide, that the neighbouring tenant runs a school, or that there is a specific scaffold geometry that has caused near-misses on this exact project. A human reviewer with site knowledge catches things a tool cannot.
Operative competence verification
The tool can confirm card numbers are present in the document. It cannot confirm those card numbers are real, current, or held by the people listed. A separate competence-verification step still matters.
Cross-document consistency
If the same subbie has submitted three RAMS for three projects, an automated tool reviews each in isolation. It does not flag that the operative names differ between documents, which might suggest the supervisor named on this project is not the one actually supervising it. Cross-checking is still a human task.
Updates after submission
Most tools review what was uploaded. If the subbie subsequently changes scope, swaps operatives, or works during a different shift pattern than they wrote up, the original RAMS review is no longer relevant. A live audit-trail tool helps but does not solve this entirely.
How Complys handles each step
For context, here is the workflow a Complys user goes through when reviewing a subbie RAMS:
Upload the RAMS
Drag the PDF or Word document into the upload area. The system extracts the text, identifies the trade type from the content, and queues the review.
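Trade identification from content can be as simple as keyword frequency. The sketch below is a hypothetical version of that step, not Complys's actual implementation.

```python
# Hypothetical trade detector: score each trade by keyword frequency.
TRADE_KEYWORDS = {
    "electrical": ["isolation", "circuit", "live working", "bs 7671"],
    "roofing": ["fragile roof", "edge protection", "harness"],
    "demolition": ["asbestos", "soft strip", "exclusion zone"],
}

def identify_trade(text: str) -> str:
    """Guess the trade from keyword counts; fall back to 'general'."""
    text = text.lower()
    scores = {trade: sum(text.count(kw) for kw in kws)
              for trade, kws in TRADE_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "general"
```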
Wait roughly 30 seconds
The system processes the document against UK construction safety standards. Reviews finish in about half a minute on average, occasionally longer for complex documents.
Read the verdict and findings
The result page shows the verdict (Pass, Borderline, or Fail), an executive summary, the category scoring, the missing items, the critical issues, and a list of questions you might ask the subbie.
Generate the report
One click produces a polite written report you can send to the subbie. You can edit it (you probably should), pick the tone (plain-English or formal), and send it by email or save it as a PDF.
Track the response
Sent reports show in your unified inbox with email tracking (delivered, opened, clicked) so you know whether the subbie has actually read your feedback.
A method statement that actually meets the brief
Sequence of work, equipment lists, supervision arrangements, emergency procedures - written in the format main contractors expect, with cited UK legislation. Edit, brand it with your logo, export as PDF.
When to trust the verdict and when to second-guess it
Automated reviews are reliable but not infallible. Three principles for treating their output:
Pass with no findings
If a tool returns Pass and the findings list is genuinely empty, the document is probably good. Spot-check it but trust the verdict. Save your reviewing time for the documents that need attention.
Pass with findings
This is a red flag. If you see Pass alongside any list of missing items, the tool has been over-generous. A good review tool catches this internally and downgrades to Borderline before showing you the result, but if you ever see Pass-with-findings, treat it as Borderline yourself.
Fail with critical issues
Trust this verdict immediately. The tool only flags Critical when something genuinely cannot proceed. Send the rewrite request and move on.
What the legal position is on AI-reviewed RAMS
This question matters for principal contractors who are CDM duty-holders. The short answer: an AI review does not replace your duty to review.
The Construction (Design and Management) Regulations 2015 hold the principal contractor responsible for ensuring RAMS are coordinated and adequate across the project. That responsibility cannot be delegated to a tool. What an automated review CAN do is dramatically reduce the time you spend on baseline checks, freeing you to focus on the site-specific judgment calls only you can make.
The HSE has not published specific guidance on AI-reviewed RAMS as of 2026, but the principle is clear: the duty-holder remains responsible. AI is an aid to your review, not a substitute for it.
How long does subbie RAMS review actually take?
Real timing comparison from a five-subbie project:
By hand
- Read the RAMS: 8 to 15 minutes per document
- Cross-check against trade-specific standards: 5 to 10 minutes
- Draft a feedback email if needed: 5 to 10 minutes
- Total: roughly 20 to 35 minutes per document
With Complys
- Upload: 30 seconds
- Wait for review: 30 seconds
- Read findings: 2 to 3 minutes
- Edit and send the auto-generated report: 1 to 2 minutes
- Total: under 5 minutes per document
For a busy principal contractor reviewing twenty RAMS a month, that is the difference between ten hours and a single morning. The supervisor uses the saved time on the documents that actually need critical attention.
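The arithmetic behind that claim, using rough midpoints of the ranges above:

```python
docs_per_month = 20

manual_hours = docs_per_month * 27.5 / 60    # midpoint of 20-35 min per document
automated_hours = docs_per_month * 5 / 60    # under 5 min per document

print(f"By hand:      {manual_hours:.1f} hours")    # ~9.2 hours
print(f"With Complys: {automated_hours:.1f} hours")  # ~1.7 hours
```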
Who benefits most from automated RAMS review
Not every contractor will see the same value. The pattern of who benefits most:
Principal contractors with multiple subbies
The biggest savings. If you are receiving RAMS from five or more subcontractors regularly, the time savings alone justify the tool. Tier 1 principal contractors who manage twenty or more subbies essentially cannot operate without one in 2026.
Compliance officers in larger trade businesses
If your business has its own compliance officer reviewing RAMS for repeat clients, automation gives them coverage they could not have alone. They become the strategic reviewer; the tool handles the baseline.
Smaller contractors who occasionally subcontract
The value is real but more modest. A tool earns its place when subcontractor RAMS reviews are happening regularly rather than once or twice a year.
What to look for in a RAMS review tool
If you are evaluating tools beyond Complys, here is what matters and why:
UK-specific legislation library
Tools trained on UK regulations catch UK failures. American or international tools often miss UK specifics like the Construction (Design and Management) Regulations 2015, the 18th Edition Wiring Regulations, or HSE-published guidance like HSG33 for roofwork. If a tool was built primarily for the US OSHA framework, it will not flag UK-specific gaps reliably.
Trade-specific scoring
A general-purpose document checker will treat every RAMS the same way. A construction-specialised tool knows that electrical RAMS are held to different standards than scaffolding RAMS, that demolition RAMS must reference a Refurbishment and Demolition asbestos survey, and that working-at-height controls dominate roofing documents. Trade-aware scoring catches the trade-specific failures generic tools miss.
Verdict consistency
The verdict and findings should always agree. If you see Pass alongside a long findings list, the tool has an internal inconsistency that means you cannot trust its top-line judgment. Good tools enforce verdict-finding consistency internally: if there are any critical issues, the document cannot Pass.
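That rule is a simple guard applied before the verdict is displayed. A minimal sketch, assuming the severity labels used in the findings structure earlier:

```python
def enforce_consistency(verdict: str, severities: list[str]) -> str:
    """Downgrade an over-generous verdict so it always agrees with the findings."""
    if "critical" in severities:
        return "Fail"            # critical issues force a Fail
    if verdict == "Pass" and severities:
        return "Borderline"      # Pass-with-findings gets downgraded
    return verdict
```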
Editable report output
You should be able to edit what gets sent to the subbie. An auto-generated report is a starting point, not a finished communication. The wording of your feedback matters, especially when the subbie is a long-term relationship rather than a one-off. Tools that lock you into the auto-generated text are inflexible for real relationships.
Audit trail
Reviewed documents and their verdicts should be searchable later. CDM duty-holders may need to demonstrate to the HSE that documents were reviewed before work proceeded. A tool that keeps no record makes this impossible to evidence. The audit trail also helps you spot patterns (which subbies regularly submit Fail-grade RAMS, where your supply chain is weakest).
Email tracking
Whether the subbie has actually read your feedback matters for follow-up. If you sent a Fail verdict three days ago and the subbie has not opened the email, you have a different problem than if they opened it and ignored it. Read receipts and click tracking turn one-way feedback into accountable conversations.
Reasonable pricing
If a tool charges per review and you do many reviews, the maths gets ugly fast. A tool at GBP 2 per review sounds cheap until you are doing thirty reviews a month and paying GBP 60 monthly for the same workload that fits inside a flat-rate plan elsewhere. Compare flat-rate plans against your actual review volume before committing.
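To run that comparison against your own volume, the maths is a one-liner. The GBP figures below are placeholders, not real price points:

```python
reviews_per_month = 30
per_review_fee = 2.00    # hypothetical per-review price
flat_rate = 40.00        # hypothetical flat-rate plan

per_review_total = reviews_per_month * per_review_fee
winner = "flat rate" if flat_rate < per_review_total else "per review"
print(f"GBP {per_review_total:.0f} vs GBP {flat_rate:.0f} - {winner} wins")
```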
Common questions about RAMS review automation
Will the HSE accept an AI-reviewed RAMS?
The HSE accepts properly reviewed RAMS. The duty-holder is still you. The AI is your assistant, not your replacement. There is no legal issue with using AI as part of your review process, and there is no specific HSE guidance prohibiting it. The duty-holder must be able to demonstrate that the review was competent. Using an AI tool plus human judgment satisfies this; using an AI tool with no human oversight does not.
Can subbies tell their RAMS was reviewed by a tool?
Generally no, especially if you edit the auto-generated report before sending. The findings are real either way; the writeup style is your choice. Some principal contractors deliberately add a sentence at the top of the report ("Reviewed using Complys with editorial review by our safety team") for transparency. Others prefer to keep the tool invisible and present the feedback as a normal review. Either approach is defensible.
Does AI review work for non-English RAMS documents?
Some tools work in multiple languages but UK-specific tools are typically English-only. If you receive RAMS from international subbies, check the tool's language coverage. In practice, almost all UK construction RAMS are submitted in English even when the subcontractor's business language is different, because the document needs to be readable by site supervisors and HSE inspectors who operate in English.
What about confidential or commercially sensitive RAMS?
Worth asking the tool vendor about data retention. Most reputable UK tools (including Complys) handle compliance documents under appropriate confidentiality terms with documented data-handling policies. Check the privacy policy and ask whether documents are used to train further models. For genuinely sensitive contracts (defence, critical infrastructure, government), some clients have additional contractual requirements your tool vendor needs to be able to meet.
Can the same tool generate RAMS as well as review them?
Yes. Tools like Complys do both. The same legislation library and trade knowledge that powers reviews also powers RAMS generation. If your subbies use the same tool to write RAMS, the documents already arrive structured to pass review. This creates a virtuous cycle: subbies using the tool produce better RAMS, which review faster, which means less back-and-forth, which gets them on site sooner.
How accurate are AI RAMS reviews?
Accuracy varies by tool, but for the kinds of failures most reviews target (missing legislation, generic content, vague controls, absent PPE specifications), good tools catch over 90 percent of what an experienced reviewer would catch. The remaining 10 percent is mostly the site-specific judgment territory we covered earlier. Where AI tools sometimes get things wrong is in calling something Critical when it is only Borderline, or vice versa. This is why a final human glance at the verdict matters.
What if the subbie pushes back on the findings?
Findings are not legally binding decisions; they are concerns flagged for the subbie to address. If a subbie disagrees with a finding (for example, claims a control is adequate even though it is not specifically named), the conversation that follows is normal compliance discussion. The tool gave you a starting point, not a final ruling. Most subbie pushbacks resolve quickly with one clarification email.
The bottom line
Automated RAMS review is not a replacement for competent reviewing. It is a multiplier. A reviewer who covers ten RAMS a week can confidently cover thirty with the right tool, and spend their freed time on the calls only they can make.
For principal contractors with multiple subbies, automation has gone from nice-to-have to expected. The contractors winning more work in 2026 are the ones who can demonstrate fast, thorough, consistent compliance review across their whole supply chain.
For more on what main contractors look for in subbie compliance, see our honest 2026 insider guide. For the underlying RAMS standards, see our complete UK RAMS guide.
Upload a real subbie RAMS and see what Complys flags. Trial includes the full review feature plus the rewrite tool, no card required.