I coordinated a $2B hospital with 249 models. Here's what happened.

When I said yes to coordinating the UCSF Helen Diller Hospital Tower, I gave myself three rules. Make it easy. Make it durable. Make it fun.

Everything that followed — the 249 models, the 1.6 terabytes of data, the 130 active modelers, the 9-month coordination schedule running three floors in parallel — was built around those three principles. They sound simple, but holding to them wasn't.

The four months nobody sees

There's a version of this story that starts with clash tests and automation rules and impressive numbers. But the part that actually made it work happened before any of that — in a series of conversations I had with every single trade on the project.

I wasn't just asking about clash tolerances. I was asking about construction logic:

  • What clearances were genuinely non-negotiable. The tolerances that aren't up for debate once work starts on site.
  • Which elements had already been signed off as prefab scope. Components that were locked in and couldn't move, regardless of what the model suggested.
  • Which conflicts were real coordination problems, and which were model noise. The difference between issues worth a coordinator's time and those that would waste everyone's.

One of those conversations changed the entire shape of the setup. The framing trade told me they had no issue with conduit or pipe running through the web of a steel track — what they cared about was the flange. So I removed the web from every top and bottom track in the model. From that point, every clash that touched those elements was meaningful. No false positives. No coordinator time spent triaging issues that would never become real problems on site.
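
That trade-informed logic can be made concrete. On the project it was applied by editing the model geometry itself, but the same rule can be expressed as a filter. Here's a minimal Python sketch, assuming a hypothetical clash record that carries the host element's category and part; the field names are illustrative, not Revizto's API:

```python
from dataclasses import dataclass

@dataclass
class Clash:
    trade: str          # trade whose element penetrates, e.g. "electrical"
    host_category: str  # e.g. "framing_track"
    host_part: str      # "web" or "flange"

def is_meaningful(clash: Clash) -> bool:
    """The framing trade's rule: penetrations through the web of a top or
    bottom track are fine; only flange conflicts matter."""
    if clash.host_category == "framing_track" and clash.host_part == "web":
        return False  # model noise by the trade's own construction logic
    return True

clashes = [
    Clash("electrical", "framing_track", "web"),
    Clash("plumbing", "framing_track", "flange"),
]
print([c.trade for c in clashes if is_meaningful(c)])  # -> ['plumbing']
```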

That's not something the software tells you. It comes from understanding how the work actually gets built — and being willing to spend the time before the project starts to find out.

Building a process that would last twelve months

A coordination setup that works brilliantly in week one and falls apart by month three isn't a success. I needed a process that would run reliably for the full duration without constant intervention or rebuilding.

That meant treating the design phase like an engineering problem. I started with 150 clash tests, iterated down to 108, and ultimately landed at 96 tests per team, running every Tuesday and Thursday night. Across those tests, I built 1,422 conditional rules — logic to make the routing and assignment decisions that would otherwise pile up in a coordinator's queue.
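
To give a feel for what a conditional rule does, here's a minimal first-match routing sketch in Python. The stamp fields, trade names, and rules are invented for illustration; the project's actual 1,422 rules lived inside Revizto's automation, not in code like this:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stamp:
    trades: frozenset  # trades involved in the conflict
    prefab: bool       # touches locked-in prefab scope?

@dataclass
class Rule:
    matches: Callable[[Stamp], bool]
    assign_to: str     # trade that owns the fix

RULES = [
    # Prefab can't move, so the routed system owns the fix.
    Rule(lambda s: s.prefab and "plumbing" in s.trades, "plumbing"),
    # Duct is the least flexible system here, so conduit moves around it.
    Rule(lambda s: s.trades == frozenset({"mechanical", "electrical"}), "electrical"),
]

def route(stamp: Stamp) -> str:
    """First matching rule wins; anything unmatched falls to a human."""
    for rule in RULES:
        if rule.matches(stamp):
            return rule.assign_to
    return "coordinator"

print(route(Stamp(frozenset({"plumbing", "mechanical"}), prefab=True)))  # -> plumbing
```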

The result was that 61% of stamps went directly to trades without any coordinator review. That's not an efficiency metric. It's the difference between a clash detection setup and a coordination engine. The Collaborative Clash Automation capabilities in Revizto made that level of conditional logic possible — but the logic itself had to be designed, tested, and refined before a single live model came through.

The decision that got pushback

I split my clash tests across three separate team sets rather than running one unified system. It wasn't the obvious choice, and it wasn't universally popular.

The reasoning was straightforward: a single system is a single point of failure. If one clash test breaks, coordination stops across every floor. Three independent sets mean that a failure in one doesn't halt the other two. On a project running multiple floors in parallel on a tight schedule, that redundancy isn't a nice-to-have — it's essential.
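
The redundancy argument is easy to sketch. Assuming three hypothetical team sets with invented test names, the point is that each set runs inside its own error boundary, so an exception in one never stops the other two:

```python
import logging

TEAM_SETS = {  # illustrative split; the real sets mapped to the project's teams
    "set_a": ["structure_vs_duct", "plumbing_vs_duct"],
    "set_b": ["structure_vs_duct", "electrical_vs_framing"],
    "set_c": ["structure_vs_duct", "sprinkler_vs_duct"],
}

def run_test(name: str) -> int:
    return 0  # stand-in for an actual scheduled clash run

def run_all() -> dict:
    results = {}
    for team, tests in TEAM_SETS.items():
        try:
            results[team] = {t: run_test(t) for t in tests}
        except Exception:
            # Isolate the failure: log it and keep the other sets running.
            logging.exception("team set %s failed; others continue", team)
            results[team] = None
    return results

print(run_all())
```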

We also staggered which systems we released at each stage. We didn't run gravity hangers right away on a new level — we started with main distribution, then released gravity hangers after a few weeks. We used tagging to control the pacing, which meant trades always had a manageable and logical volume of work in front of them rather than an overwhelming backlog.
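
Tag-controlled pacing can be sketched the same way. The tags and week offsets below are invented; the idea is simply that a system's clashes only enter the queue once its release offset for that level has passed:

```python
from datetime import date, timedelta

RELEASE_OFFSETS_WEEKS = {      # illustrative tags and offsets
    "main_distribution": 0,    # released as soon as a level opens
    "branch_piping": 2,
    "gravity_hangers": 4,      # held until the main routing settles
}

def active_tags(level_start: date, today: date) -> set:
    """System tags in play for a level on a given day."""
    weeks_open = (today - level_start).days // 7
    return {tag for tag, offset in RELEASE_OFFSETS_WEEKS.items()
            if weeks_open >= offset}

start = date(2024, 3, 4)
print(sorted(active_tags(start, start + timedelta(weeks=3))))
# -> ['branch_piping', 'main_distribution']; gravity hangers not yet released
```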

Automation at scale isn't just about how fast the system runs. It's about what happens when something goes wrong. Resilience has to be designed in from the beginning, not retrofitted after the first failure.

What the process produced and how Revizto helped 

120,000 stamps identified. 107,000 resolved. 93,000 surfaced through advanced automation. More than 500 stamps closed every single day. An estimated $500,000 saved in coordination costs against a manual workflow.

Those numbers are the outcome of four months of planning, obsessive pre-testing, and a setup designed to run without constant supervision. The technology enabled it — the Integrated Issue Management workflows in Revizto were central to how we tracked, routed, and closed issues at that volume — but the technology alone wouldn't have produced those results without the process behind it.

The principle that held everything together was simple: automate the decisions that don't require human judgment so the people on the project can focus on the ones that do. That principle scales. It works on a $2B hospital. It works on any project where coordination complexity is outpacing the team's capacity to manage it manually. 

If you want to see how this process translates to your own projects, get in touch with the Revizto team today.

Laura Medina Rodriguez
VDC Project Engineer at Herrero Boldt Webcor
Laura Medina Rodriguez is a VDC Project Engineer at Herrero Boldt Webcor with experience delivering large-scale, complex construction projects. She presented at Made Right 2026.

FAQs

How do you manage BIM coordination across hundreds of models on a complex project?

Managing BIM coordination across hundreds of models on a complex hospital project requires extensive pre-planning before any clash detection begins. This includes meeting individually with every trade to understand construction logic, prefab scope, and real clearance requirements — not just model tolerances. Building clash tests around actual construction priorities, rather than raw geometry, eliminates false positives and ensures that every flagged issue is actionable. On the UCSF Helen Diller Hospital Tower, 96 clash tests per team running twice weekly — supported by over 1,400 conditional automation rules — allowed 93,000 issues to be surfaced and resolved through automated workflows rather than manual coordinator review.

How does VDC automation reduce coordination costs?

VDC automation uses pre-configured rules within clash detection and issue management software to route, assign, and resolve coordination issues without requiring manual intervention for every decision. On highly complex projects, this means the majority of stamps and issues can be sent directly to trades for resolution without coordinator review, dramatically reducing the labor cost of coordination. On the UCSF Helen Diller Hospital Tower, automated workflows contributed to an estimated $500,000 saving in coordination costs compared to a fully manual process.

What's the difference between clash detection and a coordination engine?

Clash detection identifies geometric conflicts between elements in a building model. A coordination engine goes further — it applies conditional logic to those conflicts to determine which are real issues, which are noise, and who needs to act on each one. The distinction matters at scale: raw clash detection on a complex project produces thousands of results that require manual triage. A properly configured coordination engine, built around the actual construction logic of each trade, filters that output so that only meaningful issues reach the people who need to resolve them.

Why do prefabricated elements take priority in clash resolution?

Prefabricated elements have fixed dimensions and positions that cannot be adjusted on site, making them higher priority in clash resolution than elements that can be routed or repositioned during construction. A BIM coordination setup that doesn't account for prefab scope will treat all clashes as equal, leading to wasted time resolving conflicts that are actually constrained by decisions already made. Identifying prefab elements early and prioritizing them in clash test configuration ensures that coordination effort is directed toward conflicts that are still resolvable.

What makes a BIM coordination workflow resilient?

Resilient BIM coordination workflows are designed to continue functioning when individual components fail. Rather than running a single unified clash test system, splitting tests across multiple independent team sets means that a failure in one set doesn't halt coordination across the entire project. This approach — combined with scheduled automation running on fixed cycles rather than on-demand — ensures consistent output regardless of individual system interruptions, and is particularly important on projects running multiple floors or phases in parallel.