Most days, fiber strands hum quietly in the background, connecting, syncing, storing, and streaming without much notice from the people who depend on them. Fiber carries massive amounts of data below and above our streets and highways, powering everything from video calls to financial transactions. We trust it to just work. And the vast majority of the time, it does exactly that. But every so often, something happens that reminds us just how much is riding on those cables.
Just before 1:00 PM Pacific Time on Thursday, July 17, a sudden and unexpected disruption hit Ziply Fiber's commercial network. A horizontal drilling crew, working for another internet provider entirely, accidentally tore through one of our underground duct banks, severing two major fiber optic cables more than 12 feet below the street surface.
The location? A critical crossing beneath Oregon's Highway 26, deep inside a conduit path nearly 1,400 feet long and authorized by the Oregon Department of Transportation. Inside that duct bank ran two 1,728-count fiber cables, essential arteries in the digital infrastructure that serves Hillsboro's datacenter ecosystem.
What was damaged in a moment would take nearly 30 hours of nonstop work to restore.
Within minutes, our Network Operations Center (NOC) detected the fault and sprang into action. By 2:00 PM, our field teams were on site assessing the situation. This wasn't your average fiber strike. This was a full-scale, high-impact incident that required a coordinated emergency response from both our internal construction crews and trusted external partners. With two core cables severed and their duct pathway destroyed, we couldn't simply splice and move on. The damage called for full cable replacements, including two additional segments to ensure the integrity of the route.
Then, just as restoration work was underway, the unthinkable happened: the very same contractor struck our infrastructure a second time.
The second hit pushed back timelines and forced us to reevaluate our approach on the fly. But our teams didn't flinch. Through the night and into the next day, they worked with urgency, precision and focus, navigating complex underground repairs with no margin for error. Service restoration came in waves. Bit by bit, customer connections came back online, culminating in full recovery for all impacted services by 4:43 PM on Friday, July 18.
And throughout the entire process of replacing 3,456 fiber strands, not a single residential customer was impacted.
"This was an extraordinary event that tested the resilience of both our network and our team," says Eric Rosenberry, Director of Network Architecture. "When core infrastructure gets hit—not once, but twice—there's no playbook, just experience and execution. What followed was a clear demonstration of our ability to adapt quickly, work collaboratively and restore service under the most challenging conditions."
Incidents like this are rare—but when they happen, they reveal a lot about a network and the people who support it. What could have been a prolonged outage was contained and resolved in just over a day thanks to the commitment, coordination and capability of our teams. More importantly, it showed that while we can't control every variable—especially when others are working near our infrastructure—we can control how we respond. And we will always respond with speed, transparency and an unshakable commitment to keeping our customers connected.