On June 10, 2010, a hardware failure in a fileserver pair led to an outage and file corruption for users relying on the affected server. Prior to the failure, the A server had exhibited anomalies, but it passed memory tests and was returned to standby mode to support disk syncs. At 14:55 PST, a load spike on the B server triggered a failover to the A server, resulting in repository corruption, especially for repos pushed to after the failover. The fileserver was taken offline, and after confirming that the B server was not the source of the corruption, it was restored to service at 18:40 PST.

Recovery consisted of scanning repositories with git fsck and restoring corrupted objects from backups or snapshots, though some recent changes were irretrievable. The root cause was faulty hardware on the A server, which has since been replaced, and procedures have been updated to better handle similar incidents and to detect corruption earlier.
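The scanning step described above can be sketched as a small shell loop. This is a hypothetical illustration, not the actual recovery tooling: the repository root path, directory layout, and `scan_repos` function name are assumptions.

```shell
#!/bin/sh
# Hypothetical sketch of the fsck sweep: walk every bare repo under a
# root directory and report whether `git fsck --full` passes. Repos
# flagged CORRUPT would then have objects restored from backups or
# snapshots. Paths and layout here are assumptions.
scan_repos() {
    find "$1" -maxdepth 2 -type d -name '*.git' | while read -r repo; do
        if git --git-dir="$repo" fsck --full >/dev/null 2>&1; then
            printf 'OK       %s\n' "$repo"
        else
            printf 'CORRUPT  %s\n' "$repo"
        fi
    done
}

# Scan the given root (defaults to the current directory for this sketch).
scan_repos "${1:-.}"
```

Relying on the exit status of `git fsck` keeps the sweep simple: any integrity problem (dangling or malformed objects under `--full`, or an unreadable repository) is reported as a failure, and only the flagged repos need manual restoration.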