Software vendor Kaseya has been caught in the chaos of a supply-chain compromise by the REvil ransomware gang since Friday. Around 40 managed service providers (MSPs) that rely on Kaseya VSA software to administer customers’ IT—and up to 1,500 of their customers—have been stricken with the ransomware.
In response to the attack, Kaseya shut down the SaaS version of VSA and instructed its on-premises customers to shut down their VSA servers as well. Organizations that use Kaseya VSA, and their clients, have been without the administration tool since.
Yesterday, the company released a video detailing the attack and the steps taken to mitigate it. It hoped to be back up and running as soon as possible, but its already cautious approach has now become more conservative still.
A new, unscripted video released in the last few hours details a further delay in getting things back up and running. The original recovery estimate was to bring the SaaS version of VSA back online on Tuesday morning, with on-premises installations to follow. That then slipped to today. The latest video now gives Sunday as the most likely date for things to get moving. The reason? Security, apparently.
Friday’s attack was made possible by a zero-day vulnerability in the on-premises VSA platform. Since then, Victor Gevers of the Dutch Institute for Vulnerability Disclosure (DIVD) has revealed that the organization had been in a “coordinated vulnerability disclosure process” with Kaseya at the time of the attack. Fixing those vulnerabilities is clearly at the top of Kaseya’s agenda before it can tell customers to restart their VSA servers.
Striking an apologetic and far less bullish tone than in his first video, the beleaguered Kaseya CEO, Fred Voccola, says the new release time is going to be “this Sunday, in the early afternoon, Eastern Standard Time”. He says the decision to delay, made in order to put additional layers of protection in place, was his alone.
In his own words:
The reason for that is we had all the vulnerabilities that were exploited during the attack, we had them locked. We felt comfortable with the release. Some of the third-party engineers, engineering firms and companies that we’ve been working with, as well as some of our own IT people, made some suggestions to put additional layers of protection in there for things that we might not be able to foresee. This was probably the hardest decision that I’ve had to make in my career. And, we decided to pull it for an additional three and a half days, or whatever the approximate time is … to make sure that it is hardened as much as we feel we can do for our customers.
The slow, careful approach will no doubt cause some roadblocks for customers waiting on systems to come back online. However, that is surely better than rushing the release and suffering another incident because a weak spot went unnoticed.
You can see a full, up-to-date timeline of events in the Kaseya supply-chain attack in our original article. We will update this as new facts emerge.