At Facebook, zero-day exploits, backdoor code bring war games drill to life | Ars Technica

URL: http://arstechnica.com/security/2013/02/at-facebook-zero-day-exploits-backdoor-code-bring-war-games-drill-to-life/


Illustration: Aurich Lawson

Early on Halloween morning, members of Facebook's Computer Emergency Response Team received an urgent e-mail from an FBI special agent who regularly briefs them on security matters. The e-mail contained a Facebook link to a PHP script that appeared to give anyone who knew its location unfettered access to the site's front-end system. It also referenced a suspicious IP address that suggested criminal hackers in Beijing were involved.

"Sorry for the early e-mail but I am at the airport about to fly home," the e-mail started. It was 7:01am. "Based on what I know of the group it could be ugly. Not sure if you can see it anywhere or if it's even yours."

The e-mail reporting a simulated hack into Facebook's network. It touched off a major drill designed to test the company's ability to respond to security crises. (Credit: Facebook)

Facebook employees immediately dug into the mysterious code. What they found only heightened suspicions that something was terribly wrong. Facebook procedures require all code posted to the site to be handled by two members of its development team, and yet this script somehow evaded those measures. At 10:45am, the incident received a classification known as "unbreak now," the Facebook equivalent of the US military's emergency DEFCON 1 rating. At 11:04am, after identifying the account used to publish the code, the team learned that the engineer who owned the account knew nothing about the script. One minute later, they issued a takedown to remove the code from their servers.
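
Facebook has not described how that two-reviewer rule is enforced internally. As a rough illustration only, the sketch below assumes a hypothetical git repository in which the second reviewer is recorded as a "Reviewed-by:" trailer, and flags recent commits that reached production without one:

    # Hypothetical sketch: flag recent commits that lack a second-reviewer
    # sign-off. Assumes approvals are recorded as "Reviewed-by:" trailers,
    # which is an assumption for illustration, not Facebook's actual tooling.
    import subprocess

    def unreviewed_commits(repo_path, since="24 hours ago"):
        """Return (sha, subject) pairs for commits missing a Reviewed-by trailer."""
        log = subprocess.run(
            ["git", "-C", repo_path, "log", f"--since={since}",
             "--format=%H%x1f%s%x1f%B%x1e"],
            capture_output=True, text=True, check=True,
        ).stdout
        flagged = []
        for record in log.split("\x1e"):
            record = record.strip()
            if not record:
                continue
            sha, subject, body = record.split("\x1f", 2)
            if "Reviewed-by:" not in body:
                flagged.append((sha, subject))
        return flagged

    if __name__ == "__main__":
        for sha, subject in unreviewed_commits("."):
            print(f"UNREVIEWED {sha[:12]} {subject}")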

With the initial threat contained, members of various Facebook security teams turned their attention to how it got there in the first place. A snippet of an online chat captures some of the confusion and panic:

Facebook Product Security: question now is where did this come from
Facebook Security Infrastructure Menlo Park: what's [IP ADDRESS REDACTED]
Facebook Security Infrastructure Menlo Park: registered to someone in beijing…
Facebook Security Infrastructure London: yeah this is complete sketchtown
Facebook Product Security: somethings fishy
Facebook Site Integrity: which means that whoever discovered this is looking at our code
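
The "registered to someone in beijing" line refers to an ordinary registry lookup: asking a regional Internet registry who holds a given address block. A minimal version of that check, sent as a plain WHOIS query over TCP port 43 (the server and the placeholder address below are illustrative, not the redacted IP), looks like this:

    # Minimal WHOIS lookup over TCP port 43. whois.arin.net may answer
    # directly or point to another registry (APNIC for most Chinese blocks).
    import socket

    def whois(query, server="whois.arin.net", port=43):
        """Return the raw WHOIS response for `query` from `server`."""
        with socket.create_connection((server, port), timeout=10) as sock:
            sock.sendall((query + "\r\n").encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    if __name__ == "__main__":
        print(whois("198.51.100.7"))  # documentation-range placeholder address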

If the attackers were able to post code on Facebook's site, it stood to reason, they probably still had that capability. Further, they may have left multiple backdoors on the network to ensure they would still have access even if any one of them was closed. More importantly, it wasn't clear how the attackers posted the code in the first place. During the next 24 hours, a couple dozen employees from eight internal Facebook teams scoured server logs, the engineer's laptop, and other crime-scene evidence until they had their answer: the engineer's fully patched laptop had been targeted by a zero-day exploit that allowed attackers to seize control of it.
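
The log work is the unglamorous part of that answer. A simple version of the triage, assuming common-log-format web server logs and an invented path for the rogue script (the real path and IP were never published), might look like this:

    # Rough sketch of access-log triage: report every request that touched
    # the rogue script or came from the suspicious address. The log format
    # (common log format) and "/rogue_backdoor.php" are assumptions.
    import re
    import sys

    LOG_LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)"')

    def suspicious_hits(log_path, bad_path="/rogue_backdoor.php", bad_ip=None):
        """Yield (ip, time, request) for hits on bad_path or requests from bad_ip."""
        with open(log_path, encoding="utf-8", errors="replace") as handle:
            for line in handle:
                match = LOG_LINE.match(line)
                if not match:
                    continue
                fields = match.groupdict()
                if bad_path in fields["request"] or fields["ip"] == bad_ip:
                    yield fields["ip"], fields["time"], fields["request"]

    if __name__ == "__main__":
        for ip, when, request in suspicious_hits(sys.argv[1]):
            print(f"{when} {ip} {request}")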

This is only a test

The FBI e-mail, zero-day exploit, and backdoor code, it turns out, were part of an elaborate drill Facebook executives devised to test the company's defenses and incident responders. The goal: to create a realistic security disaster to see how well employees fared at unraveling and repelling it. While the attack was simulated, it contained as many real elements as possible.

The engineer's computer was compromised using a real zero-day exploit targeting an undisclosed piece of software. (Facebook promptly reported the flaw to the affected developer, who was notified before the drill was disclosed to the rest of the company's employees.) It allowed a "red team" composed of current and former Facebook employees to access the company's code production environment. The PHP code on the Facebook site contained a real backdoor. (It was neutralized by adding comment characters in front of the operative functions.) Facebook even recruited one of its former developers to work on the red team to maximize what could be done with the access. The FBI e-mail was sent at the request of Facebook employees to see how quickly and effectively the various teams could work together to discover and solve the problems.
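
The neutralization step is simple in principle: with comment characters in front of them, the backdoor's operative functions parse but never execute. The planted script itself has never been published, so the sketch below assumes a PHP file whose dangerous calls can be matched by name, and comments those lines out the same way:

    # Sketch of defanging a backdoor by commenting out its operative calls.
    # The function list and the target file are assumptions for illustration;
    # the actual script used in the drill was not released.
    import re
    import sys
    from pathlib import Path

    OPERATIVE = re.compile(r"\b(eval|system|exec|shell_exec|passthru|assert)\s*\(")

    def neutralize(php_path):
        """Prefix '// ' to every line that invokes a flagged function; return count."""
        path = Path(php_path)
        out, disabled = [], 0
        for line in path.read_text(encoding="utf-8").splitlines(keepends=True):
            if OPERATIVE.search(line) and not line.lstrip().startswith("//"):
                out.append("// " + line)
                disabled += 1
            else:
                out.append(line)
        path.write_text("".join(out), encoding="utf-8")
        return disabled

    if __name__ == "__main__":
        print(f"commented out {neutralize(sys.argv[1])} line(s)")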

"Internet security is so flawed," Facebook Chief Security Officer Joe Sullivan told Ars. "I hate to say it, but it seems everyone is in this constant losing battle if you read the headlines. We don't want to be part of those bad headlines."

The most recent dire security-related headlines came last week, when The New York Times reported that China-based hackers had been rooting through the publisher's corporate network for four months. They installed 45 separate pieces of custom-developed malware, almost all of which remained undetected. The massive hack, the NYT said, was carried out with the goal of identifying sources used to report a series of stories about the family of China's prime minister. Among other things, the attackers were able to retrieve password data for every single NYT employee and access the personal computers of 53 workers, some of them located inside the publisher's newsroom.

As thorough and persistent as the NYT breach was, the style of attack is hardly new. In 2010, hackers penetrated the defenses of Google, Adobe Systems, and at least 32 other companies in the IT and pharmaceutical industries. Operation Aurora, as the hacking campaign came to be dubbed, exploited zero-day vulnerabilities in Microsoft's Internet Explorer browser and possibly other widely used programs. Once attackers gained a foothold on employee computers, they used that access to breach other, more sensitive, parts of the companies' networks.

The hacks allowed the attackers to make off with valuable Google intellectual property and information about dissidents who used the company's services. The campaign also helped coin the term "advanced persistent threat," or APT, used to describe hacks that last weeks or months and target a specific organization that possesses assets the attackers covet. Since then, reports of APTs have become a regular occurrence. In 2011, for instance, attackers breached the servers of RSA and stole information that could be used to compromise the security of two-factor authentication tokens sold by the company, a division of EMC. A few months later, defense contractor Lockheed Martin said an attack on its network was aided by the theft of the confidential RSA data relating to its SecurID tokens, which some 40 million employees use to access sensitive corporate and government computer systems.
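
SecurID's algorithm is proprietary and RSA has never detailed exactly what was stolen, but the underlying risk is easy to show with the open TOTP scheme (RFC 6238) that many software tokens use: anyone holding a token's secret seed can compute the same one-time codes the token displays, so a database of seeds is effectively a skeleton key.

    # Hedged illustration, not RSA's algorithm: RFC 6238 TOTP shows why a
    # stolen seed compromises a token -- the code is a pure function of the
    # shared secret and the current time.
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, period=30, digits=6):
        """Return the current TOTP code for a base32-encoded seed."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    if __name__ == "__main__":
        # Seed from RFC 6238's test vectors: the ASCII string "12345678901234567890".
        print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"))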

"That was the inspiration around all this stuff," Facebook Security Director Ryan "Magoo" McGeehan said of the company's drills. "You don't want the first time you deal with that to be real. You want something that you've done before in your back pocket."

Even after employees learned this particular hack was only for practice—about a half hour after the pseudo backdoor was closed—they still weren't told of the infection on the engineer's laptop or the zero-day vulnerability that was used to foist the malware. They spent the next 24 hours doing forensics on the computer and analyzing server logs to unravel that mystery. "Operation Loopback," as the drill was known internally, is notable for the pains it took to simulate a real breach on Facebook's network.

"They're doing penetration testing as it's supposed to be done," said Rob Havelt, director of penetration testing at security firm Trustwave. "A real pen test is supposed to have an end goal and model a threat. It's kind of cool to hear organizations do this."

He said the use of zero-day attacks is rare but by no means unheard of in "engagements," as specific drills are known in pen-testing parlance. He recalled an engagement from a few years ago involving a "huge multinational company" whose network and desktop computers were fully patched and configured in a way that made them hard to penetrate. As his team probed the client's systems, members discovered 20 Internet-connected, high-definition surveillance cameras. Although the default administrator passwords had been changed, the Trustwave team soon discovered two undocumented backdoors built into the surveillance cameras' authentication system.
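
Trustwave has not published what the cameras' interface looked like, but a hardcoded account in an authentication system amounts to a credential that always works. A generic sketch of the check, assuming a device that puts an HTTP status page behind basic auth and using a short, invented list of vendor defaults:

    # Generic sketch only: try a short list of default or hardcoded
    # credentials against an HTTP endpoint protected by basic auth.
    # The URL is a documentation-range placeholder; test only equipment
    # you are authorized to assess.
    import urllib.error
    import urllib.request

    DEFAULT_CREDS = [("admin", "admin"), ("admin", "12345"), ("root", "pass")]

    def find_working_credentials(url, creds=DEFAULT_CREDS):
        """Return the first (user, password) pair the endpoint accepts, else None."""
        for user, password in creds:
            manager = urllib.request.HTTPPasswordMgrWithDefaultRealm()
            manager.add_password(None, url, user, password)
            opener = urllib.request.build_opener(
                urllib.request.HTTPBasicAuthHandler(manager))
            try:
                opener.open(url, timeout=5)
                return user, password
            except (urllib.error.HTTPError, urllib.error.URLError):
                continue
        return None

    if __name__ == "__main__":
        print(find_working_credentials("http://192.0.2.10/status"))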

An image retrieved from high-definition surveillance cameras used by a large company. During a penetration test, Trustwave employees used them to steal "tons" of login credentials. (Credit: Trustwave)
