By Taylor Armerding, security consultant, Synopsys
Is exposure to software source code disastrous enough to merit a meltdown?
Based on a couple of incidents in the last few weeks, you might think so. The first was portrayed as major tech companies handing tools to the Russians to spy on the US. The other was termed by one researcher “the biggest leak in history.”
But those views are not unanimous. Other voices in the IT security community are declaring that everybody ought to take a chill pill.
Both events generated plenty of media coverage, however. It started with Reuters reporting a couple of weeks ago that major tech companies have allowed Russian authorities to inspect the source code of their software – the same software used by at least a dozen US government departments including Defense, State, NASA, the FBI, and other intelligence agencies.
The second round came this past week, with word that an anonymous “someone” (later reported to be a former Apple intern) had posted the “iBoot” source code from Apple’s iOS 9 on the code-hosting site GitHub – a disclosure that Jonathan Levin, author of several books on iOS and OS X, told Motherboard qualified as “the biggest leak in history.”
Which seems a major stretch. Bigger than the breach of the US Office of Personnel Management (OPM), which compromised the personally identifiable information (PII) of more than 22 million current and former employees? Bigger than the Equifax breach, which exposed the PII and credit history of about 145.5 million people?
Perhaps a “leak” is considered different from a “breach,” but for there to be a leak, there first has to be a breach, even if it’s committed by an insider.
So, let’s take them one at a time. Reuters reported that the tech companies – SAP, Symantec, Micro Focus and McAfee – had permitted Russian authorities to inspect their source code before approving the products for use.
According to the companies, Russia just wanted to make sure the code didn’t have backdoors or defects that could allow hackers into their systems. They added that those inspections were done under tightly controlled conditions, with not even a pencil allowed in the room.
Still, US government officials and several security experts said that allowing a prospective customer to inspect software source code put the US at risk.
A Dec. 7 letter from the Pentagon to Sen. Jeanne Shaheen (D-NH) said that allowing governments to review the code “may aid such countries in discovering vulnerabilities in those products.”
But Gary McGraw, vice president of security technology at Synopsys’ Software Integrity Group, branded those warnings “ridiculous.”
McGraw, who initiated a lengthy debate on Twitter about the issue, says he is not advocating handing over the proprietary source code to anyone who wants to inspect it, because it would put intellectual property (IP) at risk.
But he said when it comes to defects that can be exploited for cyber attacks or espionage, access to the source code is no more dangerous – likely less so – than access to the binary code, which is created from the source code and is sold along with the commercial product that results.
“You sell them (customers) the binary,” he said, which means all customers can inspect it for exploitable defects at their leisure.
McGraw contends that the source code scare is simply unwarranted FUD – fear, uncertainty, and doubt – that has tended to reappear every few years for the past two decades.
“The myth is that having source code out there is somehow way more dangerous and exposes you to attackers in a way that having binary out does not,” he said.
“Software exploit can be and is accomplished with binary-only all the time. In fact, some attackers, and white-hat exploit people, argue that having a binary is better than having source when it comes to exploit development.”
The Reuters story didn’t even mention binary. But McGraw said the confusion between the two allows “unscrupulous vendors to produce FUD and get coverage.
“The programs that the Russians were reviewing were programs whose binary is widely available commercially,” he said. “The fact that it was the source code being reviewed doesn’t put any other customer, including the US government, at any greater risk.”
So, does that same logic apply to Apple and users of its older iPhones? Should they just chill, since they aren’t at any increased risk from the GitHub post? As has been reported extensively, the leaked code is old – from two versions ago.
That was the basic message from Apple itself, which issued a statement to Motherboard that “old source code from three years ago appears to have been leaked, but by design, the security of our products doesn’t depend on the secrecy of our source code.
“There are many layers of hardware and software protections built in to our products, and we always encourage customers to update to the newest software releases to benefit from the latest protections.”
Security researcher Patrick Wardle essentially agreed. He told Mashable that having access to code does not necessarily make a well-designed OS less secure, noting that Linux is quite secure despite being totally open-source.
And, like McGraw, he added that good hackers “don’t need access to source code – they can reverse a binary and find bugs.”
Still, the leaked code is the part that is responsible for ensuring a trusted boot of the operating system. And, obviously, it wasn’t exposed only to selected people who weren’t even allowed to bring pencils into a room. It was out there for anyone to grab.
While Apple issued a takedown notice under the Digital Millennium Copyright Act (DMCA) hours after the Motherboard story appeared, about the only thing that accomplished was to confirm that the code was legitimate. By then it had spread far beyond GitHub.
Another reality is that not everybody updates their software. According to Apple’s own estimate, about 7 percent of iPhone and iPad owners may be using iOS 9 or earlier. And with about a billion devices out there, that means a potential attack surface of 70 million devices.
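The attack-surface estimate above is simple arithmetic on the article’s own figures – roughly a billion devices, with about 7 percent still running iOS 9 or earlier – and can be sketched as:

```python
# Back-of-the-envelope attack-surface estimate, using the article's figures:
# ~1 billion active iOS devices, ~7 percent still on iOS 9 or earlier.
# Integer math avoids floating-point rounding in the percentage step.
total_devices = 1_000_000_000
percent_on_old_ios = 7

exposed = total_devices * percent_on_old_ios // 100
print(f"{exposed:,}")  # -> 70,000,000
```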
Still, it seems that if anybody is at risk in this case, it would be Apple itself, since the source code is its proprietary IP, and access to it might make it easier to jailbreak the OS or run it on non-Apple devices – something the company ferociously tries to prevent.
That is McGraw’s take. “The thing that makes this story interesting is that it’s a bit of an embarrassment for Apple who has guarded their IP so rigorously,” he said. “And yes, it could make jailbreaking easier.”
John Kozyrakis, a research engineer at Synopsys, said that access to the iOS source code might also make it a bit easier for those looking for defects in the binary code.
“Unlock mechanisms are used by three main groups,” he said. “For legitimate forensic tools, malicious exploit tools for targeted attacks and jailbreak tools.
“The release of this source could help ongoing efforts to use iOS on generic, non-Apple hardware or emulators, which has not been possible so far, and is restricted by Apple.”
But Amit Sethi, a senior principal consultant, also thinks the leak “should have little impact.”
He said even if it does expose some defects in Apple iOS source code, “we’ll end up with more secure devices in the long term, as Apple fixes the discovered vulnerabilities.”
For customers and users, he said, it should be a reminder that “people should design and implement systems – especially client-side components – so they don’t rely on their source code being secret.”
Beyond that, as McGraw has been saying for more than a decade, the threat of exploits from the exposure of source code can be minimized by building security into it from the start.
“During development, source code can and should be reviewed by a static analysis program,” he said. “When you find a bug in the source code, it is easier to fix, since you know where in the code it is.”
About the Author
Taylor Armerding is an award-winning journalist who left the declining field of mainstream newspapers in 2011 to write in the explosively expanding field of information security. He has previously written for CSO Online and the Sophos blog Naked Security. When he’s not writing, he hikes, bikes, golfs, and plays bluegrass music.
About the Synopsys Software Integrity Platform
Synopsys offers the most comprehensive solution for building integrity—security and quality—into the software development lifecycle and supply chain. The Software Integrity Platform unites leading testing technologies, automated analysis, and experts to create a robust portfolio of products and services. This portfolio enables companies to develop personalized programs for detecting and remediating defects and vulnerabilities early in the development process, minimizing risk and maximizing productivity. Synopsys, a recognized leader in application security testing, is uniquely positioned to adapt and apply best practices to new technologies and trends such as IoT, DevOps, CI/CD, and the Cloud. For more information, go to www.synopsys.com/software.
Synopsys, Inc. (Nasdaq: SNPS) is the Silicon to Software™ partner for innovative companies developing the electronic products and software applications we rely on every day. As the world’s 15th largest software company, Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP and is also growing its leadership in software security and quality solutions. Whether you’re a system-on-chip (SoC) designer creating advanced semiconductors or a software developer writing applications that require the highest security and quality, Synopsys has the solutions needed to deliver innovative, high-quality, secure products. Learn more at www.synopsys.com.