When Will the U.S. Government Act Against Software and Hardware Manufacturers on Cyber Security?

Back in 2003, PBS Frontline did a long news story about hackers who “took advantage of software vulnerabilities that were previously known to manufacturers.”

This is what the Council to Reduce Known Cyber Vulnerabilities is about: stopping the rising tide of known cyber vulnerabilities that manufacturers blithely put in their software and hardware. You buy it new, shrink-wrapped, with known cyber vulnerabilities already in it.

There are so many that the U.S. government publishes a catalogue of them on the internet. It’s called the Common Vulnerabilities and Exposures (CVE) list, and the National Institute of Standards and Technology publishes its entries, with severity scores, in its National Vulnerability Database.
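Anyone can query that catalogue programmatically. Here is a minimal sketch in Python, assuming the NVD’s public REST API (version 2.0) and its JSON response layout; the CVE identifier is the infusion-pump flaw discussed later in this piece:

```python
# Minimal sketch: look up one entry in NIST's National Vulnerability Database.
# Assumes the NVD REST API v2.0 endpoint and its JSON response layout.
import json
import urllib.request

CVE_ID = "CVE-2015-3459"  # the infusion-pump Telnet flaw discussed below
URL = f"https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={CVE_ID}"

with urllib.request.urlopen(URL) as response:
    data = json.load(response)

# Each result wraps a "cve" object; print its English-language description.
for item in data.get("vulnerabilities", []):
    cve = item["cve"]
    for desc in cve.get("descriptions", []):
        if desc["lang"] == "en":
            print(cve["id"], "-", desc["value"])
```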

In 2003, when PBS Frontline did its news story, it conducted extensive interviews with executives from Microsoft and Symantec, and with the Cyber Czar (not his real title) at the White House. At the time, the title was Presidential Adviser for Cyberspace Security.

Back in 2003, in the words of PBS, the debate centered on “whether the manufacturers should be held more accountable for software security.”

If a stream or river is being polluted, then the obvious answer is to find the polluter and put an end to the pollution at the source.

For medical devices, network hardware, industrial control systems, and any other hardware or software used to control, manage, and maintain our critical infrastructure, this should be a simple, logical approach.

In reality, however, no business buyer or consumer knows what known vulnerabilities are in their hardware or software.  How can we fix what we do not measure?

The recent announcement that Underwriters Laboratories (UL) will partner with Codenomicon is an effort to begin measuring those known vulnerabilities.

OWASP’s Top 10 includes a category, A9 (Using Components with Known Vulnerabilities), and its Dependency-Check tool tests for exactly that. It is a free program that allows manufacturers and software vendors to check their code without spending money on special software or commercial code-checking services.
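As a rough sketch, a build script might invoke the Dependency-Check command-line scanner like this. It assumes the tool is installed and on the PATH; the flag names follow the project’s documented options but should be checked against your installed version:

```python
# Rough sketch: run OWASP Dependency-Check from a build script and stop the
# build if the scan cannot complete. Assumes the CLI launcher
# (dependency-check.sh) is installed and on the PATH.
import subprocess
import sys

result = subprocess.run(
    [
        "dependency-check.sh",
        "--project", "my-product",  # label used in the report
        "--scan", "./lib",          # directory of third-party components
        "--format", "HTML",         # write a human-readable report
        "--out", "./dc-report",     # where the report lands
    ],
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    print(result.stderr, file=sys.stderr)
    sys.exit("Dependency-Check scan failed; do not ship this build.")
print("Scan complete; review ./dc-report for components with known CVEs.")
```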

Lloyd’s Register Energy recently reported that a brand-new offshore oil rig, just out of the shipyard, had to be idled for 19 days (at a cost of $3 million a day) while it was purged of malware. The 2015 Verizon Data Breach Investigations Report found that approximately 97% of the exploits observed in 2014 targeted known vulnerabilities. It may be reasonable to assume that the malware was inserted through known vulnerabilities, and that the mere presence of malware is a strong indicator of a clear and present threat of exploits via known vulnerabilities.

Despite millions of dollars spent annually on cyber security across multiple sectors, the problems are getting worse. One hospital infusion pump was declared by a security researcher to be the least secure device the researcher had ever tested.

This is a patient safety issue. The pump essentially sits there waiting to be hacked: it has an open, unauthenticated Wi-Fi port, so anyone with the device’s IP address can instantly get root access over Telnet. This hospital drug pump contains a known vulnerability listed in the NIST database; as SecurityWeek reports, it lacks “authentication for Telnet sessions (CVE-2015-3459), which allows remote attackers to gain root privileges via TCP port 23.” (The CVE numerical designation is from the NIST database, meaning it is an example of a known vulnerability.)

As one security article, “Bugs in the hospital: how to pwn your own pethidine machine,” puts it: “In short, CVE-2015-3459 is the sort of bug that shouldn’t have happened in the first place, because telnet shouldn’t have been there, even with authentication.  And it shouldn’t have been there in the second place, either, because an errant telnet server listening on TCP port 23 is easy to spot during testing.”
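That point about testing is worth taking literally. Below is a minimal sketch of such a test in Python; the device address is a placeholder, and a plain TCP connection attempt on port 23 is all it takes:

```python
# Minimal sketch: spot an errant Telnet listener during testing.
# The target address is a placeholder from the TEST-NET-1 documentation range.
import socket

TARGET = "192.0.2.10"  # hypothetical device address
TELNET_PORT = 23

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(3)  # seconds
try:
    sock.connect((TARGET, TELNET_PORT))
    print(f"WARNING: {TARGET} is accepting Telnet connections on port 23.")
except OSError:  # covers refused connections and timeouts
    print(f"{TARGET} does not appear to expose Telnet.")
finally:
    sock.close()
```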

How could the manufacturer not know? Or is the manufacturer pretending not to know, so it can claim plausible deniability?

Little has changed since 2003.

Manufacturers are still using the same excuses they used back in 2003. Quoting the PBS Frontline story again: “many companies maintain that they are doing their best to prevent and self-correct for inadvertent vulnerabilities,” and that “(t)he code review process and the entire software development process does not have an appropriate level of emphasis on security. The consumers and clients of most software companies are so demanding of new features and capabilities that those features take priority over better software development practices and techniques. So our demand for new features essentially fuels the fire of increased vulnerabilities in software.”

When will some entity of the U.S. government make an example of those polluting the digital ecosystem with hardware and software riddled with known vulnerabilities?

Someone needs to be made an example of, so that others will put processes in place, like Sonatype’s DevOps tooling for software developers, which tells them in real time, as they are building the code, whether they are choosing a component with a known vulnerability, and then points them toward a version of the same component that does not contain it.

Software manufacturers already use systems to track the copyright and license terms of the components they build their software from; they may use Black Duck’s software to do it, and if they find the wrong license or copyright, they use a different component. This is exactly the process software and hardware manufacturers should be using today for known security vulnerabilities, but they are not.
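The same gate that works for licenses would work for known vulnerabilities. Here is a hypothetical sketch; the component names and the vulnerability list are made up, standing in for a real feed such as the NVD:

```python
# Hypothetical sketch: gate component choices on known vulnerabilities, the
# same way license-compliance tools gate on copyright and license terms.
# All component and vulnerability data below is made up for illustration.

KNOWN_VULNERABLE = {
    ("example-http-lib", "1.2.0"): ["CVE-0000-0001"],    # placeholder entry
    ("example-xml-parser", "2.0.1"): ["CVE-0000-0002"],  # placeholder entry
}

def check_component(name: str, version: str) -> bool:
    """Return True only if this component version has no known vulnerabilities."""
    cves = KNOWN_VULNERABLE.get((name, version))
    if cves:
        print(f"REJECT {name} {version}: known vulnerabilities {cves}. "
              "Pick a patched version of the same component.")
        return False
    print(f"OK {name} {version}")
    return True

check_component("example-http-lib", "1.2.0")  # rejected
check_component("example-http-lib", "1.3.0")  # clear
```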

It’s clear that voluntary compliance is not working.

Legislation such as H.R. 5793, the Cyber Supply Chain Management and Transparency Act, would apply to all future U.S. government procurement, placing clauses in purchasing contracts that direct vendors to:

  • supply a bill of materials (kept confidential within the purchasing agency) of each binary component that is used in the software, firmware, or product (a sketch of such a bill of materials follows this list);
  • verify their product does not contain any known security vulnerabilities;
  • list any known vulnerabilities or defects;
  • provide that the product design allows fixes with patches, updates, or replacements of vulnerabilities discovered after the purchase date; and
  • provide timely repairs for any newly discovered vulnerabilities for the lifecycle of the product.
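As for the first clause, a bill of materials need not be exotic. Here is a minimal sketch in the spirit of formats such as CycloneDX; the product and component entries are made up, and a real filing would follow whatever schema the purchasing agency specifies:

```python
# Minimal sketch of a software bill of materials, in the spirit of formats
# such as CycloneDX. All product and component entries are made up; the CVE
# shown is the Telnet flaw discussed above, included purely as an example of
# a disclosed known vulnerability.
import json

sbom = {
    "product": "example-infusion-pump-firmware",
    "version": "4.1.0",
    "components": [
        {"name": "example-rtos-kernel", "version": "2.8", "supplier": "Vendor A"},
        {"name": "example-tls-library", "version": "1.0.1", "supplier": "Vendor B"},
        {"name": "example-telnet-daemon", "version": "1.20", "supplier": "Vendor C",
         "known_vulnerabilities": ["CVE-2015-3459"]},  # disclosed per the clauses above
    ],
}

print(json.dumps(sbom, indent=2))
```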

The Royce bill would have prevented the Hospira drug pump from ever reaching a single U.S. hospital, since government sales would have driven change into the manufacturing and software-building processes for all of the company’s products. The Royce bill is really the first set of long-overdue building codes for software. The first and most urgent building code is to eliminate known vulnerabilities in new code and products, and then to establish a system in which newly discovered vulnerabilities in the code or product can be tracked and fixed.

The problem of devices, firmware, and software being riddled with known vulnerabilities is getting worse, not better. The current policies and procedures are not working to solve it.
