The niner noteworthy and the 12 recaps of 2016 (day 7)

According to Mikko from F-Secure, all security issues are software security issues, and to some extent I have to agree with his statement.
In today’s instalment I will be looking at several software security problems, from backdoors to industrial software and from regular vulnerabilities to improving software in such a way that we may be able to reduce the number of errors per line of code significantly. That last number is typically 25 errors per 1,000 lines of code, in case you were wondering.

In chronological order:

Microsoft’s Golden Key leak illustrates why governments should stop asking for backdoors

This is indeed a prime example of why backdoors, development hooks and other code that bypasses security measures to aid developers is such a bad idea once it ends up in production code. And then I am not even mentioning backdoors that are there on purpose because a government requested them.
This is precisely what will go wrong; it’s not even a question of if it will, because it will, for sure.
The article’s author hopes that governments will finally understand what the issue is. I’ve got news for you: they understand the risks perfectly well, they simply don’t care enough about them to change their minds.
source: The Next Web (external link)


Yes I know, everything is virtual these days, from servers to entire networks, right? Wrong: there is, and always will be, physical hardware beneath that virtual layer, and if it is not secured correctly, that hardware is your hidden security risk.
The good people at the Vrije Universiteit in Amsterdam show very nicely what happens if you can modify physical memory from within your virtual environment. That could potentially affect a lot of the virtual infrastructure the current Internet is built upon, also known as “cloud” computing.
source: VUSec (external link)

Is that app you’re using for work a security threat?

It is fitting that this article appears in this instalment on the first day of the new year, because this will become one of the major issues for 2017, insofar as it wasn’t an issue already.
It’s not only the issue of using cloud services that store your data in countries with lower levels of protection and safeguards; it is also the usage and privacy policies of some of these services. Notably, Google and Dropbox actually acquire rights to part of the content you put on their servers, as payment for you using them for free.
As most of these services used within companies are never run past the IT department, a cloud approval board (if one exists) or any formal approval process, it is often the consumer pay-with-your-data versions that end up being used.
And yes, the most important advice is indeed to give your employees the tools they really need to do their jobs efficiently and easily, as well as to educate them on security awareness and data classification. If your company needs help with this, don’t hesitate to contact me for advice.
source: BBC News (external link)

Android phones rooted by “most serious” Linux escalation bug ever

Bleeding hearts, sniffing poodles and now a dirty cow. The names sometimes given to open source vulnerabilities are as interesting as the flaws themselves are dangerous. Okay, POODLE was an SSL protocol design flaw rather than an implementation bug, but still.
The interesting bit in this article isn’t the Android topic, but the fact that this bug has been in the Linux kernel since 2007, combined with the fact that a lot of today’s devices run some form of Linux: multifunction printers, internet modems, IoT devices, mini computers like the Raspberry Pi, web servers and other server systems directly connected to the internet. So next to the Android problem we have a whole host of other devices that need patching or upgrading.
The only major difference, at least in most cases, is that for those devices updates or patches will actually become available, compared to most Android devices, which will stay vulnerable until the owner stops using them or the device stops working.
source: Ars Technica (external link)

Paypal fixes ‘worrying’ security bug

A very nice case of not sanitising input received from outside your application. If you can modify web traffic to claim that you have answered the security questions correctly, there probably was no check on the server side at all.
Yes, even the big ones can screw this stuff up. Security questions as a fallback method for 2-factor authentication are there purely for convenience and certainly not for security reasons, because any fraudster who knows your password and the answers to your security questions can bypass the 2FA system at all times.
Since a lot of security questions and answers were leaked in some major breaches last year, the chance of that information floating around in the wild is quite significant.
That there must be some way of recovering your account when your mobile phone is lost, stolen or broken is evident; doing so by means of security questions, however, may not be the best idea.
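To make the server-side point concrete, here is a minimal sketch in Python of what such a check could look like. This is my own illustration, not PayPal’s actual code; the function and field names are invented for the example.

```python
import hashlib
import hmac


def verify_security_answers(stored_hashes: dict, submitted: dict) -> bool:
    """Server-side verification of security-question answers.

    The client never gets to assert "questions answered correctly";
    the server recomputes and compares the answer hashes itself.
    """
    # Every stored question must be answered; no more, no less.
    if set(stored_hashes) != set(submitted):
        return False
    ok = True
    for question, answer in submitted.items():
        digest = hashlib.sha256(answer.strip().lower().encode()).hexdigest()
        # Constant-time comparison to avoid leaking information via timing.
        ok &= hmac.compare_digest(digest, stored_hashes[question])
    return ok
```

In a real system you would use a salted, slow hash (bcrypt, scrypt) rather than plain SHA-256; the point here is simply that the pass/fail decision is taken server-side and never trusted from the client.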
source: BBC News (external link)

Schneider Electric Patches Major ICS Vulnerability

This should come as no surprise, as software security is even worse in industrial systems than in your office environment, which is more or less under constant threat. One of the reasons for this is that truly misusing industrial systems is certainly not trivial, so the pressure to secure them has been low.
On the other hand, most of these systems were never designed with internet connectivity, or any network connectivity beyond the industrial networks, in mind. Add to that the growing use of off-the-shelf commodity hardware and software that can’t be patched easily or quickly, and you may understand what the challenges are.
source: eSecurity Planet (external link)

Schneider Electric plugs gaping hole in industrial control kit

Yes, most of those industrial control systems are indeed designed to run for at least 20 to 30 years. Given that, it isn’t the systems that are now 15 years or older I am particularly worried about: those are almost exclusively based on dedicated hardware and Unix-style operating systems, and almost always not network capable. It’s the newer systems, based on Windows servers and desktops and often network connected for remote maintenance, that are the biggest issue.
Add to that the connections between the office and industrial environments within companies as a possible attack vector, and you can draw an interesting picture.
I have worked in the industrial sector previously, and from that experience I know there is a lot more to do to secure our vital infrastructure and other industrial companies. Probably even more than in office environments these days.
source: The Register (external link)

It’s time: Patch Network Time Protocol before it loses track of time

The problem with the NTP daemon is not only its ubiquitous nature; its port (UDP 123) also needs to be open in both directions for the service to function correctly. Whereas with a TCP service you can close a port for incoming connections, with UDP that is not possible because of the connectionless nature of the protocol.
Because of this, any vulnerability is automatically exploitable for as long as the daemon remains unpatched, assuming a patch is even available in time.
The Network Time Protocol is used to keep all systems on a network time-synchronised, which is used, for example, by security software to match logged events on different systems in order to detect patterns and incidents. Throwing the NTP system off will make sure that, over time, systems lose track of the correct time.
All computer hardware drifts off the correct time by minuscule amounts every day; correcting that is exactly what the NTP protocol and daemon are for.
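To make this concrete, here is a minimal SNTP sketch in Python (my own illustration of how the protocol works, not code from the article): it builds the standard 48-byte client request, converts the 64-bit NTP timestamp (seconds counted from 1900) to Unix time, and shows the round trip over UDP port 123.

```python
import struct

# NTP counts seconds from 1900-01-01, Unix from 1970-01-01.
NTP_UNIX_DELTA = 2_208_988_800


def build_sntp_request() -> bytes:
    """48-byte SNTP client packet: leap indicator 0, version 3, mode 3 (client)."""
    return struct.pack("!B47x", 0b00_011_011)


def ntp_to_unix(seconds: int, fraction: int) -> float:
    """Convert a 64-bit NTP timestamp (32-bit seconds + 32-bit fraction) to Unix time."""
    return seconds - NTP_UNIX_DELTA + fraction / 2 ** 32


def clock_offset(server: str = "pool.ntp.org") -> float:
    """Rough difference between the server's transmit time and our clock.

    The reply arrives on the same UDP socket, which is why port 123
    must be reachable in both directions for time sync to work.
    """
    import socket
    import time
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2)
        sock.sendto(build_sntp_request(), (server, 123))
        data, _ = sock.recvfrom(48)
    # The server's transmit timestamp sits at bytes 40-47 of the reply.
    tx_secs, tx_frac = struct.unpack("!II", data[40:48])
    return ntp_to_unix(tx_secs, tx_frac) - time.time()
```

A real ntpd does far more (multiple samples, round-trip compensation, gradual slewing of the clock), but this is the basic packet exchange the daemon is built around.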
source: The Register (external link)

Oh no, software has bugs, we need antivirus. Oh no, bug-squasher has bugs, we need …

Okay, so security software has vulnerabilities too; big deal, and hardly surprising. However, this research shows that in a given period eleven out of the top 46 vulnerable products were security components, which amounts to close to 25 percent, and that is somewhat on the high side.
The good thing is that patches become available quickly; the article says often on the day the vulnerability becomes public, but that probably has more to do with responsible disclosure and bug bounty programmes than with the speed at which the patch was created.
This is more of a wake-up call for those who thought security software can’t be vulnerable: think again. And specifically for this type of software: apply patches immediately, as soon as they are available.
source: The Register (external link)

Software can be more secure, says NIST, and we think we know how

In other words: security by design, security and privacy by default, secure coding standards, input validation and a proper testing setup where more than just your basic scenarios are tested.
What I find the most troubling bit in the National Institute of Standards and Technology (NIST) report is the three-to-seven-year timeframe required to implement their suggestions. Why oh why, in 2016, do we still expect to need until well into the next decade to maybe make our software more secure?
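As a tiny illustration of the input-validation point above (my own example, not taken from the NIST report): define exactly what good input looks like and reject everything else, instead of trying to filter out known-bad input.

```python
import re

# Allowlist: a username is 3-32 characters, starts with a letter,
# then lowercase letters, digits or underscores. Everything else is rejected.
USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{2,31}")


def parse_username(raw: str) -> str:
    """Return a validated, normalised username or raise ValueError."""
    candidate = raw.strip().lower()
    if not USERNAME_RE.fullmatch(candidate):
        raise ValueError(f"invalid username: {candidate!r}")
    return candidate
```

Validating at the boundary like this means the rest of the code never sees unexpected input, which removes whole classes of injection bugs in one go.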
source: The Register (external link)

Other articles in this series