Stuck in Cybersecurity Hell?
For security professionals, it may seem that no matter what steps you take, the intruders are still hard on your heels – and they probably are. The problem is that we aren’t doing enough to address the fundamental reasons security vulnerabilities exist. It’s a situation we are looking to change through professional education initiatives at MIT.
If we want to escape from “cyber hell,” professionals will need additional training – not only to learn the latest technology, but also to understand public policy and organizational management. By providing a global, systems-wide look at security – examining the past and learning from examples of what can go wrong – we can help organizations get off the security treadmill and transition to a more proactive, systematic way of managing attacks.
The heart of the issue lies in how we design and implement system software. Much of the software we’re using today has its roots in the 1970s, when memory and processing power were at a premium. Instead of enforcing critical properties such as security at the lowest level – in hardware – designers left it to programmers to deal with such issues in their software.
In short, the architects of those systems made perfectly reasonable engineering trade-offs for their world, but our world is very different. Between then and now, Moore’s Law has driven a steady improvement in computer performance; systems today are far more powerful, networked together, and entrusted with critical functions – yet we still use architectures appropriate to an earlier era.
But to fix the problem requires a new way of thinking about how to design computer systems.
Novel hardware architectures can help enforce the security properties that operating systems and programming languages expect, including memory safety, type safety, information flow and access control. But implementing these systems means abandoning or replacing popular programming languages, such as the C family, which has been in use for decades. That’s a lot of code. What would be the incentive for anyone to undertake the effort of rewriting all the system code and re-architecting all those systems? That’s the critical question we are looking to address right now. We have the technology and know-how to create a new system that will provide guarantees about cyber safety, but change will take time… and some creative thinking.
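One classic example of hardware-enforced type safety is a tagged architecture, in which every memory word carries a tag identifying what it holds, and the processor traps any operation whose operand tags don’t match. The sketch below is a minimal, illustrative simulation in Python – the names (`TaggedWord`, `add`) are invented for this example, not part of any real instruction set:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaggedWord:
    """A machine word paired with a hardware-style type tag."""
    tag: str    # e.g. "int" or "ptr"
    value: int

def add(a: TaggedWord, b: TaggedWord) -> TaggedWord:
    # A tagged architecture checks operand tags on every operation;
    # integer addition applied to a pointer word raises a fault
    # instead of silently producing a forged pointer.
    if a.tag != "int" or b.tag != "int":
        raise TypeError("tag check failed: non-integer operand")
    return TaggedWord("int", a.value + b.value)

total = add(TaggedWord("int", 2), TaggedWord("int", 3))   # allowed
try:
    add(TaggedWord("ptr", 0x1000), TaggedWord("int", 8))  # trapped
except TypeError as fault:
    print("fault:", fault)
```

The point of the design is that the check happens below the level any program can bypass – software never gets the chance to treat a pointer as raw arithmetic data.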
Meanwhile, there are other faster and easier approaches that we can implement immediately to provide some level of guaranteed security. Two-factor authentication is one such example. At MIT, any core system that contains data we care about has a two-factor authentication scheme: you have a token and a PIN. This way, even if an attacker tricks you into revealing your credentials, they still need your physical token. It is a very effective approach.
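The token half of such a scheme is commonly a one-time-password generator. As a concrete illustration – not MIT’s actual implementation – the standard TOTP algorithm (RFC 6238, built on HOTP from RFC 4226) can be written in a few lines of Python using only the standard library:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the 8-byte big-endian counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte select a
    # 4-byte window, masked to 31 bits to avoid sign issues
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: a TOTP code is just HOTP over the current time step
    return hotp(secret, unix_time // step, digits)

# RFC 6238 test vector: this secret at time 59 yields "94287082"
print(totp(b"12345678901234567890", 59, digits=8))
```

Because the code depends on a shared secret held in the token’s hardware and on the current time window, a phished password alone is useless to the attacker – which is exactly the property the two-factor scheme is after.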
There are also some promising advances in memory safety moving things forward. Modern Intel family chips, for example, include a feature called MPX (Memory Protection Extensions) that helps prevent buffer overflow attacks, in which data overruns a buffer’s boundary, letting attackers corrupt adjacent memory and ultimately execute malicious code. Memory safety errors account for well over 50% of all vulnerabilities.
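The essence of a bounds-checking scheme like MPX is that every pointer carries known bounds, and each access is validated against them before it happens. The following Python sketch simulates that idea in software (the `BoundedBuffer` class is invented for illustration; it is not how MPX is implemented in hardware):

```python
class BoundedBuffer:
    """A fixed-size buffer whose every write is bounds-checked,
    analogous to what hardware bounds registers enforce."""

    def __init__(self, size: int):
        self._data = bytearray(size)

    def write(self, offset: int, payload: bytes) -> None:
        # Reject any write that would spill past the buffer's end --
        # exactly the overflow condition attackers exploit in C code.
        if offset < 0 or offset + len(payload) > len(self._data):
            raise IndexError("write would overflow the buffer")
        self._data[offset:offset + len(payload)] = payload

buf = BoundedBuffer(16)
buf.write(0, b"hello")            # in bounds: succeeds
try:
    buf.write(12, b"0123456789")  # 12 + 10 > 16: trapped
except IndexError as fault:
    print("blocked:", fault)
```

In unchecked C, that second write would silently clobber whatever lies past the buffer; the whole point of hardware bounds checking is to turn that silent corruption into an immediate, visible fault.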
However, the critical question isn’t whether safer computer systems can be designed, but whether key players – including technology makers – can be incentivized to adopt the technologies and processes needed to create a hack-proof computer. We’ll have to make it worthwhile to rewrite all that system code and re-architect those systems.
It may be that cyber risk simply becomes great enough to spark such massive change. Or perhaps the effort will be driven by insurance companies, which have an interest in reducing risk. Another vehicle may be companies banding together behind architectural principles they want to see in the products they buy.
What’s clear is that solving the problem – the complicated mess that is cybersecurity – will require professionals to take a holistic look at cybersecurity technologies, techniques and systems. New technologies will require new policies and incentives, and emerging policies must adapt to future technologies. Professionals can prepare for the future by examining the past, learning from others’ mistakes and understanding the capabilities of different architectures. They must also be equipped with new tools to help develop and implement more creative and effective cybersecurity solutions.
The key enabler for all of that is a well-specified set of architectural principles that can give some guarantees about cyber safety – and I think we know enough of those principles to proceed along those lines.
About Howard Shrobe
Howard Shrobe is a Principal Research Scientist at MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He is a former Associate Director of CSAIL and is the Director of CSAIL’s CyberSecurity@CSAIL initiative.
Source: ITSP Magazine