
Even Einstein Couldn't Fix Cybersecurity

JULY 2, 2015

The Einstein and Continuous Diagnostics and Mitigation cybersecurity programs have been hailed as the cornerstone of repelling cyberthreats in real time -- but it turns out this is not actually the case.
A massive cyberattack at the U.S. Office of Personnel Management (OPM) exposed the personal information of as many as 4 million federal employees. Though this type of news is not unusual, this particular case is different: multi-billion-dollar federal civilian cyberdefense systems were hacked. The cyberdefense systems supposedly protecting the OPM are Department of Homeland Security programs known as Einstein and Continuous Diagnostics and Mitigation (CDM) -- and they were hailed as the cornerstone of repelling cyberthreats in real time.

Unfortunately, this is not actually the case: it took five months to discover the intrusion. Hackers hit the OPM in December, and the agency did not detect the breach until April. How bad the attack really was is still being analyzed.
WHAT ARE EINSTEIN AND CDM?

Einstein (also known as the EINSTEIN Program) is an intrusion detection system that monitors the network gateways of U.S. government departments and agencies for unauthorized traffic. The software was developed by the United States Computer Emergency Readiness Team (US-CERT), the operational arm of the National Cyber Security Division (NCSD) of the U.S. Department of Homeland Security (DHS). The program was originally developed to provide "situational awareness" for civilian agencies. The first version examined network traffic; the expanded version in development can inspect content as well.
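To make the distinction between the two versions concrete, here is a minimal sketch of gateway monitoring in Python. It is purely illustrative -- the addresses, signatures and logic are invented, not the actual EINSTEIN implementation -- but it shows the difference between flagging unauthorized traffic and inspecting content.

```python
# Hypothetical sketch, not the actual EINSTEIN code: a toy gateway
# monitor that flags flows from unapproved sources (traffic-level check)
# and payloads matching a known bad signature (content inspection).
from dataclasses import dataclass

@dataclass
class Flow:
    src_ip: str
    dst_ip: str
    payload: bytes

APPROVED_SOURCES = {"10.0.0.5", "10.0.0.6"}    # assumed agency hosts
KNOWN_BAD_SIGNATURES = [b"\x90\x90\x90\x90"]   # placeholder signature

def inspect(flow: Flow) -> str:
    """Return a verdict for one flow crossing the gateway."""
    if flow.src_ip not in APPROVED_SOURCES:
        return "alert: unauthorized source"     # first-version-style check
    if any(sig in flow.payload for sig in KNOWN_BAD_SIGNATURES):
        return "alert: known bad signature"     # content-inspection check
    return "ok"

print(inspect(Flow("192.0.2.1", "10.0.0.9", b"GET /")))
print(inspect(Flow("10.0.0.5", "10.0.0.9", b"\x90\x90\x90\x90")))
```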

The CDM program provides IT security software and hardware tools and services for continuous protection of civilian agency networks and systems from cyberattack. The program is a dynamic approach to fortifying the cybersecurity of government networks and systems. CDM provides federal departments and agencies with capabilities and tools that identify cybersecurity risks on an ongoing basis, prioritize these risks based on potential impact, and enable cybersecurity personnel to mitigate the most significant problems first. Congress established the CDM program to provide adequate, risk-based and cost-effective cybersecurity and to allocate cybersecurity resources more efficiently. The program lets government entities expand their continuous diagnostic capabilities by increasing their network sensor capacity, automating sensor collections and prioritizing risk alerts.
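To illustrate the loop CDM describes -- collect sensor findings on an ongoing basis, score risks by potential impact, and surface the worst first -- here is a small hypothetical sketch. The finding names, scores and sweep interval are all invented, not the actual CDM tooling.

```python
# Illustrative only -- not the actual CDM tools. Repeated sweeps stand
# in for continuous diagnostics; prioritization is by potential impact.
import heapq
import random
import time

def collect_findings():
    """Stand-in for automated sensor collection across agency networks."""
    issues = ["unpatched host", "weak credential", "open port", "stale cert"]
    return [(random.randint(1, 10), random.choice(issues)) for _ in range(4)]

def triage(cycles: int = 3) -> None:
    for _ in range(cycles):                      # "ongoing basis": repeat
        findings = collect_findings()
        # Mitigate the most significant problems first.
        for impact, issue in heapq.nlargest(2, findings):
            print(f"mitigate first: {issue} (impact {impact})")
        time.sleep(0.1)                          # placeholder sweep interval

triage()
```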
WHY EINSTEIN AND CDM FAILED

One of the biggest problems with federal system security is the sheer number of connected and interconnected information systems, databases and agencies. These are often some of the largest systems in the world, with security upgrades at different stages of deployment in different locations and departments. Unfortunately, this sprawl creates breach points within the centralized security architecture: a weakest-link vulnerability at any single point can expose the entire system.

The more the federal government attempts to centralize these information services, the larger the attack surface becomes. This is a problem seen in many government entities, as well as in large companies that have enjoyed the efficiencies of centralized digital information for years while continually playing catch-up in securing their digital processes.

Another big issue that I have previously covered is that today we are securing enterprise service applications at the utility service level by analyzing historical trace logs. This is why it took the OPM months to detect the breach of 4 million employees' clearance records and related files. Current Intrusion Prevention Systems (IPS) and Intrusion Detection Systems (IDS) focus on securing the information utility transport services. The resulting massive aggregation of data in motion, data at rest and intermittently active data is a sitting duck for hackers. Larger information systems are then connected to additional utility transport services, creating multiple potential points of breach. The bigger the system, the more complex the data repositories -- and the more difficult it is to find what data has been compromised.

After a cyberattack, a cybersecurity analyst is faced with the unenviable task of finding the needle in the haystack, sorting through sometimes terabytes of system logs to discover the point of breach. This is why it takes so long to find the source of a cyberattack. In general, this is why large databases in both government and big corporations are being hacked: They react to system breaches rather than proactively stopping cyberattacks. Until we change the way we view the information services in our current cybersecurity systems, we will not effectively stop cyberattacks.
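The sketch below illustrates that needle-in-the-haystack search: a hypothetical post-breach scan of log archives for a single indicator of compromise. The file pattern and attacker address are invented, and real archives run to terabytes -- which is exactly why this retrospective approach is so slow.

```python
# Hypothetical post-breach log search. Every hit is evidence of an
# intrusion that already happened; the search explains what went wrong,
# it does not prevent anything.
import glob

IOC = "203.0.113.7"  # example attacker address (documentation range)

def find_breach_point(pattern: str = "logs/*.log"):
    hits = []
    for path in glob.glob(pattern):
        with open(path, errors="ignore") as fh:
            for lineno, line in enumerate(fh, 1):
                if IOC in line:
                    hits.append((path, lineno, line.strip()))
    return hits

for path, lineno, line in find_breach_point():
    print(f"{path}:{lineno}: {line}")
```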
NIST RESPONDS TO THE OPM ATTACKS

The National Institute of Standards and Technology (NIST) released guidelines for better security among government contractors, covering 14 areas: access control, awareness and training, audit and accountability, configuration management, ID and authentication, incident response, maintenance, media protection, personnel security, physical protection, risk assessment, security assessment, system and communications protection, and system and information integrity.

The heart of all these recommendations lies in the security system's audit and accountability -- and in how long that audit takes to complete. If you cannot be assured of what your workflow services are doing in real time, then none of the other recommendations really matter, and your systems are susceptible to breach. Case in point: If a hacker has planted a hidden, encrypted exploit in a system, how would you stop this on-demand, real-time cyberattack? The only way an attack of this nature can be stopped is by knowing what the process workflow was supposed to do. Anything else -- even an authenticated, encrypted, hidden exploit -- would be treated as an event anomaly and blocked. If utility machine events can occur in microseconds, then the cybersecurity solutions offered must be able to audit and block exploit anomalies ahead of these microsecond workflow processes and machine actions. If this cannot be done, then the hacker will always have the first-to-strike advantage.
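A minimal sketch of this audit model follows, with hypothetical service and event names: Only events that match the known-good workflow policy proceed, and everything else -- no matter how well hidden or authenticated -- is blocked as an anomaly before it executes.

```python
# Sketch of whitelist-style workflow auditing: the system knows what the
# process workflow is supposed to do; anything else is an anomaly.
ALLOWED_WORKFLOW_EVENTS = {
    ("payroll", "read_record"),
    ("payroll", "update_record"),
    ("backup", "read_record"),
}

def audit(service: str, action: str) -> bool:
    """Allow only events that match the known-good workflow policy."""
    if (service, action) in ALLOWED_WORKFLOW_EVENTS:
        return True                              # expected event: proceed
    print(f"blocked anomaly: {service}.{action}")
    return False                                 # unknown event: block

audit("payroll", "read_record")                  # normal workflow: allowed
audit("payroll", "exfiltrate_table")             # not in workflow: blocked
```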
MOVING FROM ALGORITHMIC TO NON-ALGORITHMIC APPROACHES

Both software code and algorithmic analytic approaches share the same problem: They are susceptible to code and algorithm exploits that can take control of the system process services. Our systems today are, at best, auditing historical actions of system utility events, not the actual workflow process services. When you are talking about microsecond events that can turn critical digital services on or off, algorithmic formulas and code take too long to audit. Both are also vulnerable to exploitation through reverse engineering or direct manipulation of the code and algorithms.

In cybersecurity, an audit should tell you what is accepted as a proper workflow event or security policy -- not merely that something unusual occurred and was found after the fact by reviewing system logs.

Getting back to the question of how to stop a hacker who has hidden an exploit and encrypted it for activation at any time: If you know the correct workflow services and the correct system security policies, then an exploit that isn't part of those workflow event services can trigger an alarm or be blocked before the exploit activates. Doing this in microseconds requires recognizing these correct workflow services as codeless fifth-generation programming language (5GL) patterns, not as code or algorithms. Code and algorithmic formulas are too slow and can only offer code-patching cybersecurity technologies. Such corrections are made after reviewing historical system logs, not through real-time, pre-emptive blocking of cyberattacks.
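One way to picture workflow recognition as a declarative pattern rather than procedural code is a small state machine of permitted event sequences. The states and transitions below are invented for illustration; any out-of-sequence event is rejected before it runs.

```python
# Permitted workflow sequences expressed as a tiny state machine.
TRANSITIONS = {
    ("idle", "login"): "authenticated",
    ("authenticated", "open_record"): "working",
    ("working", "save_record"): "authenticated",
    ("authenticated", "logout"): "idle",
}

def run(events):
    state = "idle"
    for event in events:
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            print(f"blocked: '{event}' is not a valid step from '{state}'")
            return
        state = nxt
    print("workflow completed within policy")

run(["login", "open_record", "save_record", "logout"])  # matches the pattern
run(["login", "dump_database"])      # hidden exploit step: rejected at once
```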

When data in motion, data at rest and data in use are available for access without real-time audits of that access and use, why shouldn't this data be hacked? When you have code written on top of code, cloud computing connecting to enterprise computing, and the Internet of Things (IoT) operating without any security standards (and billions more connected devices projected), why would you expect cyberattacks to be stopped? We can't keep doing the same things and expect different results. It is only when you can proactively audit your real-time workflow events in microseconds, ahead of a potential cyberattack, that you can stop exploits and achieve true cybersecurity.
ADDRESSING HUMAN NATURE

People find change difficult even when it is greatly needed and can improve their lives. Technological change is even more difficult: People often neither understand it nor have a vested interest in it, because their livelihoods depend on older, inferior technologies. Hackers build new cyberattack technology every day and can activate these digital exploits in microseconds. We are combating these attacks with published standards and guidelines that take years to develop and are known to be ineffective at stopping cyberattacks by the time they are released.

As the saying goes, necessity is the mother of invention. We face an expanding universe of connected services that, if not secured, will stall the massive progress we have achieved in digital information systems while simultaneously halting technological innovation. When these information system technologies are continually breached and become too expensive and dangerous to operate, they will have to be shut down. That would put us back in the pre-digital age, which for most of us is incomprehensible.

We have not even touched the service capabilities that cloud computing and IoT can offer us now and in the future. We can't depend on current IPS and IDS technologies to secure the billions of connected applications operating today and tomorrow. We can't keep thinking we can patch things, control cyberattacks or even win a cyberwar. Even power and money have shown their weakness in stopping the independent hacker (now groups of hackers) who simply say, "If you do this, then I can do that," while we slowly react to their daily ingenuity.
BECOME PART OF THE CYBERSECURITY CHANGE

Before there was software, people did things manually while watching and auditing their progress -- a practice that is still a big part of today's business processes. This means of oversight is not perfect and is sometimes subjective, leaving much room for error. Today's information system technologies and the workflows they automate are no different. We need to find ways of auditing these digitally assisted processes to assure that the workflow services and security policies they run on are correct. We have greatly increased our automation through digital workflows, but we have not put in place the auditing services to assure that these microsecond workflow event services are actually correct.

Cybersecurity is about auditing how we use information technologies (digital workflows) and doing things correctly, not about the historical analysis of what went wrong. If we do not implement the correct auditing technologies within the digital workflow services, then our ever-expanding information system services will surely be breached as we connect them to these workflow processes. To do this, we must do things differently -- we must not depend on current cybersecurity technologies that continue to fail. Become part of this needed change, one that will secure our current digital technologies while simultaneously securing the exciting connected digital capabilities we can now only dream of.


Larry Karisny is the director of Project Safety.org, an advisor, consultant, speaker and writer supporting advanced cybersecurity technologies in both the public and private sectors.

