Last week, I wrote an article addressing the concept of “assumption of breach” and, as if to put a big shining spotlight on that idea, in one short week we have learned that JP Morgan (along with other unnamed banks) and Home Depot have both been victims of massive breaches of customers’ personal and financial data. Incredibly, this news is as shocking as it is “old hat.”
Focus on the Fundamentals of Network Security
While details are not yet available, we know that the hack of JP Morgan’s network was a prolonged siphoning of data over the course of several months. Speculation hints that there were multiple points of access, and that the hackers may have gained entry through a sophisticated exploitation of vulnerabilities in remote access systems, possibly combined with phishing tactics.
Despite the unique determination and advanced tactics of the hackers (speculated to have been from Russia or Eastern Europe), we do know that the breach was ultimately detected by a routine scan of the systems. It would be easy to give in to “breach fatigue,” throw up our hands helplessly, and say, “If JP Morgan can be breached, what could we possibly do to protect our network and our data?” However, the nugget of encouragement is found in the success of the routine scan that discovered the breach. Rather than highlight the futility of your IT security efforts, the JP Morgan breach and the other highly visible attacks we have seen actually serve to illustrate the importance of the fundamentals of network security.
Routine and Repeated Security Scans
Officials believe that the hackers who stole data from JP Morgan were successful because they managed to stay under the radar. They pulled small amounts of information over a prolonged period to avoid detection for as long as possible. When the breach was detected, however, it was by a routine scan – not a special, advanced, or new test.
Was the scan that revealed the breach delayed or postponed? Was the prescribed interval between scans simply too great? We do not have enough information to know. But, ultimately, the scan served its purpose.
Some industries are required to perform specific testing at certain intervals. Other testing is performed at the company’s discretion. It is easy to get caught up in day-to-day operations and let vital scans be postponed or go undone. Let the example of JP Morgan encourage you to keep up with critical testing.
Risk Assessment
A risk assessment is a key element in measuring the performance of your security program.
Recommended: Annually for HIPAA-regulated covered entities and business associates.
Required: The Office for Civil Rights has stated that risk assessments must be done and must be maintained, though it has not provided a clear requirement for frequency. Although the specifics vary slightly, an annual risk assessment is required by most other IT security standards, including PCI, FISMA, NCUA, FFIEC, and ISO 27001.
Penetration Testing
A company’s risk profile is a key factor in determining the need for a pen test.
Recommended: Evaluate your risk and complete the appropriate level of testing annually, or whenever a major change is implemented.
Required: For some companies by PCI.
Vulnerability Testing
Vulnerability testing is important for anyone with computers connected to a network (any network).
Recommended: Vulnerability scans should be performed by knowledgeable people using quality tools at least quarterly. The issues identified will also need to be remediated.
Required: For some companies by PCI and FISMA.
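The intervals above are easy to let slip; even a simple script can flag overdue testing. Here is a minimal sketch in Python, where the interval lengths reflect the recommendations above and the last-run dates are hypothetical examples:

```python
from datetime import date, timedelta

# Testing intervals drawn from the recommendations above (day counts are approximate).
INTERVALS = {
    "risk_assessment": timedelta(days=365),    # annual (HIPAA, PCI, FISMA, etc.)
    "penetration_test": timedelta(days=365),   # annual, or after a major change
    "vulnerability_scan": timedelta(days=90),  # at least quarterly
}

def overdue_tests(last_run, today):
    """Return the names of tests whose prescribed interval has elapsed."""
    return sorted(
        name for name, interval in INTERVALS.items()
        if today - last_run[name] > interval
    )

# Hypothetical last-run dates for illustration.
last_run = {
    "risk_assessment": date(2014, 1, 15),
    "penetration_test": date(2014, 6, 1),
    "vulnerability_scan": date(2014, 4, 1),
}
print(overdue_tests(last_run, date(2014, 9, 8)))  # → ['vulnerability_scan']
```

The point is not the script itself but the discipline: a scan that is tracked and flagged when overdue is far less likely to be quietly postponed.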
Updates and Patches for Software and Systems
Just before we heard about the breach of JP Morgan, we received news of the loss of data for approximately 4.5 million patients of Community Health Systems hospitals around the country. We now know that this breach can be tied directly to the Heartbleed OpenSSL bug discovered in April.
Almost immediately following the announcement of Heartbleed, OpenSSL released a patch with detailed instructions for correcting the vulnerability (see Heartbleed.com). Yet almost three months later, experts estimated that hundreds of thousands of sites remained unpatched and potentially vulnerable.
A zero-day attack is one that takes advantage of the window of opportunity between a vulnerability being discovered and a patch or update being made available to correct it. In theory, the day the vulnerability or bug is disclosed is day zero, and hackers will rush to take advantage of unpatched systems. When companies and systems administrators delay or neglect to patch known bugs, whether because they are preoccupied with other issues, suffering from “breach fatigue,” or simply careless, it can no longer be called a zero-day attack. By allowing the vulnerability to persist, the company is simply inviting unnecessary risk.
It is critical for any company to maintain a schedule for patching and updating software and tools on a regular basis. It is also important to have an established process for critical patches that require your attention outside the set schedule. Heartbleed would have been one of those critical patches. If you are among the estimated hundreds of thousands of sites that have yet to fix this bug, do not be caught, like Community Health Systems, in a breach that could have been avoided.
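In the Heartbleed case, the first step of that off-schedule process is simply knowing which of your systems run a vulnerable version. As a minimal sketch, the check can be reduced to a version-string comparison; OpenSSL 1.0.1 through 1.0.1f shipped the bug, 1.0.1g fixed it, and older branches never contained the vulnerable heartbeat code (the function name here is our own, not part of any library):

```python
import re

def heartbleed_vulnerable(version: str) -> bool:
    """Check an OpenSSL version string (e.g. '1.0.1e') against Heartbleed.

    OpenSSL 1.0.1 through 1.0.1f are vulnerable; 1.0.1g and later are
    patched, and the 1.0.0 and 0.9.8 branches were never affected.
    """
    m = re.fullmatch(r"1\.0\.1([a-z]?)", version)
    if not m:
        return False          # not a 1.0.1 release at all
    letter = m.group(1)
    return letter == "" or letter <= "f"

print(heartbleed_vulnerable("1.0.1e"))  # True  (unpatched)
print(heartbleed_vulnerable("1.0.1g"))  # False (patched)
print(heartbleed_vulnerable("0.9.8y"))  # False (branch never affected)
```

A real inventory would also need to account for vendor backports, where a distribution patches the bug without changing the version string, so a version check like this is a starting point rather than proof of safety.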
Quick Response at the First Sign of Trouble
It seems we have been inundated with bad news for months. Of course, this all began (or kicked into high gear) with the now-infamous Target breach last Christmas. The Community Health Systems and JP Morgan attacks may rival the scope of the Target breach, but Target remains the one that got our attention.
Fallout from the loss of Target’s data is ongoing, and we are still learning of more retailers caught in what appears to be the same scheme. Just this week, Home Depot became the latest, following P.F. Chang’s, Michaels, Supervalu and others, to announce that it was investigating evidence of a similar attack.
From what we now know about the Target breach, the systems put in place to warn of unauthorized activity in the network responded exactly as they were designed to, warning Target’s IT officials of a problem. For whatever reason, the warnings went unheeded, allowing attackers to continue pulling valuable customer data out of the network.
It is human nature to get caught up in routine. When systems are prone to the occasional false positive, we inevitably lower our guard, and in terms of network security this can be very dangerous. For all the dollars spent on security and sophisticated systems, basic “cybersecurity 101” was neglected at Target and clear warnings were ignored.
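One simple defense against alerts scrolling by unread is to make unacknowledged warnings escalate automatically. As a minimal sketch (the four-hour deadline and the alert structure are hypothetical, not drawn from any particular monitoring product):

```python
from datetime import datetime, timedelta

ESCALATE_AFTER = timedelta(hours=4)  # hypothetical acknowledgement deadline

def needs_escalation(alerts, now):
    """Return the IDs of alerts still unacknowledged past the deadline.

    Each alert is a dict with 'id', 'raised_at', and 'acknowledged' keys.
    """
    return [
        a["id"] for a in alerts
        if not a["acknowledged"] and now - a["raised_at"] > ESCALATE_AFTER
    ]

# Hypothetical alert log for illustration.
alerts = [
    {"id": "A1", "raised_at": datetime(2014, 9, 8, 9, 0), "acknowledged": True},
    {"id": "A2", "raised_at": datetime(2014, 9, 8, 9, 30), "acknowledged": False},
    {"id": "A3", "raised_at": datetime(2014, 9, 8, 13, 45), "acknowledged": False},
]
print(needs_escalation(alerts, datetime(2014, 9, 8, 14, 0)))  # → ['A2']
```

The value of a rule like this is that a warning which sits unacknowledged, as Target’s did, stops being one line in a noisy console and becomes someone’s explicit responsibility.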
Each new attack or breach revealed by these large companies can be more discouraging than the last. If companies with presumably well-funded IT security strategies and the very best teams and tools in place cannot protect their critical data, how can a small to mid-size company hope to do any better? But when we look objectively at where even the large, well-protected companies were caught unprepared, the common thread is in the fundamentals of IT security.
Rather than allowing your company to be paralyzed by breach fatigue or discouraged by the daunting threats, be assured that the fundamentals of good IT security practice will go a long way toward avoiding the worst type of data breach – the one you could easily have prevented.
For help with the fundamentals of scanning, patching, and responding appropriately to potential vulnerabilities in your network, contact us today.