Here I am, hack me

Opinion
Aug 09, 2018 | 6 mins
Data and Information Security | Endpoint Protection | Hacking

Bad actors are constantly trying to find ways to penetrate our networks. Recent attacks at LabCorp and the City of Atlanta demonstrate, however, that we are putting the welcome mat out for hackers by leaving key network ports open. This article discusses the severity of this problem, and what we can do to reduce or eliminate it.


Those of us in healthcare are reeling from the recent ransomware attack at LabCorp. The company, one of the largest medical testing companies in the world, confirmed that a known group of bad actors penetrated their network late on a Friday night via an exposed RDP port, and infected more than 30,000 systems with SamSam ransomware. LabCorp deserves some kudos, given reports that they had the attack contained in less than 50 minutes, which is quite amazing, if true. Kudos notwithstanding, however, why did they allow their network to be penetrated in the first place?

If the attack had been due to a zero-day vulnerability, or some brand-new technique, I would cut them more slack. Instead, this was a well-known ransomware infection technique. Anyone remember the City of Atlanta attack, which was nearly identical to the LabCorp infection?

The basic problem here is a known, very common attack scenario that could easily have been prevented by LabCorp, the City of Atlanta, or any of the many other victims by closing or properly securing any RDP ports exposed to the internet. Many have clearly fallen victim to open RDP ports, however: the Bitcoin wallet collecting ransoms for the SamSam bad actors has taken in more than $6 million to date.

While we don’t yet have all of the details, I find it unlikely that they had never heard of SamSam ransomware, or how infections with it happen. Given how fast their Security Operations Center responded, they clearly care about security. I am quite confident they have good firewalls capable of blocking RDP traffic. That only leaves two alternatives: they didn’t know about the open port, or they knew and allowed it. I find no comfort in either of those explanations.

It is imperative that any organization know what ports and services are open on their network perimeter. There is no valid reason not to know this. Scanning an address space for open ports could be easily handled by a second-year college student.
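To illustrate how little is involved, here is a minimal sketch of such a check in Python, using only the standard library to attempt TCP connections against a few common ports on a small address range. The range and port list are placeholders, not real targets; scan only addresses you own.

    # Minimal TCP connect scan sketch (Python 3, standard library only).
    # TARGET_RANGE and PORTS are placeholders: substitute your own
    # externally facing addresses, and scan only networks you own.
    import socket
    from ipaddress import ip_network

    TARGET_RANGE = "203.0.113.0/28"   # documentation range used as an example
    PORTS = [22, 80, 443, 3389]       # 3389 is RDP, the port abused by SamSam

    def is_open(host, port, timeout=1.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for addr in ip_network(TARGET_RANGE).hosts():
        for port in PORTS:
            if is_open(str(addr), port):
                print(f"{addr}:{port} is open")

Real-world scans should of course use a mature tool rather than a script like this, but the point stands: finding your own open ports is not hard.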

This incident should serve as a good reminder to check your own network. Once you know what is open, it is also critical not to assume you are safe today just because you were last week. It is all too easy to open a port on the network. This is usually done as a “temporary” measure by a well-intentioned person who plans to close it again shortly, but never quite gets around to it.

The real issue here is the lack of ongoing diligence about the ways a bad actor can penetrate a network perimeter. Maintaining a good, ongoing monitoring program involves a number of challenges, including:

It is too easy to open a new port

In many cases, far too many people at a company have firewall access. Even when access is properly restricted, many network teams don’t require formal approvals before opening a new port.

Network changes can result in a device being outside of the firewall

In the normal course of network maintenance, it is not uncommon for a device that was inside the firewall to suddenly be exposed by a network change. At times this is accidental; at other times it is a deliberate, well-intentioned change that nonetheless leaves the device exposed.

Despite these challenges, it is essential to stay on top of open network ports. If you don’t, sooner or later a bad actor will find one and use it to launch a successful attack. If you are not convinced, consider that with tools such as ZMap and Masscan, it is possible, using a single PC and a fast network connection, to scan the entire routable IPv4 address space of roughly 3.7 billion addresses in less than 24 hours. With faster connections and multiple systems, a bad actor could, in theory, track your open ports from day to day, looking for changes they can use for their purposes.

Managing open ports is much easier for you than for the bad guys, because you have far fewer addresses to monitor. The secret, however, is to actually do the monitoring work, and to do it frequently. The following are some suggestions:

Document your network

It is important to keep records of your network configuration, firewall settings, and open addresses and ports. Such a record facilitates tracking of configuration changes from week to week. Further, this documentation is a requirement of various compliance standards, including PCI DSS (sections 1.1.2 and 1.1.3).
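One simple way to keep that record usable, as a purely hypothetical example, is a version-controlled baseline file (say, approved-ports.csv) listing every approved external exposure, which later scans can be compared against. The entries below are illustrative only:

    address,port,service,owner,approved_by,date_approved
    203.0.113.10,443,Web portal (HTTPS),Web team,CISO,2018-05-14
    203.0.113.12,25,Mail gateway (SMTP),Messaging team,CISO,2018-03-02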

Restrict and control changes

Limit firewall changes to the smallest possible number of individuals, and do not grant anyone else the privilege to make them. Use a formal change management process to track, approve, and document changes.

Scan and scan again

Tools are readily available to scan your public address range for open ports, so there is no reason not to do this regularly. You can use an open-source tool such as Nmap for this purpose, or a paid service such as Qualys.
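To make “scan and scan again” meaningful, it helps to compare each new scan against the approved baseline and flag anything unexpected. The sketch below is an assumption-laden illustration: it expects the current scan results and the hypothetical approved-ports.csv baseline described above to be CSV files with address and port columns.

    # Compare the latest scan results against the approved baseline and
    # report anything open that was never approved. File names and the
    # CSV layout are assumptions matching the illustrative baseline above.
    import csv

    def load_pairs(path):
        """Load (address, port) pairs from a CSV with address and port columns."""
        with open(path, newline="") as f:
            return {(row["address"], row["port"]) for row in csv.DictReader(f)}

    approved = load_pairs("approved-ports.csv")
    observed = load_pairs("latest-scan.csv")

    for address, port in sorted(observed - approved):
        print(f"ALERT: {address}:{port} is open but not in the approved baseline")

    for address, port in sorted(approved - observed):
        print(f"NOTE: approved service {address}:{port} was not seen in this scan")

Running something like this weekly turns the scan from a one-time snapshot into an ongoing check against what you have actually approved.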

Perform regular penetration tests

A penetration test involves having someone, usually an outside expert, attempt to find openings in your network. Such a test is usually valuable, since the tester has the same knowledge (or lack thereof) about your network as a bad actor would. The penetration tester identifies gaps in your network so you can close them before someone takes advantage of them. I have seen a good penetration tester find a small opening in a network and pivot through various systems until they ended up with administrative privileges. A good penetration tester can be of tremendous value in keeping your network secure.

Monitor your log entries

An outsider unfamiliar with your network often uses trial and error to look for gaps that can be exploited. If you maintain good logs that you can easily monitor, you can often spot and block penetration attempts while they are happening. A good security information and event management (SIEM) system can help automate this process and generate alerts when such attempts are spotted, saving personnel time.
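As a rough illustration of what that monitoring might catch, the sketch below counts the distinct destination ports touched by each source address in a firewall log and flags likely scanners. The log format (whitespace-separated src= and dport= fields) and the threshold are assumptions for illustration, not the output of any particular product.

    # Flag source addresses that probe an unusually large number of distinct
    # destination ports: a rough indicator of reconnaissance scanning.
    # The log format and threshold below are assumptions for illustration.
    from collections import defaultdict

    THRESHOLD = 20  # distinct destination ports before we raise a flag
    ports_by_source = defaultdict(set)

    with open("firewall.log") as log:
        for line in log:
            fields = dict(part.split("=", 1) for part in line.split() if "=" in part)
            if "src" in fields and "dport" in fields:
                ports_by_source[fields["src"]].add(fields["dport"])

    for source, ports in sorted(ports_by_source.items()):
        if len(ports) >= THRESHOLD:
            print(f"Possible scan: {source} probed {len(ports)} distinct ports")

A SIEM does this kind of correlation far more robustly, but even a simple check like this can surface a probing host before it finds the one port you forgot about.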

Bottom line — Many of us in the information security industry spend our days finding ways to keep the bad guys out. Unfortunately, some organizations choose to lay out the welcome mat for them and will usually pay the price. It’s time to pull up the mat and replace it with solid and consistent cyber security practices.

Contributor

Robert C. Covington, the "Go To Guy" for small and medium business security and compliance, is the founder and president of togoCIO.com. Mr. Covington has a B.S. in Computer Science from the University of Miami and over 30 years of experience in the technology sector, much of it at the senior management level. His functional experience includes major technology implementations, small and large-scale telecom implementation and support, and operations management, with an emphasis on high-volume, mission-critical environments. His expertise includes compliance, risk management, disaster recovery, information security and IT governance.

Mr. Covington began his Atlanta career with Digital Communications Associates (DCA), a large hardware/software manufacturer, in 1984. He worked at DCA for over 10 years, rising to the position of Director of MIS Operations. He managed the operation of a large 24x7 production data center, as well as the company’s product development data center and centralized test lab.

Mr. Covington also served as the Director of Information Technology for Innotrac, which was at the time one of the fastest growing companies in Atlanta, specializing in product fulfillment. Mr. Covington managed the IT function during a period when it grew from 5 employees to 55, and oversaw a complete replacement of the company’s systems, and the implementation of a world-class call center operation in less than 60 days.

Later, Mr. Covington was the Vice President of Information Systems for Teletrack, a national credit bureau, where he was responsible for information systems and operations, managing the replacement of the company’s complete software and database platform, and the addition of a redundant data center. Under Mr. Covington, the systems and related operations achieved SAS 70 Type II status, and received a high audit rating from the Federal Deposit Insurance Corporation and the Office of the Comptroller of the Currency.

Mr. Covington also served as Director of Information Technology at PowerPlan, a software company providing software for asset-intensive industries such as utilities and mining concerns, and integrating with ERP systems including SAP, Oracle Financials, and Lawson. During his tenure, he redesigned PowerPlan's IT infrastructure using a local/cloud hybrid model, implemented IT governance based on ITIL and COBIT, and managed the development of a new corporate headquarters.

Most recently, Mr. Covington, concerned about the growing risks facing small and medium business, and their lack of access to an experienced CIO, formed togoCIO, an organization focused on providing simple and affordable risk management and information security services.

Mr. Covington currently serves on the board of Act Together Ministries, a non-profit organization focused on helping disadvantaged children, and helping to strengthen families. He also leads technical ministries at ChristChurch Presbyterian. In his spare time, he enjoys hiking and biking.

The opinions expressed in this blog are those of Robert C. Covington and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.