I have been at this for a long time now: roughly two decades of working for all sorts of companies and clients, and now as a vendor. It has been an interesting ride. One thing I did over the years was keep journals, notebooks where I scribbled down my day-to-day tasks, thoughts, and solutions. Picking through some of these notebooks, I was struck by what I can only refer to as stupid human security tricks. Naturally, I will give a hat tip to David Letterman.
One of my all-time favorites came when I was working for a company almost 10 years ago. I was in the process of having a network vulnerability scanning system deployed across the enterprise. No small feat when you consider how many business groups fought like hell to block it. There was a fear that any scanning would break systems left and right.
I was curious where this fear came from, so I started digging and asking questions. I took a lot of folks out for coffee to “have a chat,” and eventually all roads led back to one person in the organization who used to run scans. This person had broken systems by misconfiguring scan jobs, and had managed to rationalize that scanning only up to port TCP/1024 was necessary, as those were the “approved” ports per RFC. (The well-known port range is actually TCP/0–1023, and nothing in any RFC stops a service from listening above it.)
I waited for the “HA! Gotcha!” but, sadly, it never came. For several years, scans had been run against systems in the enterprise only up to TCP/1024. Anything running on a higher-order port was “not permitted by RFC.”
While this broke my fragile little mind, it did dovetail into the plug pullers. What’s that, you ask? Well, on many occasions at multiple organizations, I would have external penetration testing teams come in to test our environment. The business units would always ask me for the exact test schedule. The first time, I didn’t give it much thought.
When I received the report, I was surprised that there was no information pertaining to a certain business unit’s systems. It turned out that they had been pulling the plug on their systems during the pentest. How juvenile, I thought. I learned in short order to fudge the test times slightly, always ensuring that this didn’t run up against a maintenance window or a push of new code to production.
Then I return to one of my all-time favorites (tongue firmly planted in cheek): the mantra of “we have a firewall,” one I have encountered far too many times. The team I was on would repeatedly demonstrate the need to fix something or deploy a new security measure to better secure the business, only to be met with a negative response and pushback. The idea back then, and one I still run into now, is that a firewall will cure all the ills of the network.
The last one that I’ll share (in what I may turn into a series) is on the subject of web security. Time and again, when the subject of web application security came up, I would be met with, “no one wants to hack us.” With my jaw resting on the table, I rallied to regain my composure. This fractured logic was a hard one to overcome in some organizations; there was a disconnect that seemed impossible to remedy. In one organization, it took a data breach for people to start taking it seriously. When senior management demanded to know how it had happened, I shared my email from a month earlier in which I had spelled out exactly what could happen. The system in question was, in fact, the one that was breached.
There is no shortage of stories like these. The catch is not only to learn from them but to use them as lessons to train others. These problems were (for the most part) remedied. Do you have one that you would like to share? How did you fix the problem? I’ll be sure to file off the serial numbers accordingly to protect the guilty.
Drop me a line at email@example.com