Who REALLY protects the Internet?

This morning I read an interesting post in which Susan says that her
response to a 10-year-old's question of “Who protects the Internet?”
is “I would argue that we all do.”

Susan always looks on the bright side of things. I, on the other
hand, look at the dark side of infosec and have to disagree. In a
perfect world, we all SHOULD be part of the solution, but we rarely
are.

Every workstation attached to a network has a great influence over
the security of everything else in the organization. Once connected
to the Internet, this is compounded tenfold. Thus the security of
information is literally in the hands of those using the workstation:
the end-users, who rarely care about security. The zombie machines
out there conscripted into botnets are a LIABILITY, not an ASSET. And
that liability impacts me. It impacts you. It impacts us all.

You see, now I not only have to make infosec decisions to protect my
organization from traditional risks, I have to make decisions to
protect against the incompetence of lazy administrators and end-users
who have no clue how to manage security. In other words, I typically
have to make risk management decisions against the very same people
Susan believes are the protectors.

Everyone needs to be responsible for their own house. Good security
practices require the effort of the community, wherein everyone does
their part to protect their own systems. Unfortunately, reality sets
in, and that is rarely the case. Don't believe me? Look back over the
last few years. How many vulnerabilities were exploited because
people DIDN'T have the latest patches? In many cases, the patch was
rolled out MONTHS before the attack vector was utilized. Why aren't
we using better patch management? And adding technology like
intrusion prevention systems to help limit the risk during the
Exposure Window of a new vulnerability?

Probably because such software is not a panacea. Recently I had my
own issue in which Shavlik's HFNetChkPro™ Security Patch Management
software failed (due to my human error) to effectively protect me. I
upgraded to ISA 2004 on my SBS 2003 box, and then downgraded back to
ISA 2000. In the midst of this I requested HFNetChkPro to reinstall
SP2 for ISA 2000. It told me the patch was scheduled, and it even
forced a reboot. I (erroneously) assumed the patch was in place. It
wasn't. Luckily for me, I found out within a couple of days, before
an exploit was released for the firewall. However, even with my
vigilant security practices I failed to manage the patches
effectively. Patch management software needs to get easier and more
reliable before we can take full advantage of it, ESPECIALLY for the
end-user. You know… those zombies in the hacker botnet that is
spewing forth DDoS attacks against targets like you and me.
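
The lesson I took away: never trust the patch tool's word alone;
verify the state of the box yourself. Here is a minimal sketch of
that idea (the KB number is hypothetical, and the wmic query is just
one generic way to ask Windows what hotfixes it thinks are installed;
it is not how HFNetChkPro works internally):

import subprocess
import sys

# Hypothetical KB number -- substitute the patch you expect to be there.
EXPECTED_HOTFIX = "KB123456"

def installed_hotfixes():
    """Ask Windows which hotfixes it believes are installed."""
    # 'wmic qfe' reads the Win32_QuickFixEngineering table
    # (available on Windows XP / Server 2003 and later).
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in output.splitlines() if line.strip()}

if EXPECTED_HOTFIX in installed_hotfixes():
    print(EXPECTED_HOTFIX + " is installed.")
else:
    # The scheduler said it ran; the system says otherwise. Raise the alarm.
    print("WARNING: " + EXPECTED_HOTFIX + " is NOT installed!")
    sys.exit(1)

Had something that simple been running as a scheduled check, the
missing ISA service pack would have screamed at me the next morning
instead of sitting silently unpatched for days.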

I would like to believe we are all doing our part to protect our
little corner of the Internet. Unfortunately I am a realist and know
this isn't the case. If it were… the massive destructive force of
malicious code wouldn't be taking down the critical infrastructure in
our society. Has your head been in the sand so deep that you don't
know what I am talking about? Hostile code and poorly designed
software have shown us the vulnerable nature of the Internet time and
again.

Worms like Slammer and Blaster are just a few examples. Here is a
quote from an MSNBC article I recently read on the subject:

Although corporations, governments and other institutions
have gotten more savvy at protecting their computers with firewalls and
security software, millions of PCs in people’s homes are sitting ducks
for invasive software. That’s why the Slammer virus was able to infect
75,000 computers in just 10 minutes. In South Korea, which has the
highest proportion of broadband-connected homes—70 percent—in the
world, the top three Internet service providers were shut down,
bringing virtually all of the country’s e-mail and Web browsing to a
halt. Slammer also disrupted the Davis-Besse nuclear power plant in
Ohio, froze a 911 emergency-call-dispatching system in suburban Seattle
and took down Continental Airlines’ ticketing and reservation systems.
The Blaster worm brought down CSX’s train-signaling system in 23 states
and Air Canada’s computer check-in service—and some experts speculate
that it might have been a factor in the power outage that threw much of
the Eastern United States into darkness.

We know about these problems, but we are having a hard time dealing
with them. Worse yet, we are exposing ourselves to more risk by
connecting these things to the public Internet without the proper
safeguards. WHAT THE HECK ARE SYSTEMS LIKE NUCLEAR POWER PLANTS,
TRAIN SIGNALING SYSTEMS and 911 DISPATCH SYSTEMS DOING ON THE
INTERNET IN THE FIRST PLACE?

Many people in charge of these systems are just not getting it.
Why? Because security is a process and not a product. (Sorry,
Schneier.) In other words, you can't simply buy a product and be
protected. The latest OS isn't going to do it alone. Nor will the
latest antivirus. Or firewall. Or IDS. Or IPS. It takes a “higher
level of thinking” in which we layer technical safeguards to defend
against multiple attack points. We need to educate the end-user while
at the same time simplifying security so that they can get it. If
security is thought of as too complex, we have FAILED… something is
wrong in the design process.
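
To make the layering argument concrete, here is a back-of-the-envelope
sketch (every miss rate below is a number I made up purely for
illustration): if each safeguard independently misses some fraction
of attacks, the chance that an attack slips past EVERY layer is the
product of those fractions.

# Defense in depth, illustrated. All miss rates are hypothetical.
layers = {
    "patch management": 0.20,      # misses 1 in 5 attacks
    "firewall": 0.30,
    "antivirus": 0.40,
    "intrusion prevention": 0.50,
}

breach_probability = 1.0
for miss_rate in layers.values():
    breach_probability *= miss_rate

# 0.20 * 0.30 * 0.40 * 0.50 = 0.012
print("Chance an attack gets past every layer: %.1f%%" % (breach_probability * 100))

No single layer above stops even half of the attacks on its own, yet
together they let barely one percent through. That only holds if the
layers fail independently, which real deployments merely approximate,
but it is exactly why no single product will ever be the answer.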

As security software engineers, we have to bridge that gap between
the user and security… in a way the user sees as CONVENIENT. How do
we do that? By applying infosec principles and practices in the
DESIGN of secure systems while remembering who is using it… the
user. We can't bolt it on later and assume end-users will welcome it.
Want an example? Read my Longhorn rant from last year on adopting a
least-privilege stance for users.
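
To show what I mean by designing it in, here is a minimal
Unix-flavoured sketch of the least-privilege idea (my own
illustration; the account name is hypothetical, and this has nothing
to do with Longhorn internals): grab the one privileged resource you
need, then throw away your rights BEFORE touching untrusted input.

import os
import pwd
import socket

SERVICE_USER = "nobody"   # hypothetical unprivileged account to drop to

# The only step that genuinely needs root: binding a privileged port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 80))
listener.listen(5)

# Shed root permanently before handling any untrusted input.
# Order matters: drop groups first, because setuid() takes away
# the right to change them afterwards.
account = pwd.getpwnam(SERVICE_USER)
os.setgroups([])
os.setgid(account.pw_gid)
os.setuid(account.pw_uid)

# From here on, a compromised request handler is a compromise of
# 'nobody', not of root.
print("Serving on port 80 as uid=%d gid=%d" % (os.getuid(), os.getgid()))

That is the same stance I wanted Longhorn to take with users: run
everyone at a normal privilege level by default, and elevate only for
the rare operation that truly needs it.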

Now I know this next point is going to sound like I am hitting below
the belt, but someone has to say it. We have to stop buying security
products from vendors that are more concerned with profits than with
protecting their clients. As an entrepreneur I understand the need
for a company to be profitable. And I fully support that. But not at
the sacrifice of the client. I am tired of seeing supposed “security
companies” popping up with developers (and executive management, for
that matter) who know NOTHING about infosec policies and practices.
Just because you are a good developer does NOT make you a security
expert. If you don't have skin in the game, you SHOULDN'T be leading
the development of security software. If you don't understand risk
management practices, how can you understand how customers will apply
your software to help mitigate their risks? And a company simply
shouldn't sell the next whiz-bang computer security gadget because
it's the current fad with the highest CAGR in software sales.

Our mantra of “Custodit Nuntium” is core to our
Code of Ethics and we will put the protection of our clients before the
protection of our profits, while still being responsible to our
stakeholders in the business.

And I stand by that thinking. The success of our company comes
through the success of our customers, and every aspect of our
business is focused on refining processes to achieve this. Even if
that means we will go out and buy a competing product (at a similar
price point) for our customer, if that is the right thing to do.

Who REALLY protects the Internet? The people who understand it. It's
the secure software engineers like me who write the tools to defend
against the adversaries out there in the digital realm. It's the
network engineers who facilitate the protection and flow of traffic.
It's the system engineers and administrators like Susan who keep the
systems up and running. And it's you. And me. And everyone else who
understands and applies higher thinking when it comes to information
security. And as part of that, we have a responsibility to educate
the end-users… and get them to HELP us secure the Internet, so that
we can all be part of Susan's vision that it really is all of us.

[Dana Epp's ramblings at the Sanctuary]
