Case study: Policy-based security and access control

SUNY Old Westbury has more than 3,000 students and hundreds of faculty using a plethora of devices. CIO Marc Seybold talks about the university's policy-based approach to securing the network and controlling bandwidth use.

In a university environment, there is no time for the network to go down. The students and faculty at SUNY Old Westbury, a university located on Long Island, New York, demand 24-7 access to the internet, both on and off campus. And, of course, it isn't enough to simply keep things running; everything needs to be protected, too.

For SUNY Old Westbury CIO Marc Seybold, that is a tall order. He is dealing with many different devices, with many different types of users. He also strives to allow students to have almost-constant use of bandwidth, both for study and after-hours recreation, while still ensuring faculty have the bandwidth they need during class time.

These goals recently prompted Seybold to change to a different model to protect students, faculty, and the network itself. Seybold explained to CSO why he decided to switch from an agent-based control system to a policy-based approach for security and bandwidth control at the school.

CSO: Briefly give us a rundown of the challenges you face when it comes to securing a college environment such as the one at SUNY Old Westbury.

Seybold: One thing that is unique from a college perspective, as opposed to a business, is that we have very little control over the devices that people bring onto the network. We don't mandate, we don't own, and we don't control the devices that students use. Students can bring anything from a laptop to an iPad-type device, an Android, whatever is on the market right now, as well as smartphones, which are trying to associate with Wi-Fi networks. So they are bringing those things back and forth.


In our case, we have about 1,000 students in the dorms and the balance, about 2,000 or so, are commuters. People are going back and forth, and some of these machines can become infected with malware when they're at home. They bring them on campus and unknowingly put us at risk. They're not doing it on purpose. They're walking around with a machine that they're trying to get their class and schoolwork done on, but those machines can have something on them that's trying to disrupt the operation of the school's network.

For a college, there is no distinction we can draw from a security point of view between things that come from the outside and things that are on the inside. We have to treat all of the devices as if they are untrusted. And that's a bit more severe an environment than a typical organization would face.

Do you have policies to encourage people to practice safe computing?

Like all organizations, we have computer-use policies. But I think the most apt analogy is what happens when people learn to drive. The average person knows that if they take a turn too tight they risk losing control of the car, and possibly even flipping it if it's an SUV. Or, driving at high speed on the highway, if they don't leave X number of car lengths per 10 miles per hour of speed, they won't have enough distance to stop, and if they slam on the brakes they could rear-end someone.


Those kinds of rules haven't really cropped up as a sociocultural issue with technology yet. I think we're coming into the era where people begin to understand it the same way. There are rules, metrics, and cultural norms that have been established for other things, like driving, but people also have to exercise individual responsibility, because there is a limit on how much an airbag and seatbelts can do to protect a driver. I think we've reached that point with computer technology. Of course everyone is going to apply as much technology as they can for protection, but if we don't reach a point where there are social norms, that is, people understanding that certain things put them at risk, the technology will never be able to provide the full benefit. That's the point we try to drive home, but we're dealing with 18-to-22-year-olds who are likely to engage in risky behavior with their computers.

How much has social networking use changed your security program?

It's tended to concentrate the attacks. While social media has become a very effective means of communication, it's also become a prime spot for malware writers. Instead of having to figure out how to get your malware to different places and hope somebody will land on a page, we now have this one enormous target. And because part of the way it works is to let small applications be installed and passed around, it's almost an ideal vector. All that's necessary is to trick people into thinking what they're about to click on or install or download is useful to them, and they could end up with a malware payload along with it.

Previously we saw more social engineering security incidents through spam, junk mail, things that would show up that said 'Hey, this is your friendly IT department. Please click on this link to reset your password.' Stuff like that. Now they don't have to do that.

So you see more incidents now from social networking sites than from spam e-mail?

Yes. It is driving a different approach to how we're looking at IT security on the network side. We actually made an abortive attempt to use agent-based approaches to deal with network access and security. It turned out not to be very viable. It looked good when we originally started, but the diversity of devices meant you couldn't get agents for all of them, and the students were fairly resistant to having an institutionally installed piece of software on their machines. It didn't always work properly, so it created a fairly large helpdesk burden. That drove us to step back and come at it from a totally different direction.

How are you handling it now?

We're convinced at this point, until something changes our mind, that doing it from a policy-based point of view is the better solution. What we're finding is that the right method increases the level of security for the users, the applications, and your data center while staying as transparent as possible: it essentially looks at the traffic, tries to do behavioral analysis on it, and binds the results to identities. I'm not talking about the old days, when people would look at ports and protocols and do something fairly static, but instead looking at it from the application layer and binding it to user IDs.

Let's use YouTube as an example. YouTube is used by many of the students, and it takes up a great deal of bandwidth. But it also might be used by a faculty member who needs to show it in the classroom. What if they are unable to use it because 500 students gathered in public labs around campus are actively on YouTube and using other social material that's not directly related to their classwork? What we want to be able to do is watch the traffic going by, recognize that it's YouTube traffic, and also associate it with a user ID we know belongs to a faculty member. We want to be able to give priority access to that particular stream and ensure that the academic use gets through.

On the other hand, while that's being done, we're going to constrain the amount of bandwidth that's being used for recreational purposes. Ideally, you want to be able to step back and do this on a temporal basis: during the times when classes are running, we care about making the classrooms usable, so that's our priority. But once we don't have any more classes in session, we don't need to prioritize for instructional use anymore. There is no reason at that point not to make the full bandwidth available to the students.
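The time- and role-based prioritization Seybold describes can be sketched as a small decision function. This is a hypothetical illustration, not SUNY Old Westbury's actual system: the application names, roles, and class-hour window are assumptions, and a real deployment would enforce the decision in a traffic shaper or QoS engine rather than application code.

```python
from datetime import time

# Assumed teaching window for this sketch.
CLASS_HOURS = (time(8, 0), time(17, 0))

def in_class_hours(now: time) -> bool:
    start, end = CLASS_HOURS
    return start <= now < end

def bandwidth_class(app: str, role: str, now: time) -> str:
    """Classify a flow as 'priority', 'limited', or 'normal'."""
    recreational = {"youtube", "social"}
    if app in recreational:
        if not in_class_hours(now):
            return "normal"    # after hours: full bandwidth for everyone
        if role == "faculty":
            return "priority"  # classroom use of the same app gets through
        return "limited"       # students' recreational use is constrained
    return "normal"

print(bandwidth_class("youtube", "faculty", time(10, 30)))  # priority
print(bandwidth_class("youtube", "student", time(10, 30)))  # limited
print(bandwidth_class("youtube", "student", time(22, 0)))   # normal
```

The key point of the design is that the same application gets different treatment depending on who is using it and when, which is exactly what a port-and-protocol rule cannot express.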

This applies to security as well?

This same type of technology approach works for security. If you can bind actual IDs to the applications that are going back and forth, it's possible to pass that on to behavioral-analysis devices and software, which will look at it and maybe realize, 'You know what? I normally see this ID on a Monday-through-Friday, 9-to-5 basis. All of a sudden, for the last two weeks, I'm seeing it on a Sunday afternoon.' Then the behavioral-analysis systems will kick that back out as a log alert.

Maybe it turns out someone came in on the weekend to catch up on work. But it's also possible that you track them down and their response is 'No, I'm sitting at home.' That means further follow-up is necessary. We see that as the most viable approach moving forward.
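The timing-based check he describes can be sketched as a baseline-and-flag routine: learn which weekday/hour buckets an ID is normally active in, then flag activity outside those buckets. This is a hypothetical illustration of the idea, not the product the university uses; the bucket granularity and threshold are assumptions, and real systems would combine this with many other signals.

```python
from collections import Counter
from datetime import datetime

class ActivityBaseline:
    """Per-user baseline of (weekday, hour) activity buckets."""

    def __init__(self, min_seen: int = 2):
        self.buckets = Counter()  # (weekday, hour) -> observation count
        self.min_seen = min_seen  # how often a bucket must recur to be "normal"

    def observe(self, ts: datetime) -> None:
        self.buckets[(ts.weekday(), ts.hour)] += 1

    def is_anomalous(self, ts: datetime) -> bool:
        return self.buckets[(ts.weekday(), ts.hour)] < self.min_seen

baseline = ActivityBaseline()
# Train on two weeks of weekday mid-morning activity (Aug 1, 2011 was a Monday).
for day in range(1, 13):
    ts = datetime(2011, 8, day, 10, 0)
    if ts.weekday() < 5:  # skip weekends in the training data
        baseline.observe(ts)

print(baseline.is_anomalous(datetime(2011, 8, 10, 10, 0)))  # False: weekday morning
print(baseline.is_anomalous(datetime(2011, 8, 14, 15, 0)))  # True: Sunday afternoon
```

The anomalous case would be kicked out as a log alert for a human to follow up on, matching the workflow described above: the flag is a prompt for investigation, not proof of compromise.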

How do you usually contact people to let them know there may be a problem?

It's usually by e-mail. The most common thing that will happen, depending on how malicious the activity is, is that the system may try to block the user's access. In that case, they can go directly to one of the staffed student labs, where we have web managers and student workers who can help determine the problem.

Do you think a policy-based approach works in a business environment, or is it really just useful in a university setting like your own?

I think this is absolutely the direction that security and bandwidth-allocation decisions are going. They have to be dynamic, they have to be policy-based, and they absolutely have to tie application-level services back to policies and IDs so that those can be bundled into some kind of profile. The days of a static environment are over.
