
Mobile security Q&A: Securing the mobile minimum viable app

As enterprises struggle to keep up with their internal demand for mobile apps, more are turning to rapid development workflows. What does this mean for security?


I would imagine with an MVP teams want to move the app out as quickly as possible, so they don’t want to spend a lot of time threat modeling and going through a lot of additional process, because that’s all adding to more development time. So there seems to be a natural friction between the goals of MVP and good security. 

It's absolutely a friction. It's challenging because security is mostly invisible: good security and bad security look exactly the same until something goes wrong. Security becomes really visible when something breaks or somebody gets hacked and you make the news. Then it blows up in your face. We've seen this a few times. I don't know how many start-ups it's killed, probably a few, but it has definitely cost a lot of start-ups when their first major news coverage is that they were hacked.

What are some ways organizations can ease that tension when it exists? Is there a way to bring security in so it's not too obtrusive? Is there a way to separate out apps by data type, possibly greenlighting MVP apps that don't touch more sensitive data while giving a closer look to those that do?

I think that's a good approach. As you point out, one way is to see if you can do an MVP with data that's not as sensitive, so you don't have to focus as heavily on security. Nowadays that's a little more challenging, though. Even for the minimum things you do, you will need security. It almost doesn't matter what your data is: you will get targeted and you will get attacked, even if it's just by the automated bots that roam the Internet attacking everything. They'll use your infrastructure for sending spam at the very least, if that's all they can do. To me, the approach is that you have to implement industry best practices such as the OWASP Top 10. You have to believe that security is an important part of a minimum viable product before you can even begin to get these user stories in there.

What I like to tell people is to think about user stories, including negative user stories. For example: as a user, I don't want to see my personal information leaked on the Internet because I've shared or stored something sensitive in your app or your website. I don't want to see it in the hands of people who will use my private information against me.
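To make that concrete, here is a minimal sketch of how such a negative user story could become an automated acceptance test. The service URL, endpoint, and token below are hypothetical placeholders, not anything prescribed in the interview; the point is that "my data must not leak" turns into a runnable check rather than an afterthought.

```python
# A sketch of a negative user story as an acceptance test.
# The base URL, endpoint, and token are hypothetical examples.
import requests

BASE_URL = "https://api.example.test"  # hypothetical service under test

def test_profile_not_readable_without_auth():
    # An unauthenticated request for a user's stored data must be
    # refused, not served.
    resp = requests.get(f"{BASE_URL}/users/42/profile")
    assert resp.status_code in (401, 403)

def test_profile_not_readable_by_other_user():
    # User B must not be able to read user A's private profile.
    resp = requests.get(
        f"{BASE_URL}/users/42/profile",
        headers={"Authorization": "Bearer USER_B_TOKEN"},  # placeholder
    )
    assert resp.status_code in (403, 404)
```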

That sounds like something a security team could build a guide around, or use as a checkpoint for whether an app can go through. For instance, if any of a defined set of conditions is true for an app, it has to go through a security review; if not, a security-light approach within certain guidelines is OK.

That'd be perfect. Typically these lean approaches have at least some kind of testing methodology built in, or acceptance testing. Or, as some of them put it, "What's your definition of 'done'?" The first step is just saying, "We're going to include security in these definitions of done." Once you've at least penetrated that level, which I don't think a lot of people have, you're going to at least do the right things: you'll start to build security either into the user stories or into the acceptance testing.

But you can't leave it until the end of the process. If you leave security acceptance testing toward the end, your schedule is naturally going to slip. Then you'll get to the security testing and find there's a lot more work to do, and you'll face the unfortunate decision of either fixing things and letting your schedule slip further, or letting something go out the door that's not secure.

The real tragedy is when a system is inherently insecure, built in such an insecure way that it requires major rework, because you didn't think about security at the beginning. A lot of security measures are easy to add at the end, but sometimes you run into systems that are simply broken from the foundation. As with any of these things, the later you catch it, the costlier it's going to be.
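The conditional gate described in the question above might look something like the following sketch. The triage conditions here are hypothetical illustrations; real criteria would come from an organization's own data-classification policy.

```python
# A sketch of a review gate: any one sensitive condition routes the
# app to a full security review. All fields are illustrative.
from dataclasses import dataclass

@dataclass
class AppProfile:
    handles_pii: bool          # personal data: names, emails, addresses
    handles_payments: bool     # card or bank details
    handles_health_data: bool  # regulated medical information

def needs_full_security_review(app: AppProfile) -> bool:
    """Return True if any sensitive condition holds."""
    return any([
        app.handles_pii,
        app.handles_payments,
        app.handles_health_data,
    ])

# Example: an internal, non-sensitive MVP can take the lighter path,
# while anything touching personal data gets the closer look.
mvp = AppProfile(handles_pii=False, handles_payments=False,
                 handles_health_data=False)
sensitive = AppProfile(handles_pii=True, handles_payments=False,
                       handles_health_data=False)
assert not needs_full_security_review(mvp)
assert needs_full_security_review(sensitive)
```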

What are some indications organizations could look for that would indicate that they’re doing this right?

If you're looking at your to-do list, whatever form it takes, whether it's a list of stories or a big list of tasks and action items, you should be recognizing some security issues in there as you go. You'll get to a point where you're developing something and one of your developers will, hopefully, say, "Look, our system is vulnerable to cross-site request forgery or a cross-site scripting attack," which any system that isn't specifically designed to protect against them is going to be.

If you look at your bug list, you should see those issues pop up there at some point. Some of these security issues will come up during development, because nothing is perfect. That'll be an early indicator.

If you don't have anything, if you look at your bug list and you don't see anything, if your developers aren't actively talking about security, saying, "We're going to have to add some tasks for security," or "I want to add that feature for you, but it's going to have an impact on security," if you're not hearing security as part of the conversation, then there's going to be a problem.
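As one concrete illustration of the cross-site scripting case the developer flags above, here is a minimal sketch, using only the Python standard library, of why rendering user input into HTML unescaped is vulnerable by default and how escaping neutralizes it.

```python
# Cross-site scripting in miniature: any system that renders user
# input into HTML without escaping is vulnerable by default.
from html import escape

user_comment = '<script>document.location="https://evil.example/x"</script>'

# Vulnerable: the attacker's script tag is emitted verbatim and would
# execute in every visitor's browser.
unsafe_html = f"<p>{user_comment}</p>"

# Fixed: escaping turns the payload into inert text.
safe_html = f"<p>{escape(user_comment)}</p>"

print(unsafe_html)  # <p><script>...</script></p>  -- executable
print(safe_html)    # <p>&lt;script&gt;...&lt;/script&gt;</p>  -- harmless
```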

Copyright © 2016 IDG Communications, Inc.
