Tuesday, April 5

Human Behavior = Biggest Security Risk

Two quick examples (both considered 'spear phishing' or targeted phishing attacks) from today's headlines:

1. The perpetrators of the RSA data breach, which may have compromised the security of RSA's premium two-factor authentication solution, got help, as it turns out, from RSA employees who opened an email attachment. An Excel spreadsheet containing an Adobe Flash exploit opened the doors to RSA's network.

2. Conde Nast recently paid $8 million to a fake company in response to a single believable email that politely asked them to update the payee information for one of their vendors.

Both of these examples make the clear, simple point that it doesn't really matter how much technology you put between an attacker and your business assets. If an employee opens the door, an attacker can walk right in. Either we're going to get extreme about limiting behavioral options (disallow all email attachments?) or we need to do much better at employee training.

Since employees are ultimately only motivated by what is easier, I don't think training will be the silver-bullet answer.

Thursday, March 31

Samsung Keylogger and Other False Positives

Samsung laptops do not have a keylogger installed at the factory. But it was reported that they do because an AV program recognized a pattern used by a popular keylogger. GFI Labs, which provides the AV solution (VIPRE), stepped up, explained exactly what happened, and accepted the blame (not that anyone should blame them - isn't this exactly why we buy AV products?).


If I were GFI Labs, I might be asking the industry why other AV vendors haven't had the same issue - most of them use pattern matching as the key method for finding viruses and other malware. Is having too many patterns a bad thing? With this approach, false positives are a necessary evil - just part of the intended design. There is an alternative, though.


I've been speaking lately with my own AV vendor, eEye, who bucks that trend a bit. eEye doesn't rely so heavily on pattern matching; instead, it uses protocol analysis to look at what installed programs are actually doing and decide whether there's a threat. For example, it's not a keylogger if no information is being collected or sent. eEye claims that Blink can be installed on a Windows PC with no security patches and fully protect it with several layers of defense.


The approach has two advantages. First, you won't get false positives related to a pattern match that just doesn't quite add up. Second (and more importantly), you get protection against zero-day attacks where there is no known pattern. If you have to wait for your AV vendor to provide a virus definition update, you're in a constant state of being behind the attack trying to catch up.
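To make the contrast concrete, here's a minimal, purely illustrative sketch (in Python; not how VIPRE or Blink actually work, and the byte pattern and action names are invented) of a signature check versus a behavior check:

```python
# Purely illustrative - not the actual logic of any AV product.
SIGNATURES = [b"STARLOG"]   # hypothetical byte pattern tied to a known keylogger

def signature_scan(file_bytes: bytes) -> bool:
    # Flag anything containing a known pattern - false positives are part of the design.
    return any(sig in file_bytes for sig in SIGNATURES)

def behavior_scan(observed_actions: set) -> bool:
    # Flag only when a program actually captures keystrokes AND stores or sends them.
    return "captures_keystrokes" in observed_actions and (
        "sends_data" in observed_actions or "writes_log" in observed_actions
    )

# A benign file that merely contains the byte pattern trips the signature scanner...
print(signature_scan(b"harmless data ... STARLOG ..."))        # True (false positive)
# ...while the behavior check stays quiet until keystrokes are both collected and shipped.
print(behavior_scan({"reads_registry"}))                       # False
print(behavior_scan({"captures_keystrokes", "sends_data"}))    # True
```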


I was planning to write about eEye's vulnerability scanner and a specific issue I encountered with software on a personal PC (which is why I was speaking with them), so you may see another post on eEye soon, but I'm not trying to make a commercial out of it. Mostly I'm just curious whether other AV vendors are looking at this approach. It seems to me to be more effective and, based on my limited experience, consumes fewer resources than the other security packages I've tried.

Monday, March 28

Security must be Easier or Cheaper

Remember when I said that end-users (i.e., people) are motivated more by "easier" and "cheaper" than they are by "more secure"? Well, I did. And they are. If this MSNBC article doesn't make the point, I don't know what will: Why should I care about digital privacy?

The article describes how a number of people sitting in a coffee shop were actively hacked and eavesdropped on, were alerted to the hack, and chose to simply ignore the alerts, continue browsing, and even make online purchases.

I was once in the audience while security guru Bruce Schneier was speaking. He made the point that while the security industry is all jazzed up about privacy, all of our efforts may not have much impact because the people we're concerned about ("young people") just don't care. They live their lives online and don't have the same ideas about privacy that we old people have.

There is an ongoing debate about whether sending (texting) a nude photo of oneself while underage is a criminal offense worthy of a 'sex offender' label. The simple fact that we need to have that debate is evidence that Schneier is right.

I find more evidence when I ask any facebook user about privacy. It seems to be common knowledge among young and old that facebook users should be concerned about privacy, and there are unanswered questions about how personal information is handled. [I'll give facebook the benefit of the doubt for this discussion and say that the security tools are in place to protect yourself if you're knowledgeable and cautious. But that's a big IF.] The fact is, most people seem to think that facebook is not secure, yet they continue to use it, because it's a cheap and easy way to communicate with friends, keep up with gossip, get news, play games, etc.

For any security mechanism to be truly effective in an environment where security cannot be mandated (unlike in a corporate setting) - such as the general public - it needs to be (say it with me) easy and cheap. Easy as in built-in and transparent. Cheap as in free.

btw, SSL (secure browsing over HTTPS) is sort of like that, until some CA (not mentioning any names, but maybe it sounds like a type of dragon) gets breached and issues bad certificates. Then it's less easy. Also, SSL requires that an end user actually observe the browser to confirm that it's a secure connection. And in reality, the observation only tells you that you're connected to a secure page right now - it doesn't tell you where the submission form will take you. So it's really only secure if we trust the sites we visit or are unusually savvy web users.
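As a purely illustrative sketch of that last point (assuming the requests and beautifulsoup4 packages; the URL below is hypothetical), here's how you could check whether any form on a page loaded over HTTPS would actually submit over plain HTTP:

```python
# Sketch only - assumes requests and beautifulsoup4 are installed.
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

def insecure_form_targets(page_url: str) -> list:
    # Return any form action URLs on the page that would submit over plain HTTP,
    # even though the page itself was loaded over HTTPS.
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    targets = []
    for form in soup.find_all("form"):
        action = urljoin(page_url, form.get("action") or page_url)
        if action.startswith("http://"):
            targets.append(action)
    return targets

# Hypothetical usage:
# print(insecure_form_targets("https://shop.example.com/checkout"))
```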

Wednesday, March 23

The RSA Breach

[Updated 5:36 pm ET 23 Mar 2011]

Sorry - two points of clarification:

1. Where I say "serial #" throughout my post below, we should keep in mind that the token has a hidden 'seed record' which is actually used in the algorithm, so that's another level of security. The serial # is not enough - you also need the seed # and the ability to match it to a given user's token.

2. I should've mentioned that there's also a feature that prevents brute-force attacks by disabling an account after x failed attempts. So if you have a very good educated guess at the PIN, along with the other data, you have a shot; if you're planning to brute-force it, that isn't going to fly.

[end update]


I don't have any inside info, but it certainly sounds like the algorithm for generating the One-Time-Passwords (OTP) may have been accessed. This makes an attack much easier because to some degree it eliminates the "second factor" in the Two-Factor authentication. But not 100%.
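As an aside, here's a minimal sketch of a generic time-based OTP in the spirit of RFC 6238 (TOTP) - not RSA's actual, proprietary algorithm - that illustrates why knowing the algorithm alone isn't enough: without the per-token seed record (see the update above) you still can't reproduce the codes.

```python
# Generic TOTP-style sketch; RSA SecurID's real algorithm is proprietary and different.
import hashlib
import hmac
import struct
import time

def totp(seed: bytes, interval: int = 60, digits: int = 6) -> str:
    # Derive a one-time password from a secret seed and the current time step.
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Two tokens running the same public algorithm but holding different seeds
# produce different codes - the seed is the real secret.
print(totp(b"seed-for-token-000123456789"))
print(totp(b"seed-for-token-000987654321"))
```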

If I know the algorithm, to spoof the token functionality, I still need the serial # of the token, matched to the user name, and the PIN. These things aren't impossible, though. You could stop by a person's desk, for example, and scribble down their serial # while they're getting coffee. If they're a co-worker, you probably know their user name and can make some guesses about the PIN.

Most people I know who use RSA tokens use a pretty simple PIN - a date, 4 digits of a phone number, something like that. So, if you use social engineering to get the serial # and user name, you're down to having to guess the PIN, which is really just a shorter, less secure password. And you're back to one-factor authentication. PINs may also be written down on the desk, scribbled on the back of the token (I've seen it), left in email or browser auto-fill, etc.

For an outsider, this attack is a little more challenging than for an insider. You'd need access to the serial numbers used by the company and the ability to match them with user names. Based on the info provided by RSA in their letter to customers, some or all of this may be in the RSA server logs. So, protecting those logs has just become critical.

If the token is installed on a smart phone or PC (software token), the token only works from that installed device. So, if the algorithm is public, the software tokens may have just become slightly more secure than the hardware tokens since it would be difficult to spoof the hardware configuration (or even to know the exact hardware) associated to that software token. ...at least, that's how I think it works.

So, a few shifts have been made if my assumptions are true:
- Software tokens may be more secure if the algorithm is known.
- Protecting the RSA server and logs, and access to those logs has become critical.
- Overall, the system is still somewhat secure, but people don't buy RSA tokens for 'somewhat secure'.
- RSA tokens have become pretty insecure against insider attacks.

If anyone knows something different, please correct me.

Friday, February 4

Business Case for Claims-Based Authorization

Jackson Shaw provided a great use-case for claims-based authorization this week. While I've always seen the value of the claims-based approach, I've always felt that the thing that's missing is the motivation. End-users and consumers are typically motivated by what is easier or cheaper. Corporations, similarly, are motivated financially and not, as we might hope, by security or privacy as an end in itself. But his example, which applies to any major corporation that gives discounts based on employer (hotels, car rentals, wireless phones, etc.), shows that there are millions of dollars on the line.

$$$ = motivation
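To make the mechanics concrete, here's a hypothetical sketch of the relying party's side of that transaction - checking a signed "employer" claim to apply a negotiated discount. The claim names, keys, and discount table are invented, and a JWT (via the PyJWT package) is just one possible token format:

```python
# Hypothetical claims-based discount check using PyJWT (pip install PyJWT).
import jwt

SHARED_SECRET = "demo-secret"                      # in practice, the issuer's public key
DISCOUNTS = {"AcmeCorp": 0.15, "Globex": 0.10}     # invented negotiated rates

def discount_for(token: str) -> float:
    # Trust the signed 'employer' claim instead of keeping our own employment records.
    claims = jwt.decode(token, SHARED_SECRET, algorithms=["HS256"])
    return DISCOUNTS.get(claims.get("employer"), 0.0)

# The identity provider (the employer, or perhaps the phone carrier) issues the token:
token = jwt.encode({"sub": "alice", "employer": "AcmeCorp"}, SHARED_SECRET, algorithm="HS256")
print(discount_for(token))  # 0.15
```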

It might be just what corporations need to push them toward adoption -- and that includes providing incentives for customers to move to a claims-based model. I think mobile phone companies are situated perfectly - they can provide the authentication mechanism built into the devices they sell, which makes it potentially easier for users to browse the web (could solve the 'numerous passwords' problem) - remember:

Easier = motivation

...and they have a huge financial motivator because many big companies negotiate mobile plan discounts for employees.

But perhaps budgets can be pooled together by a consortium of companies that are losing money to create a compelling solution that end-users will want to adopt. And in the end, we'll see better security and privacy as a result.

+1 for Continuous Compliance

Anton Chuvakin posted a blog entry today about Continuous Compliance. I've written on the subject in numerous places (here for example) and have even written a white paper and given a webinar on the topic.

As a software vendor, I often hear from organizations that are looking for the silver bullet. People actually say things like "your software is PCI compliant, right? ...because we need to be PCI compliant and I'm looking for software to get us there". It's not their fault. Apparently, the folks pushing down the requirements, despite their efforts, haven't done a great job of educating the people who need to be educated.

My paper and my responses explain that the idea isn't to find a piece of software or even a business process that will get you compliant for your audit next month and then you forget it until next year. The idea is to create what I've called a "culture of compliance" (a borrowed phrase) through which you remain in compliance continuously. Put controls in place, create a way to test controls, understand access rights, regularly monitor and review permissions, and you'll ultimately be able to respond to any new (related) regulation that comes at you.
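As a small, hypothetical illustration of one such continuously-running control, here's a sketch of an access review that compares actual group membership against an approved baseline and reports drift (the group names and the directory lookup are invented stand-ins):

```python
# Invented baseline and directory data - a stand-in for a real LDAP/AD query.
APPROVED = {
    "Domain Admins": {"alice", "bob"},
    "Payroll-ReadWrite": {"carol"},
}

def current_members(group: str) -> set:
    # Stand-in for a real directory lookup.
    fake_directory = {
        "Domain Admins": {"alice", "bob", "mallory"},   # drift!
        "Payroll-ReadWrite": {"carol"},
    }
    return fake_directory[group]

def review_access() -> dict:
    # Report unapproved members per group - run continuously, not once a year before the audit.
    findings = {}
    for group, approved in APPROVED.items():
        drift = current_members(group) - approved
        if drift:
            findings[group] = drift
    return findings

print(review_access())  # {'Domain Admins': {'mallory'}}
```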

Sure, I can map specific reports to specific subsections of a regulation or security framework, but that shouldn't be the goal. Take a look at our recent article on the topic: When compliance is at odds with security - sometimes focusing on the goal of point-in-time compliance can actually negatively affect your security posture. I hope Anton is right that the times may be upon us because I have to say that I often feel like people listen to what I'm saying but ultimately ignore it and really just want a set of reports labelled with the regulation du jour.

Thursday, February 3

A new tool for Identity Management: AD Event Push SDK

I had a brainstorm last night. Most Identity Management systems poll Active Directory to pull changes on a periodic basis. And admittedly, that approach is often sufficient. But here's an alternative that gets you real-time push updates from AD, along with the identity of the person who made the change (which is often useful for workflow/approval and audit trail purposes):

- A small utility service that sits on the DC waiting for changes
- Picks up events in real-time
- Sends event info to a DLL that you compile

The DLL determines what to do with the info based on:
- Who did it
- What groups the person is a member of
- What happened (add/mod/del)
- Which object or object-type was affected (admin accounts, service accounts, etc)
- Which attributes changed
- In which OU the event occurred
- etc.

Possible outcomes might be to trigger an Identity Management process or workflow, open a help desk ticket, generate an alert, etc.

The programming would be very similar to building an ILM management agent.
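To give a rough feel for the decision logic, here's a sketch in Python (for readability, rather than the actual compiled DLL interface; the field names and outcomes are invented for illustration):

```python
# Invented field names and outcomes - the real interface would come from the SDK.
from dataclasses import dataclass

@dataclass
class ADEvent:
    actor: str                  # who made the change
    actor_groups: set           # groups the actor belongs to
    operation: str              # "add", "modify", or "delete"
    object_type: str            # "user", "group", "serviceAccount", ...
    changed_attributes: set     # which attributes changed
    ou: str                     # where in the directory the event occurred

def handle_event(event: ADEvent) -> str:
    # Decide what to do with a change pushed from the DC in real time.
    if event.object_type == "serviceAccount" and event.operation == "delete":
        return "open_helpdesk_ticket"
    if "memberOf" in event.changed_attributes and "Domain Admins" not in event.actor_groups:
        return "trigger_idm_approval_workflow"
    if event.ou.lower().startswith("ou=admin"):
        return "generate_alert"
    return "log_only"

print(handle_event(ADEvent("jsmith", {"Helpdesk"}, "modify", "user",
                           {"memberOf"}, "OU=Finance,DC=example,DC=com")))
```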

So, IDM friends, would this be useful? If you have a project in mind or just want to play with it, let me know. NetVision has had this agent for years, but we've added a lot of capability around it. I'm wondering if there's some value for you if we strip it down to a single DLL that you would control. Let me know.

Wednesday, August 11

Identity as a Platform

I was asked for my thoughts on an article titled Hosters Need to Think about Identity as a Platform Play. When I clicked to read the article, I was happy to see it was written by Novell's Dale Olds who always has interesting and informed things to say.

I agree with Olds' assessment. SaaS platform vendors (hosters) should really get on the ball with offering identity services as part of their hosting packages. They should do the same with data encryption (both in transit to the endpoint and in storage). Security is complicated -- extremely important and extremely easy to get wrong. It only takes a small oversight somewhere along the line to break the chain. SaaS application vendors would be wise to leverage proven, trusted solutions for access management rather than trying to create their own.

I think Olds overstated how simple it would be for applications to switch platforms. It seems to me that it's pretty complicated even to move a simple PHP website to another host, and most SaaS applications will be much more complicated than that. The other part of that thought was that providing identity services would tie the application provider to that platform. I would recommend that hosting providers make it easier rather than harder to move; that will be a key differentiator and ultimately drive more business and revenue to your brand. (I'm not saying that Dale was recommending to purposely make it complicated - it's just how it is.) BUT - there's still a business driver to build identity into the platform. Removing the complexities of security from the application development process could save 30% of the time and resources required to stand up a new application versus building it all from scratch.

And to the point Steve (from Axciom) made in the comments: yes! Ideally, Platform as a Service vendors will provide more than authentication. Baked-in security could incorporate firewalls, authentication, multi-factor authentication (& transaction-based), authorization, encryption (in-motion and at-rest), activity and access audit, SoD monitoring, and more.

We're obviously very early in this whole process. I think we're moving in the right direction, but it'll take time to get it all right.