Thursday, March 31

Samsung Keylogger and Other False Positives

Samsung laptops do not have a keylogger installed at the factory. But one was reported because an AV program recognized a pattern used by a popular keylogger. GFI Labs, who provide the AV solution in question (VIPRE), stepped up, explained exactly what happened, and accepted the blame (not that anyone should blame them - isn't this exactly why we buy AV products?).


If I were GFI Labs, I might be asking the industry why other AV vendors haven't had the same issue - most of them use pattern matching as the key method for finding viruses and other malware. Is having too many patterns a bad thing? With this approach, false positives are a necessary evil - just part of the intended design. There is an alternative, though.
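
To make the trade-off concrete, here's a minimal sketch of signature-based scanning in Python, with entirely made-up signatures. The scanner only knows that a byte pattern matched - it has no idea whether the file actually does anything malicious.

    SIGNATURES = {
        b"\\SL\\kbdhook.dll": "StarLogger (hypothetical pattern)",
        b"EvilPayload": "SomeTrojan (hypothetical pattern)",
    }

    def scan_file(path):
        with open(path, "rb") as f:
            data = f.read()
        # Any match is reported as an infection, true or not.
        return [name for pattern, name in SIGNATURES.items() if pattern in data]

If an innocent file happens to contain one of those byte sequences (say, a language pack living in a folder called "SL"), it gets flagged. The false positive is baked into the design.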


I've been speaking lately with my own AV vendor, eEye, who bucks that trend a bit. eEye doesn't rely so heavily on pattern matching; instead, it uses protocol analysis to watch what installed programs are actually doing and decide whether that activity is a threat. For example, it's not a keylogger if no information is being collected or sent. eEye claims that Blink can be installed on a Windows PC with no security patches and protect it fully through several layers of protection.
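
Here's that idea in sketch form (invented event names, nothing like eEye's actual engine): instead of asking "do these bytes match a known pattern?", ask "is this program hooking the keyboard and sending data off the machine?"

    def looks_like_keylogger(process_events):
        hooks_keyboard = "keyboard_hook_installed" in process_events
        writes_capture = "keystroke_buffer_written" in process_events
        exfiltrates = "outbound_network_send" in process_events
        # No capture or no exfiltration -> not behaving like a keylogger,
        # regardless of what byte patterns the binary contains.
        return hooks_keyboard and writes_capture and exfiltrates

A language pack triggers none of those events, so no false positive; a brand-new keylogger triggers all three even though no signature exists for it yet.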


The approach has two advantages. First, you won't get false positives related to a pattern match that just doesn't quite add up. Second (and more importantly), you get protection against zero-day attacks where there is no known pattern. If you have to wait for your AV vendor to provide a virus definition update, you're in a constant state of being behind the attack trying to catch up.


I was planning to write about eEye's vulnerability scanner and a specific issue I encountered with software on a personal PC - that's why I was speaking with them - so you may see another post on eEye soon. I'm not trying to make a commercial out of it, though. Mostly I'm curious whether other AV vendors are looking at this approach. It seems more effective to me and, based on my limited experience, consumes fewer resources than the other security packages I've tried.

Monday, March 28

Security must be Easier or Cheaper

Remember when I said that end-users (i.e. people) are motivated more by "easier" and "cheaper" than they are by "more secure"? Well, I did. And they are. If this MSNBC article doesn't make the point, I don't know what will: Why should I care about digital privacy?

The article discusses how a number of people sitting in a coffee shop were actively hacked and eavesdropped on, were alerted to the hack, and chose to simply ignore the alerts, continue browsing, and even make online purchases.

I was once in the audience while security guru Bruce Schneier was speaking. He made the point that while the security industry is all jazzed up about privacy, all of our efforts may not have much impact because the people we're concerned about ("young people") just don't care. They live their lives online and don't have the same ideas about privacy that we old people have.

There is an ongoing debate about whether sending (texting) nude photos of oneself while underage is a criminal offense worthy of a 'sex offender' label. The simple fact that we need to have that debate is evidence that Schneier is right.

I find more evidence when I ask any facebook user about privacy. It seems to be common knowledge among young and old that facebook users should be concerned about privacy. There are unanswered questions about personal information. [I'll give facebook the benefit of the doubt for this discussion and say that the security tools are in place to protect yourself if you're knowledgeable and cautious. But that's a big IF.] The fact is, most seem to think that facebook is not secure, yet they continue to use it, because it's a cheap and easy way to communicate with friends, keep up with gossip, get news, play games, etc.

For any security mechanism to be truly effective in an environment where security cannot be mandated (as it can be in a corporate setting) - that is, among the general public - it needs to be (say it with me) easy and cheap. Easy as in built-in and transparent. Cheap as in free.

btw, SSL (secure browsing over HTTPS) is sort of like that, until some CA (not mentioning any names, but it maybe sounds like a type of dragon) gets breached and issues bad certificates. Then it's less easy. Also, SSL requires that an end user actually observe the browser to confirm that the connection is secure. And in reality, that observation only tells you that you're connected to a secure page right now - it doesn't tell you where the submission form will send your data. So it's really only secure if we trust the sites we visit or are unusually savvy web users.
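
That last point is easy to demonstrate. Here's a small Python sketch (standard library only, the URL is made up) of the check the padlock icon doesn't do for you: the page can load over HTTPS while its form posts somewhere else entirely.

    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse

    class FormActionChecker(HTMLParser):
        """Collect form targets that would submit over plain HTTP."""
        def __init__(self, page_url):
            super().__init__()
            self.page_url = page_url
            self.insecure_actions = []

        def handle_starttag(self, tag, attrs):
            if tag == "form":
                action = dict(attrs).get("action") or ""
                target = urljoin(self.page_url, action)
                if urlparse(target).scheme != "https":
                    self.insecure_actions.append(target)

    # checker = FormActionChecker("https://shop.example.com/login")
    # checker.feed(page_html)   # page_html fetched separately
    # checker.insecure_actions -> forms that leave the "secure" page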

Wednesday, March 23

The RSA Breach

[Updated 5:36 pm ET 23 Mar 2011]

Sorry - two points of clarification:

1. Where I say "serial #" throughout my post below, we should keep in mind that the token has a hidden 'seed record' which is actually used in the algorithm, so that's another level of security. The serial # is not enough - you also need the seed # and the ability to match it to a given user's token.

2. I should've mentioned that there's also a feature which prevents brute-force attacks by disabling an account after x number of failed attempts. So if you have a very good educated guess at the PIN, along with the other data, you have a good shot; if you think you'll brute-force it, that isn't going to fly.
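
For illustration, here's what that lockout policy boils down to (invented names and a made-up threshold - the real server's policy is configurable):

    MAX_ATTEMPTS = 3
    failed = {}  # username -> consecutive failed attempts

    def try_login(user, pin_is_correct):
        if failed.get(user, 0) >= MAX_ATTEMPTS:
            return "locked"  # an admin has to re-enable the account
        if pin_is_correct:
            failed[user] = 0
            return "ok"
        failed[user] = failed.get(user, 0) + 1
        return "denied"

A 4-digit PIN has 10,000 possibilities; with three tries before lockout, a blind guesser succeeds about 0.03% of the time. The realistic attack is the educated guess - a birthday, phone digits - which is exactly the point below.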

[end update]


I don't have any inside info, but it certainly sounds like the algorithm for generating the one-time passwords (OTPs) may have been accessed. This makes an attack much easier because, to some degree, it eliminates the "second factor" in two-factor authentication. But not 100%.
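
To see why it's not 100%, here's a minimal sketch of a generic time-based OTP in Python (in the spirit of HMAC-based TOTP - RSA's actual SecurID algorithm is proprietary and different). Even with the algorithm fully public, the output still depends on a per-token secret seed:

    import hashlib, hmac, struct, time

    def otp(seed: bytes, interval=60, digits=6):
        counter = int(time.time()) // interval        # current time step
        digest = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                    # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

Knowing this function gets an attacker nothing without the seed. That's why the seed records - and whatever maps seeds to serial numbers and user names - are the crown jewels.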

If I know the algorithm, then to spoof the token functionality I still need the serial # of the token, matched to the user name, and the PIN. These aren't impossible to get, though. You could stop by a person's desk, for example, and scribble down their serial # while they're getting coffee. If they're a co-worker, you probably know their user name and can make some guesses about the PIN.

Most people I know that use RSA tokens use a pretty simple PIN - a date, 4 digits of a phone number, something like that. So, if you use social engineering to get the serial # and user name, you're down to guessing the PIN, which is really just a shorter, less secure password - and you're back to one-factor authentication. PINs may also be written down on the desk, scribbled on the back of the token (I've seen it), left in email or browser auto-fill, etc.

For an outsider, this attack is a little more challenging than for an insider. You'd need access to the serial numbers used by the company and the ability to match them with user names. Based on the info provided by RSA in their letter to customers, some or all of this may be in the RSA server logs. So protecting those logs has just become critical.

If the token is installed on a smart phone or PC (a software token), the token only works from the device it's installed on. So, if the algorithm is public, software tokens may have just become slightly more secure than hardware tokens, since it would be difficult to spoof the hardware configuration (or even to know the exact hardware) associated with that software token. ...at least, that's how I think it works.
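
Purely as illustration of that guess - I don't know RSA's actual device-binding mechanism - here's one way a software token could tie itself to its host, by mixing a device fingerprint into the secret:

    import hashlib, uuid

    def device_bound_seed(seed: bytes) -> bytes:
        # uuid.getnode() returns a hardware-derived value (typically the MAC)
        fingerprint = str(uuid.getnode()).encode()
        return hashlib.sha256(seed + fingerprint).digest()

The OTP routine would then use device_bound_seed(seed) in place of the raw seed, so a stolen seed record alone wouldn't reproduce the codes.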

So, a few shifts have been made if my assumptions are true:
- Software tokens may be more secure if the algorithm is known.
- Protecting the RSA server and logs, and access to those logs has become critical.
- Overall, the system is still somewhat secure, but people don't buy RSA tokens for 'somewhat secure'.
- RSA tokens have become pretty insecure against insider attacks.

If anyone knows something different, please correct me.