Friday, November 11
IAM vendors have only recently begun thinking about unstructured data at all. Some can look across file system permissions and perhaps include rights information in reports along with basic user and group data. I don't think any do a great job of including a view across file systems, SharePoint, SQL Server, and Exchange Public Folders. But regardless of platform, the capability seems to stop at reporting on rights as they exist at some point in time.
The next logical step would be to watch user activity and provide recommendations and reporting on usage alongside permissions. Then you could make better decisions. Think about this: IAM gives department managers the ability to manage security groups. Maybe they know what the group should access. And maybe they have some idea of which users should be in the group. But there's no easy way to see which members of the group have exercised those rights and actually accessed the resources in question, or even whether those resources are still relevant. (Have they been accessed? By whom? How does that affect the concept of 'least privilege'?)
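To make the idea concrete, here's a toy sketch of the cross-referencing I'm describing: compare a group's membership against actual access events and flag rights that are never exercised. The group names, users, log entries, and reference date are all made up for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical inputs: group membership from the IAM system and
# access events from file-system auditing. All names are illustrative.
group_members = {"Finance-RW": {"alice", "bob", "carol"}}
access_log = [
    ("alice", "Finance-RW", datetime(2011, 11, 1)),
    ("alice", "Finance-RW", datetime(2011, 11, 8)),
]

def unused_rights(group, members, log, window_days=90):
    """Flag members who never exercised the group's rights recently."""
    # Fixed reference date so the example is deterministic.
    cutoff = datetime(2011, 11, 11) - timedelta(days=window_days)
    active = {user for user, grp, when in log
              if grp == group and when >= cutoff}
    return members - active

# bob and carol hold rights they never used: candidates for removal.
print(sorted(unused_rights("Finance-RW", group_members["Finance-RW"], access_log)))
```

A real implementation would pull membership from the IAM system and events from platform auditing, but the decision logic at the heart of it really is this simple.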
I'd love to hear your thoughts.
BTW, this isn't purely rhetorical. But, you'll have to be patient if you want more details. ;)
Wednesday, September 14
Levity aside, I think the list is a great idea. It's always good to consolidate disparate information. But the formula for MPV, as Kaskade writes, is based on reach and not knowledge, usefulness of analysis, or trustworthiness. I'd like to dream that I might end up on one of those lists some day.
Kaskade's use of the word 'influencers' brought me right back to Gladwell's book The Tipping Point and made me wonder if this is really a list written for marketers rather than for security decision makers. Even if that is the case, then it's probably a good idea to follow the people on the list as they might identify emerging trends - perhaps by analysis, but as Gladwell points out, perhaps by causation (whether intentional or not).
This being a first attempt, Kaskade has already identified a few omissions, and I'm sure everyone has opinions on others who should be included. For example, Art Coviello of RSA gives the opening keynote address at the biggest conference in the security industry every year and runs the security division of one of the world's biggest data storage and management companies. That's reach. And I can call almost anyone I know in the identity space and they'll know what topic was covered in Dave Kearns' last NetworkWorld newsletter. A niche, perhaps, but certainly reach.
Having said all that, I'm still honored that anyone would consider me in such a list. Being included in any list with that group of individuals is humbling to say the least. It makes me feel like I need to work harder to earn the spot.
What's your take?
Friday, August 19
As a security professional, I'm very interested in the answer. As an industry, we spend a lot of time focused on information privacy and health information is among the most talked about types.
Health organizations spend a fortune protecting personal health information (perhaps primarily in an effort to comply with HIPAA). Johns Hopkins and similar organizations have reported spending $4-5 million in their early HIPAA compliance efforts. Earlier this year, HHS fined one provider in Maryland $4.3M for a privacy violation. CVS paid over $2M in 2009 and Rite Aid $1M in 2010. Walgreens is currently being investigated. Early on, Gartner estimated that the industry would spend nearly $4B per year on HIPAA, and HHS estimated it would cost the industry $18B in the first decade. (I couldn't find current/actual numbers.)
So, protection of consumer health information is a big deal. A lot of time, money, and energy is expended in the process. But do people really care? If someone gave you an app for your phone that enabled you to carry around your complete medical history for easy distribution to doctors and health providers -- and it meant you'd never have to fill out another form in a doctor's office waiting room -- would you use it? My guess is that most people would. If it makes life easier, it will get used regardless of the privacy risk.
We all know that our phones and computers are susceptible to privacy breaches, eavesdropping, and other data leakage, but would that stop you from using an app that improved your health? Made life easier? I'd love to know, so if 2000 of you would please go answer Jennifer's question, I'd appreciate the info.
Friday, July 29
I've used a variety of browsers over the years, from Netscape 2 and IE 3 through today's versions of Chrome and Firefox. Although it was considered uncool by many, I primarily used IE for a number of years. But today, I almost exclusively use Firefox 5. It's fast and intuitive, has good security features and control over privacy, and is extensible via plugins. But one of the killer features for me is Firefox Sync.
I have all my bookmarks and preferences synced across multiple computers and my smartphone. It's extremely convenient and even encouraged me to finally organize my bookmarks, something I hadn't really done in the 15 years I've been online. But there's an aspect of Sync that's incredibly dangerous.
It's dangerous because it makes life so darned easy. It's a fantastic feature from a user perspective. Sync includes browser-stored passwords. So, sign in at home and get automatic logon from work and mobile without needing to remember or re-type passwords. I can't count the number of times I was mobile and couldn't access a site from my phone because I didn't have the password. With Firefox Sync, my passwords are automatically synced across all my devices, saving time and making life easy. My typical blog audience should already know where I'm going with this.
When you organize your sites in bookmarks and auto-save passwords, you make it very easy for anyone who accesses your workspace to get quick access to ALL of your favorite sites. How likely is it that someone could get a hold of your smart phone or laptop? Well, it's not unlikely. Here are some stats. Losing a phone used to mean shelling out a few bucks for a new one. Today, it means someone could get immediate access to every site you use with your own credentials. You've made it way too easy. You even have a folder for your banking sites so they know where to quickly find all your account information.
The above scenario (which makes the user experience seamless and easy) is the security equivalent of leaving cash on your dashboard with the car unlocked, the windows rolled down, while you walk around the mall handing out maps that show how to find your car.
Firefox Sync raises your risk profile. It should only be used in combination with locked-down devices, smart selection of which sites you bookmark, the discipline not to store sensitive passwords, and a master password so everything isn't left wide open. The tech security industry is getting better with each new release, but we're still in its infancy. We need to stay alert.
Friday, July 22
If you don't know Kishan, it's because he hasn't spent much time blogging, tweeting, or hitting the conference circuit. He has spent 50+ hours a week for the past decade hands-on, actually building identity management solutions (and winning the hearts of CIOs and project sponsors). Everybody who has worked with him has positive things to say about his technical skills, integrity, work ethic, and positive attitude. Most recently, he has earned an excellent reputation as one of the industry's leading integrators of Oracle's OIM 11g. In a previous role, he developed the first real-world implementation of Oracle Identity Manager 11g in the higher education vertical (possibly globally). He also developed the industry's first set of cloud connectors for OIM.
FishEye Group has hit the ground running, with its first project already underway, and is in the process of putting partnerships in place and adding a number of additional projects to the calendar.
If you're looking for assistance with OIM 11g, product evaluations, identity management strategy, or other identity-related services, please give Kishan a shout and hear what he has to say. His pragmatic approach and enthusiasm for the technology will no doubt win you over.
Monday, July 18
But that's not what I wanted to write about. I'm more interested in SC Magazine's article headlined Mozilla BrowserID "seriously flawed" and Roger Clarke's Reaction to Mozilla's BrowserID Proposal, which was the subject of the SC Mag article. My first point is simply: go read it. It gives you a lot to think about. My second point, though, is a little more complex.
Clarke makes some interesting and compelling arguments about Internet privacy and individual freedom. I can't say that his logic is incorrect or that his points are invalid, because they're not. But his anger (characterized by phrases like "seriously flawed 'identity management' schemes" and "its design is seriously threatening to individual freedoms") may be a bit misplaced.
I agree with Richard that BrowserID is not THE solution to the Internet's authentication and privacy problem. But that's not the challenge Mozilla has sought to solve. Not every site that requires a logon is a major privacy risk. I have probably 50 or more web site accounts to manage, and I welcome solutions to my credential management problem. I'm a security guy, but I will gladly introduce some level of risk to make life easier when browsing a large number of those sites. We all do to a degree. For example, it's less risky to go to a library anonymously to look something up in a book, but the Internet at home is so much more convenient that we risk being eavesdropped on or introducing malware to our systems every time we use it.
It reminds me of the old argument that two-factor authentication is useless because it's susceptible to MITM attacks. BrowserID won't be a silver bullet for all authentication scenarios and maybe not even for ANY scenarios that require high security or strong assertions about the user, but it could still be a useful way for end-users who want to simplify the logon process. Claiming that BrowserID is seriously flawed because it doesn't address issues outside of its own scope just seems wrong and even somewhat irresponsible. The IT industry's version of media sensationalism maybe?
I don't mean to pick on SC Mag; the title got me to read Richard's article, which is the purpose of a strong title. But I'm pulling for one of these solutions (OpenID, CardSpace, BrowserID) to make it into the mainstream so that my life will be a little easier, and creating hysteria and FUD around them doesn't help with user adoption.
Wednesday, July 13
I had the opportunity to sit in on a webinar provided by LogicTrends and CA. The topic was privileged accounts and compliance. I believe it was LogicTrends' CTO Phil Lentz who described part of the problem as this (paraphrased):
Security Policy doesn't always match operational needs or expectations.
What I believe he meant is that system administrators ignore security policies for tactical reasons. They are almost forced to breach policy in an effort to get their jobs done more efficiently. I don't think that's anything new, but I've traditionally chalked it up to human behavior. Lentz's description led me to think the problem is more systemic.
It wouldn't matter how disciplined the person behind the keyboard is; there is an inherent disconnect between the person's operational duties and the organization's security policies. It's an interesting perspective, and it may indicate that there's hope. By bringing policy and operational procedure into closer alignment, the human-nature problem can be at least muted if not eliminated. Again, not a new concept, but a new angle from which to see it.
Thursday, July 7
I'm sure there's a bunch missing and I can't do it all myself. If you're in the identity management space, please take a look and make sure you're represented. There's a contact link if you'd like to request an update. And thanks to those who have already submitted over the past few years!!
I've found it convenient for my own personal use over the years to have this list all in one place. I've also gotten notes from others saying the same. And if you've been in the identity space for a while, it might be fun just to see where all those early companies ended up.
It Looks Weird.
I plan to improve the look and feel at some point, but right now I'm just trying to get the data right. I'd like to tag companies by capability and provide a more interactive UI but I'm not there yet. Bear with me - as you know, it's tough to find the time.
Tuesday, April 5
1. The perpetrators of the RSA data breach, which may have compromised the security of RSA's premium two-factor authentication solution, got help from RSA employees who opened an email attachment. An Excel spreadsheet containing an Adobe Flash exploit opened the doors to RSA's network.
2. Conde Nast recently paid $8 million to a fake company in response to a single believable email that politely asked them to update the payee information for one of their vendors.
Both of these examples make the clear, simple point that it doesn't really matter how much technology you put between an attacker and your business assets. If an employee opens the door, the attacker can walk right in. We're either going to get extreme in terms of limiting behavioral options (disallow all email attachments?) or we need to do much better at employee training.
Since employees are ultimately only motivated by what is easier, I don't think training will be the silver-bullet answer.
Thursday, March 31
Samsung laptops do not have a keylogger installed at the factory. But it was reported as such because an AV program recognized a pattern used by a popular keylogger. GFI Labs, who provide the AV solution (VIPRE), stepped up, explained exactly what happened, and accepted the blame (not that anyone should blame them; isn't this exactly why we buy AV products?).
If I were GFI Labs, I might be asking the industry why other AV vendors haven't had the same issue - most of them use pattern matching as the key method for finding viruses and other malware. Is having too many patterns a bad thing? With this approach, false positives are a necessary evil - just part of the intended design. There is an alternative, though.
I've been speaking lately with my own AV vendor, eEye, who bucks that trend a bit. eEye doesn't rely so heavily on pattern matching; instead, it uses protocol analysis of what installed programs are actually doing to determine whether there's a threat. For example, it's not a keylogger if no information is being collected or sent. eEye claims that Blink can be installed on a Windows PC with no security patches and protect it fully with several layers of protection.
The approach has two advantages. First, you won't get false positives related to a pattern match that just doesn't quite add up. Second (and more importantly), you get protection against zero-day attacks where there is no known pattern. If you have to wait for your AV vendor to provide a virus definition update, you're in a constant state of being behind the attack trying to catch up.
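To illustrate the difference in logic (this is a toy sketch, not how eEye or any other vendor actually implements it), imagine requiring behavioral evidence before trusting a raw signature match:

```python
# Toy illustration of behavior-corroborated detection. All field names
# are hypothetical; real engines inspect live process activity.
def classify(process: dict) -> str:
    sig_hit = process.get("matches_keylogger_signature", False)
    # Behavioral evidence: actually hooking keys AND sending data out.
    behaving_badly = (process.get("hooks_keyboard", False)
                      and process.get("sends_data_externally", False))
    if sig_hit and behaving_badly:
        return "malware"
    if behaving_badly:
        return "suspicious"        # zero-day: no signature required
    if sig_hit:
        return "false-positive?"   # pattern matched, no bad behavior
    return "clean"

# The Samsung case: a benign file that happens to match a pattern.
print(classify({"matches_keylogger_signature": True}))
```

The pattern match alone triggers only a question, not a verdict, which is exactly how both advantages above fall out: fewer false positives, and detection that doesn't depend on a definition update.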
I was planning to write about eEye's vulnerability scanner and a specific issue I encountered with software on a personal PC, which is why I was speaking with them. So you may see another post on eEye soon, but I'm not trying to make a commercial out of it. I'm mostly just curious whether other AV vendors are looking at this approach. It seems more effective to me and, based on my limited experience, consumes fewer resources than the other security packages I've tried.
Monday, March 28
Remember when I said that end-users (i.e. People) are motivated more by "easier" and "cheaper" than they are by "more secure"? Well I did. And they are. If this MSNBC article doesn't make the point, I don't know what will: Why should I care about digital privacy?
The article discusses how a number of people sitting in a coffee shop were actively hacked and eavesdropped on, were alerted to the hack, and chose to simply ignore the alerts, continue browsing, and even make online purchases.
I was once in the audience while security guru Bruce Schneier was speaking. He made the point that while the security industry is all jazzed up about privacy, all of our efforts may not have much impact because the people we're concerned about ("young people") just don't care. They live their lives online and don't have the same ideas about privacy that we old people have.
There is an ongoing debate about whether texting a nude photo of oneself while underage is a criminal offense worthy of a 'sex offender' label. The simple fact that we need to have that debate is evidence that Schneier is right.
I find more evidence when I ask any facebook user about privacy. It seems to be common knowledge among young and old that facebook users should be concerned about privacy. There are unanswered questions about personal information. [I'll give facebook the benefit of the doubt for this discussion and say that the security tools are in place to protect yourself if you're knowledgeable and cautious. But that's a big IF.] The fact is, most seem to think that facebook is not secure, yet they continue to use it because it's a cheap and easy way to communicate with friends, stay in touch with gossip, get news, play games, etc. For any security mechanism to be truly effective in an environment where security cannot be mandated the way it can in a corporate setting, it needs to be (say it with me) easy and cheap. Easy as in built-in and transparent. Cheap as in free.
BTW, SSL (secure browsing over HTTPS) is sort of like that, until some CA (not mentioning any names, but maybe it sounds like a type of dragon) gets breached and generates bad certificates. Then it's less easy. Also, SSL requires that the end user actually observe the browser to confirm that the connection is secure. And in reality, the observation only tells you that you're connected to a secure page right now; it doesn't tell you where the submission form will take you. So it's really only secure if we trust the sites we visit or are unusually savvy web users.
Wednesday, March 23
Sorry - two points of clarification:
1. Where I say "serial #" throughout my post below, we should keep in mind that the token has a hidden 'seed record' which is actually used in the algorithm, so that's another level of security. The serial # is not enough - you also need the seed # and the ability to match it to a given user's token.
2. I should've mentioned that there's also a feature that prevents brute-force attacks by disabling an account after x number of failed attempts. So if you have a very good educated guess at the PIN, along with the other data, you have a good shot; but if you think you'll brute-force it, that isn't going to fly.
I don't have any inside info, but it certainly sounds like the algorithm for generating the One-Time-Passwords (OTP) may have been accessed. This makes an attack much easier because to some degree it eliminates the "second factor" in the Two-Factor authentication. But not 100%.
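For the curious: RSA's actual SecurID algorithm is proprietary, so as a rough illustration only, here's a generic time-based OTP in the style of the open HOTP/TOTP schemes. It shows why the seed record matters: even with full knowledge of the algorithm, the code is unpredictable without the per-token seed.

```python
import hashlib
import hmac
import struct

def otp(seed: bytes, t: int, step: int = 60, digits: int = 6) -> str:
    """Generic time-based OTP (HOTP/TOTP style). This is NOT RSA's
    proprietary SecurID algorithm; it only illustrates the seed's role."""
    counter = struct.pack(">Q", t // step)            # current time window
    digest = hmac.new(seed, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Same (public) algorithm, same moment in time, different seeds:
# completely different codes.
print(otp(b"seed-A", 1300000000))
print(otp(b"seed-B", 1300000000))
```

If the breach exposed only the algorithm, the seeds still stand between an attacker and valid codes, which is why the question of what was actually taken matters so much.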
If I know the algorithm, to spoof the token functionality, I still need the serial # of the token, matched to the user name, and the PIN. These things aren't impossible, though. You could stop by a person's desk, for example, and scribble down their serial # while they're getting coffee. If they're a co-worker, you probably know their user name and can make some guesses about the PIN.
Most people I know who use RSA tokens use a pretty simple PIN: a date, four digits of a phone number, something like that. So, if you use social engineering to get the serial # and user name, you're down to having to guess the PIN, which is really just a shorter, less secure password. And you're back to one-factor authentication. PINs may also be written down on the desk, scribbled on the back of the token (I've seen it), left in email or browser auto-fill, etc.
For an outsider, this attack is a little more challenging than it is for an insider. You'd need access to the serial numbers used by the company and the ability to match them with user names. Based on the info provided by RSA in their letter to customers, some or all of this may be in the RSA server logs. So protecting those logs has just become critical.
If the token is installed on a smart phone or PC (software token), the token only works from that installed device. So, if the algorithm is public, the software tokens may have just become slightly more secure than the hardware tokens since it would be difficult to spoof the hardware configuration (or even to know the exact hardware) associated to that software token. ...at least, that's how I think it works.
So, a few shifts have been made if my assumptions are true:
- Software tokens may be more secure if the algorithm is known.
- Protecting the RSA server and logs, and access to those logs has become critical.
- Overall, the system is still somewhat secure, but people don't buy RSA tokens for 'somewhat secure'.
- RSA tokens have become pretty insecure against insider attacks.
If anyone knows something different, please correct me.
Friday, February 4
$$$ = motivation
It might be just what corporations need to push them toward adoption -- and that includes providing incentives for customers to move to a claims-based model. I think mobile phone companies are situated perfectly - they can provide the authentication mechanism built into the devices they sell, which makes it potentially easier for users to browse the web (could solve the 'numerous passwords' problem) - remember:
Easier = motivation
...and they have a huge financial motivator because many big companies negotiate mobile plan discounts for employees.
But perhaps budgets can be pooled together by a consortium of companies that are losing money to create a compelling solution that end-users will want to adopt. And in the end, we'll see better security and privacy as a result.
As a software vendor, I often hear from organizations who are looking for the silver bullet. People actually say things like "your software is PCI compliant, right? ...because we need to be PCI compliant and I'm looking for software to get us there". It's not their fault. Apparently, the folks pushing down the requirements, despite their efforts, haven't done a great job at educating the people that need to be educated.
My paper and my responses explain that the idea isn't to find a piece of software or even a business process that will get you compliant for your audit next month and then you forget it until next year. The idea is to create what I've called a "culture of compliance" (a borrowed phrase) through which you remain in compliance continuously. Put controls in place, create a way to test controls, understand access rights, regularly monitor and review permissions, and you'll ultimately be able to respond to any new (related) regulation that comes at you.
Sure, I can map specific reports to specific subsections of a regulation or security framework, but that shouldn't be the goal. Take a look at our recent article on the topic: When compliance is at odds with security - sometimes focusing on the goal of point-in-time compliance can actually negatively affect your security posture. I hope Anton is right that the times may be upon us, because I have to say that I often feel like people listen to what I'm saying but ultimately ignore it and really just want a set of reports labeled with the regulation du jour.
Thursday, February 3
A small utility service that sits on the DC waiting for changes
Picks up events in real-time
Sends event info to a DLL that you compile
The DLL determines what to do with the info based on:
- Who did it
- What groups the person is a member of
- What happened (add/mod/del)
- Which object or object-type was affected (admin accounts, service accounts, etc.)
- Which attributes changed
- In which OU the event occurred
Possible outcomes may be to trigger an Identity Management process or work flow, open a help desk ticket, generate an alert, etc.
The programming would be very similar to building an ILM management agent.
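The real handler would be a compiled DLL, but the dispatch logic might look something like this sketch (in Python for brevity; all rule names, OUs, and outcomes are made up for illustration):

```python
# Toy sketch of the decision logic the compiled DLL would implement.
# Event fields mirror the criteria listed above; names are hypothetical.
ADMIN_OUS = {"OU=Admins,DC=example,DC=com"}
WATCHED_ATTRS = {"memberOf", "userAccountControl"}

def handle_event(event: dict) -> str:
    """Map a directory-change event to an outcome."""
    # Deletions in sensitive OUs get a human in the loop.
    if event["ou"] in ADMIN_OUS and event["action"] == "delete":
        return "open-helpdesk-ticket"
    # Changes to watched attributes kick off an IdM workflow.
    if WATCHED_ATTRS & set(event.get("changed_attrs", [])):
        return "trigger-idm-workflow"
    # Anything done by a privileged actor at least raises an alert.
    if "Domain Admins" in event.get("actor_groups", []):
        return "generate-alert"
    return "log-only"

print(handle_event({
    "actor": "jdoe",
    "actor_groups": ["Helpdesk"],
    "action": "modify",
    "changed_attrs": ["memberOf"],
    "ou": "OU=Staff,DC=example,DC=com",
}))
```

The value of stripping the agent down to a single DLL is exactly this: you own the rule table and the outcomes, and the agent just feeds you events in real time.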
So, IDM friends, would this be useful? If you have a project in mind or just want to play with it, let me know. NetVision has had this agent for years but we add a lot of capability around it. I'm wondering if there's some value for you if we strip it down to a single DLL that you would control. Let me know.