Thursday, December 10
Special Offer for Blog Readers!
Give Access Rights Inspector SSE a free trial on your own server. If you decide to buy, use the promo code "access10" until Dec 31, 2009 to get $300 off the price and pay only $495 to generate an unlimited number of effective rights reports on a single Windows Server. This can save an enormous amount of time during security audits.
Revenue Opportunity for Bloggers
We're looking for affiliates. Post a link from your blog and get 15% for each sale. That's ~$120 at full price. If you're lucky, you'll make that in a year with Google AdWords. Sell a dozen servers and you'll be picking out a brand new flat screen TV (maybe one of those backlit LED displays) ...or maybe making a down payment on a new car? It's easy. And it's a useful product. Give it a try for yourself and let me know if you're interested.
Thursday, December 3
For example, you could use this approach to extend the information available to an application without doing any data synchronization or introducing new data sources. If the application's logon ID is the user's email address, you could query AD based on that email and get info about the user's group memberships, attributes, manager, location, etc. and have that returned to the application as if the data were stored in the local app's database.
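As a rough sketch of that lookup (in Python, with hypothetical attribute names and no particular directory product assumed), the application's logon email becomes an AD search filter, and the directory or virtual directory returns the user's attributes as if they lived in the app's own database:

```python
def escape_ldap_filter(value: str) -> str:
    """Escape special characters per RFC 4515 so user input
    can't alter the filter's structure."""
    replacements = {
        "\\": r"\5c",
        "*": r"\2a",
        "(": r"\28",
        ")": r"\29",
        "\0": r"\00",
    }
    return "".join(replacements.get(ch, ch) for ch in value)

def build_user_lookup_filter(email: str) -> str:
    """Build an AD search filter that finds the user whose 'mail'
    attribute matches the application's logon ID."""
    return f"(&(objectClass=user)(mail={escape_ldap_filter(email)}))"

# The directory would run this filter and hand back attributes such as
# memberOf, manager, and location for the application to consume.
print(build_user_lookup_filter("jdoe@example.com"))
# (&(objectClass=user)(mail=jdoe@example.com))
```

The escaping step matters because the logon ID comes from the application's users; without it, a crafted email address could change the meaning of the filter.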
...another useful approach to keep in your development toolbox.
Tuesday, November 24
I published a whitepaper describing all the details. It describes how the controls work and covers the effect of group memberships, inheritance, deny ACEs, the owner attribute, and more. And of course, it provides some guidance for taking control of all that complexity.
You can register for a copy here:
Thursday, November 12
I knew there was a User Experience problem with SSL in that most people ignore that it's happening and therefore don't notice when it's not happening. I also knew that there are known potential attacks on SSL, but it seems there's a newly discussed renegotiation problem that makes the whole system seem suspect. This posting from RSA does a good job at providing an explanation.
This is a big deal. SSL really IS web security. So many other security solutions rely upon it -- assuming that communication is safe and secure because it's done over SSL. Even if all the major vendors get a fix out tomorrow, we'll probably see this problem around for years to come.
Monday, November 9
Cisco has acknowledged that it will stop adding support for additional devices on its MARS SIEM platform. While the plan is to continue providing updates for already-supported devices, it's difficult to argue that this isn't a strategic move toward completely dropping support for the product (in its current form).
I, of course, wanted to use a title like "The END of SIEM", but it's hard to make that leap given that one of the biggest SIEM players was ranked among Deloitte's 2009 Technology Fast 500 with over $100 Million in revenue for 2008. And ArcSight has shown 32%, 34%, and 25% year over year growth in its last three quarters respectively.
Still, Cisco is thought to be the most widely deployed SIEM with over 4000 installations. For them to make a strategic move to stop adding future platforms means (and read this with your favorite accent) something is rotten in the state of Denmark.
As I speak to organizations about NetVision (and we are clearly NOT a SIEM player), I hear concerns about SIEM tools and log management applications that are big, complex, difficult to implement, expensive, and not user-friendly. I have nothing against SIEM tools or the role they play. In fact, many of our customers integrate our product with SIEMs. ...which is why the topic comes up. But, I've been wondering if the fire-hose approach to data collection is proving to be too much, i.e., too much data and too much complexity given the problem at hand.
I sense that the SIEM approach is troublesome and that SIEM vendors who can't adapt to changing market expectations for more readily available answers will start making announcements like Cisco's indicating that they won't be around forever continuing to support an ever-growing number of devices. There will likely continue to be a market for large scale event data collection into the foreseeable future. I'm not arguing against that. But a segment of the market seems to be defining itself as a group that wants easy answers in lieu of a data flood.
Am I reading too much into it? What do you think?
Thursday, October 22
Isn't that like claiming that firewalls are worthless because they don't prevent viruses from being installed on desktops? Strong authentication (which includes two-factor) was never intended to prevent MITM attacks. That problem was already (theoretically) solved with SSL.
Perhaps Dean was reading Bruce Schneier's thoughts from back in 2005. I get it. Issuing tokens to users is not a panacea. But, there is no cure-all in the security space. We rely on SSL to establish secure links to sites, which should both identify the site as being who it says and prevent snooping. Theoretically, that end-to-end encryption and use of trusted certificate authorities is what would prevent MITM attacks.
But even when using SSL correctly (and assuming there are no flaws in SSL), there is still an authentication challenge that strong authentication techniques such as two-factor rise to meet. Without it, users may share credentials or use weak passwords, exposing numerous other potential attack vectors.
I think Dean's frustration is focused in the wrong direction. Strong authentication techniques are good at what they do and (still) have their place in the security infrastructure. I think the problem he's seeing mainly lies in the user interface of SSL. Like any good security feature should, it does a good job of staying transparent to the end user. But a little too good. So good, in fact, that most users don't even know when it's not there. And that's the problem.
If we could force users to look for and expect the SSL connection and to confirm the domain with which they're connected, phishing and MITM would become immediately unprofitable. I'm surprised browser vendors haven't done that yet (and EV certificates are not the answer). Personally, I'd want to see a white list approach for personal banking and other regular-use sites coupled with a per-use hoop to jump through for occasional other data transfers.
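As a toy illustration of that whitelist idea (names and sites hypothetical, and obviously nothing like a real browser implementation), the check before submitting any form data might look like this:

```python
# Illustrative only: a user-maintained whitelist of regular-use sites.
# Submission is allowed only over HTTPS and only to listed hosts;
# anything else would trigger the "per-use hoop" mentioned above.
from urllib.parse import urlparse

TRUSTED = {"www.mybank.com", "login.example.org"}  # hypothetical hosts

def allow_submission(url: str) -> bool:
    """Allow data submission only over SSL to whitelisted hosts."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in TRUSTED

print(allow_submission("https://www.mybank.com/transfer"))  # True
print(allow_submission("http://www.mybank.com/transfer"))   # False: no SSL
print(allow_submission("https://evil.example.net/phish"))   # False: not listed
```

The point of the sketch is that the decision is binary and mechanical, so it doesn't depend on the user noticing a lock icon or parsing a certificate warning.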
But don't blame strong authentication for SSL's incompetence.
Friday, October 9
The solution isn't new. Verisign has been offering its VeriSign Identity Protection (VIP) authentication services for quite some time. I've had a token that I use with my PayPal account (and my OpenID) for the past couple of years (made in China by ActivIdentity). But adoption of the offering has been less than overwhelming.
We could probably all count on one hand the number of people we know with a non-work-based authentication token. And most of those are likely tokens handed out by banks and other financial companies that are tied to a single account. The VIP solution gives you a token to use across multiple sites. And there are a few other perks as well.
I don't know what they charge to add this strong authentication to your site. But, I expect that it's more competitive than implementing your own solution. And the end-users benefit from a single token that can be used across systems.
RSA hasn't been wildly successful in getting tokens into the hands of consumers. So, partnering with Verisign seems like a good move - leverage an existing solution to sell more product. And Verisign customers benefit from more choice. RSA has a lot of token options and some are impressive. Their manufacturing is done at their headquarters in MA and the quality assurance process is top rate (I've been through the tour).
In addition to overall quality, some provide additional convenience as well such as a token with an integrated smart chip (for access to encrypted laptops and digital signing) or the software tokens for BlackBerry, iPhone, Win Mobile, etc. that don't require an additional piece of hardware. I should note that the release only mentions hardware tokens, but in the consumer market, it would be a bad move to restrict usage to hardware only.
Thursday, September 24
Today, Optimal IdM announced their cloud provisioning solution. Similar to what Identropy is doing with IC2, Optimal IdM's solution leverages existing provisioning solutions and acts as a connector to cloud applications.
This use case of acting as a connector for remote, unknown, complex, or varied systems is a perfect fit for virtual directory technology. MaXware released a similar connector for Salesforce in 2006 while I was still an employee. Perhaps they were ahead of their time? The virtual directory solution can be added to virtually (no pun intended) any environment and provide immediate connections to numerous, complex cloud systems, thus saving cost and effort as compared to developing custom connectors.
Having said all those nice things about the virtual directory approach and once again encouraging IAM integrators to consider virtual directory solutions while whiteboarding on how to meet requirements, I should be fair and point out an alternate viewpoint. If you already have a provisioning solution from the likes of Courion, Novell, Oracle or IBM, and a requirement to provision to cloud applications, you owe it to yourself to take a close look at Identropy's IC2 offering before making any purchase decisions. That's exactly what it's designed to do.
Another interesting note - I spoke to someone from Arcot today (think secure token-less authentication) who informed me that all of their solutions for secure authentication are now available as a service. They already have one of the most widely deployed authentication-as-a-service solutions on the market, so it seems to be a natural migration to offer their other solutions from the cloud as well.
Who recently said there was no more innovation in the IAM space? The latest innovation in this space is in direct response to the market complaints that IAM is too complex. Once simplicity is realized, innovation will no doubt trend elsewhere. I call that a success in meeting customer demand.
Friday, September 18
Thursday, September 10
Who has access to this file?
What does this user account or group have access to?
If so, take a look at this description of NetVision's latest - free reports that answer complex questions. Or to get started right away, go directly to the TryIt! edition product page.
It's nice to have something free to give away that is actually useful.
Two reports provided that every admin should care about are:
Direct User Assignments – report on all instances of permissions being assigned directly to user accounts (instead of via groups).
Explicit Deny Entries – report on all instances of explicitly denied permissions (these can cause headaches when trying to figure out why someone doesn't have expected permissions).
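To illustrate the logic behind those two reports (a simplified sketch only — the real product reads NTFS ACLs through Windows APIs, while this just filters an in-memory list of hypothetical entries):

```python
from dataclasses import dataclass

@dataclass
class Ace:
    trustee: str        # account or group the entry applies to
    trustee_type: str   # "user" or "group"
    access_type: str    # "allow" or "deny"
    rights: str

# Hypothetical ACEs as they might come back from a file system scan.
acl = [
    Ace("DOMAIN\\Finance", "group", "allow", "Modify"),
    Ace("DOMAIN\\jsmith", "user", "allow", "FullControl"),   # direct assignment
    Ace("DOMAIN\\contractors", "group", "deny", "Write"),    # explicit deny
]

# Direct User Assignments: rights granted to accounts instead of groups.
direct = [a for a in acl if a.trustee_type == "user"]

# Explicit Deny Entries: deny ACEs that override allows and cause the
# "why doesn't this person have access?" headaches.
denies = [a for a in acl if a.access_type == "deny"]

print([a.trustee for a in direct])  # ['DOMAIN\\jsmith']
print([a.trustee for a in denies])  # ['DOMAIN\\contractors']
```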
Friday, September 4
And we think we can stop them with an inanimate pile of clothes stuffed with hay!
Of course, there's a lesson to be learned for information security practitioners. Your company's employees and system administrators will learn and adapt. They can see the scarecrow that you've put in place to ensure security. And they figure out how to work around it.
Security company RSA in their Oct. 2008 survey reported that:
"53% [of employees] have felt the need to work around IT security policies in order to get their work done."

Those are well-meaning employees just trying to do their best for the company.
A recent NetworkWorld article titled Inside a data leak audit provides a real-world example. It describes an organization that was seemingly doing everything right with regard to information security. But, a thorough audit revealed 11,000 potential leaks in two weeks. All the scarecrows you could imagine were hanging on posts all across the organization. They weren't enough.
Preventative security doesn't always get the job done. Many organizations would benefit from real-time audit and monitoring solutions. In addition to after-the-fact forensic and audit trail benefits, active monitoring can be a powerful deterrent and even enable real-time remediation.
Tuesday, September 1
Insider Risk Management: A Framework Approach to Internal Security (PDF)
It has some interesting data on the risk posed by insiders. Specifically, they look at the difference between risk from malicious attackers and the risk posed by unintentional breaches or well-intentioned employees (the 'Soft' Insider Threat).
Courion points out one of the most interesting data points:
"CXOs also revealed that the greatest financial impact to their organization was caused by risks related to out-of-date or excessive access rights"I was surprised by that. I intuitively know that soft breaches occur far more often than malicious attacks. But, my intuition also tells me that malicious attacks probably cause far more extensive financial harm. The respondents of this survey tell us that inappropriate permissions lead to greater financial harm than malware, internal fraud, deliberate policy violations, and unauthorized access (among others).
You should look directly at the data. It does vary by country. In the U.S. (where the greatest financial losses were reported by respondents), internal fraud edges out excessive rights, but I'm still surprised to see the financial impact of each is almost equal. And keep closer watch on contractors and temporary employees!
Thursday, August 27
It's titled: USB - Ubiquitous Security Backdoor
Despite the lame title, if you're trying to make sense of the threat posed by flash memory drives, it's worth a look. Of course, if you're already a security guru, you can give this one a pass.
Friday, July 24
Thursday, July 23
Cloud Identity - SaaS Identity and Access Management solution. Provides provisioning, workflow, audit reporting, and SSO. Leverages existing enterprise credentials. [more info]
Conformity - SaaS Identity and Access Management solution. Provides provisioning, workflow, and audit reporting for select cloud applications. Leverages Active Directory (or other on-premise repository) accounts as the source. [more info]
Identropy IC2 - Identity Management solution for SaaS applications. Leverages existing Identity infrastructure and work flow to provision accounts to cloud applications via the IC2 SPML gateway. [more info]
MyOneLogin - SaaS Identity and Access Management solution. Provides SSO and Secure Logon to cloud applications and web sites. Enables Account Management for select SaaS applications. Tracks SaaS application usage across apps from a single location. [more info]
Nordic Edge Opacus - SaaS Identity and Access Management solution. Provides Secure Logon to cloud applications. Synchronizes accounts from on-premise repository to cloud systems. Enables delegated user administration for cloud applications. [more info]
PingConnect - SaaS SSO solution for Salesforce CRM, Google Apps, and 60+ other SaaS applications. Leverages existing enterprise credentials, Google Apps Logon ID, or Salesforce credentials. [more info]
SecurAct - SaaS Identity and Access Management solution. Provides provisioning, work flow, SSO, and audit reporting for both local and cloud applications. Leverages Active Directory accounts as the source. [more info]
Symplified - SaaS Identity and Access Management solution. Provides provisioning, audit reporting, authentication, and SSO across local and cloud apps. Leverages existing enterprise credentials. Can also prevent side-door access. [more info]
NOTE TO VENDORS - please feel free to reach out if my description above is incorrect. I'm happy to make corrections where appropriate or provide additional differentiators. I also encourage comments that help identify how products stand apart from the others by end-users or vendors.
[Updated Aug 04, 2009]
[Updated Jul 29, 2009]
Tuesday, June 30
Being that my employer is a small, nimble, innovative software company, I especially liked this quote from CPS Energy CIO Christopher Barron:
"With software from smaller vendors, it can take 20% to 40% less time to implement, and if it works, it could save you between three and eight times as much. The catch, of course, is that it doesn't always work. But even failing seems to be cheaper than going with the big guys."I've always heard the adage that 'Nobody gets fired for buying IBM', meaning that even if you spend a little more, you're playing it safe by going with a trusted, well-known name. But the only projects I've ever heard becoming a colossal failure involve solutions from big name vendors with multi-million dollar price tags. And the really cool success stories you hear involve someone accomplishing something great with minimal budget.
Don't get me wrong - I know that many large businesses are run on big name solutions from IBM, SAP, Oracle and the like, but I think we need to be clear that the adage is not an axiom. That is, it's not self-evident. In fact, to some, it might even be nonsensical. Why would it make sense to spend 4x the amount of money to decrease your risk of over-expenditure?
What do you think? Does the adage hold up in today's economy? Will it hold up when we recover? Is it simply a question of finding the right solution for the job, or should it be part of a CIO's objective to put cost out in front of the decision?
Wednesday, June 24
Don't do anything online that you absolutely want to keep private.

Case in point:
I was looking through the form submissions to my company's web site. There is consistently some percentage of submissions that are auto-submitted SPAM. Sometimes, it's obvious and sometimes not.
Today, I was researching one submission and googled her name and email. The search brought me to a page that listed a spreadsheet of form submissions to another site - complete with names, email, phone numbers, and comments. Some obvious spam, but others obviously real.
They're showing up because of a technical glitch or security issue on the site. The Google search brought me directly to the site's administrative page with no logon.
What makes this story interesting is that the site is a Las Vegas escort service and some of the form submissions read as follows:
- From a student (@uwec.edu) - "very interested"
- From a student (@wvu.edu) - "I need a price on ____"
- From someone claiming to work at Microsoft - "Hi, I'm planning a trip to Vegas with my fiance but I wanna get away from her for one night. What is the limit to your services and who would you recommend? I need a girl with _____. Thank you for your time." (how polite) ...he may not have put his real company, but another quick search found his email address with a profile telling me that he lives in Seattle(!)
- From a Web Developer in MN - "I am interested in an escort to accompany me to dinner" - (I found his LinkedIn profile because he provided his real company name)
- First, the obvious one - don't trust web sites to keep your information private.
- Second, (to the security practitioners who read this blog) - don't underestimate how willing people are to give up their personal information to even the most suspect organizations.
btw - Who thinks this privacy breach will be reported?
Monday, June 15
Windows File System Permissions – As labelled in the Windows Security dialog with descriptions for both folders and files.
Wednesday, June 10
A new requirement (one of the HITECH Act provisions of the American Recovery and Reinvestment Act (ARRA), signed by President Obama on February 17, 2009) will require business associates of covered entities to comply with the Security Rule safeguard standards beginning February 17, 2010.
from the article:
Covered entities are required to have in place audit controls to monitor activity on their electronic systems that contain or use electronic protected health information. In addition, they have to have a policy in place for regularly monitoring and reviewing of audit records to ensure that activity on those electronic systems is appropriate. Such activity would include, but is not limited to, logons and logoffs, file accesses, updates, edits, and any security incidents. (emphasis added)
Monitoring and review of audit trails must be as close to real time as possible to be useful. There is no benefit in discovering a problem days or weeks after it has occurred. How a covered entity sets its policies and procedures will be based on outcomes of the covered entity’s risk analysis. If a security incident occurs, failure to exercise this audit control standard may be proof in an inquiry that a covered entity had the capability of knowing what was occurring, but failed to exercise timely corrective action.
Interesting. I need to track down the source docs to see what's real and what is interpretation.
Wednesday, May 13
It's fairly easy to see that malicious attacks cause immediate and expansive financial harm. But, the unintentional or at least non-malicious insider breaches, which I'll call the Soft Insider Threat, occur far more often – perhaps hundreds of times every day.
Today, I read a story in NetworkWorld titled Inside a Data Leak Audit that illustrates my story.
The IT Director at a pharmaceutical firm facilitated a data leakage audit for his company. Before the audit, the firm believed they "were in good shape". They "had done internal and external audits" and "extensive penetration testing". They had intrusion detection and prevention solutions, laptop encryption, and employee training. What they found out is that "you can do all that and it's just not enough."
The audit, conducted by Networks Unlimited, revealed gaping holes, including:
- 700 leaks of critical information, such as Social Security numbers, pricing, financial information and other sensitive data in violation of the PCI-DSS standards.
- Over 4,000 incidents that ran counter to HIPAA and Defense Department Information Assurance Certification rules.
- More than 1,000 cases of unencrypted password dissemination, such as to access personal, Web-based e-mail accounts.
- Employees sent ZIP files and attachments of confidential documents in unencrypted emails.
- An employee attached a clinical study report in an unencrypted email to an outside vendor.
- An employee sent sensitive employee compensation data to an outside survey company, including salary, bonuses, sales quota, stock options, granted share price and more.
I call them soft breaches because they're not intended to be harmful and may not ever cause harm or get noticed. But if they happen 10,000 times over the course of two weeks, that's 260,000 security violations each year. And those are real breaches that may violate HIPAA or PCI-DSS, expose employee and customer information, violate business contracts, and otherwise cause potential for harm. It should be pretty apparent that if this happens 260,000 times each year, that's a pretty big attack surface.
As the author and auditor say in the article, don't leave security in the hands of end-users. Automate the important stuff and track activity on a regular basis to ensure that your attack-surface is in-line with your risk tolerance. Don't ignore the soft insider threat just because it gets overlooked. That's the exact reason why you need to address it.
Monday, May 11
"An on-demand delivery model for IT services or applications with the characteristics of multi-tenant hosting, elasticity (variable capacity) and utility based billing."

My version was:

"Shared computing infrastructure over the web that distributes cost across participants and lowers the cost for each."

I actually like Yee's better than mine. I was focused more on the business purpose than actually describing what it is.
In thinking further, I think we should remove applications from the definition. Applications are delivered As a Service or On Demand. But it is infrastructure that is provided 'in the Cloud'. When we talk about Cloud Computing, we're talking about shared infrastructure (hardware, OS, security mechanisms, backup, etc.). I personally wouldn't use cloud terminology to describe what salesforce.com has made famous.
Salesforce isn't sharing infrastructure with other software providers. They're just including the infrastructure as part of the value they provide to customers. Their delivery mechanism internally looks a lot like what cloud computing providers offer, but they're offering it to their own customers.
Cloud Computing is a service for software or solution developers that can reduce cost by leveraging a shared infrastructure that is billed based on use. Those developers then offer their solution As A Service. But, they can also offer their solution As A Service without utilizing a Cloud infrastructure. They can, as Salesforce did, build their own infrastructure.
What do you think? Worthwhile distinction? Clear?
Friday, May 8
If the UI fails, the application fails.

I probably wasn't the first or only person to have ever said that, but I think it rings true today and is especially applicable to information security practices.
Luther is specifically talking about cryptography and uses an analogy of mechanical clocks. If people had to understand how the clock worked in order to read the time, the clock would no doubt have failed to reach widespread adoption.
But, we have no trouble assuming that end users should understand that they need HTTPS and should verify certificate authorities because obviously without proper SSL, the information they pass to their bank is exposed to snooping attacks and they are susceptible to phishing attacks. What?!? That statement contained five terms that most people off the street wouldn't even be able to define -- never mind understand well enough to use the technology properly to safeguard against relevant threats.
Security needs to be built-in. And the User Interface needs to be easy-to-use and simple to understand. Otherwise, as we've seen, the security mechanisms will fail.
Wednesday, April 22
I've also not yet weighed in on Oracle-Sun. In a letter, Oracle's President (Charles Phillips) says they're planning to:

"Engineer and deliver an integrated system—applications to disk—where all the pieces fit and work together so customers do not have to do it themselves. Customers benefit as their systems integration costs go down while system performance, reliability, and security go up."

That makes sense from a business perspective. The key Sun technologies that were clearly interesting to Oracle are hardware, Java, and Solaris. And a hidden dark desire perhaps to mold MySQL as a non-enterprise solution so that there's no competition with Oracle's flagship product line. ...maybe that's just a bonus.
My personal opinion is that Identity Management had little or nothing to do with the purchase. In fact, it's probably considered a headache to the acquisition team. Clearly, it gives Oracle the number 1 spot in terms of IAM market share. And arguably the best suite of IAM products on the market. But, I don't know what that will mean to Oracle in their quest for world domination.
I was part of a very talented IAM team that got absorbed into a multi-billion dollar organization for which IAM was not a priority. And the team quickly disintegrated. I don't think that will happen at Oracle, but the IAM product teams will need to show management a strong revenue number to get the attention they'll need to integrate the Sun and Oracle suites properly.
Deborah Volk at Identigral wrote a nice post on the two product lines. I haven't used either enough to speak intelligently on which product might win the starting position. And Ash Motiwala captured one of my first thoughts. People always chose Sun because they were the big guy. The product wouldn't 'go away'. Well, there goes that theory. To quote Andre Durand from the NetworkWorld article:
"This is yet one more reason companies should consider standards-based, loosely coupled approaches."

Perhaps the most intriguing aspect of this acquisition for the IAM world is the combination of all of those bright engineering minds in one room. The Sun Directory team, the OID team, and the OVD team can join together and help shape the future of directory services while the Oracle Access Manager and OpenSSO teams can do the same for their piece of the puzzle. ...assuming of course that big-company bureaucracy doesn't get in the way.
[UPDATE: link to Felix Gaehtgens' Oracle-Sun product line comparison]
Speaking of innovation, one last thing before I close - NetVision announced a Series B round of funding today. The goal is to enable the innovation that we started with the industry's first managed service for directory and file system audit and monitoring. Be sure to keep your ear to the ground as we make another innovation announcement in the weeks to come.
Wednesday, April 1
"It's often claimed that multi-factor authentication is inherently more secure than single-factor authentication, but if you look at the history of this claim, it actually came from a vendor that wanted to make their multi-factor authentication product sound better than competitors' products."

(I'm not sure if these are his thoughts or if he's saying that was the consensus at a recent X9 meeting.)
Martin goes on to suggest that using two authentication mechanisms of the same factor may be as secure as using two factors and lays out scheme A & B to discuss.
So here are my thoughts:
Wouldn't scheme A be more secure because you can't brute force it? Isn't that the whole point of having the second factor? All passwords can be brute-forced given enough time. Having the second factor removes that threat.
Of course, you could implement a strong password plus a kill switch after 10 bad tries, but that still relies on the user to implement safe password storage. And I generally think it's better to remove any responsibility from the end-user (especially if there's a convenience trade off).
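Some rough arithmetic behind the brute-force point (illustrative numbers only: an assumed 72-character alphabet and an assumed rate of one billion offline guesses per second):

```python
# How long exhaustive guessing takes as password length grows.
# Both numbers below are assumptions for illustration, not measurements.
chars = 72            # typeable characters in the assumed alphabet
rate = 1_000_000_000  # offline guesses per second

for length in (6, 8, 10):
    keyspace = chars ** length
    years = keyspace / rate / (60 * 60 * 24 * 365)
    print(f"{length}-char password: ~{years:,.1f} years to exhaust")

# A one-time token value changes every 30-60 seconds, so even a
# captured value is useless moments later; offline guessing of the
# static password alone no longer gets an attacker in.
```

Short passwords fall almost instantly at that rate, which is why the second factor (rather than ever-longer passwords) is the cleaner way to take brute force off the table.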
Requiring users to carry/remember two username-password combinations for every system doesn't seem practical. Security will fail if users try to subvert it for the sake of convenience. And they will.
Usability needs to be a key consideration. A token/pin combination is a secure and easy-to-use way to beat the threat of brute-force attacks and poor password management. ...as is having a certificate installed on a particular PC and other second factor solutions.
Friday, March 27
Well, in case you haven't seen it, Courion now has a blog. And so far, there's a lot of good content being written there. It appears to be a nice combination of industry analysis, business-value, and technical insight that remains on-topic. ...thought you might enjoy the pointer.
Wednesday, March 18
I haven't actually done this in a few years, so my information may be out of date, but I'm sure someone will speak up if I'm wrong. ...they always do ;) I will assume you know what ADAM is, why you'd use it, where to get it, and how to install & configure it.
A few quick scenarios where pass through authentication is useful:
- You want to put a portion of your Active Directory users into the DMZ for authentication by publicly-facing applications, but you don't want to expose an AD DC in the DMZ. In this scenario, the app can leverage a DMZ'ed ADAM for authentication. ADAM will still need to make a request to a DC, so AD is partially exposed, but in a more controlled way.
- You want to leverage AD credentials for application authentication, but the app wants to store information about users that is not currently in AD and you don't want to extend the schema. You could stand up an ADAM instance, extend its schema however you want, enable passthrough authentication, and point the app to ADAM instead of AD.
- You have an app that is used by people that have an AD account AND people that don't. And your app only accepts a single authentication store.
Here's what you need to know about using ADAM for pass-through authentication to AD:
- The ADAM installation process lets you import the schema for userProxy objects from a file called ms-userproxy.ldf – you'll need to import that schema definition to enable the pass-through functionality. You can also do it after the install if you need to.
- For an account to perform a pass-through authentication (aka a bind redirection) from ADAM, the account must be created as a userProxy object. A standard ADAM user account cannot authenticate through to Active Directory.
- The userProxy object has an attribute called objectSid, which is critical to this functionality. For pass-through authentication to work, the account's objectSid must be populated with the SID of the associated Active Directory user account (this will actually work with any security principal object).
- When a userProxy account attempts to bind to the ADAM instance, ADAM recognizes the account as a proxy and forwards the authentication request to Active Directory.
- Pass-through authentication only works for accounts that live in the forest to which the ADAM server is joined (or in a trusted domain or forest). That's how ADAM knows where to send the request.
- Pass-through authentication only works with simple binds, so the password is passed to ADAM in clear text. You'll want to be aware of that and use SSL as appropriate.
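One gotcha on objectSid: it holds the binary form of the SID, not the familiar "S-1-5-..." string. Normally you'd just copy the binary value straight out of AD's own objectSid attribute, but for illustration, here's a rough Python sketch of the string-to-binary conversion (the function name is mine):

```python
import struct


def sid_to_bytes(sid: str) -> bytes:
    """Convert a SID string like 'S-1-5-21-...-1104' to its binary form,
    which is what the userProxy objectSid attribute stores."""
    parts = sid.split("-")
    if parts[0].upper() != "S":
        raise ValueError("not a SID string: %r" % sid)
    revision = int(parts[1])
    authority = int(parts[2])
    sub_auths = [int(p) for p in parts[3:]]
    # revision byte + sub-authority count byte
    data = struct.pack("<BB", revision, len(sub_auths))
    # 48-bit identifier authority, big-endian
    data += authority.to_bytes(6, "big")
    # each sub-authority is a 32-bit little-endian value
    for sub in sub_auths:
        data += struct.pack("<I", sub)
    return data
```

For example, the well-known Administrators group SID "S-1-5-32-544" packs to 16 bytes: revision 1, two sub-authorities, authority 5, then 32 and 544 little-endian.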
That's pretty much all you need to know to get pass-through authentication working. As I recall, it's that simple.
PLEASE leave a comment if you're here because you want to do this for a reason I haven't mentioned above or if you have additional information. Am I wrong? Did I leave anything out?
Microsoft refers to this as Bind Redirection for ADAM Proxy Objects. So, that's the terminology you'll want to use to find more information.
Thursday, March 12
"By 2011, hosted IAM and IAM as a service will account for 20 per cent of IAM revenue."
I've discussed managed services for Identity Management in previous posts. I think it's a natural progression. Identity and Access Management is an extremely complicated technology-set. Any given IT shop's ability to maintain the right skills to support an IAM environment is probably more costly (in effort and dollars) than outsourcing that function to specialists. And this certainly appears to be the beginning of an Era of Cost, where cost has moved up the list of decision influencers.
I'm honestly a bit surprised and impressed to see Gartner come out on this one. I tend to think of them as a bit more conservative – making predictions that follow a trend that has already begun. Has this trend started to take shape, or is Gartner being a bit aggressive on this one?
Last night, I wasted four hours manually removing a virus that I pretty much knew would come back, but I had to try just to see if I could identify the how-to. (Kudos to Microsoft for XP's restore feature building a restore point without me having to enable it.)
If you've ever purposely gone to a phishing site or intentionally opened an email attachment that you knew was malicious, you might want to give it a read. And next time it goes bad, just remind yourself that you're sort of a hero.
...and good job Kristen pushing Sara to deliver the goods!
Wednesday, March 11
NetVision believes in the value of the SBN and its members. We backed that up by signing on in early 2009 as an advertiser. Go check it out. And Happy Reading!
Tuesday, February 24
I agree with each of his four bullet points. And I would add that when they collectively fail, the #1 reason is that people aren't being honest with each other. Sometimes, consultants aren't honest with clients about lack of expertise or resources. Other times, someone on the client side isn't being honest with the consultant because of some defensiveness (they don't want to admit inability to get something done, or they're playing CYA).
The reality is that we're all human. Clients shouldn't expect consultants to be superheroes. And if both sides set realistic expectations and allow for faults, imperfections, and mistakes, it's much easier to achieve an honest dialog toward success. Consultants need to avoid both (1) the arrogant assumption that client personnel are less capable and (2) the assumption that client personnel should know everything they do. And clients need to be forgiving of human, imperfect consultants who can't possibly know everything about everything.
There is a very human side to project management. It's not just charts and methodology. It's about making the problems, roadblocks, and challenges expected and OK. ...instead of trying to cover them up. So, don't just have regular status meetings; demand open and honest dialog and create an environment where it's OK to make mistakes. It's all part of the process.
Friday, February 20
If you read this blog, you probably know that nothing is 100% safe. And you probably distrust this type of offering. But, Verisign knows encryption as well as anyone. Verisign spun off from RSA (then RSA Data Security) in 1995 with some of RSA's public- and private-key cryptography technologies. They're really good at authentication and encryption which are exactly the two specialties I expect from an online storage vendor.
They're giving you 2GB of storage space free - it requires two-factor authentication to get in and encrypts data on the back end. And it's an easy-to-use UI with no software install. It's probably a better option than backing up my docs on a USB key (subject to damage and loss) or using some other non-security-focused vendor.
I also like the business model. We all wonder how OpenID providers will make a profit. Verisign seems to be ahead of the pack in providing value-add to users. You get more than just an OpenID credential. You get strong authentication, secure storage, and a personal identity page (probably the least interesting, but still somewhat fun and on the right track).
So, they can sell 100 million tokens to customers who get real value above and beyond reducing the number of credentials they need to remember. And of course, Verisign can license this technology to banks, governments, or anyone else who wants to resell online safety deposit boxes along with secure two-factor authentication solutions under their own brand. Paypal already re-brands the token to protect their customer accounts.
I could easily imagine brick and mortar banks handing out tokens with every new on-line bill pay account and/or offering a virtual safety deposit box to every physical box customer. It's value for the customer and a business model that makes sense. I'd even pay for a new token every few years just to maintain a secure place to archive my important files.
I knew there was a reason I never setup that Amazon S3/JungleDisk account.
Tuesday, February 17
- I continue to find and add new members to my blogroll. My criteria are that they consistently write content relevant to my audience, generally maintain a positive attitude (no bashing), and have something worthwhile to say.
- I didn't recently shorten my blogroll, but to clean things up, I now only show the 10 most recent postings. You can click "more" to see the entire list.
- Members of my blogroll are also searchable via the SEARCH box under the My Content area in the upper right. For example, try searching for "virtual directory cache". The first results you'll see are a collection of my related content from blog, twitter, flickr, etc. Then select the Network tab and you'll get results from everyone in my network (my blogroll).
- You can also search across the entire list of the Security Bloggers Network by using the SBN badge on the right. The SBN boasts the brightest minds in the industry. It's not Identity focused, but covers all aspects of information security.
- I'm no longer doing any advertising on this site. I tried a few things in the past, but found it intrusive and not worthwhile. I may choose to use a small space for highly relevant ads in the future, but I will hand-select something that will be relevant to my audience (no adwords or auto-generated-content ads).
- I recently started a NetVision blog. This site will be the home for NetVision-specific posts. It will take some of that content away from here, but many of you might see that as a good thing. I'll provide pointers when I think it makes sense.
Thanks for reading and please let me know if you have suggestions for me.
Friday, February 13
Ash, my experience says that Mark is correct. I believe the top vendors can all brag about similar throughput. But, my understanding is that's only what the VD puts on top of the process. There's still the back end data lookups, etc. To Mark's later point, that may not be a big deal either if those sources perform well.
Let's use a telecom example:
Consider a scenario where the VD serves an attribute that is a composite of multiple attributes from various sources (a mixed ODBC and LDAP call) or across numerous sources (customer databases from companies that merged, or from partners). If that attribute is needed to make a decision (does the subscriber get this feature?) in real time (the time between hitting "send" and hearing a ring) for millions of requests each second, effective use of cache can help – even though throughput is already relatively quick. In many (perhaps most) enterprise identity infrastructure uses, cache may not be of enormous value – or at least it's not the most compelling reason to use a Virtual Directory.
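To make the caching benefit concrete, here's a toy Python sketch. The two lookups are stand-ins I invented for the ODBC and LDAP back ends, not real VD APIs; the point is that the composite decision is computed once per subscriber, and repeat requests never touch the back ends:

```python
from functools import lru_cache

# Counters so we can see how often each (simulated) back end gets hit
CALLS = {"ldap": 0, "odbc": 0}


def ldap_lookup(subscriber_id: str) -> bool:
    """Stand-in for an LDAP call: does the subscriber exist in the directory?"""
    CALLS["ldap"] += 1
    return True


def odbc_lookup(subscriber_id: str) -> bool:
    """Stand-in for an ODBC call: is the feature enabled in the billing DB?
    (Toy rule: IDs ending in '7' have the feature.)"""
    CALLS["odbc"] += 1
    return subscriber_id.endswith("7")


@lru_cache(maxsize=100_000)
def has_feature(subscriber_id: str) -> bool:
    # The "composite attribute": a single answer derived from two sources.
    # Because the result is cached, repeat requests skip both lookups.
    return ldap_lookup(subscriber_id) and odbc_lookup(subscriber_id)
```

Ask about the same subscriber a million times and the back ends are queried exactly once - which is where the per-call milliseconds add up in the telecom scenario above.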
I can tell you that customers ask for it whether they need it or not. Probably because they have performance or availability concerns. But, from what I've seen the performance concerns are usually unfounded (unless the back end systems have serious problems). And VD cache isn't a great way to provide redundancy because it's implemented at an attribute level and cached based on use. If the idea is to put the entire data set somewhere else, you could argue that it'd be better to just have another directory instance there and do real-time synch (replication).
My opinion is that it's a nice feature to have in the tool bag when needed, but it's not always needed.
Tuesday, February 10
Number one on the list? Excessive Access Rights. Will I be accused of FUD for pointing out that this is a problem? View the presentation for yourself to see how numbers 1, 3, 4, 6, 7, & 8 are tightly related and even solved with the same swoosh of your magic wand (or samurai sword, depending on what type of geek you are).
Tuesday, February 3
The panelists (Amrit Williams, Martin McKeay, and Mike Murray) covered a number of aspects of ESM/SIEM solutions. My one-line summary conclusion of the discussion is that:
SIEMs are not able to effectively correlate information and provide actionable intelligence.
A few of the supporting statements:
The consensus seemed to be that vendors do a good job of gathering and storing logs to meet compliance requirements that mandate storage of those logs. What customers really need and want from these vendors, however, is actionable intelligence.
Murray: They lack the ability to "take data and pull information out of it"
Williams: The problem "can't be solved in a centralized way." The only way SIEMs would meet their goal is via "cooperation, communication and cognizance distributed out so the agents are essentially communicating with each other and responding to events that are being provided to each other." And: "I've talked to customers that are 18 months in and still can't get it properly deployed"
Murray: "there are vendors out there that you still have to manually setup every agent... the cost is staggering"
McKeay: "when I think SIEM, I think glorified log management"
Williams: "rarely are these things being used to detect and respond to incidents in real time... the market driver [...] is compliance... it is unfortunate"
McKeay: "it comes down to being able to understand your own environment... it's the definition of the problem that we don't have yet"
Williams concisely defined the goal of information security:
"to limit the possibility of an incident from occurring... and when it does occur, to limit its impact (by identifying it quickly and responding)"
He continued "...what the ultimate goal of an intelligence system would be is that it's able to detect what are seemingly innocuous events and provide some actionable level of intelligence that shows that that's actually an incident occurring and you can respond to it and limit its impact on the environment - that's what they'd like to be, but they're not that"
Murray added that customers want the solution to "just tell me the five things I need to do - that's what SIEMs should do." As an industry, we're "really good at generating reams of data, but we're not very good at handling information... turning it into 'here's the 5 things'..." SIEM tools are great if "you have defined the problem that you're trying to solve and you know what the information is that you're trying to manage and you can setup a way to manage that."
It was a really interesting discussion. I've enjoyed each of the video discussions in this series so far which have also covered DLP and Firewalls/IPS.
Next, I'll tell you what NetVision is doing about the problem. We're not a SIEM vendor, but we beat the SIEMs to the finish line of actionable intelligence. Actionable Intelligence has been our internal mantra for the past year or so and it is the motivator behind our latest solution to market (as well as a few that are still on the road map).
Tuesday, January 27
I like his basic definition:
"If it does what it’s supposed to, to the degree it’s supposed to"
This highlights the need for a thorough analysis of what a control is supposed to do – and how well it's supposed to work. ...which I think sometimes gets missed among all the vendor sales and marketing materials that are designed to talk about the big picture (compliance, etc.) rather than actual functionality.
Wednesday, January 14
- Be diligent about monitoring – catching this early saved close to $4 Million
- De-Provision (it's unclear whether the employee still had an account)
- Include hosted and Internet systems in your de-provisioning process
- Do security audits to find and fill holes
Tuesday, January 6
The Washington Post reported today that Data Breaches were up 50% in 2008. There are probably lots of contributing factors to the increase in stats:
- As the article points out, an increase in participation and sophistication of organized crime with regard to electronic crimes. I've heard this in multiple places.
- Stricter adherence to regulations that require notification of breaches (as pointed out by Shannon McNaught on Twitter -- where I stumbled across the article)
- Continued lack of deterrents for Crimes of Opportunity. Organizations have been slow to get serious about monitoring admin activity.
- An increasing reliance on electronic forms of data - people and companies have become more trusting of and more reliant on electronic media. This makes the data more valuable and therefore a bigger target.
- Improved tools and sophistication that enables theft. A 16 GB USB key is an extremely effective way to quickly transfer large amounts of data without being detected. Improved technology and lower cost has introduced new and stronger threats.
The article also states that "The largest single cause of data breaches came from human error," once again affirming my proposal that by far most breaches are not malicious. I recently heard a genuine real-world story in which an admin made an error on a Windows drag-and-drop (as we all sometimes do) and an entire factory was brought to a standstill -- an OU was moved in AD.
It also points out that statistics "mask the extent of the problem" because many organizations fail to report data breaches. As I said before:
Nobody calls a forensics team when an admin opens up an HR doc containing a co-worker's salary. Or when an admin creates a new account and grants full system rights in order to get a new application up and running.
We all know the implications. If you've got sensitive data, understand your risk, know what your threats are, and be proactive before you become one of the stats.