Thursday, March 30

Common Virtual Directory Scenarios

The discussion regarding possible uses for Virtual Directory is ongoing. The following are eight easy-to-understand scenarios for Virtual Directory, in no particular order. This is by no means an exhaustive list, but I think it covers the simplest scenarios. I look forward to questions or comments.

  1. Protocol Translation - Provide access to relational and other non-standardized data over standard LDAP and Web Services protocols without altering the data.

  2. Web Service Enablement - Respond to identity data requests made via DSML, SPML or any other service-oriented data format (standards-based or custom).

  3. Multi-Repository Search - Enable a single search over standard protocols to return a single clean result-set containing identity data that resides in multiple repositories in multiple formats.

  4. Joined Identity View - Enable a search that returns a view of single identities composed of data from multiple repositories. e.g.) A single user record is presented with the name and phone number from the HR system and the email address from Active Directory. (A sketch of such a search appears after this list.)

  5. Permission-Based Results - Enable a customized view into a single data universe based on which application or which user is performing the search. e.g.) Employees inside the corporate firewall see a full view of fellow employees, while customers accessing an external-facing application see a reduced set of attributes and a phone number formatted as the toll-free number plus extension.

  6. Dynamic DIT - Build an on-the-fly Directory Information Tree based on identity data attributes. e.g.) The application calls for LDAP views based on job title so the virtual directory dynamically presents an OU for each job title in the database and presents employees within the appropriate OU based on their job title.

  7. Authentication - Enable pass-through authentication from a single point of entry into multiple identity data stores. e.g.) Authentication requests are directed to a single point. The Virtual Directory authenticates non-employees against a back-end Sun Directory and employees against Active Directory.

  8. Real-Time Data Access - Provide real-time access into back-end systems. Because requests are passed to the originating data source, the search results can be as real-time as required.
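
To make these scenarios a little more concrete, here is a minimal sketch of how an application might query a virtual directory using plain Java and JNDI. The host name, credentials, base DN and attribute names are illustrative only and aren't taken from any particular product; the point is simply that the client issues an ordinary LDAP search and the virtual directory assembles the joined result from the back-end stores.

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class VirtualDirectorySearch {
        public static void main(String[] args) throws Exception {
            // Connection details are illustrative -- to the client, the virtual
            // directory looks like any other LDAP server.
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://vds.example.com:389");
            env.put(Context.SECURITY_PRINCIPAL, "cn=app-account,ou=services,dc=example,dc=com");
            env.put(Context.SECURITY_CREDENTIALS, "secret");
            InitialDirContext ctx = new InitialDirContext(env);

            // One search returns a joined view; name and phone might come from HR,
            // mail from Active Directory (scenarios 3 and 4 above).
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            controls.setReturningAttributes(new String[] {"cn", "telephoneNumber", "mail"});

            NamingEnumeration<SearchResult> results =
                ctx.search("ou=people,dc=example,dc=com", "(sn=Smith)", controls);
            while (results.hasMore()) {
                SearchResult entry = results.next();
                System.out.println(entry.getNameInNamespace() + " " + entry.getAttributes());
            }
            ctx.close();
        }
    }

The same request could just as easily come from an LDAP browser, a DSML/SPML gateway or any other standards-based client.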

Summary

Virtual Directory technologies eliminate boundaries. Hassles related to LDAP object types, attribute definitions and other schema-related issues are eliminated by virtualizing the view into the back-end identity stores. You're no longer limited by the existing data format or database vendor. There's no requirement to migrate the data from a relational database into an LDAP directory in order to make the data LDAP- or Web Service-accessible.

Thursday, March 23

Showdown: MIIS vs. DSE

Prior to joining MaXware, I spent a significant amount of time working with Microsoft Identity Integration Server (MIIS). Since joining, I've had a lot of questions from old friends about how the new software stacks up (and vice-versa from new friends). So, I put together a summary of what I've found thus far. I'll discuss MIIS and MaXware Data Synchronization Engine (DSE). Thanks to its very descriptive product name, you can probably guess what DSE does. MaXware also offers the MaXware Identity Center (MIC) for advanced user lifecycle management (workflow, etc.). But for now, I'll stick to data synchronization.

INFRASTRUCTURE

I think the most obvious difference between DSE and MIIS is infrastructure. MIIS requires Windows Server 2003 and SQL Server. For practical purposes, it also requires Visual Studio. DSE, on the other hand, can run on any platform supporting Java Runtime 1.4 or later. DSE also doesn't require a central database, but can leverage Oracle, SQL Server, MySQL, Access or any other ODBC-, JDBC- or OLEDB-compliant database. With DSE, you can build very simple jobs that read and write directly from one data source to another without setting up a central database. If you want to perform data joins, you can set up a collect database as part of the solution (similar to an MIIS metaverse). So the DSE footprint is very small. And DSE runs very efficiently using Access as a central store, even with a few hundred thousand entries. If performance is critical, support for the bigger databases is available.

DELTAS

Another advantage of DSE is that it doesn't rely on deltas being managed at the connected data store. So, for example, you don't need to enable the Sun directory's retro changelog in order to process only changed entries. DSE handles this by storing its own delta table in the central database with a hash of each record, and comparing the hash before performing an action on the record. If you'd prefer to rely on the changelog, that's OK too, and you may be able to improve performance that way. However, you might lose some level of audit capability, since DSE won't know which attributes have changed.
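
For readers who haven't run into hash-based change detection before, here is a rough sketch of the general idea in Java. To be clear, this is not DSE's actual implementation or schema; the delta_table name and its columns are invented for the illustration.

    import java.security.MessageDigest;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class DeltaCheck {
        // Hash the concatenated attribute values of one record.
        static String hashRecord(String record) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            byte[] digest = md.digest(record.getBytes("UTF-8"));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b & 0xff));
            }
            return hex.toString();
        }

        // Returns true if the record is new or changed since the last run.
        static boolean hasChanged(Connection db, String key, String record) throws Exception {
            String newHash = hashRecord(record);
            PreparedStatement select = db.prepareStatement(
                "SELECT record_hash FROM delta_table WHERE record_key = ?");
            select.setString(1, key);
            ResultSet rs = select.executeQuery();
            if (rs.next() && newHash.equals(rs.getString(1))) {
                return false; // unchanged -- skip the write to the target
            }
            PreparedStatement delete = db.prepareStatement(
                "DELETE FROM delta_table WHERE record_key = ?");
            delete.setString(1, key);
            delete.executeUpdate();
            PreparedStatement insert = db.prepareStatement(
                "INSERT INTO delta_table (record_key, record_hash) VALUES (?, ?)");
            insert.setString(1, key);
            insert.setString(2, newHash);
            insert.executeUpdate();
            return true; // new or changed -- process the record
        }
    }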

CONFIGURATION

Another difference is in the storage of server configuration. MIIS stores its server configuration in the SQL Server database. DSE stores its configuration in a single XML file. MIIS, though, does a pretty good job of providing XML-based server configuration export and import. It seems a little more complicated than DSE, but not by much.

SERVER FLEXIBILITY

MIIS uses a single metaverse for each MIIS instance. So a set of Management Agents writes to and from the metaverse to perform actions on the user objects. This is fine until you want to manage multiple sets of users. If, for example, you want to handle both test data and production data, things become difficult because of join rules: it's easy to join incorrect objects by running an MA against a test directory instance with different data. With DSE, jobs are organized into groups. Each group is a set of Passes (a pass is similar to a management agent and a corresponding run profile), and you would set up each group with its own collect database (or metaverse). This allows you to do more with a single instance of DSE and provides extensive flexibility.

DIRECTION

DSE passes are one-way; MIIS management agents are bi-directional. So, if you want to read from and write to a given directory, you would create a FROM pass, probably run a few other FROM passes and perform some actions on the data, then run a TO pass back to the directory. In the MIIS world, it's common to run a single MA multiple times to capture data changes that happened after the initial import run. While I prefer the way DSE handles this, I don't see it as a big advantage for either solution.

SCRIPTING

On the Windows platform, DSE supports VBScript, JScript and PerlScript. On the Java platform, DSE supports JavaScript. Obviously, VBScript can call a DLL written in VB or C#, and JavaScript can call compiled Java code. MIIS supports C# and VB.NET. Both provide a set of native functions to interact with objects as you're moving them around. Openness is generally a good thing -- especially if you like Java, but VB.NET and C# can probably handle any type of functionality you're looking to accomplish. I think DSE has the advantage here, but if you will be using C#, MIIS is more directly integrated, which can be considered a plus.

AUDIT DATA

Both solutions provide logging, but DSE has the advantage of local delta tables for audit and reporting. The separation of TO and FROM passes also makes it easy to include a pass that writes to an audit table before the old and/or new attribute values are lost. While XML data mining is possible with MIIS, I would much rather work with DSE if advanced audit reporting is a requirement -- especially at the attribute level. I'm not really sure attribute-level auditing is even possible with MIIS without some heavy scripting built into each attribute flow rule.
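
To illustrate what an attribute-level audit write might look like, here is a hedged sketch in Java and JDBC. The audit_log table and its columns are invented for the example; in practice, an audit pass would be built with the product's own configuration rather than hand-written code.

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class AttributeAudit {
        // Record one attribute change before the old value is overwritten.
        static void auditChange(Connection db, String key, String attribute,
                                String oldValue, String newValue) throws Exception {
            PreparedStatement insert = db.prepareStatement(
                "INSERT INTO audit_log (record_key, attribute_name, old_value, new_value, changed_at) "
                + "VALUES (?, ?, ?, ?, CURRENT_TIMESTAMP)");
            insert.setString(1, key);
            insert.setString(2, attribute);
            insert.setString(3, oldValue);
            insert.setString(4, newValue);
            insert.executeUpdate();
        }
    }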

SQL FLEXIBILITY

DSE FROM and TO passes have a SOURCE tab and a DESTINATION tab. When importing data from a SQL database, the SOURCE tab allows you to create a custom query that is as advanced as you would like. In addition, if you are creating a TO pass to any type of data source, you can write an advanced query against the collect database to select whatever subset of users you can imagine. MIIS is limited to importing from only a single table (or view), and management agent operations run against the entire connector space for that agent (or the deltas). MIIS lacks the ability to create custom queries that define a subset of users to which a set of actions would be applied.
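
To give a feel for the kind of custom query I mean, here is a small sketch that selects a specific subset of users from a collect database -- the kind of subset a TO pass might then act on. The JDBC URL, tables and columns are purely illustrative.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SubsetQuery {
        public static void main(String[] args) throws Exception {
            // The connection string and schema are made up for the example.
            Connection db = DriverManager.getConnection(
                "jdbc:mysql://localhost/collect", "sync", "secret");

            // Only active contractors in the Chicago office.
            String sql = "SELECT p.employee_id, p.given_name, p.sn, p.mail "
                       + "FROM person p JOIN contract c ON p.employee_id = c.employee_id "
                       + "WHERE c.status = 'ACTIVE' AND p.office = 'Chicago'";

            Statement stmt = db.createStatement();
            ResultSet rs = stmt.executeQuery(sql);
            while (rs.next()) {
                System.out.println(rs.getString("employee_id") + " " + rs.getString("mail"));
            }
            db.close();
        }
    }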

DATA TYPES

MIIS is obviously an identity data tool. You can modify the metaverse schema to accommodate whatever object types you need, but it's designed to manage identity data. DSE is designed to handle any kind of data. There are no pre-configured data structures; DSE does schema discovery against any kind of data source and can easily be used to synchronize any type of data. For example, if you want to synchronize customer order data from one type of database to another, DSE can handle that as easily as it manages customer identity data.

COST

I think the two products are similar in price, but DSE allows the customer to leverage existing server and database infrastructure rather than requiring new licenses for Windows Server and SQL Server.

CONCLUSION

I realize that most of these points favor DSE over MIIS but that wasn't my predetermined intention. These are what came to mind when thinking about the differences between the two products. If you are a C# developer who lives in Visual Studio and already owns an open SQL Server license, you may prefer MIIS. Outside of that scenario, I think there are a number of compelling reasons to consider DSE. I'd be interested, though, in hearing opposing viewpoints -- there may be a perspective that I haven't considered. My hope is that this summary paints a picture of the basic differences between the two products and allows the reader to think about how these differences may or may not affect a given environment.

Wednesday, March 22

Identity Management Project Continuum

I recently had a discussion with some sales folks who were interested in the Identity Management project lifecycle. The question came from a product sales perspective, as in, "How do we know where a client stands in the big picture of IdM?" ...or, more pointedly, "How do we know which product to pitch to a given company based on how far along they are?" I laid out what I like to call the Identity Management project continuum.

Implementing IdM is not a single project. Nor is it even a few stand-alone projects. I call it a continuum. The folks at TNT recently posted a blog entry describing IdM as a lifestyle. I think that's a great way to think about it. I was, though, a little annoyed by their claims regarding software vendors. They suggested that because we sell software, we don't understand the customer perspective. I think they're wrong about that -- at least about some of us. In an ideal world, businesses looking to deploy IdM would have someone competent driving the boat -- maybe an employee, maybe a consultant, but probably not a software vendor. We provide tactical tools to get the job done, but that doesn't mean we don't get the big picture. We just typically wouldn't want ownership of the big picture. That's not our focus.

To get back to the topic, the continuum is not black and white. It necessarily varies for every business based on its data, infrastructure, processes and business needs. For any given business, the phases will occur in different orders, their priorities will vary and some phases may not be required at all.

Below is a sample outline of what I described as the continuum. It's meant to be a general guideline and a starting point for discussion. It's certainly not meant to be a one-size-fits-all project plan.

Vision and Roadmap - This is important. You should identify and clearly document the goals, business drivers and overall approach. List the general timeframes and expectations.

Data Cleansing and Reconciliation - Most organizations have multiple data sources that are stored in different formats with different technologies. Step one is usually identifying the data sources, cleaning the data as-needed and creating attributes that can be used to join records together.

Basic Account Provisioning - The first provisioning step may be as simple as automated account creation, but could also include single-step workflow or automation of group/role memberships.

Basic Password Management - Management of passwords is often a key driver for IdM projects due to the organizational cost savings.

Basic Auditing - This step should involve initiating the collection of audit data and a few basic reports. Advanced reporting based on captured data can be implemented down the road.

Build/Strengthen Centralized App Authentication - This can mean implementing SSO, consolidating authentication mechanisms, reducing the number of authentication stores or otherwise improving the application authentication infrastructure.

Advanced Provisioning - Build upon the basic provisioning infrastructure with advanced workflow, additional business rules, improved deprovisioning functionality and inclusion of additional data sources and/or applications.

Internal Federation - With an established infrastructure for authentication and entitlement, federation may be the next step. Here you adopt a standard and think about how you want to pass authentication information across security boundaries.

External Federation - After the basic federation infrastructure is in place, you may be ready to engage in cross-organizational federation with customers, service providers and partners.

One other thing...

While I'm writing about projects, I'd like to give a nod to a great series of posts by Mark Dixon about IdM project risks. While I don't think the ideas are completely original (I'm sure Mark would agree), they are indeed brilliant, and Mark organizes and explains the information very well. If you are embarking on an IdM journey, these are a must-read:

Seven Identity Mgt Implementation Risks, Mark Dixon (1/25/06)
Identity Risks - Poor Pre-Project Preparation, Mark Dixon (1/31/06)
Identity Risks - Poor Requirements Definition, Mark Dixon (2/04/06)
Identity Risks - Large Initial Scope, Mark Dixon (3/14/06)
Identity Risks - Inexperienced Resources, Mark Dixon (4/14/06)
Identity Risks - Poor Project Methodology, Mark Dixon (4/24/06)
Identity Risks - Scope Creep, Mark Dixon (7/26/06)
Identity Risks - Not Using Available Support, Mark Dixon (7/27/06)

I'm really glad Mark is writing about these. If he hadn't, I might have felt the need to try it myself. I probably wouldn't have done such a nice job and definitely wouldn't have had the audience reach. ...I look forward to reading more from Mark on the topic.

Tuesday, March 21

MaXware HQ: Trip Report

On Saturday, I returned from Norway just in time for my family's St. Pat's celebration. It was a very impressive trip. ...and the party wasn't bad either. The folks in Trondheim are certainly on the ball. The first few days of training were focused on our synch and provisioning products. On day four, I learned how to implement all of the virtual directory scenarios I've been talking about in this blog. Day five was all about federation and federated auto-provisioning. Very cool stuff. These guys are brilliant!

I heard a story that the concept of Virtual Directory came out of a discussion between Kim Cameron (with ZoomIt at the time) and our CTO. Something like, "Wouldn't it be nice if we didn't need a persistent data store between the originating data stores?" Our CTO went back to the office and wrote what I believe to be the first Virtual Directory (then called an LDAP proxy). That's a nice legacy.

Some of the training discussion was review, but I also learned a number of advanced techniques. For example, our data synch tool doesn't require the data source to keep track of delta changes because we can store the deltas in a central database, which is vital for auditing down to the attribute level. We can also roll back account provisioning if creation fails in one of the downstream apps. And in one day, I installed MaXware Virtual Directory, set up custom directory views based on logon, created a joined account scenario, searched a SQL table via LDAP and restructured the virtual LDAP DIT based on the querying app. I was amazed at how simple this stuff was to install and configure. (Sorry if I sound like an infomercial, but I am very quickly learning to be product-biased.)

It's a weird feeling to come from a product-agnostic environment (where I always preached product independence) and now find myself part of a product company. When I joined the company, I knew MaXware by reputation but didn't have much hands-on experience. So I'm glad to find that when I look under the hood, I'm seeing some very nice technology -- feature-rich, easy to use and a solid, mature code base.

I was also impressed with the Norwegian people. Not that I was surprised, but everywhere I went people were helpful and friendly. And what a beautiful country! I really couldn't have asked for more.

Wednesday, March 8

One more post on Virtual vs. Meta

So, I already wrote that the question of Virtual vs. Meta is not the right question -- they are complementary solutions. One more thought on identifying where each would fit.

Every organization embarking on an IdM journey needs to begin by identifying data stores and collecting, cleansing, transforming & reconciling data. At this stage, traditional synch tools (metadirectory) are probably the right tool for the job.

If you already have a reasonably good set of data and are looking to provide additional or customized views into that data (via LDAP, SOAP, etc.), then Virtual Directory is probably the answer.

Of course, this is a very generalized view, but I think it's a good starting point. One of the things that makes this question confusing is the functional overlap between the two solutions -- and there is plenty. But the point is to find the best-fit solution for a given set of business challenges.