Tuesday, November 30

Introducing OCI IAM Identity Domains

A little over a year ago, I switched roles at Oracle and joined the Oracle Cloud Infrastructure (OCI) Product Management team working on Identity and Access Management (IAM) services. It's been an incredibly interesting (and challenging) year leading up to our release of OCI IAM identity domains.

We merged an enterprise-class Identity-as-a-Service (IDaaS) solution with our OCI-native IAM service to create a cloud platform IAM service unlike any other. We encountered numerous challenges along the way that would have been far easier to solve had we allowed for customer interruption. But we had a key goal: cause no interruptions or changes in functionality for our thousands of existing IDaaS customers. It's been immensely impressive to watch the development organization attack and conquer those challenges.

Now, with a few clicks from the OCI admin console, customers can create self-contained IDaaS instances to accommodate a variety of IAM use-cases. And this is just the beginning. The new, upgraded OCI IAM service serves as the foundation for what's to come. And I've never been more optimistic about Oracle's future in the IAM space.

Here's a short excerpt from our blog post Introducing OCI IAM Identity Domains:

"Over the past five years, Oracle Identity Cloud Service (IDCS) has grown to support thousands of customers and currently manages hundreds of millions of identities. Current IDCS customers enjoy a broad set of Identity and Access Management (IAM) features for authentication (federated, social, delegated, adaptive, multi-factor authentication (MFA)), access management, manual or automated identity lifecycle and entitlement management, and single sign-on (SSO) (federated, gateways, proxies, password vaulting).

In addition to serving IAM use cases for workforce and consumer access scenarios, IDCS has frequently been leveraged to enhance IAM capabilities for Oracle Cloud Infrastructure (OCI) workloads. The OCI Identity and Access Management (OCI IAM) service, a native OCI service that provides the access control plane for Oracle Cloud resources (networking, compute, storage, analytics, etc.), has provided the IAM framework for OCI via authentication, access policies, and integrations with OCI security approaches such as compartments and tagging. OCI customers have adopted IDCS for its broader authentication options, identity lifecycle management capabilities, and to provide a seamless sign-on experience for end users that extends beyond the Oracle Cloud.

To better address Oracle customers’ IAM requirements and to simplify access management across Oracle Cloud, multi-cloud, Oracle enterprise applications, and third-party applications, Oracle has merged IDCS and OCI IAM into a single, unified cloud service that brings all of IDCS’ advanced identity and access management features natively into the OCI IAM service. To align with Oracle Cloud branding, the unified IAM service will leverage the OCI brand and will be offered as OCI IAM. Each instance of the OCI IAM service will be managed as identity domains in the OCI console."

Learn more about OCI IAM identity domains

Wednesday, June 9

Bell Labs, the Colonial Pipeline and Multi-Factor Authentication (MFA)

A simple technology invented by Bell Labs over 20 years ago (and widely used today) could have prevented the Colonial Pipeline attack.

In 1880, the French government awarded Alexander Graham Bell roughly the equivalent of $300K as a prize for inventing the telephone. He used the award to fund the research laboratory that became colloquially known as Bell Labs. If you’re not familiar with Bell Labs, you should be. In the 140+ years that followed, researchers at Bell Labs invented radio astronomy, transistors, lasers, solar cells, information theory, and UNIX, just to name a few of the many accomplishments. Among the many prestigious awards granted to Bell Labs researchers are nine Nobel prizes and twenty-two IEEE Medals of Honor.

In 1998, I joined AT&T Labs, which was a research group that the company retained when they spun out most of Bell Labs to Lucent Technologies in 1996. I was a Web Application developer, one of the least technical roles in the Labs. If I ever thought for a moment that I knew technology, I was quickly humbled when I built an app that tracked the Labs' actually important projects. The experience of working in the Labs stuck with me in the form of humility and curiosity. I accepted that I might never be the foremost expert in any given technology and I assumed the mindset of a forever student. Even today, I constantly question what I think I know because there are always holes in my knowledge or perspectives that I haven’t seen.

1998 was the same year that researchers at AT&T Labs were issued a patent (filed in 1995) for what became known in our industry as Multi-Factor Authentication (MFA). As a Product Manager at a tech firm, I don’t review patents for legal reasons. But I recently saw an excerpt of the abstract for the AT&T patent and there was one line that I found entertaining: “A preferred method of alerting the customer and receiving a confirmation to authorize the transaction back from the customer is illustratively afforded by conventional two-way pagers.” Not much has changed in 23 years. Pagers have been largely replaced by SMS but text messaging through the telecom provider’s network remains one of the most popular delivery mechanisms for MFA (despite some potential security flaws). 

I have no personal insight into AT&T’s motivations at the time, but I read Kevin Mitnick’s book a few years ago (Ghost in the Wires) and can’t help but wonder if AT&T was at the forefront of developing security technologies because they were such a target of hackers for so many years. I also reached out to Steve Greenspan, one of the inventors named in the patent, to get his thoughts on the project. He noted:

"Two-way pagers had just come out (1994-1995), and our cybersecurity friends were debating whether quantum computing would undermine password-based security. The goal was to explore business applications for two-way pagers and to put humans in-the-loop for secure access."

Quantum computing is a pretty interesting business driver for MFA, especially in the mid-1990s. The concern is even more relevant today as we inch closer to quantum computing becoming a practical reality. Today's authentication systems should store password data as non-reversible hashes (theoretically blunting the quantum threat), but it's clear that credentials are being stolen all the time (often via large databases that are simply left unprotected), and MFA remains a top solution to mitigate the damage. Steve and team were clearly on the right track when they dreamed up out-of-band authentication and deserve some credit and recognition for the foresight.
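To make the non-reversible-hash point concrete, here's a minimal sketch in Python using only the standard library. The function names are illustrative, not any particular product's API; real systems should use a vetted library and tuned parameters:

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Derive a non-reversible, salted hash with PBKDF2-HMAC-SHA256."""
    salt = salt if salt is not None else os.urandom(16)  # unique salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)
```

The system stores only the salt and digest; even if that database leaks, the password itself can't be recovered by reversing the hash, which is exactly why stolen-credential attacks lean on phishing and reuse instead.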

You may be wondering how this relates to the pipeline attack that led to fuel shortages across the U.S. East Coast. Bloomberg reported that the Colonial Pipeline, which is the largest fuel pipeline in the country, was taken down by a single compromised password. That should never happen given the variety of tools available to limit and control access, starting with MFA – a relatively simple solution that would likely have prevented the attack. The entry point to the system was a Virtual Private Network (VPN) account. If you’re using a VPN and expose anything sensitive inside the VPN, you should implement strong authentication that includes at least two authentication factors (something you know, something you have, something you are). These are widely available technologies that are very effective against lost or stolen credentials.
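As an aside on the "something you have" factor: the time-based one-time passwords (TOTP) generated by most authenticator apps follow RFC 6238, which boils down to a short HMAC computation over a shared secret and the current 30-second window. A minimal Python sketch (for illustration only; production deployments should use a maintained library):

```python
import base64
import hashlib
import hmac
import struct
import time
from typing import Optional

def totp(secret_b32: str, at: Optional[float] = None,
         step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)  # time window index
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

The server and the authenticator app share the secret once at enrollment, then independently compute matching codes; only possession of the secret produces the right six digits, which is what makes it a second factor.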

Of course, authentication isn’t the end of the story. Today’s widely distributed and highly dynamic environments require multiple layers of security. We all know how popular email and phishing attacks have become. It only takes one person inside a network to open an email, click a link, or log on to a phishing site to give an adversary a foothold in the network. We have to assume that will happen and build layers of strong security between any one user and the potential targets.

To illustrate the point, here’s a quick example:

Grocery stores that sell small, high-value items have traditionally struggled with theft. (Ask me over a beer sometime about how I helped take down a recurring thief when I worked at a grocery store.) If the only answer was to authenticate users (check ID) on the way into the store, it wouldn't be enough. Once inside, someone can still pocket items and walk out without paying. If you walk into a grocery store today, you’ll see cameras in the healthcare aisle where small, expensive medications line the shelves. But that’s not enough either. Each item is also locked in an anti-theft device that’s removed at the register. And some items are found in a locked cabinet that requires employee assistance. Theft still happens, but each layer reduces the risk. Our IT environments are much more complicated in terms of the various pathways to theft and our responses to reduce risk typically require more than a few layers of security.

Sensitive data should only be stored in a secure area of the network with access controls and Least Privilege enforcement. Access should be limited to specific hosts or networks. Data should be encrypted (inside the file when possible - so if the file is stolen, the data is still unusable). There should be strong authentication to get into the network and monitoring of all activity. There should be alerts on unusual behavior and Data Loss Prevention (DLP) to evaluate the sensitivity of data moving across the network. The environment should be scanned regularly for vulnerabilities and misconfigurations. And on and on. Any one of these security mechanisms alone is not enough. This multi-layered approach to security is critical in developing a strong security posture that minimizes risk.
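The layered idea can be sketched in a few lines of Python. This is purely illustrative; the control names and request fields are hypothetical. The point is structural: access is granted only when every independent layer passes, so defeating one control isn't enough.

```python
# Each (name, check) pair is an independent security layer; all must pass.
CONTROLS = [
    ("network",   lambda req: req["source_ip"].startswith("10.0.")),  # host/network restriction
    ("mfa",       lambda req: req["mfa_verified"]),                   # strong authentication
    ("privilege", lambda req: req["role"] in req["resource_roles"]),  # least privilege
]

def evaluate(req: dict) -> tuple:
    """Return (granted, names_of_failed_layers)."""
    failures = [name for name, check in CONTROLS if not check(req)]
    return (not failures, failures)
```

In a real environment the list would include encryption checks, DLP verdicts, behavioral signals, and so on, but the shape is the same: a conjunction of layers, each of which can independently stop the request.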

We could argue about where to start or which security controls are most important. But it seems like a no-brainer to implement MFA for employees accessing corporate data and applications. Microsoft, which deals with 300 million fraudulent sign-in attempts daily, concluded that “MFA can block over 99.9 percent of account compromise attacks.” That sounds about right. While targeted attacks have increased in prevalence, most attacks are not targeted at specific companies or individuals. Most start with automated scripting or broad-scale phishing attacks that span potentially thousands of companies and/or millions of people at the same time. When a foothold is found (a script finds a vulnerability or an open port, a user enters credentials into the phishing site, etc.), the attack begins. Implementing a few simple security technologies like automated vulnerability scanning and MFA can prevent most attacks before they begin. Even if a sophisticated phishing attack succeeds despite MFA, the credentials will not be very useful beyond the initial session (which should be limited in scope by other controls).

No single technology will solve all cybersecurity problems. But implementing MFA is low-cost, easy to implement, and highly effective. It may even make life easier for end-users. Password requirements can be loosened because there’s less risk associated with cracked passwords. And there are numerous implementations of passwordless authentication that, while they may not always meet the strict definition of MFA, provide similar (sometimes higher) levels of security without requiring a password. Combined with context-aware adaptive security (that verifies device, network, location, time-of-day, etc.), these passwordless authentication options may provide the right balance between security and user experience. At this point, this isn’t scare tactics or FUD. Attacks on national infrastructure or other high-profile targets can impact the lives of millions with a single execute command. MFA is an easy layer to add to improve security and it’s commonly included with authentication solutions, so there’s really no excuse. It’s time to get it done.

Monday, February 8

Comprehensive Identity-as-a-Service (IDaaS): Protect all your apps with cloud access management

Over a decade ago, the need for quicker SaaS onboarding led to Siloed IAM for early IDaaS adopters. For many, IDaaS evolved to a Hybrid IAM approach. Today, Oracle’s IDaaS provides comprehensive coverage for enterprise apps. 

"IDaaS has matured quite a bit over the last several years and no longer relies as much on SAML or pre-built app templates. Today, Oracle Identity Cloud Service helps manage access to virtually any enterprise target. To accomplish that, we’ve introduced several technical approaches to bringing more applications into the IDaaS fold with less effort. These approaches, combined, provide the easiest path toward enabling the service to manage access for more systems and applications."

Read more on the Oracle Cloud Security Blog > Comprehensive Identity-as-a-Service (IDaaS): Protect all your apps with cloud access management.

Tuesday, December 22

Oracle Strengthens Interoperability and User Experience with General Availability of FIDO2 WebAuthn Support for Cloud Identity

"Given the distributed nature of today’s technology environment, zero trust has become the standard for security. Every interaction must be authenticated and validated for every user accessing every system or application every time. To that end, interoperability is more important than ever. FIDO2 Web Authentication (WebAuthn) is quickly emerging as an important interoperability standard that enables users to select and manage an authenticator of their own (security keys, or built-in platform authenticators, such as a mobile device) that works with their web browser of choice (Google Chrome, Mozilla Firefox, Microsoft Edge, Apple Safari, etc.) for secure access to any websites or applications that support the WebAuthn standard."

"Oracle is happy to announce the general availability of FIDO2 WebAuthn for our cloud identity service. This means that websites and applications that are protected by Oracle can enable their audience of users to authenticate with FIDO2 authenticators for multi-factor authentication (MFA) as well as passwordless authentication. This simplifies the user experience and may reduce the number of authenticators that users need to access the variety of web applications they interact with on a regular basis. Ultimately, this gives users more choice, more control, and a frictionless user experience."

Read more on the Oracle Cloud Security Blog > Oracle Strengthens Interoperability and User Experience with General Availability of FIDO2 WebAuthn Support for Cloud Identity.

Tuesday, November 24

Modernization of Identity and Access Management

From the Oracle IAM blog:

"Oracle has been in the IAM business for more than 20 years and we’ve seen it all. We’ve addressed numerous IAM use-cases across the world’s largest, most complex organizations for their most critical systems and applications. We’ve travelled with our customers through various highs and lows. And we’ve experienced and helped drive significant technology and business transformations. But as we close out our second decade of IAM, I’m too distracted to be nostalgic. I’m distracted by our IAM team’s enthusiasm for the future and by the impact we’ll have on our customers’ businesses in the decade to come. Central to that is the focus to respect our customers’ identity and access journey and meet them with solutions that fit their individual needs."


Monday, August 24

Addressing the Cloud Security Readiness Gap

Cloud security is about much more than security functionality. The top cloud providers all seem to have a capable suite of security features and most surveyed organizations report that they see all the top cloud platforms as generally secure. So, why do 92% of surveyed organizations still report a cloud security readiness gap? They’re not comfortable with the security implications of moving workloads to cloud even if they believe it’s a secure environment and even if the platform offers a robust set of security features. 

Two contributing factors to that gap include:

  • 78% reported that cloud requires different security than on-prem. With security skills at a shortage, the ability to quickly ramp up on a new architecture and a new set of security capabilities can certainly slow progress.
  • Only 8% of respondents claimed to fully understand the cloud security shared responsibility model. Most organizations don’t even know what they’re responsible for, never mind how to implement the right policies and procedures, hire the right people, or find the right security technologies.

I recently posted about how Oracle is addressing the gap on the Oracle Cloud Security blog. There's a link in the post to a new whitepaper from Dao Research that evaluates the cloud security capabilities offered by Amazon AWS, Google Cloud Platform, Microsoft Azure, and Oracle Cloud Infrastructure.

Oracle took some criticism for arriving late to the game with our cloud infrastructure offering. But several years of significant investments are paying off. Dao's research concludes that “Oracle has an edge over Amazon, Microsoft, and Google, as it provides a more centralized security configuration and posture management, as well as more automated enforcement of security practices at no additional cost. This allows OCI customers to enhance overall security without requiring additional manual effort, as is the case with AWS, Azure, and GCP.”

A key take-away for me is that sometimes the competitive edge in security is delivered through simplicity and ease of use. We've heard over and over for several years that complexity is the enemy of security. If we can remove human error, bake in security by default, and automate security wherever possible, then the system will be more secure than if we're relying on human effort to properly configure and maintain the system and its security.

Click here to check out the post and the Dao Research whitepaper.

Monday, October 15

Improve Security by Thinking Beyond the Security Realm

It used to be that dairy farmers relied on whatever was growing in the area to feed their cattle. They filled the trough with vegetation grown right on the farm. They probably relied heavily on whatever grasses grew naturally and perhaps added some high-value grains like barley and corn. Today, with better technology and knowledge, dairy farmers work with nutritionists to develop a personalized concentrate of carbohydrates, proteins, fats, minerals, and vitamins that gets added to the natural feed. The result is much healthier cattle and more predictable growth.

We’re going through a similar enlightenment in the security space. To get the best results, we need to fill the trough that our Machine Learning will eat from with high-value data feeds from our existing security products (whatever happens to be growing in the area), but also (and more precisely for this discussion) from beyond what we typically consider to be security products.

In the post, I make the case that "we shouldn’t limit our security data to what has traditionally been in-scope for security discussions" and how understanding Application Topology (and feeding that knowledge into the security trough) can help reduce risk and improve security.

Here's an excerpt:

We’re all guilty of thinking myopically at times. It’s easy to get caught up thinking about the objects in our foreground and to lose our sense of depth. We forget about the environment and the context and we focus too narrowly on some singular subject. It’s not always a bad thing. Often, we need to focus very specifically to take on challenges that would otherwise be too big to address. For example, security professionals spend a lot of time thinking about specific attack vectors (or security product categories). And each one perhaps necessarily requires a deep level of focus and expertise. I’m not arguing against that. But I’d like to suggest that someone on the team should expand their focus to think about the broader environment in which cyberattacks and security breaches take place. When you do, I suspect that you’ll find that there are data points from outside of the typical security realm that, if leveraged correctly, will dramatically improve your ability to respond to threats within that realm.

I posted recently about the importance of convergence (of security functionality). I noted that “Security solutions are evolving toward cloud, toward built-in intelligence via Machine Learning, and toward unified, integrated-by-design platforms.” I went on to suggest that forward-looking security platforms are autonomous and operate with minimal human intervention. I believe that’s where we’re heading. But to better enable machine learning and autonomous security, we need to feed as much relevant data as possible into the system. We need to feed the machine from an expanding trough of data. And with Internet scale as an enabler, we shouldn’t limit our security data to what has traditionally been in-scope for security discussions.

As an example, I’m going to talk about how understanding Application Topology (and feeding that knowledge into the security trough) can help reduce risk and improve your security posture.

What is Application Topology?

As you likely know, modern applications are typically architected into logical layers or tiers. With web and mobile applications, we’ve traditionally seen a presentation layer, an application or middleware tier, and a backend data tier. With serverless compute and cloud microservice architectures, an application’s workload may be even more widely distributed. It’s even common to see core application functions being outsourced to third parties via the use of APIs and open standards. Application Topology understands all the various parts of an application and how they’re interrelated. Understanding the App Topology means that you can track and correlate activity across components that may reside in several different clouds.
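One minimal way to picture this is to model the topology as a dependency graph and ask which components an event at one tier could ultimately touch. The component names below are hypothetical, loosely echoing a multi-tier web/mobile application:

```python
from collections import deque

# Nodes are application components; edges point at the components they call.
TOPOLOGY = {
    "web-ui":      ["api-gateway"],
    "mobile-app":  ["api-gateway"],
    "api-gateway": ["app-logic"],
    "app-logic":   ["data-store", "traffic-api", "pricing-db"],
}

def downstream(component: str) -> set:
    """Every component that activity at `component` could ultimately touch."""
    seen, queue = set(), deque([component])
    while queue:  # breadth-first walk over the dependency edges
        for dep in TOPOLOGY.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```

With that graph in hand, a suspicious event at the web tier can immediately be correlated with the backend components it can reach, even when those components live in different clouds.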

How does Application Topology impact security?

Consider an application that serves a package delivery service. It has web, mobile, and API interfaces that serve business line owners, delivery drivers, corporate accounts, and consumer customers. Its core application logic runs on one popular cloud platform while the data storage backend runs on another. The application leverages an identity cloud service using several authentication techniques for its several audiences. It calls out to a third-party service that feeds traffic & weather information and interacts with other internal applications and databases that provide data points such as current pricing based on regional gas prices, capacity planning, and more. Think about what it means to secure an application like this.

Many popular security tools focus only on one layer or one component. A tool may scan the web application or the mobile app but probably not both. An app like this might have a few different security products that focus on securing APIs and a few others that focus on securing databases. Even if all components feed their security events into a common stream, there’s not likely a unified view of the risk posture for the application as a whole. None of the security tools are likely to understand the full application topology. If the app owner asked for a security report for the entire application, would you be able to provide it? How many different security products would you need to leverage? Would you be able to quantify the impact of a single security configuration issue on the application as a whole?

If a security solution fully understands the application topology and incorporates that knowledge, here are a few of the benefits: You can generate a holistic report on the application to the app owner that covers all components whether on-premises, in the cloud, or via third-parties. You can monitor user activity at one tier and understand how that impacts your risk posture across other tiers. You can monitor for security configuration changes at all components via a unified service and automatically adjust risk scores accordingly. In other words, a deep understanding of the IT infrastructure underneath the application yields a more robust understanding of security issues and an increased ability to respond quickly and automatically.

Summary

Challenge yourself to expand the scope of which data points might be useful for improving security. Are security appliance event logs and threat feeds enough? As we enter an era dominated by AI and Machine Learning, we need to add as much high-value data as possible into the security trough. ML performs better as it incorporates more information. And as Larry Ellison famously said, the threats are becoming increasingly sophisticated. “It can't be our people versus their computers. We're going to lose that war. It's got to be our computers versus their computers.” We must rely on Machine Learning and we have to feed it with as much intelligence from as many sources as possible.

Tuesday, September 18

Convergence is the Key to Future-Proofing Security

I published a new article today on the Oracle Security blog that looks at the benefits of convergence in the security space as the IT landscape grows more disparate and distributed.

Security professionals have too many overlapping products under management and it's challenging to get quick and complete answers across hybrid, distributed environments. It's challenging to fully automate detection and response. There is too much confusion about where to get answers, not enough talent to cover the skills requirement, and significant hesitation to put the right solutions in place because there's already been so much investment.

Here's an excerpt:

The whole of your security portfolio should provide significantly more value than the sum of its parts.

The challenge facing security professionals seems to grow bigger and more complex by the hour. New threats and risk factors are constantly emerging while the IT landscape continuously evolves. At times, it feels like we’re patching holes on a moving target that’s endlessly shape-shifting. One of the major contributing factors to those feelings of chaos and disorder is the sheer quantity of security products that we rely on to cover our vast IT landscapes.

The Oracle and KPMG Cloud Threat Report 2018 found that cybersecurity professionals manage an average of 46 different security products. 7% of respondents reported being personally responsible for managing over 100 different products. 100 different security products! I don’t imagine that those folks can possibly have a complete understanding of what’s happening across 50 or 100 different security products or what value each of those products is contributing to reducing their risk. This quantity of products alone contributes to the overall challenge in several ways, including:

  • Product Overlap: Security products often have significant functional overlap. In an environment with several security products, it quickly becomes unclear which product will answer which questions. The result is wasted time and effort and longer delays getting critical answers. When addressing an on-going attack or a breach, the speed of the response effort is critical. The longer it takes, the broader the damage will be.
  • Skills Shortage: Organizations spend too much time finding or developing talent across security products. It’s rare for security professionals to have the exact mix of skills and experience that an organization needs. And with an on-going skills shortage, it’s difficult to retain top talent over long periods of time. Again, not having the right expertise in place means that you’re more likely to miss the signals of developing attacks or on-going breaches and to demonstrate longer response times to security events.
  • Delays in Addressing Gaps: Nobody likes wasted money or shelfware. When a gap is found in an organization’s security posture, security professionals are less likely to find and deploy the right solution if they have numerous other security solutions in place that may (or may not) fix the problem. Of course, without a complete understanding of where the limits are on each of those products, it could take months to sort through them and to formulate an approach. It’s the classic human response of freezing in indecision when there are too many factors to consider. When it comes to addressing information security issues, the last thing you want to do is freeze.

So, what can be done and how can we address the issue?

Here’s the good news: Security solutions are evolving toward cloud, toward built-in intelligence via Machine Learning, and toward unified, integrated-by-design platforms. This approach eliminates the issues of product overlap because each component is designed to leverage the others. It reduces the burden related to maintaining skills because fewer skills are needed and the system is more autonomous. And, it promotes immediate and automated response as opposed to indecision. While there may not be a single platform to replace all 50 or 100 of your disparate security products today, platforms are emerging that can address core security functions while simplifying ownership and providing open integration points to seamlessly share security intelligence across functions.

For example, you know that you need an identity and access component for addressing access management needs across numerous SaaS applications and IaaS services. And you need a Cloud Access Security Broker (CASB) to scan SaaS applications and Cloud Infrastructures for insecure configurations and to monitor user activity. But, for the most part, these functions are siloed today. One doesn’t talk to the other. But they can. And they should.

Understanding what a user is doing across cloud applications (visibility often provided by CASB) enables you to create a risk score for that user that can then be used by the Identity function to make decisions and take actions such as stepping up authentication, requesting approvals, initiating an access review, or denying access. Understanding that a target system’s configuration was modified recently or that it doesn’t conform to the organization’s security policies also increases risk. And there are numerous sources of additional risk data: identity, CASB, security configuration scanning, SIEM, UEBA, external threat feeds, session context, etc.
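A toy sketch of that kind of aggregation, with made-up signal names and weights, shows how multiple sources can roll up into one score that drives the access decision:

```python
# Hypothetical signal sources, each normalized to 0-1, with made-up weights.
SIGNAL_WEIGHTS = {
    "casb_anomaly": 0.4,  # unusual activity reported by CASB
    "config_drift": 0.3,  # target system no longer matches security policy
    "threat_feed":  0.2,  # external threat intelligence match
    "new_device":   0.1,  # session context: unrecognized device
}

def risk_score(signals: dict) -> float:
    """Weighted sum of whatever signals are present, yielding a 0-1 score."""
    return sum(SIGNAL_WEIGHTS[name] * value
               for name, value in signals.items() if name in SIGNAL_WEIGHTS)

def access_decision(score: float) -> str:
    """Map the aggregate score onto an adaptive-access action."""
    if score >= 0.7:
        return "deny"
    if score >= 0.4:
        return "step_up_mfa"
    return "allow"
```

Real adaptive-access engines are far more sophisticated (and increasingly ML-driven), but the shape is the same: many independent sources feed one decision point, so a CASB anomaly plus a configuration drift can trigger step-up authentication that neither signal would trigger alone.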

Forward-looking security platforms will leverage hybrid cloud architecture to address hybrid cloud environments. They’re autonomous systems that operate without relying on human maintenance, patching, and monitoring. They leverage risk intelligence from across the numerous available sources. And then they rationalize that data and use Machine Learning to generate better security intelligence and feed that improved intelligence back to the decision points. And they leverage built-in integration points and orchestration functionality to automate response when appropriate.

In other words, your security platform should serve as a central brain that doesn’t only import the various security data points but also makes sense of them without relying on human eyes to catch potential threats. And it adds intelligence, identifies patterns, recognizes anomalies, and responds appropriately and within seconds. This is much more advanced than the old SIEM model, which simply aggregates data from numerous sources and tries to raise alerts for humans to evaluate. This is a system that thinks for you and leverages advanced analytics to make decisions across those numerous disparate systems. It’s a cloud service, so you don’t need to administer and manage it. You become a user; a consumer of its benefits rather than a caretaker. And the result is much more value and further reduced risk than you’d get from the parts alone.