Tuesday, December 22

Oracle Strengthens Interoperability and User Experience with General Availability of FIDO2 WebAuthn Support for Cloud Identity

"Given the distributed nature of today’s technology environment, zero trust has become the standard for security. Every interaction must be authenticated and validated for every user accessing every system or application every time. To that end, interoperability is more important than ever.To that end, interoperability is more important than ever. FIDO2 Web Authentication (WebAuthn) is quickly emerging as an important interoperability standard that enables users to select and manage an authenticator of their own (security keys, or built-in platform authenticators, such as a mobile device) that works with their web browser of choice (Google Chrome, Mozilla Firefox, Microsoft Edge, Apple Safari, etc.) for secure access to any websites or applications that support the WebAuthn standard."

"Oracle is happy to announce the general availability of FIDO2 WebAuthn for our cloud identity service. This means that websites and applications that are protected by Oracle can enable their audience of users to authenticate with FIDO2 authenticators for multi-factor authentication (MFA) as well as passwordless authentication. This simplifies the user experience and may reduce the number of authenticators that users need to access the variety of web applications they interact with on a regular basis. Ultimately, this gives users more choice, more control, and a frictionless user experience.

Read more on the Oracle Cloud Security Blog > Oracle Strengthens Interoperability and User Experience with General Availability of FIDO2 WebAuthn Support for Cloud Identity.

Tuesday, November 24

Modernization of Identity and Access Management

From the Oracle IAM blog:

"Oracle has been in the IAM business for more than 20 years and we’ve seen it all. We’ve addressed numerous IAM use-cases across the world’s largest, most complex organizations for their most critical systems and applications. We’ve travelled with our customers through various highs and lows. And we’ve experienced and helped drive significant technology and business transformations. But as we close out our second decade of IAM, I’m too distracted to be nostalgic. I’m distracted by our IAM team’s enthusiasm for the future and by the impact we’ll have on our customers’ businesses in the decade to come. Central to that is the focus to respect our customer's identity and access journey and meet them with solutions that fit their individual needs."


Monday, August 24

Addressing the Cloud Security Readiness Gap

Cloud security is about much more than security functionality. The top cloud providers all seem to have a capable suite of security features and most surveyed organizations report that they see all the top cloud platforms as generally secure. So, why do 92% of surveyed organizations still report a cloud security readiness gap? They’re not comfortable with the security implications of moving workloads to cloud even if they believe it’s a secure environment and even if the platform offers a robust set of security features. 

Two contributing factors to that gap include:

  • 78% reported that cloud requires different security than on-prem. With security skills in short supply, the ability to quickly ramp up on a new architecture and a new set of security capabilities can certainly slow progress.
  • Only 8% of respondents claimed to fully understand the cloud security shared responsibility model. If organizations don't even know what they're responsible for, they can't implement the right policies and procedures, hire the right people, or find the right security technologies.

I recently posted about how Oracle is addressing the gap on the Oracle Cloud Security blog. There's a link in the post to a new whitepaper from Dao Research that evaluates the cloud security capabilities offered by Amazon AWS, Google Cloud Platform, Microsoft Azure, and Oracle Cloud Infrastructure.

Oracle took some criticism for arriving late to the game with our cloud infrastructure offering. But, several years of significant investments are paying off. Dao's research concludes that “Oracle has an edge over Amazon, Microsoft, and Google, as it provides a more centralized security configuration and posture management, as well as more automated enforcement of security practices at no additional cost. This allows OCI customers to enhance overall security without requiring additional manual effort, as is the case with AWS, Azure, and GCP.”

A key take-away for me is that sometimes, the competitive edge in security is delivered through simplicity and ease of use. We've heard over and over for several years that complexity is the enemy of security. If we can remove human error, bake in security by default, and automate security wherever possible, then the system will be more secure than if we're relying on human effort to properly configure and maintain the system and its security.

Click here to check out the post and the Dao Research whitepaper.

Monday, October 15

Improve Security by Thinking Beyond the Security Realm

It used to be that dairy farmers relied on whatever was growing in the area to feed their cattle. They filled the trough with vegetation grown right on the farm. They probably relied heavily on whatever grasses grew naturally and perhaps added some high-value grains like barley and corn. Today, with better technology and knowledge, dairy farmers work with nutritionists to develop a personalized concentrate of carbohydrates, proteins, fats, minerals, and vitamins that gets added to the natural feed. The result is much healthier cattle and more predictable growth.

We’re going through a similar enlightenment in the security space. To get the best results, we need to fill the trough that our Machine Learning will eat from with high-value data feeds from our existing security products (whatever happens to be growing in the area), but also, and more precisely for this discussion, from sources beyond what we typically consider security products to be.

In the post, I make the case that "we shouldn’t limit our security data to what has traditionally been in-scope for security discussions" and show how understanding Application Topology (and feeding that knowledge into the security trough) can help reduce risk and improve security.

Here's an excerpt:

We’re all guilty of thinking myopically at times. It’s easy to get caught up thinking about the objects in our foreground and to lose our sense of depth. We forget about the environment and the context and we focus too narrowly on some singular subject. It’s not always a bad thing. Often, we need to focus very specifically to take on challenges that would otherwise be too big to address. For example, security professionals spend a lot of time thinking about specific attack vectors (or security product categories). And each one perhaps necessarily requires a deep level of focus and expertise. I’m not arguing against that. But I’d like to suggest that someone on the team should expand their focus to think about the broader environment in which cyberattacks and security breaches take place. When you do, I suspect that you’ll find that there are data points from outside of the typical security realm that, if leveraged correctly, will dramatically improve your ability to respond to threats within that realm.

I posted recently about the importance of convergence (of security functionality). I noted that “Security solutions are evolving toward cloud, toward built-in intelligence via Machine Learning, and toward unified, integrated-by-design platforms.” I went on to suggest that forward-looking security platforms are autonomous and operate with minimal human intervention. I believe that’s where we’re heading. But to better enable machine learning and autonomous security, we need to feed as much relevant data as possible into the system. We need to feed the machine from an expanding trough of data. And with Internet scale as an enabler, we shouldn’t limit our security data to what has traditionally been in-scope for security discussions.

As an example, I’m going to talk about how understanding Application Topology (and feeding that knowledge into the security trough) can help reduce risk and improve your security posture.

What is Application Topology?

As you likely know, modern applications are typically architected into logical layers or tiers. With web and mobile applications, we’ve traditionally seen a presentation layer, an application or middleware tier, and a backend data tier. With serverless compute and cloud microservice architectures, an application’s workload may be even more widely distributed. It’s even common to see core application functions being outsourced to third parties via the use of APIs and open standards. Application Topology describes all the various parts of an application and how they’re interrelated. Understanding the App Topology means that you can track and correlate activity across components that may reside in several different clouds.
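
As a thought experiment, here's a minimal sketch of how a security service might model that topology: components as nodes, dependencies as edges. Everything here (the names, the tiers, the shape of the data) is a hypothetical illustration, not any particular product's model.

```typescript
// Sketch only: application topology as a simple directed graph.
type Tier = "web" | "mobile" | "api" | "middleware" | "data" | "third-party";

interface AppComponent {
  id: string;
  tier: Tier;
  cloud: string; // which cloud or platform hosts the component
}

interface TopologyEdge {
  from: string; // component id
  to: string;
}

class ApplicationTopology {
  private components = new Map<string, AppComponent>();
  private edges: TopologyEdge[] = [];

  addComponent(c: AppComponent): void {
    this.components.set(c.id, c);
  }

  addDependency(from: string, to: string): void {
    this.edges.push({ from, to });
  }

  // Every component reachable from the given one: "if this node is
  // compromised, what else is exposed?"
  downstreamOf(id: string): AppComponent[] {
    const seen = new Set<string>();
    const stack = [id];
    while (stack.length > 0) {
      const current = stack.pop()!;
      for (const e of this.edges) {
        if (e.from === current && !seen.has(e.to)) {
          seen.add(e.to);
          stack.push(e.to);
        }
      }
    }
    return [...seen]
      .map((componentId) => this.components.get(componentId))
      .filter((c): c is AppComponent => c !== undefined);
  }
}
```

Asking "what's downstream of the web tier?" is exactly the kind of question that lets a security tool translate a single compromised component into an application-wide risk picture.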

How does Application Topology impact security?

Consider an application that serves a package delivery service. It has web, mobile, and API interfaces that serve business line owners, delivery drivers, corporate accounts, and consumer customers. Its core application logic runs on one popular cloud platform while the data storage backend runs on another. The application leverages an identity cloud service using several authentication techniques for those several audiences. It calls out to a third-party service that feeds traffic & weather information and interacts with other internal applications and databases that provide data points such as current pricing based on regional gas prices, capacity planning, and more. Think about what it means to secure an application like this.

Many popular security tools focus only on one layer or one component. A tool may scan the web application or the mobile app but probably not both. An app like this might have a few different security products that focus on securing APIs and a few others that focus on securing databases. Even if all components feed their security events into a common stream, there’s likely no unified view of the risk posture for the application as a whole. None of the security tools are likely to understand the full application topology. If the app owner asked for a security report for the entire application, would you be able to provide it? How many different security products would you need to leverage? Would you be able to quantify the impact of a single security configuration issue on the application as a whole?

If a security solution fully understands the application topology and incorporates that knowledge, here are a few of the benefits:

  • You can generate a holistic report on the application for the app owner that covers all components, whether on-premises, in the cloud, or via third parties.
  • You can monitor user activity at one tier and understand how that impacts your risk posture across other tiers.
  • You can monitor for security configuration changes at all components via a unified service and automatically adjust risk scores accordingly.

In other words, a deep understanding of the IT infrastructure underneath the application yields a more robust understanding of security issues and an increased ability to respond quickly and automatically.
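
Continuing the hypothetical sketch above, here's one way config-change and activity events at individual components might roll up into a single application-level risk score. The event kinds and weights are invented for illustration; a real service would derive them from far richer models.

```typescript
// Sketch only: rolling component-level events up to one application score.
interface SecurityEvent {
  componentId: string;
  kind: "config-change" | "anomalous-access" | "failed-login";
  severity: number; // 1 (low) .. 10 (high)
}

function applicationRiskScore(events: SecurityEvent[]): number {
  // Illustrative weights: config drift is treated as a stronger signal
  // than an isolated failed login.
  const weights = { "config-change": 3, "anomalous-access": 2, "failed-login": 1 };
  const raw = events.reduce((sum, e) => sum + e.severity * weights[e.kind], 0);
  return Math.min(100, raw); // cap for consistent reporting
}
```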

Summary

Challenge yourself to expand the scope of which data points might be useful for improving security. Are security appliance event logs and threat feeds enough? As we enter an era dominated by AI and Machine Learning, we need to add as much high-value data as possible into the security trough. ML performs better as it incorporates more information. And the threats are becoming increasingly sophisticated. As Larry Ellison famously said, “It can't be our people versus their computers. We're going to lose that war. It's got to be our computers versus their computers.” We must rely on Machine Learning and we have to feed it with as much intelligence from as many sources as possible.

Tuesday, September 18

Convergence is the Key to Future-Proofing Security

I published a new article today on the Oracle Security blog that looks at the benefits of convergence in the security space as the IT landscape grows more disparate and distributed.

Security professionals have too many overlapping products under management, and it's challenging to get quick, complete answers across hybrid, distributed environments or to fully automate detection and response. There is too much confusion about where to get answers, not enough talent to cover the skills requirement, and significant hesitation to put the right solutions in place because there's already been so much investment.

Here's an excerpt:

The whole of your security portfolio should provide significantly more value than the sum of its parts.

The challenge facing security professionals seems to grow bigger and more complex by the hour. New threats and risk factors are constantly emerging while the IT landscape continuously evolves. At times, it feels like we’re patching holes on a moving target that’s endlessly shape-shifting. One of the major contributing factors to those feelings of chaos and disorder is the sheer quantity of security products that we rely on to cover our vast IT landscapes.

The Oracle and KPMG Cloud Threat Report 2018 found that cybersecurity professionals manage an average of 46 different security products. 7% of respondents reported being personally responsible for managing over 100 different products. 100 different security products! I don’t imagine that those folks can possibly have a complete understanding of what’s happening across 50 or 100 different security products or what value each of those products is contributing to reducing their risk. This quantity of products alone contributes to the overall challenge in several ways, including:

  • Product Overlap: Security products often have significant functional overlap. In an environment with several security products, it quickly becomes unclear which product will answer which questions. The result is wasted time and effort and longer delays getting critical answers. When addressing an on-going attack or a breach, the speed of the response effort is critical. The longer it takes, the broader the damage will be.
  • Skills Shortage: Organizations spend too much time finding or developing talent across security products. It’s rare for security professionals to have the exact mix of skills and experience that an organization needs. And with an on-going skills shortage, it’s difficult to retain top talent over long periods of time. Again, not having the right expertise in place means that you’re more likely to miss the signals of developing attacks or on-going breaches and to demonstrate longer response times to security events.
  • Delays in Addressing Gaps: Nobody likes wasted money or shelfware. When a gap is found in an organization’s security posture, security professionals are less likely to find and deploy the right solution if they have numerous other security solutions in place that may (or may not) fix the problem. Of course, without a complete understanding of where the limits are on each of those products, it could take months to sort through them and to formulate an approach. It’s the classic human response of freezing in indecision when there are too many factors to consider. When it comes to addressing information security issues, the last thing you want to do is freeze.

So, what can be done and how can we address the issue?

Here’s the good news: Security solutions are evolving toward cloud, toward built-in intelligence via Machine Learning, and toward unified, integrated-by-design platforms. This approach eliminates the issues of product overlap because each component is designed to leverage the others. It reduces the burden related to maintaining skills because fewer skills are needed and the system is more autonomous. And, it promotes immediate and automated response as opposed to indecision. While there may not be a single platform to replace all 50 or 100 of your disparate security products today, platforms are emerging that can address core security functions while simplifying ownership and providing open integration points to seamlessly share security intelligence across functions.

For example, you know that you need an identity and access component for addressing access management needs across numerous SaaS applications and IaaS services. And you need a Cloud Access Security Broker (CASB) to scan SaaS applications and Cloud Infrastructures for insecure configurations and to monitor user activity. But, for the most part, these functions are siloed today. One doesn't talk to the other. But they can. And they should.

Understanding what a user is doing across cloud applications (visibility often provided by CASB) enables you to create a risk score for that user that can then be used by the Identity function to make decisions and take actions such as stepping up authentication, requesting approvals, initiating an access review, or denying access. Understanding that a target system’s configuration was modified recently or that it doesn’t conform to the organization’s security policies also increases risk. And there are numerous sources of additional risk data: identity, CASB, security configuration scanning, SIEM, UEBA, external threat feeds, session context, etc.
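
As a rough sketch of that flow, imagine the identity function consuming a composite risk score and choosing an action. The thresholds and action names below are invented for illustration, not pulled from any product:

```typescript
// Sketch only: risk-informed access decisions. Thresholds are arbitrary.
type AccessAction = "allow" | "step-up-mfa" | "require-approval" | "deny";

function accessDecision(riskScore: number): AccessAction {
  if (riskScore < 30) return "allow";
  if (riskScore < 60) return "step-up-mfa";      // e.g., prompt for a second factor
  if (riskScore < 85) return "require-approval"; // e.g., trigger an access review
  return "deny";
}
```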

Forward-looking security platforms will leverage hybrid cloud architecture to address hybrid cloud environments. They’re autonomous systems that operate without relying on human maintenance, patching, and monitoring. They leverage risk intelligence from across the numerous available sources. And then they rationalize that data and use Machine Learning to generate better security intelligence and feed that improved intelligence back to the decision points. And they leverage built-in integration points and orchestration functionality to automate response when appropriate.

In other words, your security platform should serve as a central brain that doesn’t only import the various security data points but also makes sense of them without relying on human eyes to catch potential threats. And it adds intelligence, identifies patterns, recognizes anomalies, and responds appropriately and within seconds. This is much more advanced than the old SIEM model, which simply aggregates data from numerous sources and tries to raise alerts for humans to evaluate. This is a system that thinks for you and leverages advanced analytics to make decisions across those numerous disparate systems. It’s a cloud service, so you don’t need to administer and manage it. You become a user, a consumer of its benefits rather than a caretaker. And the result is much more value and further reduced risk than you’d get from the parts alone.
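
To illustrate the difference between aggregating events and actually evaluating them, here's a toy example: flag activity that deviates sharply from a learned baseline. A real UEBA engine uses far richer models than a z-score, but the principle, baseline first and then judge deviations, is the same.

```typescript
// Toy sketch: flag a value that deviates sharply from its history.
function isAnomalous(history: number[], latest: number, threshold = 3): boolean {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const stdDev = Math.sqrt(variance) || 1; // avoid divide-by-zero
  return Math.abs(latest - mean) / stdDev > threshold;
}

// e.g., a user's logins per hour over recent days vs. the last hour:
// isAnomalous([4, 5, 3, 6, 4, 5], 40) === true
```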

Tuesday, January 30

New World, New Rules: Securing the Future State

I published an article today on the Oracle Cloud Security blog that takes a look at how approaches to information security must adapt to address the needs of the future state (of IT). For some organizations, it's really the current state. But, I like the term future state because it's inclusive of more than just cloud or hybrid cloud. It's the universe of Information Technology the way it will be in 5-10 years. It includes the changes in user behavior, infrastructure, IT buying, regulations, business evolution, consumerization, and many other factors that are all evolving simultaneously.

As we move toward that new world, our approach to security must adapt. Humans chasing down anomalies by searching through logs is an approach that will not scale and will not suffice. 

Here's an excerpt:

If you never change tactics, you lose
the moment the enemy changes theirs

While chasing down a domestic terrorist, FBI Agent Will Brody found himself in an unfamiliar and dangerous environment. (Brody is the protagonist in Marcus Sakey's 2017 novel Afterlife.) To survive its perilous conditions, the residents commit to two simple rules: (1) pull your own weight and (2) only kill in self-defense. These rules have kept them safe from the obvious, imminent threats around them for decades. But Brody sees a change happening in the environment that others don't yet see and warns his new community: "If you never change tactics, you lose the moment the enemy changes theirs." His mantra becomes "New World, New Rules." In other words, you must adapt to changing threats or face the consequences.

As Information Security professionals, we find ourselves in a similar situation. Our environment is transforming rapidly. The assets we're protecting today look very different than they did just a few years ago. In addition to owned data centers, our workloads are being spread across multiple cloud platforms and services. Users are more mobile than ever. And we don’t have control over the networks, devices, or applications where our data is being accessed. It’s a vastly distributed environment where there’s no single, connected, and controlled network. Line-of-Business managers purchase compute power and SaaS applications with minimal initial investment and no oversight. And end-users access company data via consumer-oriented services from their personal devices. It's grown increasingly difficult to tell where company data resides, who is using it, and ultimately where new risks are emerging. This transformation is on-going and the threats we’re facing are morphing and evolving to take advantage of the inherent lack of visibility.

Organizations are in varying stages of migration toward this future state of IT where we have massive distribution and where visibility is elusive. But we all seem to be moving in the same direction. So, we simply can't live by the same old rules. We can’t rely on old security techniques. New World, New Rules.

The old SIEM approach won't suffice
in the future state.

Traditionally, security professionals have relied heavily on SIEM (Security Information and Event Management) solutions to track activity in their environments. The SIEMs resided somewhere on the network and collected logs and event information from other network-connected systems and devices. SIEMs measured themselves by their ability to ingest data from anything and everything on the network. But SIEM users have struggled to translate that event data into actionable intelligence. In many cases, because of the enormous quantity of event data and the inability to parse it quickly and efficiently, SIEM solutions became forensic tools, used after the fact to research what may have happened once a breach was detected. The old SIEM approach won't suffice in the future state.

Although many organizations report struggling with the complexity and cost of SIEM solutions, the SIEM market continues to expand. This is because the need for visibility has only grown more urgent with increasing regulations and more aggressive and sophisticated attack techniques. But visibility alone isn't enough; traditional SIEM approaches can't deliver it at the scale required. There simply aren't enough hands on deck to rely on manual processes for investigating event data or identifying on-going attacks.

The technologies that have exacerbated the
problem can also be used to address it

Here's the good news: The technologies that have exacerbated the problem can also be used to address it. On-premises SIEM solutions based on appliance technology may not have the reach required to address today's IT landscape. But, an integrated SIEM+UEBA designed from the ground up to run as a cloud service and to address the massively distributed hybrid cloud environment can leverage technologies like machine learning and threat intelligence to provide the visibility and intelligence that is so urgently needed.

Machine Learning (ML) mitigates the complexity of understanding what's actually happening and of sifting through massive amounts of activity that may otherwise appear to humans as normal. Modern attacks leverage distributed compute power and ML-based intelligence. So, countering those attacks requires a security solution with equal amounts of intelligence and compute power. As Larry Ellison recently said, "It can't be our people versus their computers. We're going to lose that war. It's got to be our computers versus their computers."

But to effectively secure the future state, you need more than a SIEM designed for cloud. Here are a few other innovations that we should demand from our security platform:

  • Application Topology Awareness: Detect multi-tier application attacks and lateral movement indicators. Alert application owners not server administrators.
  • Threat Stage Awareness: Map potential and in-progress threats to well understood attack stages to provide better contextual data on how to respond. See developing threats before they happen.
  • Data-Deep Visibility: Detect data access anomalies for any user, database, or application.
  • Broad Data Capture: Don't rely solely on security logs. Leverage operational logs, threat feeds, embedded reputation data, and more.
  • User Attribution: Report the user's identity, even when user context is missing from an event, via composite identity awareness and rich user baselines.
  • Configuration Change Awareness: Inject configuration drift context into threat detection.
  • Orchestration: Respond to threats immediately and with precision via REST, scripts, or 3rd party automation frameworks (see the sketch after this list).
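
To make the orchestration item concrete, here's a hypothetical sketch of an automated response hook: a detection fires, and a REST call revokes the user's sessions. The endpoint, payload shape, and token handling are placeholders, not a real product API.

```typescript
// Sketch only: automated response via a REST call (Node-flavored).
interface ThreatAlert {
  userId: string;
  stage: string; // e.g., "lateral-movement"
  riskScore: number;
}

async function respondToThreat(alert: ThreatAlert): Promise<void> {
  if (alert.riskScore < 80) return; // only automate high-confidence cases

  // e.g., suspend the user's sessions via an identity service's REST API
  // (hypothetical endpoint; token sourced from the environment here)
  await fetch("https://identity.example.com/api/sessions/revoke", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.API_TOKEN}`,
    },
    body: JSON.stringify({ userId: alert.userId, reason: alert.stage }),
  });
}
```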

Obviously, we're writing about this for a reason. These features are built into Oracle's Security Monitoring and Analytics service (SMA). When we say that our SIEM was designed from the ground up for cloud, we're not just talking about the product architecture. We're talking about its features and functionality. It was designed to address the complexity and peril of distributed cloud environments. It was designed to secure the future state; to be the new rules for the new world.

SMA is built on Oracle’s unified platform for future-state security that also includes Identity, CASB, and Configuration Compliance. It was built 100% in the cloud to address the security needs of hybrid, multi-cloud environments. Traditional SIEMs lack Identity, CASB, and Configuration Compliance functions. And they typically only layer UEBA on top of their legacy SIEM architecture. They lack advanced features like data-deep visibility, user attribution, orchestration, and awareness of threat stages and application topology. Leveraging these innovations, Oracle's approach enables shorter investigations and faster response times while accommodating all the complexity of the future state.

Oracle simplifies management and
security for the future state.

And, to top it off, Oracle's security services are built on Oracle Management Cloud which, in addition to security, provides a single pane of glass for IT monitoring, management, and analytics. Oracle simplifies management and security for the future state, reducing cost and effort, and providing richer intelligence across increasingly complex environments.

Learn more about how Oracle is addressing these security concerns and incorporating machine learning into adaptive intelligence by reading our whitepaper, "Machine Learning-Based Adaptive Intelligence: The Future of Cybersecurity."

Monday, September 25

Hyperbole in Breach Reporting

While reading the news this morning about yet another successful data breach, I couldn't help but wonder if the hyperbole used in reporting about data breaches is stifling our ability to educate key stakeholders on what they really need to know.

Today's example is about a firm that many rely on for security strategy, planning, and execution. The article I read stated that the firm was "targeted by a sophisticated hack" but later explained that the attacker compromised a privileged account that provided unrestricted "access to all areas". And, according to sources, the account only required a basic password with no two-step or multi-factor authentication. That doesn't sound too sophisticated, does it? Maybe they brute-forced it, or maybe they just guessed the password (or found it written down in an office).

It reminded me of an attack on a security vendor back in 2011. As I recall, there was a lot of talk of the sophistication and complexity of the attack. It was called an Advanced Persistent Threat (and maybe some aspects of it were advanced). But, when the facts came out, an employee had simply opened an email attachment that introduced malware into the environment - again, not overly sophisticated in terms of what we imagine a hack to be.

The quantity, availability, and effectiveness of attack techniques are enough to make anyone uncomfortable with their security posture. I previously wrote about a German company that, in a breach response, wrote that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." CISOs are being told that they should expect to be breached. The only questions are about when and how to respond. It makes you feel like there's no hope; like there's no point in trying.

However, if you look at the two examples above that were described as highly sophisticated, they may have been avoided with simple techniques such as employee education, malware detection, and multi-factor authentication. I don't mean to over-simplify. I'm not saying it's all easy or that these companies are at fault or negligent. I'm just calling for less hyperbole in the reporting. Call out the techniques that help companies avoid similar attacks. Don't describe an attack as overly sophisticated if it's not. It makes people feel even more helpless when, perhaps, there are some simple steps that can be taken to reduce the attack surface.

I'd also advocate for more transparency from those who are attacked. Companies shouldn't feel like they have to make things sound more complicated or sophisticated than they are. There's now a growing history of reputable companies (including in the security industry) who have been breached. If you're breached, you're in good company. Let's talk in simple terms about the attacks that happen in the real world. An "open kimono" approach will be more effective at educating others in prevention. And again, less hyperbole - we don't need to play on emotion here. Everyone is scared enough. We know the harsh reality of what we (as security professionals) are facing. So, let's strive to better understand the real attack surface and how to prioritize our efforts to reduce the likelihood of a breach.

Wednesday, September 20

Encryption would NOT have saved Equifax

I read a few articles this week suggesting that the big question for Equifax is whether or not their data was encrypted. The State of Massachusetts, speaking about the lawsuit it filed, said that Equifax "didn't put in safeguards like encryption that would have protected the data." Unfortunately, encryption, as it's most often used in these scenarios, would not have actually prevented the exposure of this data. This breach will have an enormous impact, so we should be careful to get the facts right and provide as much education as possible to lawmakers and really to anyone else affected.

We know that the attack took advantage of a flaw in Apache Struts (that should have been patched). Struts is a framework for building applications. It lives at the application tier. The data, obviously, resides at the data tier. Once the application was compromised, it really doesn't matter if the data was encrypted because the application is allowed to access (and therefore to decrypt) the data.

I won't get into all the various encryption techniques that are possible but there are two common types of data encryption for these types of applications. There's encryption of data in motion so that nobody can eavesdrop on the conversation as data moves between tiers or travels to the end users. And there's encryption of data at rest that protects data as it's stored on disk so that nobody can pick up the physical disk (or the data file, depending on how the encryption is applied) and access the data. Once the application is authenticated against the database and runs a query against the data, it is able to access, view, and act upon the data even if the data was encrypted while at rest.
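
Here's a small sketch (using Node's built-in crypto module) of why at-rest encryption doesn't stop an application-tier compromise: the application legitimately holds the key, so anything that can act as the application can decrypt. The key handling and record format are simplified placeholders.

```typescript
// Sketch only: at-rest encryption is transparent to the application tier.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(32); // in practice, fetched from a key store
const iv = randomBytes(12);  // one record, one IV in this simplified sketch

function encryptAtRest(plaintext: string) {
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { ciphertext, tag: cipher.getAuthTag() };
}

function appReadsRecord(stored: { ciphertext: Buffer; tag: Buffer }): string {
  // The application tier decrypts transparently -- and so does any
  // attacker who has compromised the application tier.
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(stored.tag);
  return Buffer.concat([decipher.update(stored.ciphertext), decipher.final()]).toString("utf8");
}

const record = encryptAtRest("ssn=***-**-6789"); // placeholder sensitive value
console.log(appReadsRecord(record)); // plaintext, despite encryption at rest
```

The point isn't that at-rest encryption is useless (it still defeats stolen disks and copied data files); it's that the protection is transparent to anything operating as the application.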

Note that there is a commonly applied technique that applies at-rest encryption at the application tier. I don't want to confuse the conversation with too much detail, but it usually involves inserting some code into the application to encrypt/decrypt. I suspect that if the application is compromised, then app-tier encryption would have been equally unhelpful.

The bottom line here is that information security requires a broad, layered defense strategy. There are numerous types of attacks. A strong security program addresses as many potential attack vectors as possible within reason. (My use of "within reason" is a whole other conversation. Security strategies should evaluate risk in terms of likelihood of an attack and the damage that could be caused.) I already wrote about a layered approach to data protection within the database tier. But that same approach of layering security applies to application security (and information security in general). You have to govern the access controls, ensure strong enough authentication, understand user context, identify anomalous behavior, encrypt data, and, of course, patch your software and maintain your infrastructure. This isn't a scientific analysis. I'm just saying that encryption isn't a panacea and probably wouldn't have helped at all in this case.

Equifax says that their "security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." Clearly, humans need to rely on technology to help identify what systems exist in the environment, what software is installed, which versions, etc. I have no idea what tools Equifax might have used to scan their environment. Maybe the tool failed to find this install. But their use of "at that time" bothers me too. We can't rely on point-in-time assessments. We need continuous evaluations on a never-ending cycle. We need better intelligence around our IT infrastructures. And as more workloads move to cloud, we need a unified approach to IT configuration compliance that works across company data centers and multi-cloud environments.
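
As a sketch of what "continuous, not point-in-time" might look like in practice, here's a toy assessment loop that re-checks every asset against a set of compliance checks on a schedule. The asset shape, the check, and the vulnerable-version list are all illustrative.

```typescript
// Sketch only: continuous configuration assessment on a loop.
interface Asset {
  id: string;
  software: Record<string, string>; // package name -> installed version
}

type ComplianceCheck = (a: Asset) => { pass: boolean; detail: string };

// Illustrative check: flag Struts versions on a known-vulnerable list
// (CVE-2017-5638 was the flaw exploited in the Equifax breach).
const VULNERABLE_STRUTS = new Set(["2.3.31", "2.5.10"]); // placeholder list
const checks: ComplianceCheck[] = [
  (a) => ({
    pass: !VULNERABLE_STRUTS.has(a.software["struts"] ?? ""),
    detail: "Apache Struts is not a known-vulnerable version",
  }),
];

async function continuousAssessment(fetchAssets: () => Promise<Asset[]>) {
  // Never-ending cycle: assess, report, sleep, repeat.
  while (true) {
    for (const asset of await fetchAssets()) {
      for (const check of checks) {
        const result = check(asset);
        if (!result.pass) console.warn(`${asset.id}: FAIL - ${result.detail}`);
      }
    }
    await new Promise((resolve) => setTimeout(resolve, 60 * 60 * 1000)); // hourly
  }
}
```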

100% protection may be impossible. The best we can do is weigh the risks and apply as much security as possible to mitigate those risks. We should also all be moving to a continuous compliance model where we are actively assessing and reassessing security in real time. And again... layer, layer, layer.