Monday, October 15

Improve Security by Thinking Beyond the Security Realm

It used to be that dairy farmers relied on whatever was growing in the area to feed their cattle. They filled the trough with vegetation grown right on the farm. They probably relied heavily on whatever grasses grew naturally and perhaps added some high-value grains like barley and corn. Today, with better technology and knowledge, dairy farmers work with nutritionists to develop a personalized concentrate of carbohydrates, proteins, fats, minerals, and vitamins that gets added to the natural feed. The result is much healthier cattle and more predictable growth.

We’re going through a similar enlightenment in the security space. To get the best results, we need to fill the trough that our Machine Learning will eat from with high-value data feeds from our existing security products (whatever happens to be growing in the area) but also (and more precisely for this discussion) from beyond what we typically consider security products to be.

In the post, I make the case that “we shouldn’t limit our security data to what has traditionally been in-scope for security discussions” and explain how understanding Application Topology (and feeding that knowledge into the security trough) can help reduce risk and improve security.

Here's an excerpt:

We’re all guilty of thinking myopically at times. It’s easy to get caught up thinking about the objects in our foreground and to lose our sense of depth. We forget about the environment and the context and we focus too narrowly on some singular subject. It’s not always a bad thing. Often, we need to focus very specifically to take on challenges that would otherwise be too big to address. For example, security professionals spend a lot of time thinking about specific attack vectors (or security product categories). And each one perhaps necessarily requires a deep level of focus and expertise. I’m not arguing against that. But I’d like to suggest that someone on the team should expand their focus to think about the broader environment in which cyberattacks and security breaches take place. When you do, I suspect that you’ll find that there are data points from outside of the typical security realm that, if leveraged correctly, will dramatically improve your ability to respond to threats within that realm.

I posted recently about the importance of convergence (of security functionality). I noted that “Security solutions are evolving toward cloud, toward built-in intelligence via Machine Learning, and toward unified, integrated-by-design platforms.” I went on to suggest that forward-looking security platforms are autonomous and operate with minimal human intervention. I believe that’s where we’re heading. But to better enable machine learning and autonomous security, we need to feed as much relevant data as possible into the system. We need to feed the machine from an expanding trough of data. And with Internet scale as an enabler, we shouldn’t limit our security data to what has traditionally been in-scope for security discussions.

As an example, I’m going to talk about how understanding Application Topology (and feeding that knowledge into the security trough) can help reduce risk and improve your security posture.

What is Application Topology?

As you likely know, modern applications are typically architected into logical layers or tiers. With web and mobile applications, we’ve traditionally seen a presentation layer, an application or middleware tier, and a backend data tier. With serverless compute and cloud microservice architectures, an application’s workload may be even more widely distributed. It’s even common to see core application functions being outsourced to third parties via the use of APIs and open standards. Application Topology describes all the various parts of an application and how they’re interrelated. Understanding the App Topology means that you can track and correlate activity across components that may reside in several different clouds.
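
One way to picture this is as a dependency graph. Here’s a minimal, hypothetical sketch (the component names, the `AppTopology` class, and the event-attribution methods are all invented for illustration, not any real product’s API) showing how a topology model lets you attribute activity on any component back to its application and reason about blast radius:

```python
# Hypothetical sketch: model an application topology as a graph of
# components so activity can be correlated across tiers and clouds.
# All component names and method shapes are illustrative assumptions.

from collections import defaultdict

class AppTopology:
    def __init__(self):
        self.edges = defaultdict(set)   # component -> downstream dependencies
        self.owner = {}                 # component -> owning application

    def add_component(self, name, app):
        self.owner[name] = app

    def add_dependency(self, upstream, downstream):
        self.edges[upstream].add(downstream)

    def app_for(self, component):
        """Attribute an event on any component back to its application."""
        return self.owner.get(component)

    def downstream(self, component, seen=None):
        """All components reachable from `component` (possible blast radius)."""
        seen = seen if seen is not None else set()
        for dep in self.edges[component]:
            if dep not in seen:
                seen.add(dep)
                self.downstream(dep, seen)
        return seen

# A toy multi-cloud application: two front ends, shared middleware, one database.
topo = AppTopology()
for name in ["web-ui", "mobile-api", "order-svc", "pricing-db"]:
    topo.add_component(name, "package-delivery")
topo.add_dependency("web-ui", "order-svc")
topo.add_dependency("mobile-api", "order-svc")
topo.add_dependency("order-svc", "pricing-db")
```

With even this much structure, an alert on `pricing-db` can be reported against the `package-delivery` application as a whole, and a compromise of `web-ui` can be scoped to everything reachable downstream.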

How does Application Topology impact security?

Consider an application that serves a package delivery service. It has web, mobile, and API interfaces that serve business line owners, delivery drivers, corporate accounts, and consumer customers. Its core application logic runs on one popular cloud platform while the data storage backend runs on another. The application leverages an identity cloud service that uses several authentication techniques for its several audiences. It calls out to a third-party service that feeds traffic & weather information and interacts with other internal applications and databases that provide data points such as current pricing based on regional gas prices, capacity planning, and more. Think about what it means to secure an application like this.

Many popular security tools focus only on one layer or one component. A tool may scan the web application or the mobile app but probably not both. An app like this might have a few different security products that focus on securing APIs and a few others that focus on securing databases. Even if all components feed their security events into a common stream, there’s unlikely to be a unified view of the risk posture for the application as a whole. None of the security tools are likely to understand the full application topology. If the app owner asked for a security report for the entire application, would you be able to provide it? How many different security products would you need to leverage? Would you be able to quantify the impact of a single security configuration issue on the application as a whole?

If a security solution fully understands the application topology and incorporates that knowledge, here are a few of the benefits:

  • You can generate a holistic report for the app owner that covers all components, whether on-premises, in the cloud, or provided by third parties.
  • You can monitor user activity at one tier and understand how it impacts your risk posture across other tiers.
  • You can monitor for security configuration changes across all components via a unified service and automatically adjust risk scores accordingly.

In other words, a deep understanding of the IT infrastructure underneath the application yields a more robust understanding of security issues and an increased ability to respond quickly and automatically.


Challenge yourself to expand the scope of which data points might be useful for improving security. Are security appliance event logs and threat feeds enough? As we enter an era dominated by AI and Machine Learning, we need to add as much high-value data as possible into the security trough. ML performs better as it incorporates more information. And as threats grow increasingly sophisticated, Larry Ellison’s warning applies: “It can’t be our people versus their computers. We’re going to lose that war. It’s got to be our computers versus their computers.” We must rely on Machine Learning, and we have to feed it with as much intelligence from as many sources as possible.

Tuesday, September 18

Convergence is the Key to Future-Proofing Security

I published a new article today on the Oracle Security blog that looks at the benefits of convergence in the security space as the IT landscape grows more disparate and distributed.

Security professionals have too many overlapping products under management, making it difficult to get quick and complete answers across hybrid, distributed environments and to fully automate detection and response. There is too much confusion about where to get answers, not enough talent to cover the skills requirements, and significant hesitation to put the right solutions in place because so much has already been invested.

Here's an excerpt:

The whole of your security portfolio should provide significantly more value than the sum of its parts.

The challenge facing security professionals seems to grow bigger and more complex by the hour. New threats and risk factors are constantly emerging while the IT landscape continuously evolves. At times, it feels like we’re patching holes on a moving target that’s endlessly shape-shifting. One of the major contributing factors to those feelings of chaos and disorder is the sheer quantity of security products that we rely on to cover our vast IT landscapes.

The Oracle and KPMG Cloud Threat Report 2018 found that cybersecurity professionals manage an average of 46 different security products. 7% of respondents reported being personally responsible for managing over 100 different products. 100 different security products! I don’t imagine that those folks can possibly have a complete understanding of what’s happening across 50 or 100 different security products or what value each of those products is contributing to reducing their risk. This quantity of products alone contributes to the overall challenge in several ways, including:

  • Product Overlap: Security products often have significant functional overlap. In an environment with several security products, it quickly becomes unclear which product will answer which questions. The result is wasted time and effort and longer delays getting critical answers. When addressing an ongoing attack or a breach, the speed of the response effort is critical. The longer it takes, the broader the damage will be.
  • Skills Shortage: Organizations spend too much time finding or developing talent across security products. It’s rare for security professionals to have the exact mix of skills and experience that an organization needs. And with an ongoing skills shortage, it’s difficult to retain top talent over long periods of time. Again, not having the right expertise in place means that you’re more likely to miss the signals of developing attacks or ongoing breaches and to respond more slowly to security events.
  • Delays in Addressing Gaps: Nobody likes wasted money or shelfware. When a gap is found in an organization’s security posture, security professionals are less likely to find and deploy the right solution if they have numerous other security solutions in place that may (or may not) fix the problem. Of course, without a complete understanding of where the limits are on each of those products, it could take months to sort through them and to formulate an approach. It’s the classic human response of freezing in indecision when there are too many factors to consider. When it comes to addressing information security issues, the last thing you want to do is freeze.

So, what can be done and how can we address the issue?

Here’s the good news: Security solutions are evolving toward cloud, toward built-in intelligence via Machine Learning, and toward unified, integrated-by-design platforms. This approach eliminates the issues of product overlap because each component is designed to leverage the others. It reduces the burden related to maintaining skills because fewer skills are needed and the system is more autonomous. And, it promotes immediate and automated response as opposed to indecision. While there may not be a single platform to replace all 50 or 100 of your disparate security products today, platforms are emerging that can address core security functions while simplifying ownership and providing open integration points to seamlessly share security intelligence across functions.

For example, you know that you need an identity and access component for addressing access management needs across numerous SaaS applications and IaaS services. And you need a Cloud Access Security Broker (CASB) to scan SaaS applications and cloud infrastructures for insecure configurations and to monitor user activity. But, for the most part, these functions are siloed today. One doesn’t talk to the other. But they can. And they should.

Understanding what a user is doing across cloud applications (visibility often provided by CASB) enables you to create a risk score for that user that can then be used by the Identity function to make decisions and take actions such as stepping up authentication, requesting approvals, initiating an access review, or denying access. Understanding that a target system’s configuration was modified recently or that it doesn’t conform to the organization’s security policies also increases risk. And there are numerous sources of additional risk data: identity, CASB, security configuration scanning, SIEM, UEBA, external threat feeds, session context, etc.
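
As a rough sketch of that flow, here’s a toy risk-scoring function feeding an access decision. To be clear, the signal names, weights, and thresholds below are invented for illustration; this is not any vendor’s actual scoring model or API:

```python
# Illustrative sketch: combine risk signals from several sources (CASB,
# configuration scanning, threat feeds, session context) into one score,
# then let the identity layer decide whether to allow, step up
# authentication, or deny. All names and numbers are assumptions.

RISK_WEIGHTS = {
    "casb_anomalous_activity": 40,   # unusual behavior across cloud apps
    "config_drift":            25,   # target system out of security policy
    "threat_feed_match":       20,   # source address on an external threat feed
    "impossible_travel":       30,   # inconsistent session context
}

def risk_score(signals):
    """Sum the weights of the observed signals, capped at 100."""
    return min(100, sum(RISK_WEIGHTS.get(s, 0) for s in signals))

def access_decision(signals):
    """Map the composite score to an identity-layer action."""
    score = risk_score(signals)
    if score >= 70:
        return "deny"
    if score >= 35:
        return "step_up_mfa"   # e.g. request a second factor or an approval
    return "allow"
```

The point of the sketch is the integration, not the arithmetic: a signal observed by one product (CASB noticing anomalous activity) changes the behavior of another (the identity function stepping up authentication).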

Forward-looking security platforms will leverage hybrid cloud architecture to address hybrid cloud environments. They’re autonomous systems that operate without relying on human maintenance, patching, and monitoring. They leverage risk intelligence from across the numerous available sources. And then they rationalize that data and use Machine Learning to generate better security intelligence and feed that improved intelligence back to the decision points. And they leverage built-in integration points and orchestration functionality to automate response when appropriate.

In other words, your security platform should serve as a central brain that doesn’t only import the various security data points but also makes sense of them without relying on human eyes to catch potential threats. And it adds intelligence, identifies patterns, recognizes anomalies, and responds appropriately and within seconds. This is much more advanced than the old SIEM model, which simply aggregates data from numerous sources and tries to raise alerts for humans to evaluate. This is a system that thinks for you and leverages advanced analytics to make decisions across those numerous disparate systems. It’s a cloud service so you don’t need to administer and manage it. You become a user; a consumer of its benefits rather than a caretaker. And the result is much more value and further reduced risk than you’d get from the parts alone.

Tuesday, January 30

New World, New Rules: Securing the Future State

I published an article today on the Oracle Cloud Security blog that takes a look at how approaches to information security must adapt to address the needs of the future state (of IT). For some organizations, it's really the current state. But, I like the term future state because it's inclusive of more than just cloud or hybrid cloud. It's the universe of Information Technology the way it will be in 5-10 years. It includes the changes in user behavior, infrastructure, IT buying, regulations, business evolution, consumerization, and many other factors that are all evolving simultaneously.

As we move toward that new world, our approach to security must adapt. Humans chasing down anomalies by searching through logs is an approach that will not scale and will not suffice. 

Here's an excerpt:

If you never change tactics, you lose
the moment the enemy changes theirs

While chasing down a domestic terrorist, FBI Agent Will Brody found himself in an unfamiliar and dangerous environment. (Brody is the protagonist in Marcus Sakey's 2017 novel Afterlife.) To survive in its perilous conditions, its residents commit to two simple rules: (1) pull your own weight and (2) only kill in self-defense. These rules have kept them safe from the obvious imminent threats around them for decades. But Brody sees a change happening in the environment that others don't yet see and warns his new community: "If you never change tactics, you lose the moment the enemy changes theirs." His mantra becomes "New World, New Rules." In other words, you must adapt to changing threats or face the consequences.

As Information Security professionals, we find ourselves in a similar situation. Our environment is transforming rapidly. The assets we're protecting today look very different than they did just a few years ago. In addition to owned data centers, our workloads are being spread across multiple cloud platforms and services. Users are more mobile than ever. And we don’t have control over the networks, devices, or applications where our data is being accessed. It’s a vastly distributed environment where there’s no single, connected, and controlled network. Line-of-Business managers purchase compute power and SaaS applications with minimal initial investment and no oversight. And end-users access company data via consumer-oriented services from their personal devices. It's grown increasingly difficult to tell where company data resides, who is using it, and ultimately where new risks are emerging. This transformation is ongoing, and the threats we’re facing are morphing and evolving to take advantage of the inherent lack of visibility.

Organizations are in varying stages of migration toward this future state of IT where we have massive distribution and where visibility is elusive. But we all seem to be moving in the same direction. So, we simply can't live by the same old rules. We can’t rely on old security techniques. New World, New Rules.

The old SIEM approach won't suffice
in the future state.

Traditionally, security professionals have relied heavily on SIEM (Security Information and Event Management) solutions to track activity in their environments. The SIEMs resided somewhere on the network and collected logs and event information from other network-connected systems and devices. SIEMs measured themselves by their ability to ingest data from anything and everything on the network. But SIEM users have struggled to translate that event data into actionable intelligence. In many cases, because of the enormous quantity of event data and the inability to parse it quickly and efficiently, SIEM solutions became forensic tools, used after the fact to investigate what may have happened once a breach was detected. The old SIEM approach won't suffice in the future state.

Although many organizations report struggling with the complexity and cost of SIEM solutions, the SIEM market continues to expand. This is because the need for visibility has only grown more urgent with increasing regulations and more aggressive and sophisticated attack techniques. But collecting the data is only half the battle. Traditional SIEM approaches aren't enough. There simply aren't enough hands on deck to rely on manual processes for investigating event data or identifying ongoing attacks.

The technologies that have exacerbated the
problem can also be used to address it

Here's the good news: The technologies that have exacerbated the problem can also be used to address it. On-premises SIEM solutions based on appliance technology may not have the reach required to address today's IT landscape. But, an integrated SIEM+UEBA designed from the ground up to run as a cloud service and to address the massively distributed hybrid cloud environment can leverage technologies like machine learning and threat intelligence to provide the visibility and intelligence that is so urgently needed.

Machine Learning (ML) mitigates the complexity of understanding what's actually happening and of sifting through massive amounts of activity that may otherwise appear to humans as normal. Modern attacks leverage distributed compute power and ML-based intelligence. So, countering those attacks requires a security solution with equal amounts of intelligence and compute power. As Larry Ellison recently said, "It can't be our people versus their computers. We're going to lose that war. It's got to be our computers versus their computers."
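
To make the baseline idea concrete, here is a deliberately tiny illustration (real systems use far richer models and features; the threshold and the "daily download count" framing are assumptions for the example): flag activity that deviates sharply from a user's historical baseline, the kind of pattern that looks normal to a human scanning raw logs.

```python
# Toy illustration of baseline-based anomaly detection: flag a value that
# is far outside a user's historical pattern (e.g. daily file downloads).
# The 3-sigma threshold is an assumption, not a recommendation.

from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Return True if `today` is more than `threshold` standard
    deviations from the mean of `history`."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# A user who normally downloads ~10 files a day suddenly downloads 500.
baseline = [10, 12, 11, 9, 10, 11, 12, 10]
```

Each individual download event looks routine; only the deviation from the learned baseline reveals the problem, which is exactly the kind of pattern-over-volume judgment that doesn’t scale when left to humans.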

But to effectively secure the future state, you need more than a SIEM designed for cloud. Here are a few other innovations that we should demand from our security platform:

  • Application Topology Awareness: Detect multi-tier application attacks and lateral movement indicators. Alert application owners not server administrators.
  • Threat Stage Awareness: Map potential and in-progress threats to well-understood attack stages to provide better contextual data on how to respond. Spot developing threats before they cause damage.
  • Data-Deep Visibility: Detect data access anomalies for any user, database or application.
  • Broad Data Capture: Don't rely solely on security logs. Leverage operational logs, threat feeds, embedded reputation data, and more.
  • User Attribution: Report the identity even if the user context is missing via composite identity awareness and rich user baselines.
  • Configuration Change Awareness: Inject configuration drift context into threat detection.
  • Orchestration: Respond to threats immediately and with precision via REST, scripts, or 3rd party automation frameworks.
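
The orchestration item above can be sketched as a playbook that routes each detected threat type to an automated response, with a human escalation fallback. The alert shape, handler names, and threat types below are hypothetical, invented purely to illustrate the pattern:

```python
# Hedged sketch of orchestrated response: a registry maps threat types to
# automated actions (in practice these would call REST endpoints, run
# scripts, or invoke a third-party automation framework). All names here
# are illustrative assumptions.

def disable_account(alert):
    # Placeholder for an identity-system API call.
    return f"disabled account {alert['user']}"

def isolate_host(alert):
    # Placeholder for a network/endpoint isolation action.
    return f"isolated host {alert['host']}"

RESPONSE_PLAYBOOK = {
    "credential_compromise": disable_account,
    "lateral_movement":      isolate_host,
}

def respond(alert):
    """Run the playbook action for the alert type, else escalate to a human."""
    handler = RESPONSE_PLAYBOOK.get(alert["type"])
    if handler is None:
        return f"escalated {alert['type']} for manual review"
    return handler(alert)
```

The registry pattern keeps the precision the bullet calls for: known threat types get an immediate, scoped response, while anything unrecognized falls through to manual review rather than triggering a blind automated action.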

Obviously, we're writing about this for a reason. These features are built into Oracle's Security Monitoring and Analytics service (SMA). When we say that our SIEM was designed from the ground up for cloud, we're not just talking about the product architecture. We're talking about its features and functionality. It was designed to address the complexity and peril of distributed cloud environments. It was designed to secure the future state; to be the new rules for the new world.

SMA is built on Oracle’s unified platform for future-state security that also includes Identity, CASB, and Configuration Compliance. It was built 100% in the cloud to address the security needs of hybrid, multi-cloud environments. Traditional SIEMs lack Identity, CASB, and Configuration Compliance functions. And they typically only layer UEBA on top of their legacy SIEM architecture. They lack advanced features like data-deep visibility, user attribution, orchestration, and awareness of threat stages and application topology. Leveraging these innovations, Oracle's approach enables shorter investigations and faster response times while accommodating all the complexity of the future state.

Oracle simplifies management and
security for the future state.

And, to top it off, Oracle's security services are built on Oracle Management Cloud which, in addition to security, provides a single pane of glass for IT monitoring, management, and analytics. Oracle simplifies management and security for the future state, reducing cost and effort, and providing richer intelligence across increasingly complex environments.

Learn more about how Oracle is addressing these security concerns and incorporating machine learning into adaptive intelligence by reading our whitepaper, "Machine Learning-Based Adaptive Intelligence: The Future of Cybersecurity."