Monday, September 25

Hyperbole in Breach Reporting

While reading the news this morning about yet another successful data breach, I couldn't help but wonder if the hyperbole used in breach reporting is stifling our ability to educate key stakeholders on what they really need to know.

Today's example is about a firm that many rely on for security strategy, planning, and execution. The article I read stated that the firm was "targeted by a sophisticated hack" but later explained that the attacker compromised a privileged account that provided unrestricted "access to all areas". And, according to sources, the account required only a basic password with no two-step or multi-factor authentication. That doesn't sound very sophisticated, does it? Maybe they brute-forced it, or maybe they just guessed the password (or found it written down in an office?).

It reminded me of an attack on a security vendor back in 2011. As I recall, there was a lot of talk about the sophistication and complexity of the attack. It was called an Advanced Persistent Threat (and maybe some aspects of it were advanced). But when the facts came out, an employee had simply opened an email attachment that introduced malware into the environment - again, not overly sophisticated in terms of what we imagine a hack to be.

The quantity, availability, and effectiveness of attack techniques are enough to make anyone uncomfortable with their security posture. I previously wrote about a German company that, in a breach response, wrote that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." CISOs are being told that they should expect to be breached; the only questions are when and how to respond. It makes you feel like there's no hope; like there's no point in trying.

However, if you look at the two examples above that were described as highly sophisticated, they might have been avoided with simple techniques such as employee education, malware detection, and multi-factor authentication. I don't mean to over-simplify. I'm not saying it's all easy or that these companies are at fault or negligent. I'm just calling for less hyperbole in the reporting. Call out the techniques that help companies avoid similar attacks. Don't describe an attack as overly sophisticated if it's not. It makes people feel even more helpless when, perhaps, there are some simple steps that can be taken to reduce the attack surface.

I'd also advocate for more transparency from those who are attacked. Companies shouldn't feel like they have to make things sound more complicated or sophisticated than they are. There's now a growing history of reputable companies (including in the security industry) that have been breached. If you're breached, you're in good company. Let's talk in simple terms about the attacks that happen in the real world. An "open kimono" approach will be more effective at educating others in prevention. And again, less hyperbole - we don't need to play on emotions here. Everyone is scared enough. We know the harsh reality of what we (as security professionals) are facing. So, let's strive to better understand the real attack surface and how to prioritize our efforts to reduce the likelihood of a breach.

Wednesday, September 20

Encryption would NOT have saved Equifax

I read a few articles this week suggesting that the big question for Equifax is whether or not their data was encrypted. The State of Massachusetts, speaking about the lawsuit it filed, said that Equifax "didn't put in safeguards like encryption that would have protected the data." Unfortunately, encryption, as it's most often used in these scenarios, would not have prevented the exposure of this data. This breach will have an enormous impact, so we should be careful to get the facts right and provide as much education as possible to lawmakers and to anyone else affected.

We know that the attack took advantage of a flaw in Apache Struts (that should have been patched). Struts is a framework for building applications. It lives at the application tier. The data, obviously, resides at the data tier. Once the application was compromised, it really doesn't matter if the data was encrypted because the application is allowed to access (and therefore to decrypt) the data.

I won't get into all the various encryption techniques that are possible, but there are two common types of data encryption for these types of applications. There's encryption of data in motion, so that nobody can eavesdrop on the conversation as data moves between tiers or travels to the end users. And there's encryption of data at rest, which protects data as it's stored on disk so that nobody can pick up the physical disk (or the data file, depending on how the encryption is applied) and access the data. Once the application is authenticated against the database and runs a query, it can access, view, and act upon the data even if the data is encrypted at rest.

Note that there is a commonly applied technique that applies at-rest encryption at the application tier. I don't want to confuse the conversation with too much detail, but it usually involves inserting some code into the application to encrypt and decrypt the data. I suspect that if the application is compromised, app-tier encryption is equally unhelpful.
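
To make that concrete, here's a minimal sketch of my own (a toy illustration, not Equifax's actual architecture) in Python using the cryptography package. The key retrieval and data-access function are hypothetical stand-ins, but they show the core problem: the application must be able to decrypt in order to function, so anyone who controls the application can decrypt too.

```python
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# In a real system the key would come from a key store or HSM, but the
# application still has to fetch it in order to serve legitimate queries.
app_key = Fernet.generate_key()
cipher = Fernet(app_key)

# The sensitive field is unreadable at rest...
stored_ssn = cipher.encrypt(b"078-05-1120")  # a well-known fake SSN

def get_customer_ssn() -> bytes:
    """The app's normal data-access path: decrypts for any caller inside the app."""
    return cipher.decrypt(stored_ssn)

# ...but an attacker executing code inside the application (e.g., via the
# Struts flaw) just invokes the same path and receives plaintext.
print(get_customer_ssn())  # b'078-05-1120'
```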

The bottom line here is that information security requires a broad, layered defense strategy. There are numerous types of attacks. A strong security program addresses as many potential attack vectors as possible within reason. (My use of "within reason" is a whole other conversation. Security strategies should evaluate risk in terms of likelihood of an attack and the damage that could be caused.) I already wrote about a layered approach to data protection within the database tier. But that same approach of layering security applies to application security (and information security in general). You have to govern the access controls, ensure strong enough authentication, understand user context, identify anomalous behavior, encrypt data, and, of course, patch your software and maintain your infrastructure. This isn't a scientific analysis. I'm just saying that encryption isn't a panacea and probably wouldn't have helped at all in this case.

Equifax says that their "security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure." Clearly, humans need to rely on technology to help identify what systems exist in the environment, what software is installed, which versions, etc. I have no idea what tools Equifax might have used to scan their environment. Maybe the tool failed to find this installation. But their use of "at that time" bothers me too. We can't rely on point-in-time assessments. We need continuous evaluations on a never-ending cycle. We need better intelligence around our IT infrastructures. And as more workloads move to the cloud, we need a unified approach to IT configuration compliance that works across company data centers and multi-cloud environments.
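
As a toy illustration of what continuous evaluation might look like, here's a Python sketch that checks a hypothetical software inventory against the Struts versions affected by CVE-2017-5638, the flaw reportedly exploited in the breach. A real program would pull the inventory from asset-management tooling and run on a schedule, not once.

```python
# Toy continuous-compliance check: flag inventory entries running a Struts
# build in the ranges affected by CVE-2017-5638 (2.3.5-2.3.31, 2.5-2.5.10).
# The inventory below is hypothetical.

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

VULNERABLE_RANGES = [
    (parse("2.3.5"), parse("2.3.31")),
    (parse("2.5.0"), parse("2.5.10")),
]

def is_vulnerable(version: str) -> bool:
    v = parse(version)
    return any(low <= v <= high for low, high in VULNERABLE_RANGES)

inventory = {  # hypothetical host -> installed Struts version
    "app01.example.com": "2.3.30",
    "app02.example.com": "2.5.12",
}

for host, version in inventory.items():
    status = "PATCH NOW" if is_vulnerable(version) else "ok"
    print(f"{host}: struts {version} -> {status}")
```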

100% protection may be impossible. The best we can do is weigh the risks and apply as much security as possible to mitigate those risks. We should also all be moving to a continuous compliance model where we are actively assessing and reassessing security in real time. And again... layer, layer, layer.

Monday, April 10

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented them. Or the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when, in reality, it was a simple phishing attack in which credentials were handed over.

In one recent case, a third-party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind, each of which could have prevented or lessened the damage in this scenario. I'd like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long since removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.
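
For illustration, here's a minimal Python sketch of the masking idea against a hypothetical export. Real masking tools substitute format-preserving fakes at scale and preserve referential integrity; this toy just shows the shape of the transformation: real values out, realistic fakes in.

```python
import random

def mask_row(row: dict, rng: random.Random) -> dict:
    """Replace sensitive fields with realistic fakes; keep non-sensitive fields."""
    masked = dict(row)
    masked["name"] = f"User {rng.randint(10000, 99999)}"
    # SSNs beginning with 9 are never issued, so these fakes can't collide
    # with real numbers.
    masked["ssn"] = f"9{rng.randint(10, 99)}-{rng.randint(10, 99)}-{rng.randint(1000, 9999)}"
    return masked

rng = random.Random()  # deliberately unseeded: fakes aren't derivable from the real data
export = [
    {"name": "Jane Doe", "ssn": "078-05-1120", "plan": "gold"},
    {"name": "John Roe", "ssn": "219-09-9999", "plan": "basic"},
]
print([mask_row(row, rng) for row in export])
```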

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Access Security Brokers (CASBs) can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don't accidentally misconfigure servers or miss security settings in the course of daily administration.
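
As a small taste of what such monitoring might look like, here's a hedged sketch using boto3 (pip install boto3) that flags S3 buckets whose ACLs grant access to everyone. It assumes AWS credentials are already configured and only prints its findings; a CASB would run checks like this continuously and auto-remediate rather than just report.

```python
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") in PUBLIC_GROUPS
    ]
    if public_grants:
        # A real CASB would auto-remediate; this sketch just reports the drift.
        print(f"DRIFT: bucket {name} grants {public_grants} to everyone")
```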

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to mount an advanced crypto attack, which would take enormous resources and time and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of a lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the protection stays with the file: the data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved or exported.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I'm not aware of similar capabilities outside of Oracle Database, but Oracle Database Vault would also have prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforcing Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database, because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control against the accidental access that may occur in the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and passwords, for example, shouldn't be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn't have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via a phishing attack).
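
For the password piece specifically, here's a minimal sketch using only Python's standard library: salted PBKDF2 hashing with constant-time verification. The iteration count is illustrative; in practice you'd follow current guidance or reach for a dedicated scheme like bcrypt or argon2.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune per current guidance

def hash_password(password: str):
    """Return (salt, digest); store both. The salt is not a secret."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("letmein", salt, stored))                       # False
```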

Maybe the vendor didn't follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But that's why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn't require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked/replaced before it was even made available. And if it was a production DB, it should have had database encryption and access controls that stay with the data during export or when the file is moved off an encrypted volume. The data should have been protected before the vendor's analyst ever got his/her hands on it. Oracle Database Vault would have prevented even a DBA-type user from being able to access the sensitive user data that was exposed here. These are not new technologies; they've been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don't agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and offer layered security controls, providing more security than their own data centers. It's about more than selecting the right Cloud Service Provider. You also need to choose the right service, one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it's easy and low cost, ease of use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.). Specific techniques or advantages mentioned may not apply to other vendors' similar solutions.

Friday, February 19

Next Generation IDaaS: Moving From Tactical to Strategic

Today, I posted a blog entry to the Oracle Identity Management blog titled Next Generation IDaaS: Moving From Tactical to Strategic. In the post, I examine the evolution of IDaaS and look toward the next generation of Enterprise Identity and Access Management. I believe that the adoption of IDaaS by enterprises has typically been a reactive, tactical response to the quick emergence of SaaS (and the associated loss of control). The next generation of IDaaS will be more strategic and carefully planned to better meet evolving enterprise requirements.

Note that I'm not talking about the technology. Nor am I talking about consumer use-cases or developer adoption of outsourced authentication. In this post, I'm looking at IDaaS from the perspective of enterprise IAM and the on-going Digital Transformation.

Here are a few quotes that capture the essence:
First generation Identity as a Service (IDaaS) was a fashion statement that’s on its way out. It was cool while it lasted. And it capitalized on some really important business needs. But it attempted to apply a tactical fix to a strategic problem.

Security functions are coalescing into fewer solutions that cover more ground with less management overhead. Digital Enterprises want more functionality from fewer solutions.

The next generation of IAM is engineered specifically for Digital Business providing a holistic approach that operates in multiple modes. It adapts to user demands with full awareness of the value of the resources being accessed and the context in which the user is operating. Moving forward, you won’t need different IAM products to address different user populations (like privileged users or partners) and you won’t stand up siloed IDaaS solutions to address subsets of target applications (like SaaS).

Next generation IDaaS builds on all the promises of cloud computing but positions itself strategically as a component of a broader, more holistic IAM strategy. Next-gen IDaaS fully supports the most demanding Digital Business requirements. It’s not a stop-gap and it’s not a fashion statement. It’s an approach enabling a new generation of businesses that will take us all further than we could have imagined.
Continue Reading

Thursday, October 30

A Few Thoughts on Privacy in the Age of Social Media

Everyone already knows there are privacy issues related to social media and new technologies. Non-tech-oriented friends and family members often ask me questions about whether they should avoid Facebook Messenger or flashlight apps. Or whether it's OK to use credit cards online in spite of recent breach headlines. The mainstream media writes articles about leaked personal photos and the Snappening. So, it's out there. We all know. We know there are bad people out there who will attempt to hack their way into our personal data. But, that's only a small part of the story.

For those who haven't quite realized it, there's no such thing as a free service. Businesses exist to generate returns on investment capital. Some have said about Social Media, "if you can't tell what the product is, it's probably you." To be fair, most of us are aware that Facebook and Twitter will monetize via advertising of some kind. And yes, it may be personalized based on what we like or retweet. But, I'm not sure we fully understand the extent to which this personal, potentially sensitive, information is being productized.

Here are a few examples of what I mean:

Advanced Profiling

I recently viewed a product marketing video targeted at communications service providers. It describes the massive adoption of mobile devices and broadband connections, suggesting that by next year there will be 7.7 billion mobile phones in use with 15 billion connections globally. And that "All of these systems produce an amazing amount of customer data" to the tune of 40TB per day, only 3% of which is transformed into revenue. The rest isn't monetized. (Gasp!) The pitch is that by better profiling customers, telcos can improve their ability to monetize that data. The thing that struck me was the extent of the profiling.

[Screen capture: the sample customer profile presented in the marketing video]

As seen in the screen capture, the user profile presented extends beyond the telco services acquired or service usage patterns into the detailed information that flows through the system. The telco builds a very personal profile using information such as favorite sports teams, life events, contacts, location, favorite apps, etc. And we should assume that "favorite sports teams" could just as easily be religious beliefs, political affiliations, or sexual interests.

IBM and Twitter

On October 29, IBM and Twitter announced a new relationship that enables enterprises to "incorporate Twitter data into their decision-making." In the announcement, Twitter describes itself as "an enormous public archive of human thought that captures the ideas, opinions and debates taking place around the world on almost any topic at any moment in time." And now all of those thoughts, ideas, and opinions are available for purchase through a partnership with IBM.

I'm not knocking Twitter or IBM. The technology behind these capabilities is fascinating and impressive. And perhaps Twitter users allow their data to be used in these ways by accepting the Terms of Use. But, it feels a lot more invasive to essentially provide any third party with a siphon into the massive data that is our Twitter accounts than it would be to, for example, insert a sponsored tweet into my feed that may be selected based on which accounts I follow or keywords I've tweeted.

Instagram Users and Facebook

I recently opened Facebook to see an updated "People You May Know" list. Most Facebook users are familiar with the feature. It can be an easy way to locate old friends or people who recently joined the network. But something was different. The list consisted largely of people I sort of recognized but had never known personally.

I realized that Facebook was trying to connect me with many of the people behind the accounts I follow on Instagram. Many of these people don't use their real names, talk about their work, or discuss personal family matters on Instagram. They're photographers sharing photos. Essentially, they're artists sharing their art with anyone who wants to take a look. And it feels like a safe way to share.

But now I'm looking at a profile of someone I knew previously only as "Ty_Chi the landscape photographer" and I can now see that he is actually Tyson Kendrick, retail manager from Chicago, father of three girls and a boy. Facebook is telling me more than Mr. Kendrick wanted to share. And I'm looking at Richard Thompson, who's a marketing specialist for one of the brands I follow. I guess Facebook knows the real people behind brand accounts too. It started feeling pretty creepy.

What does it all mean?

Monetization of social media goes way beyond targeted advertising. Businesses are reaching deep into any available data to make connections or discover insights that produce better returns. Service providers and social media platforms may share customer details with each other or with third parties to improve their own bottom lines. And the more creative they get, the more our sense of privacy erodes.

What I've outlined here extends only slightly beyond what I think most people expect. But we should collectively consider how far this will all go. If companies will make major financial decisions based on Twitter user activity, will there be well-funded campaigns to change user behavior on Social Media platforms? Will the free flow of ideas and opinions become more heavily and intentionally influenced?

The sharing/exchanging of users' personal data is becoming institutionalized. It's not a corner case of hackers breaking in. It's a systemic business practice that will grow, evolve, and expand.

I have no recipe to avoid what's coming. I have no suggestions for users looking to hold on to the last threads of their privacy. I just think it's worth thinking critically about how our data may be used and what that may mean for us in years to come.

Monday, July 28

BMWs and Bicycles: The Value of Complexity

If your ideas about Oracle Identity & Access solutions start and end with the word complexity, you're missing the big picture. Contrary to what competitors might be telling you, Oracle's current IAM solution looks nothing like a conglomeration of distinct, aging products. If you want to know about today's Oracle IAM solutions, consider concepts like a common data model, a consolidated feature set, shared services, unified admin and operational consoles, and a lower TCO than managing multiple point solutions.

It didn't happen by accident. Oracle has a large, diverse, and talented team of engineers and developers. I'm consistently impressed by the level of talent roaming the halls at Oracle. And the team knew years ago that continued innovation was important. They intentionally expended significant effort to rationalize the product backend so that it's not simply multiple integrated products. Did you know that Oracle uses a single connector for user provisioning, access governance, and privileged account management? Did you know that Oracle's provisioning product also provides access requests, risk scoring, and entitlement reviews in a single product (not a license bundle, but a single installed product)?

Can the entire solution be downloaded onto a smartphone and installed in 3-5 minutes? No. But the solution can meet any current or future Identity & Access requirement with a modular, unified approach to Identity & Access for legacy, enterprise, cloud, mobile, and social use-cases. And there are numerous customer case studies demonstrating that Oracle's IAM technology has already been implemented in mobile, consumer, and IoT scenarios at extreme scale. Claiming that Oracle can't handle third-platform use-cases is either ignorant or deceitful. Which it is depends on who you're talking to.

That's not to say that there aren't IAM solutions on the market that offer less complexity. But let's investigate complexity for a moment.

Is complexity good or bad?

If you already answered, you're missing the point. The reality is that complexity should be commensurate with your needs, and the optimal amount of complexity depends on the context.

A BMW is more complex than a bicycle. If your goal is to take a leisurely ride through a park to enjoy the weather while getting some exercise, then a bicycle may be a great fit. And a BMW will miss the mark entirely. If the goal is to find a vehicle for your daily commute to work, you might still opt for a bicycle, but you'll be balancing the desire for less complexity against the BMW's advantages: getting you there quicker, shielding you from the weather, and requiring less effort. If your intended use-cases involve cross-country trips or travel in severe weather, the complexity of BMW engineering becomes a thing of desire. And if you fall in love with the way a BMW handles corners at speed, well... let's just say you may stop thinking about complexity altogether.

Getting back to IAM, here are some IAM features to consider:
  • Enterprise Access Mgt - Context-Aware Adaptive Access and Fraud Detection
  • Enterprise Access Mgt - API Security and Protocol Translation
  • Enterprise Access Mgt - Social Logon and Identity Validation
  • Enterprise Access Mgt - Mobile App for Strong Authentication
  • Enterprise Access Mgt - Enterprise Single Sign On
  • Mobile Security - Secure App Management and Endpoint Data Protection
  • Mobile Security - True SSO to backend applications from the mobile device
  • Mobile Security - Apps integrated with Enterprise Access Mgt
  • Identity Governance - Integrated Access Requests and Provisioning
  • Identity Governance - Entitlement Certifications
  • Identity Governance - Single point of audit across cloud, mobile, and enterprise
  • Privileged Account Management - Proxied Access, Session Management
  • Privileged Account Management - Session Recording
  • Privileged Account Management - Emergency Access
When you begin to think about how these capabilities can be used to enable new business opportunities, it starts to feel like a BMW approaching a corner. And you'll be glad you're not on a bicycle.

Wednesday, April 2

The Evolution of Mobile Security

Today, I posted a blog entry to the Oracle Identity Management blog titled Analyzing How MDM and MAM Stack Up Against Your Mobile Security Requirements. In the post, I walk through a quick history of mobile security, starting with MDM, evolving into MAM, and providing a glimpse into the next generation of mobile security, where access is managed and governed along with everything else in the enterprise. It should be no surprise that's where we're heading, but, as always, I welcome your feedback if you disagree.

Here's a brief excerpt:
Mobile is the new black. Every major analyst group seems to have a different phrase for it but we all know that workforces are increasingly mobile and BYOD (Bring Your Own Device) is quickly spreading as the new standard. As the mobile access landscape changes and organizations continue to lose more and more control over how and where information is used, there is also a seismic shift taking place in the underlying mobile security models.
Mobile Device Management (MDM) was a great first response by an Information Security industry caught on its heels by the overwhelming speed of mobile device adoption. Emerging at a time when organizations were purchasing and distributing devices to employees, MDM provided a mechanism to manage those devices, ensure that rogue devices weren’t being introduced onto the network, and enforce security policies on those devices. But MDM was as intrusive to end-users as it was effective for enterprises.
Continue Reading

Monday, February 24

Deep Data Governance

One of the first things to catch my eye this week at RSA was a press release by STEALTHbits on their latest Data Governance release. They're a long-time player in DG, and as a former employee, I know them fairly well. And where they're taking DG is pretty interesting.

The company has recently merged its enterprise Data (files/folders) Access Governance technology with its DLP-like ability to locate sensitive information. The combined solution enables you to locate servers, identify file shares, assess share and folder permissions, lock down access, review file content to identify sensitive information, monitor for suspicious activity, and provide an audit trail of access to high-risk content.

The STEALTHbits solution is pragmatic because you can tune where it looks, how deep it crawls, where you want content scanning, where you want monitoring, etc. I believe the solution is unique in the market, and a number of IAM vendors agree, having chosen STEALTHbits as a partner of choice for gathering Data Governance information into their Enterprise Access Governance solutions.

Learn more at the STEALTHbits website.