Thursday, January 24

The Insider Threat: News & Info

Some interesting reading on the insider threat:

  • Are Insiders Really a Threat? - An article from the Software Engineering Institute at Carnegie Mellon discussing the reality of the insider threat and outlining thirteen practices for preventing insider attacks. Incidentally, I think the 30% stat they provide is low. 30% may be the percentage of reported malicious attacks perpetrated by insiders, but a far greater number of security breaches are committed every day by non-malicious insiders. And here's an article on research suggesting that many insider breaches aren't reported (and why).

  • The CERT Insider Threat Research page - Lots of useful information on insider breaches, including the source of the article above.

What does all that mean?

Well, the insider threat is real. I don't think that's controversial news. But I would argue that there are far more minor, non-malicious security breaches by insiders than malicious attacks -- something I haven't seen much data on. A breach is a breach, though, and in many cases it can be prevented with the right policies, processes, and tools. I like the SEI article, and I think it provides a good place to start thinking about how to approach the challenge.

Friday, October 3

85% of Security Breaches are Opportunistic

I've talked before about security breaches being crimes of opportunity. I've given presentations and webinars discussing the Insider Threat and talking about security breaches. And I always mention that I don't think the concern should be that people are bad. I don't think that employees are out to get their companies.

I didn't want to paint a picture of bad guys huddled in a dark room trying to figure out how to breach the company's security. Sure, that happens too. But I don't think that's the real Insider Threat. Some of those attacks may have an element of insider advantage, but the bulk of the security breaches I attribute to insiders are opportunistic: administrators who have been given explicit access to sensitive information and stumble across it in their daily routine. And it happens all the time.

According to a new Data Breach Report by Verizon Business,

85% of security breaches are opportunistic.

I always thought the percentage of insider breaches that are opportunistic would be high. But, of the breaches covered in this report,

18% were caused by insiders.

I believe that number to be much higher. This report is based on breaches that were not only reported, but brought to Verizon Business for help. Nobody calls a forensics team when an admin opens up an HR doc containing a co-worker's salary. Or when an admin creates a new account and grants full system rights in order to get a new application up and running. I would consider both scenarios to be a security breach, but neither would appear in this report (or other reports). Those breaches are generally not reported and quite often not even noticed.

Does your environment have a mechanism that enables you to even see that kind of activity? Most do not. ...which leads me to the last stat I'll share from the report:

87% of breaches in this study were considered
avoidable through reasonable controls

...and I would argue that the same is true for the unreported, opportunistic, insider-threat type of breaches that are likely underrepresented in this research.

Wednesday, May 13

The SOFT Insider Threat

I've written a lot about the insider threat and what it means to me. A while back, I spoke to IT Business Edge about my opinion that non-malicious insiders pose a greater risk of causing a breach than malicious insiders. Many in the industry still claim that insiders should not be a major cause for concern and that external threats should get the lion's share of attention.

It's fairly easy to see that malicious attacks cause immediate and expansive financial harm. But the unintentional, or at least non-malicious, insider breach (which I'll call the Soft Insider Threat) occurs far more often, perhaps hundreds of times every day.

Today, I read a story in NetworkWorld titled Inside a Data Leak Audit that illustrates my point.

The IT Director at a pharmaceutical firm facilitated a data leakage audit for his company. Before the audit, the firm believed they "were in good shape". They "had done internal and external audits" and "extensive penetration testing". They had intrusion detection and prevention solutions, laptop encryption, and employee training. What they found out is that "you can do all that and it's just not enough."

The audit, conducted by Networks Unlimited, revealed gaping holes, including:
  • 700 leaks of critical information, such as Social Security numbers, pricing, financial information and other sensitive data in violation of the PCI-DSS standards.
  • Over 4,000 incidents that ran counter to HIPAA and Defense Department Information Assurance Certification rules.
  • More than 1,000 cases of unencrypted password dissemination, such as to access personal, Web-based e-mail accounts.
A few specific examples:
  • Employees sent ZIP files and attachments of confidential documents in unencrypted emails.
  • An employee attached a clinical study report in an unencrypted email to an outside vendor.
  • An employee sent sensitive employee compensation data to an outside survey company, including salary, bonuses, sales quota, stock options, granted share price and more.
This single audit, conducted on one company, revealed 11,000 potential leaks that not only went unreported as data breaches, but wouldn't even have been known about or identified as problematic had the audit not been under way.

I call them soft breaches because they're not intended to be harmful and may never cause harm or get noticed. But if they happen 10,000 times over the course of two weeks, that's 260,000 security violations each year. And those are real breaches that may violate HIPAA or PCI-DSS, expose employee and customer information, violate business contracts, and otherwise create potential for harm. Something that happens 260,000 times each year adds up to a pretty big attack surface.
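
The annualized figure is simple extrapolation, assuming the two-week audit window is representative of a typical fortnight; a quick sketch of the arithmetic:

```python
# Annualizing the audit findings (rounded down from the ~11,000 leaks found),
# assuming the two-week audit window is representative.
leaks_per_two_weeks = 10_000
two_week_periods_per_year = 52 // 2   # 26

print(leaks_per_two_weeks * two_week_periods_per_year)  # 260000
```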

As the author and auditor say in the article, don't leave security in the hands of end users. Automate the important stuff and track activity on a regular basis to ensure that your attack surface is in line with your risk tolerance. Don't ignore the soft insider threat just because it's easy to overlook. That's the exact reason you need to address it.

Thursday, August 21

Insider Threat: Crime of Opportunity

For the past few years, I've talked to many people about the insider threat. I don't spend too much time focused on the hardcore criminal element that plans an attack against an employer. I have mostly been thinking about the 35% of employees who claim they need to break policies in order to get their jobs done (see my post on Insider Threat - By the Numbers). And the unknown percentage of employees who break policies without being noticed (or, in many cases, without even knowing it).

A few days ago, security researcher Ira Winkler articulated one aspect of this very plainly.
Why is there a sudden epidemic of violations of sensitive personal information? The answer is, Because it’s there.
The scenario of an employee viewing sensitive information that they shouldn't be viewing is a fairly common example of real-world insider security breaches. While it won't likely lead to a $7 Billion loss, it could mean a failed audit, bad publicity, lost customers, or other lost business opportunities. In today's transparent business environment, it's only a matter of time before juicy information is made public. State Dept. employees were probably snooping on passport information for years before they looked up the 2008 presidential candidates. Then it got out and became a news story.

Winkler goes on to note:
Anyone developing or maintaining information just better accept that their fellow workers will look at information and that they need to track and limit access. More importantly, they better look at their audit logs and specifically search for violations.
I agree. One of the scenarios I often run into is where administrators require access to files (in order to manage access) but they don't require access to the information within those files. A good example is the admin who controls access to HR files and has the ability to open offer letters containing salary and other personal information. To Winkler's point, if the capability is there, they will likely open the files to take a peek. After all, they have been explicitly granted access to those files in order to do their jobs. Doesn't that make it OK? No. And to Winkler's final point, the admin would probably exercise additional restraint if they knew that file access was being monitored.

Tuesday, September 1

The 'Soft' Insider Threat: More Data

There's a new IDC white paper sponsored by RSA:
Insider Risk Management: A Framework Approach to Internal Security (PDF)

It has some interesting data on the risk posed by insiders. Specifically, they look at the difference between risk from malicious attackers and the risk posed by unintentional breaches or well-intentioned employees (the 'Soft' Insider Threat).

Courion points out one of the most interesting data points:
"CXOs also revealed that the greatest financial impact to their organization was caused by risks related to out-of-date or excessive access rights"
I was surprised by that. I intuitively know that soft breaches occur far more often than malicious attacks. But, my intuition also tells me that malicious attacks probably cause far more extensive financial harm. The respondents of this survey tell us that inappropriate permissions lead to greater financial harm than malware, internal fraud, deliberate policy violations, and unauthorized access (among others).

You should look directly at the data. It does vary by country. In the U.S. (where the greatest financial losses were reported by respondents), internal fraud edges out excessive rights, but I'm still surprised to see the financial impact of each is almost equal. And keep closer watch on contractors and temporary employees!

Wednesday, March 23

The RSA Breach

[Updated 5:36 pm ET 23 Mar 2011]

Sorry - two points of clarification:

1. Where I say "serial #" throughout my post below, we should keep in mind that the token has a hidden 'seed record' which is actually used in the algorithm, so that's another level of security. The serial # is not enough - you also need the seed # and the ability to match it to a given user's token.

2. I should've mentioned that there's also a feature which prevents brute-force attacks by disabling an account after x number of failed attempts. So, if you have a very good educated guess at the PIN, along with the other data, you have a good shot. But if you think you'll brute-force it, that isn't going to fly.

[end update]


I don't have any inside info, but it certainly sounds like the algorithm for generating the One-Time-Passwords (OTP) may have been accessed. This makes an attack much easier because to some degree it eliminates the "second factor" in the Two-Factor authentication. But not 100%.

If I know the algorithm, to spoof the token functionality, I still need the serial # of the token, matched to the user name, and the PIN. These things aren't impossible, though. You could stop by a person's desk, for example, and scribble down their serial # while they're getting coffee. If they're a co-worker, you probably know their user name and can make some guesses about the PIN.

Most people I know that use RSA tokens use a pretty simple PIN - a date, 4 digits of a phone number, something like that. So, if you use social engineering to get the serial # and user name, you're down to having to guess the PIN, which is really just a shorter, less secure password. And you're back to one-factor authentication. PINs may also be written down on the desk, scribbled on the back of the token (I've seen it), left in email or browser auto-fill, etc.
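
To make the roles of the pieces concrete, here's a minimal sketch of a standard HMAC-based OTP (RFC 4226 HOTP). To be clear, this is not RSA's proprietary SecurID algorithm, just a public stand-in showing why the per-token seed, rather than the algorithm itself, is the real secret:

```python
# HOTP per RFC 4226: the code is derived from a secret seed and a moving
# counter. Knowing the algorithm is useless without the seed; a token's
# serial # only matters if it can be mapped (e.g., via stolen server
# records) to that seed.
import hashlib
import hmac
import struct

def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    msg = struct.pack(">Q", counter)                      # 8-byte counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()   # 20-byte HMAC
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(hotp(b"per-token-secret-seed", counter=42))
```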

For an outsider, this attack is a little more challenging than it is for an insider. You'd need access to the serial numbers used by the company and the ability to match them with user names. Based on the info provided by RSA in their letter to customers, some or all of this may be in the RSA server logs. So, protecting those logs has just become critical.

If the token is installed on a smart phone or PC (software token), the token only works from that installed device. So, if the algorithm is public, the software tokens may have just become slightly more secure than the hardware tokens, since it would be difficult to spoof the hardware configuration (or even to know the exact hardware) associated with that software token. ...at least, that's how I think it works.

So, a few shifts have been made if my assumptions are true:
- Software tokens may be more secure if the algorithm is known.
- Protecting the RSA server and logs, and access to those logs has become critical.
- Overall, the system is still somewhat secure, but people don't buy RSA tokens for 'somewhat secure'.
- RSA tokens have become pretty insecure against insider attacks.

If anyone knows something different, please correct me.

Tuesday, December 11

Insider Threat - By the Numbers

I've been talking with customers and colleagues about the insider threat throughout most of 2007. I've mentioned stats that 70% of electronic security breaches originate inside the firewall and that 90% of those involve users with elevated rights (systems administrators, etc.).

For the most part, I've rationalized that most of those attacks are likely in one of these two categories:
  • Opportunistic
  • Unintentional
The category that's missing is malicious. I leave out malicious because I believe the large majority of breaches are not intentional or at least not driven by ill-intent. From what I've seen, people break security policies because their everyday jobs lead them to it. Sometimes, people break security protocols in order to meet a deadline or otherwise get a task accomplished. Other times, opportunity just presents itself.

Consider these scenarios:
  • A DBA opens a database to accomplish a work-related task and encounters data that's just too enticing to ignore.
  • A file system administrator is asked to grant a new HR manager access to the file share that houses previous employees' offer letters and can't help but take a peek at a few co-workers' salaries.
  • An employee is asked to take some work home and rather than carry a company laptop, they put sensitive information on a USB key that they often use to share songs or other trivial files with friends. Or they email files to/from a personal account which may not be secure.
  • In software development and/or integration, I've seen numerous people make decisions to share a password, grant full permissions or otherwise remove security restrictions to troubleshoot some software or configuration-related issue.
All of these scenarios represent a real security risk to the organization but none would be considered a malicious attack. When I first saw the 70% number, I thought it had to include these types of scenarios. I know malicious attacks happen, but I just don't see it in my daily life. These scenarios, however, are another story. It's almost hard to work on any corporate project and not encounter these types of security breaches.

A series of articles posted yesterday in Network World by Denise Dubie provides some air cover for the arguments I've made based on personal experience. Check out just a few of these quotes, then go look at the articles for yourself. Great food for thought.

End users behaving badly
Most employees knowingly violate corporate security policies.
By Denise Dubie, Network World, 12/10/07

"most companies say they have security policies in place, yet data breaches continue to plague more than 75% of Fortune 1000 companies"

"More than 50% of survey respondents admit to copying confidential information onto a USB memory stick, and 87% say they believe that the company's policy forbids it. But 40% also reported they knowingly break the policy because the company doesn't enforce it, and another 21% said 'no one really cares about compliance with this policy.' Close to 30% said they'd violate the policy because otherwise they would not be able to complete their work on time."

"46% of those polled said they share their passwords at work, and 40% of survey respondents believe that sharing passwords with co-workers is necessary to get work done within deadlines"

Trusted users pose significant security threats, survey finds
RSA survey data reveals innocent insiders create data exposures of extraordinary scope
By Denise Dubie, Network World, 12/10/07

"35% of people polled said they need to work around their organization's security policies to get their job done"

"34% reported having held a door open for someone they did not recognize"

Scary tech stories: How dangerous user behavior puts networks at risk
IT managers share tales of how users' actions can cause security nightmares
By Denise Dubie, Network World, 12/10/07

"end users just don't think passwords are a big deal and think we are just here to make their lives miserable when we request them to change or update passwords"

Monday, October 27

More Insider Threat Data

RSA recently released their latest data on Insider security.

Some interesting results:

53% of respondents feel they NEED to work around security policies to get their jobs done.

37% of respondents have stumbled into areas of the network to which they SHOULDN'T have access.

50% of U.S. respondents switched roles and still had access to UNNECESSARY accounts/resources.

And that's with most respondents understanding security policies and having been given training about the importance of following security practices.

The last time I wrote about an RSA survey pointing out that employees feel they NEED to work around security controls to get their jobs done, the number was at 35%. So, it's either gotten worse or it varies from crowd to crowd (likely the latter).

Get the full survey report here

Saturday, September 6

89% of Security Incidents in 2007 Unreported

I've been saying for the past few years that most security breaches go unreported, but I had no hard data to back it up. I just believed it by instinct and some anecdotal evidence. Now, we have a survey to point to with supporting data claiming that 89% of data leakage incidents in 2007 went unreported. I've also talked a lot about non-malicious insider breaches, which respondents of this survey list as the #2 security challenge. I haven't seen that question asked very often. Interesting data points. Data leakage, lost devices, and insider threats continue to be major concerns (along with email attachments, malware and phishing).

Friday, March 21

Obama Passport Breached by Insider

This is a great example of one of the most underestimated insider threat scenarios that I would be worried about if I were managing GRC for an organization.

Three employees of the U.S. State Department, who were properly given access rights to passport files, inappropriately used those rights to access details such as Obama's date and place of birth, e-mail address, mailing address, Social Security number, former names and travel plans. Was this a problem about not having the right policies in place? No. A problem with ineffective controls? No. It's simply a problem of a few people choosing to abuse the trust that had been given them – not out of malice, but simple curiosity (most likely).

Luckily, the State Department has computer-monitoring equipment in place that triggered alarms. And each of the three breaches was identified and dealt with. This incident will serve as a pretty strong deterrent for future curious employees who might otherwise be tempted to try the same thing. And (if this weren't a government agency) the organization would be able to prove to auditors pretty quickly that they're effectively managing the risk associated with the access rights provided to employees and contractors. Because even when there is risk, they're watching and ready to respond.

UPDATE:
Apparently, it was a bipartisan attack.

Wednesday, January 14

Bad Guy Scenario

Here's a perfect example of the insider "bad guy" threat scenario. An unhappy ex-employee came back in through an Internet-based system and put malicious code on the company's customers' servers. He installed the code on 1,000 servers and crashed 25 of them. The company reports a cost of $49,000 to find and fix the problem. They also say it could have cost $4.25 million if all 1,000 servers had crashed.

The Lessons:

  • Be diligent about monitoring – catching this early saved close to $4 Million
  • De-Provision (it's unclear whether the employee still had an account)
  • Include hosted and Internet systems in your de-provisioning process
  • Do security audits to find and fill holes
Although I don't think the "bad guy" scenario happens nearly as much as the "good guy" security breach scenario, it has the potential to get very expensive very quickly.

Monday, February 4

Users Cutting Corners, Not Crooks, Are Main Inside Threat

Thanks to IT Business Edge for taking some time to speak with me about the insider threat. Now, I'm going to take a moment to argue with myself on one point. While I do think non-malicious breaches occur far more often than their malicious counterparts, I also concede that, so far, malicious attacks appear to have brought about more monetary damages (which is usually the bottom line in corporate environments). So, the question of which is the bigger threat probably depends on which beans you're counting. Strictly from an audit and policy perspective, you want to be sure that policies are being enforced, which is why the numerous security breaches we often see in our daily routines seem like the bigger threat. They're more likely to cause problems in an audit or compliance project. And they open holes which can be exploited during malicious attacks. So, if you don't patch the holes that are often exploited by non-malicious personnel, it could come back to bite you in the bottom line.

Monday, April 10

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented them. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when, in reality, it was a simple phishing attack where credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.
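
As an illustration of the masking idea, here's a minimal sketch that replaces SSNs with consistent fake values during an export. The file and column names are hypothetical, and real masking tools also handle format preservation and referential integrity across many data types:

```python
# Mask SSNs in an exported CSV. The mapping is deterministic (same input
# always yields the same fake value), so joins across masked tables still
# line up. The 900- area prefix falls outside validly issued SSNs.
import csv
import hashlib

def fake_ssn(real_ssn: str) -> str:
    h = int(hashlib.sha256(real_ssn.encode()).hexdigest(), 16)
    return f"900-{h % 90 + 10:02d}-{h % 9000 + 1000:04d}"

with open("employees.csv") as src, \
     open("employees_masked.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["ssn"] = fake_ssn(row["ssn"])   # hypothetical column name
        writer.writerow(row)
```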

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Security Access Brokers can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.
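
For a small taste of what such continuous monitoring looks like, here's a hedged sketch using boto3 to detect and auto-remediate one kind of drift on S3 buckets. I'm assuming, for illustration, that the exposed "AWS server" was an S3 bucket; real CASB products check far more than this single setting:

```python
# Flag any S3 bucket whose public-access block deviates from the corporate
# guideline (all public access blocked), and remediate it in place.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        compliant = all(cfg["PublicAccessBlockConfiguration"].values())
    except ClientError:
        compliant = False  # no public-access block configured at all
    if not compliant:
        s3.put_public_access_block(
            Bucket=name,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True, "IgnorePublicAcls": True,
                "BlockPublicPolicy": True, "RestrictPublicBuckets": True,
            },
        )
        print(f"remediated configuration drift on: {name}")
```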

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to implement an advanced crypto attack which would take enormous resources and time to conduct and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the security stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.
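
To illustrate the principle (this is symmetric file encryption with the Python cryptography library, not Oracle's TDE implementation), note how the protection travels with the ciphertext rather than with the volume it sits on:

```python
# Once encrypted, the file is useless without the key, no matter where
# the file is copied or which volume it lands on.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # held by the key-management layer, not the file
f = Fernet(key)

ciphertext = f.encrypt(b"contents of exported database file")

# Moving the ciphertext to an unencrypted volume or a public server
# changes nothing; decryption still requires the key.
assert f.decrypt(ciphertext) == b"contents of exported database file"
```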

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).
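
A minimal sketch of both techniques at the application tier, using Python's standard PBKDF2 for password hashing and the cryptography library for reversible field encryption (field names and the iteration count are illustrative):

```python
# Passwords: one-way hash with per-user salt. SSNs: reversible encryption
# under a key held by the application tier (or a KMS), not the database.
import hashlib
import os
from cryptography.fernet import Fernet

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest   # store both; the password itself is never stored

app_key = Fernet(Fernet.generate_key())

encrypted_ssn = app_key.encrypt(b"123-45-6789")
# Even a fully authorized database user sees only ciphertext in this
# column; reversing it requires the application's key, not DB credentials.
print(encrypted_ssn)
```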

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked/replaced before it was ever made available. And if it was a production DB, encryption and access-control protections (which stay with the database during export, or if the file is moved away from an encrypted volume) should have been applied. The data should have been protected before the vendor's analyst ever got his/her hands on it. Oracle Database Vault would have prevented even a DBA-type user from being able to access the sensitive user data that was exposed here. These are not new technologies; they've been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don't agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and that offer layered security controls, providing more security than their own data centers. It's more than selecting the right Cloud Service Provider. You also need to choose the right service: one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it's easy and low cost, ease-of-use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.) Specific techniques or advantages mentioned may not apply to other vendors’ similar solutions.

Wednesday, July 18

Boeing Data Theft: NetVision Use Case

An Information Week article titled Boeing Employee Charged With Stealing 320,000 Sensitive Files discusses a massive data breach by a Boeing insider. It's another illustration of the fact that the biggest threats for organizations are insiders. The perpetrator (Gerald Lee Eastman) was prepared to share Boeing's sensitive information, which could have cost Boeing as much as $15 billion in damages.

This type of attack is a good use case for NetVision file system monitoring (part of our NVMonitor product). The article explains that Eastman had to exploit a weakness in Boeing's computer system to access the stolen files. Over the course of two years, he methodically searched the Boeing systems looking for unprotected file shares and was routinely denied access to many. As he searched for files and found ways around the file system security mechanisms, NetVision file system monitoring could have caught the behavior and alerted security officers with each attempt, nipping this issue in the bud two years ago when it began.
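
For a sense of the detection pattern (this is not NetVision's actual API, and the event format is hypothetical), a monitor only needs to count repeated access denials per account and raise an alert once a threshold is crossed:

```python
# Count access-denied events per user; alert when one account racks up
# enough denials to suggest methodical probing of file shares.
from collections import Counter

DENIAL_THRESHOLD = 25
denials = Counter()

def alert(user: str) -> None:
    print(f"ALERT: {user} has hit {DENIAL_THRESHOLD} access denials -- "
          "possible methodical probing of file shares")

def on_audit_event(user: str, resource: str, result: str) -> None:
    if result == "DENIED":
        denials[user] += 1
        if denials[user] == DENIAL_THRESHOLD:
            alert(user)

# Simulated feed: an insider probing shares over time trips the alert.
for _ in range(30):
    on_audit_event("jdoe", r"\\corp\eng\designs", "DENIED")
```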