
A Threatening Threat Map

FireEye recently released a ThreatMap to visualize some of our Threat Intelligence Data.

The ThreatMap data is a sample of real data collected from our two-way sharing customers for the past 30 days. The data represented in the map is malware communication to command and control (C2) servers, where the “Attackers” represent the location of the C2 servers and “Targets” represent customers.

To mask customer identity, locations are represented as the center of the country in which they reside. Nothing in the data can be used to identify a customer or their origin city. The “attacks today” counter is not a real-time count; rather, we take a real, observed attack rate and calculate the day’s attacks based on local time.
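The counter logic described above can be sketched as follows. This is a hypothetical illustration (the function name and the actual rate are assumptions; the real implementation is not public):

```python
from datetime import datetime

def attacks_so_far_today(observed_rate_per_day: float, now: datetime) -> int:
    """Estimate the running 'attacks today' count from an observed daily
    attack rate, scaled by how much of the local day has elapsed."""
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    fraction_of_day = (now - midnight).total_seconds() / 86_400
    return int(observed_rate_per_day * fraction_of_day)
```

At local noon, for example, the counter would display half of the observed daily rate.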

One of the biggest challenges with the ThreatMap was displaying this information in a consumable way. If all attacks were shown at the rate they occur, the map would be an incomprehensible tangle of lines (see example below). To solve this, we randomly select which lines to display from our dataset, at a rate that results in the best viewing experience. Because the sampling is random, a user can still see which areas are targeted most and which APT families target specific regions.
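The rate-limited random sampling described above can be sketched in a few lines. This is a hypothetical illustration (the function name, parameters, and display rate are assumptions, not FireEye's actual code); because the selection is uniform, heavily targeted regions still appear proportionally more often:

```python
import random

def sample_for_display(attack_lines, display_rate=0.02, rng=None):
    """Return a small, viewable random subset of attack lines.
    Uniform sampling preserves the relative frequency of regions
    and malware families while keeping the map legible."""
    rng = rng or random.Random()
    k = max(1, int(len(attack_lines) * display_rate))
    return rng.sample(attack_lines, k)
```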

So how does FireEye use this information? We use it to understand patterns and further our threat intelligence. It lets us see trends over time as well as by malware family or threat actor.

For instance, it lets us examine whether a particular threat actor – say, APT1 – is using a particular set of IP addresses, domain names, and URLs to launch attacks. Based on the type of malware being used, it also lets us attribute the malware, and hence the source of these attacks, to particular threat actors. It allows us to combine the strategic threat intelligence gained from more than 10 years of responding to the largest breaches with the millions of tactical indicators of compromise we see every day from our virtual machine-based sensors deployed across the globe. Connecting these dots produces the eye-catching graphic but, more importantly, it lets us take the fight to the attacker by understanding and uncovering their tactics, techniques and procedures – which ultimately serves our mission of better protecting our customers.

Cashing in on Cybersecurity

With the recent news of Wall Street banks requesting a meeting with the U.S. Treasury Department and other government officials to discuss cybersecurity concerns, I reached out to one of the leading information security authorities for her take on the cyber threats that banks currently face. Following is an interview I held with CEO and Founder of Pondera International, Kristen Verderame.

What are the key threats banks face today?

Banks face a number of cybersecurity threats today, now more than ever. Threat actors targeting financial services are getting more and more sophisticated. While malware continues to be the biggest reported threat, attackers are more often using attack vectors only once – rendering monitoring for advanced persistent threat groups more and more difficult. The good news is that the financial services industry is way ahead of the curve in terms of preparedness and the ability to counter such threats. In fact, the financial services sector has led all sectors for some time because its business case has required it.

How can the government(s) help? Why should they?

Governments can help by publicizing best practices for industry to follow, as demonstrated in the NIST Cybersecurity Framework issued earlier this year. Though the Framework is not comprehensive and certainly not a panacea for all cybersecurity vulnerabilities, it provides a useful assessment and summary of best practices and will be a good resource for entities that have not taken action previously. Governments can also help by facilitating trustworthy information sharing and supporting bi-directional sharing (i.e., government-to-industry sharing, not just industry-to-government). Often the government, as a neutral party, is in the best position to facilitate such sharing between industry competitors.

How important is a community approach when it comes to cyber defense?

A community approach is widely recognized as critical for effective cyber defense. The sharing of threat information and best practices between entities has proven the most effective means of combating APTs across industry sectors and across geographical boundaries. Collaboration through information sharing has been recognized by the U.S. Congress as a critical tool against cybersecurity threats – both the House and Senate introduced legislation to promote information sharing across government and industry. President Obama included information sharing as a key component of his Executive Order. Outside the U.S., the European Commission is currently considering cybersecurity legislation that not only encourages information sharing, but requires collaboration across Member States in a variety of other ways.

Are there any precedents for this type of collaboration and will it succeed?

Yes on both counts, in my opinion. One example of collaboration that has proven effective is the Information Sharing and Analysis Centers (ISACs), which are composed of critical infrastructure owners in various sectors. The ISACs provide an information-sharing platform for their members and sometimes also provide risk mitigation, incident response and alerts. Some ISACs have proven more effective than others: the FS-ISAC has consistently served as a model for other ISACs, while the energy sector ISAC is not as robust as many would like, in part because the industry regulator sits at the table with industry, presenting a potential conflict of interest. Though not perfect, the ISACs provide at their core a facilitative framework used by government and industry for collaboration and cooperation.

Follow the Information

This is the second article of a six-piece series by FireEye’s Chief Privacy Officer, Shane McGee. In this series, Shane explores six fundamental steps to building an effective privacy program. While there are many critical pieces to consider, Shane chose to highlight the following:

  1. Give Privacy a Voice
  2. Follow the Data
  3. Communicate Clearly, Carefully and Candidly
  4. Become Part of the Process
  5. Build a Culture of Privacy
  6. Rinse and Repeat

Follow the Data

Chief Privacy Officers have a lot of different responsibilities. We monitor new legislation, make policy, meet with clients to explain our information practices, attend conferences to keep current on best practices, participate in working groups and speak publicly about the company’s commitment to privacy. And while that’s all important, none of it addresses what I believe should be the primary responsibility of a CPO: ensuring that your company collects, stores, uses and shares data consistent with the law, company policies and reasonable customer expectations. This, of course, requires a solid understanding of the nature of the data collected and where it resides.

When it comes to data, understanding what you have and where it sits is more difficult than most people think. In fact, it’s frequently the case that any given company will collect more data – and more types of data – than a simple inquiry would indicate. To obtain a true inventory, one must follow the data.

Following the data isn’t easy. You can ask your company’s engineers to provide you with a data map, but what you receive may not turn out to be particularly helpful. Engineers deal with technology and information architectures, and you’re much more likely to receive an architectural flow than something that presents an organized picture of the data the company collects. And if you’re lucky enough to receive a ‘real’ data map, it may be out-of-date or unintelligible to someone without engineering superpowers.

So what do you do? Conduct a friendly data deposition! Sit the engineers down and ask questions – lots and lots of questions. If you don’t understand something, concede your ignorance and ask them to explain. And while they may be more interested in telling you about the technology, your questions should be focused on the data. Although each data deposition should be tailored to fit the situation, some examples of questions to ask are:

  • How is the data collected? Is it validated or sanitized?
  • If collected via a web browser, are HTTP referrers stored? If so, is user data removed?
  • If collected online, is the contributor’s IP or MAC address stored?
  • Is the data associated with any type of unique identifier that could be used to identify a person?
  • List the database fields in which the data is stored.
  • Are there any narrative/text fields that can store information that wasn’t solicited?
  • When data is reportedly deleted, is the record overwritten or just ‘unlinked’ or marked as deleted?
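The last question in the list can be checked empirically. The sketch below is a hypothetical illustration (the table, column names, and data are invented): many applications "delete" records by setting a flag, leaving the underlying personal data intact and recoverable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT, deleted INTEGER DEFAULT 0)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com', 0)")

# A "soft delete": the application hides the row but the data remains.
conn.execute("UPDATE users SET deleted = 1 WHERE id = 1")

# The application's own view shows nothing...
visible = conn.execute("SELECT * FROM users WHERE deleted = 0").fetchall()

# ...but the personal data is still sitting in the table.
actual = conn.execute("SELECT email FROM users WHERE id = 1").fetchall()
```

If `actual` still returns the email after the "delete", the record was unlinked, not overwritten, which is exactly what the deposition question is probing for.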

Take copious notes during the data deposition and, after you’re done, draft your own data map while it’s still fresh in your head. Send that back to the engineers and ask them to review, correct any mistakes and bless the final version.

Finally, without giving away any of the yet-to-be disclosed secrets in our sixth article, Rinse and Repeat, make certain you keep apprised of changes. Ask your engineers to update you when something changes on the data map they blessed, and schedule regular checkpoints to catch anything that falls through the cracks. If you do all this, you’ll be better situated to accomplish your primary mission as CPO.

MIRcon: What the Cosmos can Teach us about Security

A few people have asked me what central theme and message stayed with me after last week’s MIRcon. I would say that Dr. Neil deGrasse Tyson’s keynote resonated with me and matched the central theme I felt during the conference; allow me to explain.

During his keynote, Dr. Tyson spoke to us about science, and, specifically, the scientific method, allowing us to objectively overcome our natural human biases. In other words, science is about forming a hypothesis, testing that hypothesis through accurate measurement, and reaching an objective conclusion based on the observed data. Security is the same, or at least it should be.

In the security domain, we don’t always take advantage of and apply the rich foundations of knowledge and expertise that exist in other domains. Often these other domains are far more mature than our own. I can think of no better example of this than science. For hundreds of years, scientists have used an agreed upon, methodical approach to advance the state of science. We can learn a lot from this conceptually and apply it to the security domain.

At MIRcon, the presentations I saw and the discussions I was fortunate enough to take part in indicated to me that we have begun to approach the security domain far more scientifically. Gone are the days of emoting and guessing – the problems are far too complex, the data too diverse, and the attackers too sophisticated. As a profession, we have begun to demand a far more scientific approach to the security domain than was historically the case.

The atmosphere at MIRcon was invigorating. Security professionals have tired of unsupported hypotheses – we are ready for a more formal approach. Today’s challenges require a more scientific way of thinking. We need to explicitly identify and enumerate the challenges we are facing in the field, hypothesize the solutions to those challenges, test those solutions through accurate measurement, and reach objective conclusions about the merits of those solutions.

Recommendations, beliefs, and hypotheses are in no shortage in our field. But are they accurate, do they solve security problems, and do they address the challenges of the day? The answer to those questions needs to be evaluated scientifically, rather than debated in the absence of accurately measured data.

Security has evolved from a niche profession to a mainstream one. As such, our work must stand up to the same rigor we would apply to any other profession. Any other approach would simply be unscientific.

 

What are Java’s Biggest Vulnerabilities?

In our continuing mission to equip security professionals against today’s advanced cyber threats, FireEye has published a free technical report, “A Daily Grind: Filtering Java Vulnerabilities.” The report outlines the three most commonly exploited Java vulnerabilities and maps out the step-by-step infection flow of exploit kits that leverage them.

  • CVE-2012-0507: an improper implementation of AtomicReferenceArray() leading to a type confusion vulnerability.
  • CVE-2013-2465: insufficient bounds checks in the storeImageArray() function. This vulnerability is used by White Lotus and other exploit kits.
  • CVE-2012-1723: a type confusion vulnerability that allows attackers to bypass the sandbox.

These vulnerabilities are also being used in targeted attacks. Our report explains the three most common behaviors of Java exploits: using reflection to hide function calls, obfuscating functions and data, and downloading malicious files. Download the paper to learn more.
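The "reflection to hide the function call" behavior has a simple analogue outside Java. The report concerns Java's java.lang.reflect API, but the underlying idea – resolving a sensitive call from an obfuscated string at runtime so that static scanners never see its name – can be sketched in Python (all names here are illustrative; `print` stands in for a dangerous call such as Runtime.exec in the Java case):

```python
import builtins

# The name of the sensitive call is stored XOR-obfuscated, so it never
# appears as a literal identifier or string (b"PRINT" XOR 0x20 -> "print").
obfuscated_name = "".join(chr(b ^ 0x20) for b in b"PRINT")

# Resolve the call purely at runtime -- the analogue of Java's
# Class.forName(...).getMethod(...).invoke(...) reflection chain.
hidden_call = getattr(builtins, obfuscated_name)
```

A signature looking for the call's name in the code finds nothing; only dynamic analysis sees the resolved target.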

Two Limited, Targeted Attacks; Two New Zero-Days

The FireEye Labs team has identified two new zero-day vulnerabilities used in limited, targeted attacks against some major corporations. Both zero-days exploit the Windows Kernel; Microsoft assigned them CVE-2014-4148 and CVE-2014-4113 and addressed both in its October 2014 Security Bulletin.

FireEye Labs has identified 16 zero-day attacks in the last two years – 11 in 2013 and five so far in 2014.

Microsoft commented: “On October 14, 2014, Microsoft released MS14-058 to fully address these vulnerabilities and help protect customers. We appreciate FireEye Labs using Coordinated Vulnerability Disclosure to assist us in working toward a fix in a collaborative manner that helps keep customers safe.”

In the case of CVE-2014-4148, the attackers exploited a vulnerability in the Microsoft Windows TrueType Font (TTF) processing subsystem, using a Microsoft Office document to embed and deliver a malicious TTF to an international organization. Since the embedded TTF is processed in kernel-mode, successful exploitation granted the attackers kernel-mode access. Though the TTF is delivered in a Microsoft Office document, the vulnerability does not reside within Microsoft Office.

CVE-2014-4148 impacted both the 32-bit and 64-bit Windows operating systems listed in MS14-058, though the observed attacks targeted only 32-bit systems. The malware contained within the exploit has specific functions adapted to the following operating system platform categories:

  • Windows 8.1/Windows Server 2012 R2
  • Windows 8/Windows Server 2012
  • Windows 7/Windows Server 2008 R2 (Service Pack 0 and 1)
  • Windows XP Service Pack 3

CVE-2014-4113 rendered Microsoft Windows 7, Vista, XP, Windows 2000, Windows Server 2003/R2, and Windows Server 2008/R2 vulnerable to a local Elevation of Privilege (EoP) attack. This means that the vulnerability cannot be used on its own to compromise a customer’s security. An attacker would first need to gain access to a remote system running any of the above operating systems before they could execute code within the context of the Windows Kernel. Investigation by FireEye Labs has revealed evidence that attackers have likely used variations of these exploits for a while. Windows 8 and Windows Server 2012 and later do not have these same vulnerabilities.

Information on the companies affected, as well as threat actors, is not available at this time. We have no evidence of these exploits being used by the same actors. Instead, we have only observed each exploit being used separately, in unrelated attacks.

About CVE-2014-4148

Mitigation

Microsoft has released security update MS14-058 that addresses CVE-2014-4148.

Since TTF exploits target the underlying operating system, the vulnerability can be exploited through multiple attack vectors, including web pages. In the past, exploit kit authors have converted a similar exploit (CVE-2011-3402) for use in browser-based attacks. More information about this scenario is available under Microsoft’s response to CVE-2011-3402: MS11-087.

Details

This TTF exploit is packaged within a Microsoft Office file. Upon opening the file, the font will exploit a vulnerability in the Windows TTF subsystem located within the win32k.sys kernel-mode driver.

The attacker’s shellcode resides within the Font Program (fpgm) section of the TTF. The font program begins with a short sequence of instructions that quickly return. The remainder of the font program section is treated as unreachable code for the purposes of the font program and is ignored when initially parsing the font.

During exploitation, the attacker’s shellcode uses Asynchronous Procedure Calls (APC) to inject the second stage from kernel-mode into the user-mode process winlogon.exe (in XP) or lsass.exe (in other OSes). From the injected process, the attacker writes and executes a third stage (executable).

The third stage decodes an embedded DLL to, and runs it from, memory. This DLL is a full-featured remote access tool that connects back to the attacker.

Plenty of evidence points to the attackers’ high level of sophistication. Beyond the fact that the attack used a zero-day, kernel-level exploit, it also showed the following:

  • a usable hard-coded area of kernel memory is used like a mutex to avoid running the shellcode multiple times
  • the exploit has an expiration date: if the current time is after October 31, 2014, the exploit shellcode will exit silently
  • the shellcode has implementation customizations for four different types of OS platforms/service pack levels, suggesting that testing for multiple OS platforms was conducted
  • the dropped malware individually decodes each string when that string is used to prevent analysis
  • the dropped malware is specifically customized for the targeted environment
  • the dropped remote access capability is full-featured and customized: it does not rely on generally available implementations (like Poison Ivy)
  • the dropped remote access capability is a loader that decrypts the actual DLL remote access capability into memory and never writes the decrypted remote access capability to disk
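The "decode each string only when used" behavior listed above is a common anti-analysis pattern. A hypothetical sketch of the technique (the XOR key, names, and hostname are invented; the actual malware's encoding scheme is not described in this post):

```python
def decode(blob: bytes, key: int = 0x5A) -> str:
    """XOR-decode an embedded string immediately before it is needed."""
    return bytes(b ^ key for b in blob).decode()

# Strings are stored only in encoded form, so a static pass with
# strings(1) over the binary sees nothing meaningful.
ENC_C2_HOST = bytes(b ^ 0x5A for b in b"evil.example.com")

def connect_to_c2() -> str:
    # Decoded at the moment of use; the plaintext never sits in the
    # binary image and is discarded as soon as the call returns.
    return decode(ENC_C2_HOST)
```

Because each string is decoded at its call site rather than all at once at startup, an analyst cannot simply dump a decrypted string table from a memory snapshot.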

About CVE-2014-4113

Mitigation

Microsoft has released security update MS14-058 that addresses this vulnerability.

Vulnerability and Exploit Details

The 32-bit exploit triggers an out-of-bounds memory access that dereferences offsets from a high memory address and inadvertently wraps into the null page. In user-mode, memory dereferences within the null page are generally assumed to be non-exploitable: since the null page is usually not mapped – the exception being 16-bit legacy applications emulated by ntvdm.exe – null pointer dereferences simply crash the running process. In contrast, null-page dereferences in the kernel are commonly exploited because the attacker can first map the null page from user-mode, as is the case with this exploit. The steps taken for successful 32-bit exploitation are:

  1. Map the null page:
    1. ntdll!ZwAllocateVirtualMemory(…, BaseAddress=0x1, …)
  2. Build a malformed win32k!tagWND structure at the null page such that it is properly validated in the kernel
  3. Trigger vulnerability
  4. Attacker’s callback in win32k!tagWND.lpfnWndProc executes in kernel-mode
    1. Callback overwrites EPROCESS.Token to elevate privileges
  5. Spawns a child process that inherits the elevated access token

32-bit Windows 8 and later are not affected by this exploit. The Windows 8 Null Page protection prohibits user-mode processes from mapping the null page, causing the exploit to fail.

In the 64-bit version of the exploit, dereferencing offsets from a high 32-bit memory address does not wrap, as the address is well within the addressable memory range of a 64-bit user-mode process. As such, the Null Page protection implemented in Windows 7 (after MS13-031) and later does not apply. The steps taken by the 64-bit exploit variant are:

  1. Map memory page:
    1. ntdll!ZwAllocateVirtualMemory(…)
  2. Build a malformed win32k!tagWND structure at the mapped page such that it is properly validated in the kernel
  3. Trigger vulnerability
  4. Attacker’s callback in win32k!tagWND.lpfnWndProc executes in kernel-mode
    1. Callback overwrites EPROCESS.Token to elevate privileges
  5. Spawns a child process that inherits the elevated access token

64-bit Windows 8 and later are not affected by this exploit. Supervisor Mode Execution Prevention (SMEP) blocks the attacker’s user-mode callback from executing in kernel-mode, causing the exploit to fail.

Exploit Tool History

The exploit is implemented as a command-line tool that accepts a single argument – a shell command to execute with SYSTEM privileges. The tool appears to be an updated version of an earlier tool that exploited CVE-2011-1249 and displayed the following usage message to stdout when run:

Usage:system_exp.exe cmd
Windows Kernel Local Privilege Exploits

The vast majority of samples of the earlier tool have compile dates in December 2009. Only two samples were discovered with compile dates in March 2011. Although the two samples exploit the same CVE, they carry a slightly modified usage message:

Usage:local.exe cmd
Windows local Exploits

The most recent version of the tool, which implements CVE-2014-4113, eliminates all usage messages.

The tool appears to have gone through at least three iterations over time. The initial tool and exploit are believed to have had limited availability, and may have been employed by a handful of distinct attack groups. As the exploited vulnerability was remediated, someone with access to the tool modified it to use a newer exploit when one became available. These two newer versions likely did not achieve the widespread distribution of the original tool/exploit and may have been retained privately, not necessarily by the same actors.

We would like to thank Barry Vengerik, Joshua Homan, Steve Davis, Ned Moran, Corbin Souffrant, and Xiaobo Chen for their assistance with this research.

Double-edged Sword: Australia Economic Partnerships Under Attack from China

During a visit in mid-September, China’s Foreign Minister Wang Yi urged Australia to become “a bridge between east and west.” He was Down Under to discuss progress on the free trade agreement between Australia and China that seems likely by the end of the year. His comment referred to furthering the trade relationship between the two countries, but he might as well have been referring to hackers who hope to use the deepening alliance to steal information.

The Australian Financial Review (AFR) did an in-depth article with FireEye regarding Chinese attacks against Australian businesses, and this blog provides additional context.

Australia has experienced unprecedented trade growth with China over the last decade, which has created a double-edged sword. As Australian businesses partner with Chinese firms, China-based threat actors increasingly launch sophisticated, targeted network attacks to obtain confidential information from Australian businesses. In the U.S. and Europe, Chinese attacks on government and private industry have become routine news. Australia, it seems, is the next target.

The Numbers

First, let’s review the state of Australian and Chinese economic interdependence. Averaging an annual GDP growth rate of 9.1% over the last two decades, China’s unparalleled economic expansion has shielded Australia from the worst effects of the global financial crisis. Exports to China have increased tenfold, from $8.3b USD in 2001 to $90b USD in 2013[i], with the most prominent commodities being iron ore and natural gas. Many of these resources originate in Australia, and China’s government is under significant pressure to meet the skyrocketing demand for them. Despite the ever-increasing co-dependence Australia and China share as regional partners, Chinese authorities are likely supporting greater levels of monitoring and intelligence gathering against the Australian economy – often conducted through Chinese State-Owned Enterprises (SOEs) with domestic relationships in Australia.

SOE direct investment into Australia grew to 84% of all foreign investment inflows from China in 2014, primarily directed into the Australian mining and resource sector – a further signal of China’s drive for control as it seeks certainty in providing for its future internal growth. We suspect this investment is accompanied by government-commissioned cyber threat actors targeting Australian firms with a specific agenda: to gain advantage and control of assets, both in physical infrastructure and intellectual property.


Figure 1. Chinese Direct Investment into Australia by industry

The Impacts

How have these partnerships impacted Australian networks? Mandiant has observed Chinese threat actors conducting strategic operations against companies in key economic sectors, including data theft from an Australian firm. Chinese Advanced Persistent Threat (APT) groups are likely interested in compromising Australian mining and natural resources firms, especially after spikes in commodity prices. The upward trend in APT attacks from China is also aimed at third parties in the mining and natural resources ecosystems. Mandiant believes there has been a significant increase in China-based APT intrusions focused on law firms that hold confidential mergers and acquisitions information and sensitive intellectual property. It is no coincidence that these third-party firms are often found lacking in network protections. The investigation also found that, at the time of compromise, the majority of victim firms were in direct negotiations with Chinese enterprises, highlighting attempts by the Chinese government to gain advantage in targeted areas.

Due to its endemic pollution problems, clean energy has evolved into a critical industry for China, and the country has adopted a plan to develop Strategic Emerging Industries (SEIs) to address it. Australian intellectual property and R&D have become prime targets, taking a major position in Chinese APT campaigns. Again, it is third parties such as law firms that are coming under attack.

Furthermore, to reduce China’s reliance on Australian iron ore exports, Beijing has initiated a plan to develop an efficient, high-end steel production vertical through strategic acquisitions in Australia and intervention to prevent unfavorable alliances. For example, the SOE Chinalco bought into Australian mining companies, presumably to prevent a merger that would have disadvantaged its interests. Clearly, the confidential business information of Australian export partners to China is increasingly sought after.

Mandiant found that the majority of compromised firms had either current negotiations or previous business engagements with Chinese enterprises. These attacks will persist as trade and investment grow, at the cost of confidential Australian business information such as R&D and intellectual property. Because large Australian mining and resources firms may themselves partner with the Australian Signals Directorate for security, threat actors shift their focus to associated parties that have access to sensitive data but may not pursue such partnerships. This calls for greater awareness of, and protection against, increasingly determined and advanced attacks.

The Bottom Line

Although this blog focuses on attacks against the large Australian mining and resources sectors, Mandiant has observed these APT actors often focusing their attention on other sectors such as defence, telecommunications, agriculture, political organizations, high technology, transportation, and aerospace, among others. But the broader lesson—drawing from U.S. and European experience with Chinese attacks—is that no one is or will be exempt. For all Australian businesses and governments, it’s time to fortify defences for a new era of cyber security.

 

[i] Australian Government Department of Foreign Affairs and Trade. www.dfat.gov.au/publications/stats-pubs/australiasexports-

 

Flying Blind

With all the news about data breaches lately, it’s not particularly surprising to wake up to headlines describing yet another one.  What is perhaps a bit surprising, however, is the common theme that seems to exist in many of the breach stories.  Time and time again, when organizations get breached, they find out the hard way that they don’t have the endpoint and network visibility they thought they did.  The necessary data to perform the forensics required to reach an analytical conclusion is simply missing.  Further, there is no way to remedy this situation – if the data was not properly recorded when it traversed the network or endpoint, there is simply no way to access it.

What are some of the reasons that data is not available come breach response time?  Let’s take a look at a few of them.

  • Collection: One of the goals of a security program is to ensure that the necessary network and endpoint data are collected.  Unfortunately, this is often a challenge for even the most mature of security programs.  In some cases, organizations may not have their networks and endpoints properly instrumented for collection.  In other cases, organizations may not be properly equipped to retain and expose for analysis the volume of data created by the network and endpoint instrumentation.  Either way, when it comes time to investigate, the relevant data will not be available.
  • Visibility: More data doesn’t necessarily mean more visibility or coverage.  There is an important distinction between the volume of the data and the portions of the organization that it provides visibility into.  Some organizations may have portions of their networks or endpoints instrumented for collection, but not others.  But what if the breach occurs in an area of the network or on an endpoint that is not included in the area of visibility?  In those cases, unfortunately, data that is relevant to the breach investigation will not be available for forensics and analysis.
  • Retention: Another important dimension to consider is that of retention.  In the absence of an infinite volume of storage, data cannot be retained forever.  Today’s organizations generate incredible amounts of data from their collection efforts.  Sometimes, the network and endpoints are properly instrumented in the appropriate places, but there is simply nowhere to put the volume of data that is generated.  As the volume of data grows, either the retention period shrinks, or the storage capacity grows to compensate.  It is not uncommon for the retention period to fall to 30 days, or even less.  With mean-time-to-detection at a staggering 229 days, it is easy to see that 30, 60, or even 90 days of retention is simply inadequate when it comes time to perform forensics and analysis.  Although the relevant data for the investigation may have existed at one time, if it isn’t present when we perform our investigation, it doesn’t help us much.  This necessitates us getting a bit smarter about what data we retain.  Our goal should be data that provides us maximum visibility into the network and endpoints, but at the minimal volume.  Perhaps it sounds a bit radical to say, but the days of “collect everything” are gone – instead we find ourselves in an era of “collect the most relevant things”.
  • Analysis: Even if our collection, visibility, and retention are squared away, we may still encounter frustrations and limitations when performing incident response.  Although we may have the data we need over the time period we need it for, we still need to be able to analyze it.  If we are unable to extract the data rapidly from our forensic collection platforms, we will be unable to analyze it.  Simply put, what goes in must come out.  For example, say we need to search for the first appearance of a given Indicator of Compromise (IOC) over the entirety of our retention period.  For this example, let’s assume our retention period is on the order of 12 months.  If that query fails before completing or takes days to complete, it is of no value to incident response.  Incident response demands answers in seconds or minutes, rather than hours or days.
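To make the analysis problem concrete, a first-appearance query is, at its simplest, a scan over every retained record. The sketch below is illustrative only: the record format and function name are invented, not any particular platform's API, and a real forensic store would use an index rather than a linear scan.

```python
from datetime import datetime

def first_appearance(log_records, ioc):
    """Return the earliest timestamp at which an IOC appears.

    log_records: iterable of (timestamp, text) tuples, e.g. pulled
    from a forensic collection platform (hypothetical format).
    ioc: an indicator string such as a domain, IP, or file hash.
    """
    earliest = None
    for ts, text in log_records:
        # Linear scan: cost grows with the full retention window,
        # which is why unindexed 12-month queries can take days.
        if ioc in text and (earliest is None or ts < earliest):
            earliest = ts
    return earliest

logs = [
    (datetime(2014, 3, 2), "dns query evil.example.com"),
    (datetime(2014, 1, 15), "http GET http://evil.example.com/a"),
    (datetime(2014, 6, 9), "smtp to mail.example.org"),
]
print(first_appearance(logs, "evil.example.com"))  # 2014-01-15 00:00:00
```

The incident-response requirement is that this answer come back in seconds or minutes over the entire retention window, which in practice demands indexing rather than the brute-force scan shown here.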

Despite the steady stream of bad news regarding data breaches, there is some good news.  By taking proactive steps, organizations can prepare themselves to perform rapid and efficient incident response when they become the victim of a breach.  Among many details, it’s important for an organization to consider the points above when assessing its breach preparedness.

FireEye and OS X Support

Today, we announced support for OS X in our flagship NX product. This means we now have virtual image capabilities for Macs in an enterprise environment. This is important for several reasons:

  • Macs’ footprint inside the enterprise is growing. According to Forrester, 21 percent of information workers use one or more Apple products, and the number of enterprise-issued Apple devices is projected to increase 52 percent.
  • Senior-level employees—i.e., targets interesting to attackers—represent 41 percent of enterprise Apple users. At a recent conference, our CTO Dave Merkel said, “We live in a fully connected world. Where information goes, spies follow. Where money goes, crime follows.” Now you can add: “Where the employee goes, malware will follow.”

In fact, our product has been in beta and available to customers for several months now.  The increased use of Apple computers has caught the attention of attackers, with FireEye Labs seeing malware callbacks from Macs increase 36 percent year-over-year between the first six months of 2013 and 2014.

More importantly, our product uncovered—within two days of deployment—an Apple-centric malware campaign which we detailed in this blog. Specifically, FireEye Labs discovered a previously unknown variant of the APT backdoor XSLCmd – OSX.XSLCmd – which is designed to compromise Apple OS X systems. This backdoor shares a significant portion of its code with the Windows-based version of the XSLCmd backdoor that has been around since at least 2009. This discovery, along with other industry findings, is a clear indicator that APT threat actors are turning their attention to OS X as it becomes an increasingly popular computing platform.

We hope with this release, security teams can be ready.

When POS Comes to Shove

In today’s blog post, FireEye examines the threats posed to retailers by crimeware, Point-of-Sale (POS) malware, and other threats. It is certainly a topic that is on the minds of many organizations and individuals these days. But with all the hype and buzz, what proactive steps can a CISO take to better defend his or her organization against these threats? There are many potential approaches that could be taken, but two foundational concepts that come to mind are:

  • Best practices and first principles
  • Continuous Security Monitoring (CSM)

Best practices and first principles are not rocket science, but they still rule the day. As discussed in additional detail in the FireEye blog post on BrutPOS, best practices can go a long way towards helping an organization defend itself. First principles such as identity management, sensible permissions, adequate controls for remote logins, and others can help keep an organization from falling victim to the wide variety of threats that it faces today. CISOs can do their part by communicating their vision for assessing the weak links in the chain and strengthening them. It is an iterative process and one that will not be fully completed in a day, a week, or even a month. But the CISO that pushes and motivates his or her organization in this direction will be doing that same organization a great service. It is always better for the organization itself to find a weakness in its security posture than for the attackers to find it.

Despite our best efforts and intentions, however, intrusions and breaches will still inevitably occur. In those instances, our attention quickly turns from prevention to detection and response. Continuous Security Monitoring (CSM) is the formalized process through which we build and enhance our organizational capability to rapidly detect, analyze, contain, and remediate intrusions and breaches. After all, breaches happen, but what a CISO must truly be on the lookout for is the theft of sensitive, proprietary, or confidential data. The financial, legal, and PR damage caused by an intrusion of any scale can be minimized, but only if that intrusion is detected and responded to rapidly. Proactively enhancing the organization’s CSM capability allows a CISO to markedly improve the security posture of the organization.

As an example, consider the case of a Point-of-Sale (POS) malware sample entering an enterprise network. This will likely trip one or more alerts that will be sent to the organization’s work queue (e.g., a SIEM or incident ticketing system).

The first challenge we encounter here is making sure that this alert does not get overlooked or lost in the noise. This can be accomplished by methodically approaching the process by which we develop content that generates alerts for the work queue. We want a sufficiently high ratio of true positives to false positives: in other words, a strong signal-to-noise ratio.
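One way to make this measurable is to track, per detection rule, how often analysts ultimately judge its alerts to be true positives. The sketch below is a minimal illustration; the field names, rule names, and the 0.6 threshold are all invented for the example.

```python
def rule_precision(alerts):
    """Compute the true-positive rate for each detection rule.

    alerts: list of dicts with hypothetical fields 'rule' and
    'verdict' ('true_positive' or 'false_positive'), assigned
    during analyst triage.
    """
    stats = {}
    for a in alerts:
        hits, total = stats.get(a["rule"], (0, 0))
        stats[a["rule"]] = (hits + (a["verdict"] == "true_positive"), total + 1)
    return {rule: hits / total for rule, (hits, total) in stats.items()}

alerts = [
    {"rule": "pos-c2-beacon", "verdict": "true_positive"},
    {"rule": "pos-c2-beacon", "verdict": "true_positive"},
    {"rule": "generic-port-scan", "verdict": "false_positive"},
    {"rule": "generic-port-scan", "verdict": "true_positive"},
]
precision = rule_precision(alerts)
# Flag rules whose signal-to-noise ratio falls below a tuning threshold.
noisy = [rule for rule, p in precision.items() if p < 0.6]
print(noisy)  # ['generic-port-scan']
```

Rules that repeatedly surface in the noisy list are candidates for tuning or retirement, which keeps the work queue from drowning the alert that matters.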

Next, we will need to ensure that an analyst vets, qualifies, and analyzes the relevant alert or alerts. We can make this happen by following a rigorous, formalized incident response process at both strategic and tactical levels, and by adequately training our staff.

As the analyst reviews the alert, we will need to ensure that the appropriate contextual information in support of the alert can be retrieved quickly and easily. This requires visibility across the network and endpoints, along with threat intelligence, in order to enrich the alert data with supporting evidence that allows us to determine whether we have a compromise and, if so, its scope.
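Mechanically, enrichment is a join between the alert and whatever context stores are available. A hedged sketch follows; every field name and data source here is illustrative, since real platforms expose their own schemas.

```python
def enrich(alert, endpoint_db, intel_db):
    """Attach endpoint and intelligence context to a raw alert.

    endpoint_db and intel_db stand in for whatever context stores
    an organization actually has (CMDB, EDR inventory, intel feed).
    """
    enriched = dict(alert)
    enriched["endpoint"] = endpoint_db.get(alert["host"], {})
    enriched["intel"] = intel_db.get(alert["dest_ip"], {})
    return enriched

alert = {"host": "pos-07", "dest_ip": "203.0.113.9", "sig": "pos-c2-beacon"}
endpoint_db = {"pos-07": {"os": "Windows XP SP3", "role": "point-of-sale"}}
intel_db = {"203.0.113.9": {"family": "Backoff", "first_seen": "2013-10"}}

ctx = enrich(alert, endpoint_db, intel_db)
print(ctx["intel"]["family"])  # Backoff
```

An analyst who sees in one view that the destination is a known C2 server and the host is a point-of-sale terminal can scope the compromise in minutes instead of hours.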

Lastly, we will need to contain and remediate the intrusion. These steps ensure that we stop the POS malware’s progress dead in its tracks — before it can steal valuable and sensitive payment card information from our organization.

If this seems like the familiar people, process, and technology triad, there is good reason for that. We must remind ourselves that it is no one piece of malware or intrusion that lands us in trouble. Rather, it is not detecting and responding to that intrusion in a timely manner that causes the damage.

It is certainly not easy to be a CISO these days. The microscope and heat lamp seem continually focused upon those in the role. The good news is that through a combination of best practices and Continuous Security Monitoring, CISOs can take a proactive stance to defend and protect their organizations against the breaches of today and of tomorrow.

Threat Research



What are Java’s Biggest Vulnerabilities?

In our continuing mission to equip security professionals against today’s advanced cyber threats, FireEye has published a free technical report, “A Daily Grind: Filtering Java Vulnerabilities.” The report outlines the three most commonly exploited Java vulnerabilities and maps out the step-by-step infection flow of exploits kits that leverage them.

  • CVE-2012-0507: a type confusion vulnerability caused by the improper implementation of AtomicReferenceArray().
  • CVE-2013-2465: insufficient bounds checks in the storeImageArray() function. This vulnerability is used by White Lotus and other exploit kits.
  • CVE-2012-1723: a type confusion vulnerability that allows attackers to bypass the Java sandbox.

These vulnerabilities are also being used in targeted attacks. Our report explains the three most common behaviors of Java exploits: use of reflection to hide function calls, function and data obfuscation, and downloading of malicious payloads. Download the paper to learn more.
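The reflection behavior described in the report is a Java technique (java.lang.reflect), but the idea carries over to any dynamic language. As a rough illustration only, here is a Python sketch using an invented shift-by-one encoding: the name of the sensitive call never appears as a literal string, so naive string scanning of the sample misses it.

```python
import builtins

def hidden_call(obfuscated_name, arg):
    """Resolve and invoke a builtin whose name is stored obfuscated.

    obfuscated_name: list of ints, each one greater than the real
    character code (an invented encoding for this example).
    """
    name = "".join(chr(c - 1) for c in obfuscated_name)
    # getattr plays the role reflection plays in Java exploits:
    # the call target is resolved only at runtime.
    return getattr(builtins, name)(arg)

# [110, 98, 121] decodes to the builtin 'max'; a scanner looking for
# the literal name in the file finds nothing.
print(hidden_call([110, 98, 121], [3, 1, 2]))  # 3
```

Detection engines therefore have to reason about behavior at runtime rather than relying on the presence of suspicious names in the bytecode.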

Two Limited, Targeted Attacks; Two New Zero-Days

The FireEye Labs team has identified two new zero-day vulnerabilities used in limited, targeted attacks against some major corporations. Both zero-days exploit the Windows kernel; Microsoft assigned them CVE-2014-4148 and CVE-2014-4113 and addressed both in its October 2014 Security Bulletin.

FireEye Labs has identified 16 zero-day attacks in the last two years – 11 in 2013 and five so far in 2014.

Microsoft commented: “On October 14, 2014, Microsoft released MS14-058 to fully address these vulnerabilities and help protect customers. We appreciate FireEye Labs using Coordinated Vulnerability Disclosure to assist us in working toward a fix in a collaborative manner that helps keep customers safe.”

In the case of CVE-2014-4148, the attackers exploited a vulnerability in the Microsoft Windows TrueType Font (TTF) processing subsystem, using a Microsoft Office document to embed and deliver a malicious TTF to an international organization. Since the embedded TTF is processed in kernel-mode, successful exploitation granted the attackers kernel-mode access. Though the TTF is delivered in a Microsoft Office document, the vulnerability does not reside within Microsoft Office.

CVE-2014-4148 impacted both the 32-bit and 64-bit Windows operating systems listed in MS14-058, though the attacks only targeted 32-bit systems. The malware contained within the exploit has specific functions adapted to the following operating system platform categories:

  • Windows 8.1/Windows Server 2012 R2
  • Windows 8/Windows Server 2012
  • Windows 7/Windows Server 2008 R2 (Service Pack 0 and 1)
  • Windows XP Service Pack 3

CVE-2014-4113 rendered Microsoft Windows 7, Vista, XP, Windows 2000, Windows Server 2003/R2, and Windows Server 2008/R2 vulnerable to a local Elevation of Privilege (EoP) attack. This means that the vulnerability cannot be used on its own to compromise a customer’s security. An attacker would first need to gain access to a remote system running any of the above operating systems before they could execute code within the context of the Windows Kernel. Investigation by FireEye Labs has revealed evidence that attackers have likely used variations of these exploits for a while. Windows 8 and Windows Server 2012 and later do not have these same vulnerabilities.

Information on the companies affected, as well as threat actors, is not available at this time. We have no evidence of these exploits being used by the same actors. Instead, we have only observed each exploit being used separately, in unrelated attacks.

About CVE-2014-4148

Mitigation

Microsoft has released security update MS14-058 that addresses CVE-2014-4148.

Since TTF exploits target the underlying operating system, the vulnerability can be exploited through multiple attack vectors, including web pages. In the past, exploit kit authors have converted a similar exploit (CVE-2011-3402) for use in browser-based attacks. More information about this scenario is available under Microsoft’s response to CVE-2011-3402: MS11-087.

Details

This TTF exploit is packaged within a Microsoft Office file. Upon opening the file, the font will exploit a vulnerability in the Windows TTF subsystem located within the win32k.sys kernel-mode driver.

The attacker’s shellcode resides within the Font Program (fpgm) section of the TTF. The font program begins with a short sequence of instructions that quickly return. The remainder of the font program section is treated as unreachable code for the purposes of the font program and is ignored when initially parsing the font.

During exploitation, the attacker’s shellcode uses Asynchronous Procedure Calls (APC) to inject the second stage from kernel-mode into the user-mode process winlogon.exe (in XP) or lsass.exe (in other OSes). From the injected process, the attacker writes and executes a third stage (executable).

The third stage decodes an embedded DLL to, and runs it from, memory. This DLL is a full-featured remote access tool that connects back to the attacker.

Plenty of evidence points to the attackers’ high level of sophistication. Beyond the fact that the attack used a zero-day, kernel-level exploit, it also showed the following:

  • a usable hard-coded area of kernel memory is used like a mutex to avoid running the shellcode multiple times
  • the exploit has an expiration date: if the current time is after October 31, 2014, the exploit shellcode will exit silently
  • the shellcode has implementation customizations for four different types of OS platforms/service pack levels, suggesting that testing for multiple OS platforms was conducted
  • the dropped malware individually decodes each string when that string is used to prevent analysis
  • the dropped malware is specifically customized for the targeted environment
  • the dropped remote access capability is full-featured and customized: it does not rely on generally available implementations (like Poison Ivy)
  • the dropped remote access capability is a loader that decrypts the actual DLL remote access capability into memory and never writes the decrypted remote access capability to disk
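The per-use string decoding noted above is a common anti-analysis pattern: strings sit encoded at rest and are decoded only at the instant they are needed. A minimal Python sketch follows; the XOR key and callback domain are invented for illustration, not values from this malware.

```python
def xor_decode(blob, key):
    """Decode an obfuscated string only at the moment it is used."""
    return bytes(b ^ key for b in blob).decode()

# At rest, the C2 hostname exists only as an opaque byte blob, so
# strings-style static analysis reveals nothing readable.
ENC_C2 = bytes(b ^ 0x5A for b in b"callback.example.net")

def connect_c2():
    host = xor_decode(ENC_C2, 0x5A)  # decoded immediately before use
    # ... a real sample would open its C2 channel here ...
    return host

print(connect_c2())  # callback.example.net
```

Because no decoded copy persists in memory longer than necessary, an analyst has to execute or emulate the code path to recover each string, which slows analysis considerably.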

About CVE-2014-4113

Mitigation

Microsoft has released security update MS14-058 that addresses this vulnerability.

Vulnerability and Exploit Details

The 32-bit exploit triggers an out-of-bounds memory access that dereferences offsets from a high memory address, and inadvertently wraps into the null page. In user-mode, memory dereferences within the null page are generally assumed to be non-exploitable. Since the null page is usually not mapped – the exception being 16-bit legacy applications emulated by ntvdm.exe – null pointer dereferences will simply crash the running process. In contrast, memory dereferences within the null page in the kernel are commonly exploited because the attacker can first map the null page from user-mode, as is the case with this exploit. The steps taken for successful 32-bit exploitation are:

  1. Map the null page:
    1. ntdll!ZwAllocateVirtualMemory(…, BaseAddress=0x1, …)
  2. Build a malformed win32k!tagWND structure at the null page such that it is properly validated in the kernel
  3. Trigger vulnerability
  4. Attacker’s callback in win32k!tagWND.lpfnWndProc executes in kernel-mode
    1. Callback overwrites EPROCESS.Token to elevate privileges
  5. Spawns a child process that inherits the elevated access token
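The wrap-around that makes step 1 useful is plain modular arithmetic. The sketch below uses invented addresses (not the exploit's actual values) to show why an offset from a high address lands in the null page in a 32-bit process but not in a 64-bit one.

```python
MASK32 = 0xFFFFFFFF  # 32-bit pointers are computed modulo 2**32

def deref_addr(base, offset, bits=32):
    """Compute the address that a dereference of base+offset touches."""
    addr = base + offset
    return addr & MASK32 if bits == 32 else addr

# In a 32-bit process, a high base plus a small positive offset wraps
# around into the null page (addresses below the first mapped page)...
wrapped = deref_addr(0xFFFFFFF8, 0x10, bits=32)
print(hex(wrapped))  # 0x8 -- inside the null page

# ...while in a 64-bit process the same arithmetic stays well within
# the addressable range, so the null page is never reached.
print(hex(deref_addr(0xFFFFFFF8, 0x10, bits=64)))  # 0x100000008
```

This is why the 64-bit variant of the exploit maps an ordinary page at the computed high address instead of mapping the null page.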

32-bit Windows 8 and later users are not affected by this exploit. The Windows 8 Null Page protection prohibits user-mode processes from mapping the null page and causes the exploit to fail.

In the 64-bit version of the exploit, dereferencing offsets from a high 32-bit memory address does not wrap, as the result is well within the addressable memory range for a 64-bit user-mode process. As such, the Null Page protection implemented in Windows versions 7 (after MS13-031) and later does not apply. The steps taken by the 64-bit exploit variants are:

  1. Map memory page:
    1. ntdll!ZwAllocateVirtualMemory(…)
  2. Build a malformed win32k!tagWND structure at the mapped page such that it is properly validated in the kernel
  3. Trigger vulnerability
  4. Attacker’s callback in win32k!tagWND.lpfnWndProc executes in kernel-mode
    1. Callback overwrites EPROCESS.Token to elevate privileges
  5. Spawns a child process that inherits the elevated access token

64-bit Windows 8 and later users are not affected by this exploit. Supervisor Mode Execution Prevention (SMEP) blocks the attacker’s user-mode callback from executing within kernel-mode and causes the exploit to fail.

Exploit Tool History

The exploits are implemented as a command-line tool that accepts a single argument – a shell command to execute with SYSTEM privileges. This tool appears to be an updated version of an earlier tool. The earlier tool exploited CVE-2011-1249 and displays the following usage message to stdout when run:

Usage:system_exp.exe cmd
Windows Kernel Local Privilege Exploits

The vast majority of samples of the earlier tool have compile dates in December 2009.  Only two samples were discovered with compile dates in March 2011. Although the two samples exploit the same CVE, they carry a slightly modified usage message of:

Usage:local.exe cmd
Windows local Exploits

The most recent version of the tool, which implements CVE-2014-4113, eliminates all usage messages.

The tool appears to have gone through at least three iterations over time. The initial tool and exploit are believed to have had limited availability, and may have been employed by a handful of distinct attack groups. As the exploited vulnerability was remediated, someone with access to the tool modified it to use a newer exploit when one became available. These two newer versions likely did not achieve the widespread distribution that the original tool did and may have been retained privately, not necessarily by the same actors.

We would like to thank Barry Vengerik, Joshua Homan, Steve Davis, Ned Moran, Corbin Souffrant, Xiaobo Chen for their assistance on this research.

Double-edged Sword: Australia Economic Partnerships Under Attack from China

During a visit in mid-September, China’s Foreign Minister Wang Yi urged Australia to become “a bridge between east and west.” He was Down Under to discuss progress on the free trade agreement between Australia and China that seems likely by the end of the year. His comment referred to furthering the trade relationship between the two countries, but he might as well have been referring to hackers who hope to use the deepening alliance to steal information.

The Australian Financial Review (AFR) did an in-depth article with FireEye regarding Chinese attacks against Australian businesses, and this blog provides additional context.

Australia has experienced unprecedented trade growth with China over the last decade, which has created a double-edged sword. As Australian businesses partner with Chinese firms, China-based threat actors increasingly launch sophisticated and targeted network attacks to obtain confidential information from Australian businesses. In the U.S. and Europe, Chinese attacks on government and private industry have become routine items in local newspapers.  Australia, it seems, is the next target.

The Numbers

First, let’s review the state of Australian and Chinese economic interdependence.  Averaging an annual 9.10% GDP growth rate over the last two decades, China’s unparalleled economic expansion has protected Australia from the worst of the global financial crisis effects. Exports to China have increased tenfold, from $8.3b USD in 2001 to $90b USD in 2013[i], with the most prominent commodities being iron ore and natural gas. Many of these resources originate in Australia, which puts China’s government under significant pressure to secure supplies that meet skyrocketing demand. Despite the ever-increasing co-dependence Australia and China share as regional partners, Chinese authorities are likely supporting greater levels of monitoring and intelligence gathering against the Australian economy – often conducted through Chinese State-Owned Enterprises (SOEs) with domestic relationships in Australia.

SOE direct investment grew to 84% of all foreign investment inflows from China into Australia in 2014, primarily directed into the Australian mining and resource sector – a further signal of China’s desire for control as it seeks certainty in providing for its future internal growth. We suspect government-commissioned cyber threat actors are targeting Australian firms with a specific agenda: to gain advantage over, and control of, assets in both physical infrastructure and intellectual property.


Figure 1. Chinese Direct Investment into Australia by industry

The Impacts

How have these partnerships impacted Australian networks?  Mandiant has observed Chinese threat actors conducting strategic operations against companies in key economic sectors, including data theft from an Australian firm.  Chinese Advanced Persistent Threat (APT) groups are likely interested in compromising Australian mining and natural resources firms, especially after spikes in commodity prices. The upward trend in APT attacks from China also extends to third parties in the mining and natural resources ecosystems. Mandiant has observed a significant increase in China-based APT intrusions focused on law firms that hold confidential mergers and acquisitions information and sensitive intellectual property. It is no coincidence that these third-party firms are often found lacking in network protections. The investigation also found that, at the time of compromise, the majority of victim firms were in direct negotiations with Chinese enterprises, highlighting attempts by the Chinese government to gain advantage in targeted areas.

Due to its endemic pollution problems, clean energy has evolved into a critical industry for China, and the country has engaged a plan to develop Strategic Emerging Industries (SEIs) to address it. Australian intellectual property and R&D have become prime targets, taking a major position in Chinese APT campaigns. Again, it is third parties such as law firms that are coming under attack.

Furthermore, to reduce China’s reliance on Australian iron ore exports, Beijing has initiated a plan to develop an efficient, high-end steel production vertical through strategic acquisitions in Australia and by intervening to prevent unfavorable alliances.  For example, the SOE Chinalco bought into Australian mining companies, presumably to prevent a merger that would have disadvantaged its interests. Clearly, the confidential business information of Australian firms exporting to China is becoming increasingly sought after.

Mandiant found that the majority of compromised firms had either current negotiations or previous business engagements with Chinese enterprises. These attacks will persist as trade and investment grow, at the cost of confidential Australian business information such as R&D and intellectual property. As large Australian mining and resources firms themselves may partner with the Australian Signals Directorate for security, threat actors shift their focus to associated parties with access to sensitive data, who may not be pursuing such partnerships.  This calls for greater awareness of, and protection against, increasingly determined and advanced attacks.

The Bottom Line

Although this blog focuses on acts against large Australian mining and resources sectors, Mandiant has observed these APT actors often focusing their attention on other sectors such as defence, telecommunications, agriculture, political organizations, high technology, transportation, and aerospace, among others. But the broader lesson and message—drawing from U.S. and European experience with Chinese attacks—is that no one is or will be exempt.  For all Australian businesses and governments, it’s time to fortify defences for a new era of cyber security.

 

[i] Australian Government Department of Foreign Affairs and Trade. www.dfat.gov.au/publications/stats-pubs/australiasexports-

 

Data Theft in Aisle 9: A FireEye Look at Threats to Retailers

While cybercriminals continue to target the payment card and banking information of individual users, they seem increasingly aware that compromising retailers is more lucrative. Targeting retailers is not new; Albert Gonzalez infamously targeted retailers nearly a decade ago. What has changed, however, is the wide availability of tools and know-how that make it possible for even relatively unskilled cybercriminals to commit large-scale attacks. The results speak for themselves – significant breaches at retailers have increased over the last few years, and the trend continues. In fact, the Verizon Data Breach Investigations Report called 2014 the “year of the retailer breach” due to the number of large-scale attacks.

Not only are breaches at retailers occurring more regularly, FireEye researchers have noticed another startling trend: while much of this activity is not initially targeted in nature, it can easily transition to a targeted attack when the attackers realize the value of the network they have compromised. The convergence of widely available malware tools purpose-built for point-of-sale (POS) systems, indiscriminate botnets, and targeted attack activity suggests that network defenders struggle to determine levels of threat severity and adversary sophistication. Simply put: what may initially seem like a “simple” crimeware infection may actually be a vector through which targeted actors purchase or rent access to their victims.

POS Malware: A History Lesson

Since 2013 we have seen a dramatic increase in the number of malware threats specifically focused on POS systems. This uptick is like any other market dynamic — there’s a lot of data residing in retailers’ networks, and threat actors are adapting and evolving to take advantage of what’s at stake. Robust underground markets and an enterprise-like cyber criminal ecosystem enable threat actors to develop and trade their wares. What follows is a summary of some of the most significant POS malware families and their similarities:

  • Backoff POS – The Backoff attacks were publicly disclosed in July 2014, but the campaign itself had been active since October 2013. The attackers reportedly brute forced remote desktop servers and installed the Backoff malware. Backoff is capable of extracting payment card data by scraping memory, and exfiltrating data over HTTP. Backoff’s Command-and-Control (C2) servers are connected to servers used to host Zeus, SpyEye and Citadel, suggesting Backoff may be connected to a broader series of attacks.
  • BrutPOS – The BrutPOS malware was documented in July 2014. This botnet scans specified ranges of IP addresses for remote desktop servers and if a POS system is found, the attackers may deploy another variant that scans the memory of running processes to extract payment card information. BrutPOS exfiltrates data over FTP.
  • Soraya – The Soraya POS malware was disclosed in June 2014. It iterates through running processes and accesses memory to extract payment card data. Soraya also has form-grabbing capabilities and exfiltrates data over HTTP.
  • Nemanja – The details of the Nemanja botnet were disclosed in May 2014, and the botnet is believed to have been active throughout 2013. The attackers compromised an array of POS machines worldwide running a variety of POS software. The attackers were reportedly directly engaged in the production of fake payment cards and money laundering using mobile POS solutions.
  • JackPOS – The JackPOS malware was reported in February 2014 and was reportedly originally spread using “drive-by” download attacks. The malware, which appears to be somewhat related to the Alina malware, is capable of scraping memory to acquire payment card data and exfiltrate it over HTTP. JackPOS is now widely available on underground forums and is used by a variety of actors.
  • Decebal – The Decebal POS malware was first reported in January 2014. The malware enumerates running processes and extracts payment card information, which is then exfiltrated over HTTP.
  • ChewBacca – The ChewBacca malware was first disclosed in December 2013. This malware enumerates running processes and accesses memory to extract information using two regular expressions that match payment card data formats. This malware uses the Tor anonymity network for data exfiltration.
  • BlackPOS – The BlackPOS malware, sold on underground forums by an individual believed to be “ree4,” was first reported in March 2013 and is now widely available. This malware, which has a variant also known as KAPTOXA, scrapes memory to obtain payment card data. This data is typically transferred to a local staging point and then exfiltrated using FTP. The malware is best known for its reported role in several highly publicized breaches.
  • Dexter – The Dexter POS malware was first disclosed in December 2012 and is believed to have been developed by an actor known as “dice” (who may also have been involved with the development of the Alina POS malware); the actual use of the tool has been connected to an individual known as “Rome0”. The malware iterates through running processes, accesses memory looking for payment card data, and exfiltrates it over HTTP.

Most of these malware families use a similar approach of enumerating running processes and using pattern matching to extract payment card information from running processes. However, in at least one case, a BlackPOS variant was configured to only access a specific process. This not only makes it less noisy, but indicates that the attackers knew what process to target on the compromised system. This development, along with some hardcoded network paths and usernames, may indicate specific targeting by the attackers.
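The pattern-matching approach these families share can be sketched in a few lines. The example below is illustrative only: the regular expression is a simplification (real scrapers match full track-data formats, not bare digit runs), and the buffer contents use a well-known test card number, not real data. The Luhn checksum is what lets a scraper discard random digit runs cheaply.

```python
import re

# Candidate primary account numbers: 13-16 digit runs (simplified).
PAN_RE = re.compile(rb"\b\d{13,16}\b")

def luhn_ok(digits):
    """Luhn checksum: doubles every second digit from the right."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scrape(buf):
    """Extract Luhn-valid card-number candidates from a memory buffer."""
    return [m.decode() for m in PAN_RE.findall(buf) if luhn_ok(m.decode())]

# A snapshot of process memory: one valid test PAN, one random digit run.
mem = b"junk 4111111111111111 junk 1234567890123 junk"
print(scrape(mem))  # ['4111111111111111']
```

The BlackPOS variant mentioned above shows the next step in sophistication: rather than sweeping every process with a pattern like this, it reads only the one process the attackers already knew handled card data.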

Indiscriminate vs. Targeted Attacks

While some malware appears to have been used exclusively by particular threat actors, some variants are now widely available. In many cases, it appears that the POS-specific malware was used as a “second stage” attack, while the initial vector remains unclear. In the case of Alina and BackOff, for example, the POS malware was connected to Citadel, Zeus and SpyEye botnets. While the details remain unclear, the attackers may be selling or trading access to particular targets.

In other cases, the attackers appear to be much more specific with their targeting. One particular threat group will engage in periods of reconnaissance for months before engaging with the target. We also observed this group using SQL injection as an attack vector, deploying POS-specific malware after moving laterally through the compromised network.

Conclusion

These developments challenge the traditional conceptions of risk when it comes to network defense. While targeted attacks (including those associated with APT activity) can be tracked and clustered over time (by understanding the tools, techniques and procedures used by the threat actors, as well as their timing, scope and targeting preferences), it is difficult to prioritize incidents that began indiscriminately and transitioned into targeted attack activity. Is a simple, indiscriminate Zeus infection a noteworthy incident? Or will it pass largely unnoticed … only to transform into a significant breach?

Acknowledgements

We would like to thank Kyle Wilhoit, Jen Weedon and Chris Nutt.


Cashing in on Cybersecurity

With the recent news of Wall Street banks requesting a meeting with the U.S. Treasury Department and other government officials to discuss cybersecurity concerns, I reached out to one of the leading information security authorities for her take on the cyber threats that banks currently face. Following is an interview I held with CEO and Founder of Pondera International, Kristen Verderame.

What are the key threats banks face today?

Banks face a number of cybersecurity threats today – more than ever before. Threat actors targeting financial services are growing more and more sophisticated. While malware continues to be the biggest reported threat, attackers more often use an attack vector only once, making monitoring for advanced persistent threat groups increasingly difficult. The good news is that the financial services industry is way ahead of the curve in terms of preparedness and the ability to counter such threats. In fact, the financial services sector has led all sectors for some time because its business case has required it.

How can the government(s) help? Why should they?

Governments can help by publicizing best practices for industry to follow, as demonstrated in the NIST Cybersecurity Framework issued earlier this year. Though the Framework is not comprehensive and certainly not a panacea for all cybersecurity vulnerabilities, it provides a useful assessment and summary of best practices and will be a good resource for entities that have not taken action previously. Governments can also help by facilitating trustworthy information sharing and supporting bi-directional sharing (i.e., government-to-industry sharing, not just industry-to-government). Often the government, as a neutral party, is in the best position to facilitate such sharing between industry competitors.

How important is a community approach when it comes to cyber defense?

A community approach is widely recognized as critical for effective cyber defense. The sharing of threat information and best practices between entities has proven the most effective means of combating APTs across industry sectors and across geographical boundaries. Collaboration through information sharing has been recognized by the U.S. Congress as a critical tool against cybersecurity threats – both the House and Senate introduced legislation to promote information sharing across government and industry. President Obama included information sharing as a key component of his Executive Order. Outside the U.S., the European Commission is currently considering cybersecurity legislation that not only encourages information sharing, but requires collaboration across Member States in a variety of other ways.

Are there any precedents for this type of collaboration and will it succeed?

Yes on both counts, in my opinion. One example of collaboration that has proven effective is the “Information Sharing and Analysis Centers” (ISACs), which comprise critical infrastructure owners in various sectors. The ISACs provide an information-sharing platform for their members and sometimes also provide risk mitigation, incident response and alerts. Some ISACs have proven more effective than others: the FS-ISAC has consistently served as a model for other ISACs, while the energy sector’s ISAC is not as robust as many would like, in part because the industry regulator sits at the table with industry, presenting a potential conflict of interest. Though not perfect, the ISACs provide at their core a facilitative framework used by government and industry for collaboration and cooperation.

Follow the Information

This is the second article of a six-part series by FireEye’s Chief Privacy Officer, Shane McGee. In this series, Shane explores six fundamental steps to building an effective privacy program. While there are many critical pieces to consider, Shane chose to highlight the following:

  1. Give Privacy a Voice
  2. Follow the Data
  3. Communicate Clearly, Carefully and Candidly
  4. Become Part of the Process
  5. Build a Culture of Privacy
  6. Rinse and Repeat

Follow the Data

Chief Privacy Officers have a lot of different responsibilities. We monitor new legislation, make policy, meet with clients to explain our information practices, attend conferences to keep current on best practices, participate in working groups and speak publicly about the company’s commitment to privacy. And while that’s all important, none of it addresses what I believe should be the primary responsibility of a CPO: ensuring that your company collects, stores, uses and shares data consistent with the law, company policies and reasonable customer expectations. This, of course, requires a solid understanding of the nature of the data collected and where it resides.

When it comes to data, understanding what you have and where it sits is more difficult than most people think. In fact, it’s frequently the case that any given company will collect more data – and more types of data – than a simple inquiry would indicate. To obtain a true inventory, one must follow the data.

Following the data isn’t easy. You can ask your company’s engineers to provide you with a data map, but what you receive may not turn out to be particularly helpful. Engineers deal with technology and information architectures, and you’re much more likely to receive an architectural flow than something that presents an organized picture of the data the company collects. And if you’re lucky enough to receive a ‘real’ data map, it may be out-of-date or unintelligible to someone without engineering superpowers.

So what do you do? Conduct a friendly data deposition! Sit the engineers down and ask questions – lots and lots of questions. If you don’t understand something, concede your ignorance and ask them to explain. And while they may be more interested in telling you about the technology, your questions should be focused on the data. Although each data deposition should be tailored to fit the situation, some examples of questions to ask are:

  • How is the data collected? Is it validated or sanitized?
  • If collected via a web browser, are HTTP referrers stored? If so, is user data removed?
  • If collected online, is the contributor’s IP or MAC address stored?
  • Is the data associated with any type of unique identifier that could be used to identify a person?
  • Which database fields store the data?
  • Are there any narrative/text fields that can store information that wasn’t solicited?
  • When data is reportedly deleted, is the record overwritten or just ‘unlinked’ or marked as deleted?
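The last question above is worth a concrete illustration. The following SQLite sketch (the table and column names are hypothetical) contrasts a “soft” delete, which merely flags or unlinks a record, with a true DELETE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, deleted INTEGER DEFAULT 0)"
)
conn.execute("INSERT INTO users (email) VALUES ('user@example.com')")

# 'Soft' delete: the row is only flagged. The application may treat it as
# gone, but the data is still sitting in the table, fully recoverable.
conn.execute("UPDATE users SET deleted = 1 WHERE id = 1")
soft = conn.execute("SELECT email FROM users WHERE id = 1").fetchone()

# 'Hard' delete: the row is actually removed from the table.
conn.execute("DELETE FROM users WHERE id = 1")
hard = conn.execute("SELECT email FROM users WHERE id = 1").fetchone()
```

After the soft delete, the email is still retrievable; after the hard delete, the query returns nothing. Note that even a hard delete can leave residue at the storage layer (SQLite, for example, keeps deleted rows in free pages until a VACUUM), which is exactly the kind of detail a data deposition should surface.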

Take copious notes during the data deposition and, after you’re done, draft your own data map while it’s still fresh in your head. Send that back to the engineers and ask them to review, correct any mistakes and bless the final version.

Finally, without giving away any of the yet-to-be disclosed secrets in our sixth article, Rinse and Repeat, make certain you keep apprised of changes. Ask your engineers to update you when something changes on the data map they blessed, and schedule regular checkpoints to catch anything that falls through the cracks. If you do all this, you’ll be better situated to accomplish your primary mission as CPO.

MIRcon: What the Cosmos can Teach us about Security

A few people have asked me what central theme and message stayed with me after last week’s MIRcon. I would say that Dr. Neil deGrasse Tyson’s keynote resonated with me and matched the central theme I felt during the conference; allow me to explain.

During his keynote, Dr. Tyson spoke about science and, specifically, about how the scientific method allows us to objectively overcome our natural human biases. In other words, science is about forming a hypothesis, testing that hypothesis through accurate measurement, and reaching an objective conclusion based on the observed data. Security is the same, or at least it should be.

In the security domain, we don’t always take advantage of and apply the rich foundations of knowledge and expertise that exist in other domains. Often these other domains are far more mature than our own. I can think of no better example of this than science. For hundreds of years, scientists have used an agreed upon, methodical approach to advance the state of science. We can learn a lot from this conceptually and apply it to the security domain.

At MIRcon, the presentations I saw and the discussions I was fortunate enough to take part in indicated to me that we have begun to approach the security domain far more scientifically. Gone are the days of emoting and guessing – the problems are far too complex, the data too diverse, and the attackers too sophisticated. As a profession, we have begun to demand a far more scientific approach to the security domain than was historically the case.

The atmosphere at MIRcon was invigorating. Security professionals have tired of unsupported hypotheses – we are ready for a more formal approach. Today’s challenges require a more scientific way of thinking. We need to explicitly identify and enumerate the challenges we are facing in the field, hypothesize the solutions to those challenges, test those solutions through accurate measurement, and reach objective conclusions about the merits of those solutions.

Recommendations, beliefs, and hypotheses are in no shortage in our field. But are they accurate, do they solve security problems, and do they address the challenges of the day? The answer to those questions needs to be evaluated scientifically, rather than debated in the absence of accurately measured data.

Security has evolved from a niche profession to a mainstream one. As such, our work must stand up to the same rigor we would apply to any other profession. Any other approach would simply be unscientific.


Flying Blind

With all the news about data breaches lately, it’s not particularly surprising to wake up to headlines describing yet another one.  What is perhaps a bit surprising, however, is the common theme that seems to exist in many of the breach stories.  Time and time again, when organizations get breached, they find out the hard way that they don’t have the endpoint and network visibility they thought they did.  The necessary data to perform the forensics required to reach an analytical conclusion is simply missing.  Further, there is no way to remedy this situation – if the data was not properly recorded when it traversed the network or endpoint, there is simply no way to access it.

What are some of the reasons that data is not available come breach response time?  Let’s take a look at a few of them.

  • Collection: One of the goals of a security program is to ensure that the necessary network and endpoint data are collected.  Unfortunately, this is often a challenge for even the most mature of security programs.  In some cases, organizations may not have their networks and endpoints properly instrumented for collection.  In other cases, organizations may not be properly equipped to retain and expose for analysis the volume of data created by the network and endpoint instrumentation.  Either way, when it comes time to investigate, the relevant data will not be available.
  • Visibility: More data doesn’t necessarily mean more visibility or coverage.  There is an important distinction between the volume of the data and the portions of the organization that it provides visibility into.  Some organizations may have portions of their networks or endpoints instrumented for collection, but not others.  But what if the breach occurs in an area of the network or on an endpoint that is not included in the area of visibility?  In those cases, unfortunately, data that is relevant to the breach investigation will not be available for forensics and analysis.
  • Retention: Another important dimension to consider is that of retention.  In the absence of an infinite volume of storage, data cannot be retained forever.  Today’s organizations generate incredible amounts of data from their collection efforts.  Sometimes, the network and endpoints are properly instrumented in the appropriate places, but there is simply nowhere to put the volume of data that is generated.  As the volume of data grows, either the retention period shrinks, or the storage capacity grows to compensate.  It is not uncommon for the retention period to fall to 30 days, or even less.  With mean-time-to-detection at a staggering 229 days, it is easy to see that 30, 60, or even 90 days of retention is simply inadequate when it comes time to perform forensics and analysis.  Although the relevant data for the investigation may have existed at one time, if it isn’t present when we perform our investigation, it doesn’t help us much.  This necessitates us getting a bit smarter about what data we retain.  Our goal should be data that provides us maximum visibility into the network and endpoints, but at the minimal volume.  Perhaps it sounds a bit radical to say, but the days of “collect everything” are gone – instead we find ourselves in an era of “collect the most relevant things”.
  • Analysis: Even if our collection, visibility, and retention are squared away, we may still encounter frustrations and limitations when performing incident response.  Although we may have the data we need over the time period we need it for, we still need to be able to analyze it.  If we are unable to extract the data rapidly from our forensic collection platforms, we will be unable to analyze it.  Simply put, what goes in must come out.  For example, say we need to search for the first appearance of a given Indicator of Compromise (IOC) over the entirety of our retention period.  For this example, let’s assume our retention period is on the order of 12 months.  If that query fails before completing or takes days to complete, it is of no value to incident response.  Incident response demands answers in seconds or minutes, rather than hours or days.
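The analysis point above – “what goes in must come out” – often comes down to how the data is structured for retrieval. As a toy sketch (the event layout here is invented for illustration), compare a linear scan of every stored event against an inverted index keyed by indicator:

```python
from collections import defaultdict

# Naive approach: linear scan of every retained event for the IOC.
# Cost grows with the full retention volume on every single query.
def linear_search(events, ioc):
    return [e for e in events if ioc in e["indicators"]]

# Indexed approach: build an inverted index once at ingest time, then
# each lookup is a single dictionary access instead of a full scan.
def build_index(events):
    index = defaultdict(list)
    for e in events:
        for ind in e["indicators"]:
            index[ind].append(e["id"])
    return index

events = [
    {"id": 1, "indicators": {"evil.example.com", "203.0.113.7"}},
    {"id": 2, "indicators": {"198.51.100.4"}},
    {"id": 3, "indicators": {"evil.example.com"}},
]
index = build_index(events)
```

Real forensic platforms are far more sophisticated, but the principle is the same: if a 12-month IOC query has to walk the raw data, it will take hours or days; if the data was indexed at collection time, the answer comes back in seconds.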

Despite the steady stream of bad news regarding data breaches, there is some good news.  By taking proactive steps, organizations can prepare themselves to perform rapid and efficient incident response when they become the victim of a breach.  Among many details, it’s important for an organization to consider the points above when assessing its breach preparedness.

FireEye and OS X Support

Today, we announced support for OS X in our flagship NX product. This means we now have virtual image capabilities for Macs in an enterprise environment. This is important for several reasons:

  • The Mac footprint inside the enterprise is growing. According to Forrester, 21 percent of information workers now use one or more Apple products, and the number of Apple devices issued in the enterprise is projected to increase by 52 percent.
  • Senior level employees—i.e., targets interesting to attackers—represent 41 percent of enterprise Apple users. At a recent conference, our CTO Dave Merkel said, “We live in a fully connected world. Where information goes, spies follow. Where money goes, crime follows.” Now you can add: “Where the employee goes, malware will follow.”

Our OS X support has, in fact, been in beta and available to customers for several months. This increased use of Apple computers has caught the attention of attackers, with FireEye Labs seeing malware callbacks from Macs increase 36 percent year over year between the first six months of 2013 and the same period of 2014.

More importantly, our product uncovered – within two days of deployment – an Apple-centric malware campaign, which we detailed in this blog. Specifically, FireEye Labs discovered a previously unknown variant of the APT backdoor XSLCmd – OSX.XSLCmd – designed to compromise Apple OS X systems. This backdoor shares a significant portion of its code with the Windows-based version of the XSLCmd backdoor, which has been around since at least 2009. This discovery, along with other industry findings, is a clear indicator that APT threat actors are shifting their attention to OS X as it becomes an increasingly popular computing platform.

We hope this release helps security teams be ready.