Q&A Follow-Up - State of the Hack: M-Trends 2013

During our most recent webinar, State of the Hack: M-Trends 2013, we received a lot of excellent questions. We received so many, in fact, that we didn't have enough time to answer them all. Many of these questions touched on the trends we identified last year, and attendees also asked directly what organizations could do to better position themselves against advanced attackers. This post covers the questions that went unanswered and provides some additional insight into this year's M-Trends® report.

Q: Based on the trends seen over the past year(s), how do you see hacking attacks evolving against the financial industry?

Over the past few years, we've seen targeted attackers pushing towards bigger and bigger payouts. This shift has focused attacks on payment processors. It is likely that attackers will shift towards spear phishing as an initial means of entry into the environment. As financial institutions implement more controls around the core systems used in financial transactions, attackers will need to find ways to circumvent those safeguards.

Q: Have you seen APT groups use the BlackHole Toolkit?

As of yet, we haven't seen APT groups use the BlackHole Toolkit. That being said, it is never out of the question that APT groups could switch tactics and eventually use similar methods to initially infect systems. It is important to remember that APT groups do not use only custom tools but also publicly available utilities, including Gh0st RAT and Poison Ivy.

Q: What is your recommendation for avoiding/detecting the drive-by attacks originating from legitimate websites that have been compromised?

Avoiding and detecting drive-by attacks can be difficult due to the nature of the attack itself. Oftentimes, this activity involves either a zero-day exploit or one that has only recently surfaced. Ensuring that software is kept up to date will help detect and mitigate the well-known exploits. While harder to implement, the more effective way to reduce this attack surface is to minimize the attack vector itself, which includes third-party applications such as Java and Flash. In addition, browser extensions such as NoScript or NotScript (or even disabling scripting within your browsers) are very effective.
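As a small illustration of the "keep software up to date" point, the sketch below checks a locally installed Java runtime against a minimum patched version. Both the baseline version and the banner format are assumptions for illustration, and it assumes `java` is on the PATH; it is not a substitute for centralized patch management.

```python
# Minimal sketch: flag an outdated Java runtime (one of the third-party
# attack vectors noted above). The baseline version is hypothetical, and
# `java` is assumed to be on the PATH.
import re
import subprocess

MINIMUM_PATCHED = (1, 7, 0, 21)  # hypothetical patched baseline for illustration

def installed_java_version():
    # `java -version` prints its banner to stderr, e.g. java version "1.7.0_17"
    banner = subprocess.run(["java", "-version"], capture_output=True, text=True).stderr
    match = re.search(r'version "(\d+)\.(\d+)\.(\d+)_(\d+)"', banner)
    return tuple(int(part) for part in match.groups()) if match else None

version = installed_java_version()
if version is None:
    print("Could not determine the Java version")
elif version < MINIMUM_PATCHED:
    print(f"Java {version} is older than the patched baseline {MINIMUM_PATCHED}")
else:
    print(f"Java {version} meets the baseline")
```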

Q: What are the presenter's thoughts on collaboration between companies / businesses to counter cyber crime? And do they have any insight into the effectiveness, success and issues of different businesses working together to share techniques and experiences?

In the past, it has been difficult for companies to come out and admit that they've been hacked. As a result, little to no information about threat actors spread throughout the community; recently, however, the reticence of breach victims has started to change. Companies are publicly acknowledging attacks and informing the public, and even competitors, of details that previously would never have been known. The defense industrial base (DIB) and FS-ISAC do an excellent job of disseminating information to companies that face similar threats. It is no surprise, then, that these industries often do a better job of detecting and responding to targeted threats. As the trends we discussed seem to indicate, better intelligence sharing (and thus awareness) has led to decreased response time.

Q: Since the release of the APT1 report have you seen a noticeable change in the tools, techniques and procedures used by APT1 and other actors?

We have not yet performed an in-depth study like we did when we released the APT1 report. However, we have noticed this attack group taking actions such as removing disclosed backdoors and implementing newer variants. We've also seen this group take steps to remove identifiable WHOIS and other publicly available information. The short answer to this question is that yes, the group noticed the report and has responded to it. Ironically enough, a trojanized version of our report has been circulating as a fake report titled "APT2".

Q: Did you see anything in common among the victims who self-discovered the breach, or vice-versa?

Most of the companies that self-detected last year were organizations already familiar with the threat, oftentimes because they operated in industries heavily targeted by the APT. That awareness likely led these organizations to invest in their people and technology, which allowed them to act on received intelligence.

Q: How are we able to identify if the same attack group attempts to break back into a company?

This is a tough question to answer. It is rarely as simple as definitively stating that "Group A" attempted to come back. Sometimes we are lucky and the group uses a tactic we've seen before, but more often it requires additional investigative time (particularly if the attacker was able to break back into the organization, however briefly). Our Intelligence department also does a good job of tracking attacker group "campaigns" across industries, so we may receive a warning that our client is being targeted as part of a "campaign".
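As a rough illustration of how indicator overlap can hint that a previously seen group has returned, here is a minimal sketch; the group names, domains, IP addresses, and hash are all hypothetical placeholders rather than real attribution data, and overlap alone is never conclusive.

```python
# Minimal sketch: rank previously tracked groups by how many of their known
# indicators (domains, IPs, hashes) overlap with a new intrusion. All values
# below are hypothetical.
KNOWN_GROUP_INDICATORS = {
    "Group A": {"update.example-c2.com", "203.0.113.10", "d41d8cd98f00b204e9800998ecf8427e"},
    "Group B": {"cdn.bad-example.net", "198.51.100.7"},
}

def rank_groups(observed):
    """Return known groups sorted by how many observed indicators they share."""
    scores = {group: len(known & observed) for group, known in KNOWN_GROUP_INDICATORS.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

observed_indicators = {"update.example-c2.com", "192.0.2.44"}  # from the new intrusion
for group, overlap in rank_groups(observed_indicators):
    print(f"{group}: {overlap} overlapping indicator(s)")
```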

Q: Do you see any characteristics in pre-attack reconnaissance today that are useful for detection?

This is a difficult question to answer. When an organization suffers a SQL injection attack, the pre-attack reconnaissance often becomes painfully obvious when we review the web logs. However, we certainly don't want security teams responding every time someone detects a vulnerability scanner or SQL injection scanning tool. On the other hand, organizations are so public that it can be very difficult to determine whether someone browsing your publicly available information is merely curious or has something nefarious planned. The easy answer is to focus on securing your critical assets so that a successful attack cannot reach them, rather than worrying about potential reconnaissance activity. The not-so-easy answer is to spend some time scouring the internet and your publicly available servers to ensure that you are not exposing too much information (such as network architecture diagrams).
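To show how obvious SQL injection reconnaissance can be in web logs, the sketch below greps an access log for a few common injection signatures. The log path and the pattern list are assumptions for illustration only; in practice such a check produces noise as well as hits.

```python
# Minimal sketch: flag access-log entries containing common SQL injection
# probe strings. The log path and signature list are illustrative only.
import re
from urllib.parse import unquote

LOG_PATH = "access.log"  # hypothetical path
SIGNATURE = re.compile(
    r"union\s+select|or\s+1\s*=\s*1|information_schema|sleep\s*\(|'--",
    re.IGNORECASE,
)

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line_number, line in enumerate(log, start=1):
        decoded = unquote(line)  # probes are often URL-encoded
        if SIGNATURE.search(decoded):
            print(f"line {line_number}: possible SQLi probe -> {line.strip()}")
```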

Q: Please discuss your observations regarding how different originators use attack techniques - are there differentiators by region of origin or other localization? It is hard to believe all attackers use all techniques all the time.

While we do see a lot of different attack groups use similar techniques, different types of attack groups do use different tactics, techniques, and procedures (TTPs). For example, APT groups favor spear phishing emails or strategic web compromises to gain initial access to an environment, whereas organized cybercrime groups seem to favor exploiting vulnerable web applications.

Q: When compromised, how do you balance the need to get back to business as usual and ensure systems are working against the need to forensically track and learn from the attack?

This is a great question, and the answer is that it always depends on the situation. For example, if a financial institution is under attack and losing money in near real time, containing the incident is likely going to be more important than the investigation. This oftentimes puts the attacker into a firefight with the investigative and remediation teams, which causes the various teams to work long hours before the incident is truly remediated and the attacker is denied access. However, if the same financial institution has been breached by a targeted threat that is more interested in gaining access to M&A data, the better approach will likely be to investigate the scope of the compromise first, then perform an eradication event followed by longer-term remediation.

Q: Any developments regarding forensic analysis/digital analysis within the Cloud Ecosystem?

Approaching an investigation in a cloud environment is not much different from approaching an investigation of a very large multinational corporation - the investigative process is roughly the same, but the scale of the investigation is larger. One difference is that the investigative team may have to rely heavily on available logging to piece activity back together (such as in a heavily load-balanced environment).
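To make the "rely heavily on available logging" point concrete, here is a minimal sketch that rebuilds a timeline for one source IP from combined-format logs collected across load-balanced nodes; the log location, log format, and suspect IP are all hypothetical assumptions for illustration.

```python
# Minimal sketch: stitch together one actor's activity from web/load-balancer
# logs spread across many nodes. Paths, format, and the suspect IP are
# hypothetical.
import glob
import re
from datetime import datetime

LOG_GLOB = "lb-logs/*.log"   # hypothetical log location
SUSPECT_IP = "192.0.2.44"    # hypothetical indicator from the investigation

# e.g. 192.0.2.44 - - [21/Mar/2013:10:15:32 +0000] "GET /admin HTTP/1.1" 200 512
ENTRY = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3})')

timeline = []
for path in glob.glob(LOG_GLOB):
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = ENTRY.match(line)
            if match and match.group(1) == SUSPECT_IP:
                when = datetime.strptime(match.group(2), "%d/%b/%Y:%H:%M:%S %z")
                timeline.append((when, path, match.group(3), match.group(4)))

# Print the reconstructed timeline in chronological order
for when, source_file, request, status in sorted(timeline):
    print(f"{when.isoformat()}  {status}  {request}  ({source_file})")
```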

Q: With attackers compromising external websites, cloud-based threat intelligence feeds and the like don't seem to be very useful. How can we really defend against this?

In our experience, attackers who targeted external-facing servers were not using cutting-edge methods of exploitation. Oftentimes, they were leveraging something as simple as SQL injection, default passwords, or in some cases no passwords at all. Reducing your exposure to these types of attacks involves three primary steps. First, identify and reduce your external footprint; many of the applications that attackers exploit aren't even known to the victim. Understanding which services need to be exposed externally, and removing those that don't, is the first step in reducing your risk. The second step is to proactively scan your external-facing servers and identify potential vulnerabilities. This leads to the third step: hardening your external systems. Fix the vulnerabilities previously identified and implement controls, such as application whitelisting, to help mitigate and stop potential attacks. There is no silver bullet that stops all attackers from gaining access to your environment through an external server. The best defense is to posture yourself with the capabilities to quickly detect and respond to potential incidents.
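As a small sketch of the first two steps (knowing your footprint and proactively checking it), the example below probes a list of your own hosts for commonly exposed ports. The host names and port set are hypothetical, and a purpose-built vulnerability scanner would be used in practice; only scan systems you own or are authorized to test.

```python
# Minimal sketch: check which commonly targeted ports are reachable on your
# own external hosts. Host names and ports below are hypothetical examples.
import socket

EXTERNAL_HOSTS = ["www.example.com", "mail.example.com"]          # hypothetical
COMMON_PORTS = [21, 22, 23, 80, 443, 1433, 3306, 3389, 8080]

def open_ports(host, ports, timeout=1.0):
    """Return the subset of ports that accept a TCP connection."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

for host in EXTERNAL_HOSTS:
    exposed = open_ports(host, COMMON_PORTS)
    print(f"{host}: exposed ports {exposed if exposed else 'none detected'}")
```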

Q: What are your thoughts on vulnerability analysis at the source code level to detect exploitable paths in the source code itself?

Source code review can identify potential vulnerabilities before an application even goes live, making it possible to find and fix them before anyone has a chance to exploit them. That being said, as the number of lines of code grows, so does the complexity of reviewing it. For that reason, code review should be used in conjunction with in-depth application vulnerability testing and a secure configuration of the system on which the application will run.
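To make the "exploitable path in source code" idea concrete, here is a minimal, self-contained illustration (not taken from any real application) of the kind of flaw a source-level review should catch, alongside the parameterized fix.

```python
# Minimal illustration of an exploitable path a code review should flag:
# attacker-controlled input concatenated into SQL, next to the safe version.
import sqlite3

connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE users (name TEXT, role TEXT)")
connection.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_role_vulnerable(name):
    # Exploitable: user input is concatenated directly into the query
    query = "SELECT role FROM users WHERE name = '" + name + "'"
    return connection.execute(query).fetchall()

def find_role_safe(name):
    # Parameterized: the driver handles quoting, closing the injection path
    return connection.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

print(find_role_vulnerable("' OR '1'='1"))  # returns every row
print(find_role_safe("' OR '1'='1"))        # returns nothing
```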