Tracking internal hacktivists

One of the most notable types of cybersecurity threats that emerged during the past decade is the rise of “hacktivists.” These individuals or groups perpetrate digital theft or mayhem not for personal gain, but to advance agendas that can range from the political (including simple anarchy) to attacks meant to expose or punish perceived business, ethical, or moral transgressions.

Organizations often think of the hacktivist threat as an external danger, perpetrated by groups such as Anonymous (which gained notoriety following a 2008 distributed denial of service attack on the Church of Scientology) or, more recently, the hackers who released customer data from the “dating” site Ashley Madison. But hacktivists within an organization, as explained in the AT&T Cybersecurity Insights report, can pose just as significant a threat. Look no further than Army intelligence analyst Chelsea (née Bradley) Manning, who released nearly three-quarters of a million sensitive or classified documents to WikiLeaks.

In some ways, employee-, contractor-, or vendor-based hacktivists are among the toughest to identify and counter. Unlike cyberthieves seeking access to user accounts or other data for personal gain, hacktivists can have motivations and targets that are far more varied and unpredictable. Still, many of the behavioral and system tracking tools designed to flag suspicious activity of any type can help organizations spot hacktivist-driven threats.

Many of the tools that aim to identify questionable online activity rely on pattern recognition and other techniques developed and refined by Web-based marketers and publishers. These tools apply big data statistical modeling and analysis to track Web activity (sites visited, time spent on a site, number of links clicked, and so on) and identify user trends and preferences.

When applied to cybersecurity needs, the same types of modeling and analysis tools can first establish “normal” patterns of online activity and then flag any activity that falls outside of the expected parameters. When outlier activities materialize, IT security staff can dig deeper into the questionable events to determine whether they’re actual threats or simply false alarms.
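As a rough illustration of the baseline-then-flag approach described above, the sketch below scores a user's daily activity against a historical window and surfaces statistical outliers. The thresholds and data are hypothetical; real tools use far richer models.

```python
from statistics import mean, stdev

def flag_outliers(baseline, observed, threshold=3.0):
    """Flag activity counts that deviate sharply from a user's baseline.

    baseline: list of daily event counts from a training window.
    observed: dict of {day: count} to evaluate.
    Returns (day, count, z-score) tuples whose z-score exceeds threshold.
    """
    mu = mean(baseline)
    sigma = stdev(baseline)
    flagged = []
    for day, count in observed.items():
        z = (count - mu) / sigma if sigma else float("inf")
        if abs(z) > threshold:
            flagged.append((day, count, round(z, 1)))
    return flagged

# Hypothetical example: a user who normally downloads 10-20 documents a day
history = [12, 15, 11, 18, 14, 16, 13, 17, 15, 14]
today = {"2015-10-01": 14, "2015-10-02": 240}
print(flag_outliers(history, today))  # only the 240-document day is flagged
```

A flagged day is not proof of wrongdoing; as the text notes, it is a prompt for security staff to dig deeper and rule out a false alarm.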

It’s important for organizations to understand that there are two broad levels of insider tracking and threat prevention. The first, most familiar level involves tools to identify threats related to gaining initial access and authorization to corporate systems and data. An obvious example of such a red flag is a user who makes multiple (possibly automated) attempts to enter a user name and password. Behavioral and system tracking tools can identify many subtler activities as well, however.
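The repeated-login red flag mentioned above can be sketched as a simple sliding-window counter. The threshold and window size here are hypothetical placeholders; real deployments tune them per environment.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical thresholds for illustration only.
MAX_FAILURES = 5
WINDOW = timedelta(minutes=10)

class LoginMonitor:
    """Raise a red flag when a user racks up repeated failed logins
    within a short window -- a classic sign of automated guessing."""

    def __init__(self):
        self.failures = defaultdict(deque)  # user -> failure timestamps

    def record_failure(self, user, when):
        q = self.failures[user]
        q.append(when)
        # Drop failures that have aged out of the sliding window.
        while q and when - q[0] > WINDOW:
            q.popleft()
        return len(q) >= MAX_FAILURES  # True means flag for review

m = LoginMonitor()
t0 = datetime(2015, 10, 1, 9, 0)
alerts = [m.record_failure("jdoe", t0 + timedelta(seconds=30 * i))
          for i in range(6)]
print(alerts)  # the fifth rapid failure trips the flag
```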

The second level of protection is implemented less often, but it is just as critical. It involves tracking, auditing, and real-time alerting on a user’s suspicious activity once he or she has gained access rights to an organization’s network and systems. Since insider hacktivists will often have privileged access rights (e.g., administrator accounts) to corporate systems and data, tools designed to detect unauthorized access attempts offer little protection. Organizations need automated tools that can flag potential threats both outside and inside their firewalls.
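To illustrate this second level: because an administrator's access-control checks all pass, the only recourse is to log every privileged action and flag the ones that fall outside that account's normal scope. The account names, resources, and scope data below are hypothetical.

```python
# Hypothetical historical scope per privileged account, assumed to be
# built from past audit logs rather than hand-maintained.
known_scope = {
    "admin_ops": {"hr-db", "payroll-db"},
}

def audit_event(account, resource, action, log):
    """Record every privileged action; flag actions outside the account's
    historical scope for real-time review rather than blocking them."""
    suspicious = resource not in known_scope.get(account, set())
    log.append({"account": account, "resource": resource,
                "action": action, "suspicious": suspicious})
    return suspicious

log = []
audit_event("admin_ops", "payroll-db", "read", log)            # routine
alert = audit_event("admin_ops", "legal-archive", "bulk-export", log)
print(alert)  # True -> real-time alert, with the full audit trail retained
```

Note that the suspicious action is logged and alerted on, not blocked: a privileged user may have a legitimate reason, and the audit trail lets security staff make that call.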

Dwight Davis has reported on and analyzed computer and communications industry trends, technologies, and strategies for more than 35 years. All opinions expressed are his own. AT&T has sponsored this blog post.
