The last blog spoke about using behaviour in everyday life as part of threat assessment. Internet sites can also be fruitful locations for tracking potential threats. What can be taken from analyses of online comments is the importance of language and descriptiveness when assessing potential threats online. Clearly, YouTube videos geared towards empathising with and/or idolising school shooters are going to attract a certain kind of audience; these are, therefore, more likely to contain potential threats. Evident in the findings here is that a number of factors indicate the possibility of a potential threat.
-Looking at the key agents in the statements: who are they, and upon whom are blame and responsibility being placed? A tendency to exonerate shooters and place culpability with the victims, those in the school, or society as a whole shows, at the very least, a willingness to empathise to some degree with shooters. This, obviously, does not necessarily mean that the user expressing such comments is a potential threat, so it needs to be considered in combination with the other factors listed.
-Material that fits under the rubric of the ‘revenge and bullying thesis,’ where users adopt the stance that the victims deserved it, that the perpetrator was achieving justice, and that bullying was the main driving force behind the attack. Here, the shooters are being ‘romanticised’ and their attack has created, for an individual susceptible to its influence, a ‘cultural script’ of action as a way to resolve their problems. What elevates this to a potential threat is the combination of this frame with statements sharing personal experience of bullying, followed by expressions of a desire to carry out a school shooting oneself.
-The degree of attack planning, such as sharing knowledge of ways to obtain weapons legally and illegally, or naming specific targets, times and dates. When users are this detailed, it suggests something more than an internet troll or a bored kid looking for attention, who would be more inclined simply to post “I want to carry out a similar attack,” which is vague, lacks a target and attack plan, and shows no consideration of the means needed to execute a shooting.
-Statements paralleling past shooters or expressing narcissistic tendencies. The former would take the form of “everyone else is to blame for making me want to do this,” which denotes persecution and a perceived injustice, and shifts agency from the potential threat-maker to other (perhaps unspecified) agents. The narcissistic statements would fit the traits outlined in earlier blogs, with comments like “I am so much better than everyone else and I laugh at their incompetence,” if combined with the other components outlined in this model. The over-reaction aspect is perhaps less of an issue on YouTube and other internet sites, because the very nature of such debates means that discussions commonly degenerate into rants and abusive comments; of course, this only serves to worsen the problem and any negative feelings the user already has.
-What should particularly be flagged are any discussions where users appear to be encouraging each other or possibly collaborating to plan an attack together. The more detailed and descriptive the discussion, the more likely it is that this could be shaping up to become something else. These should be closely monitored and intervention undertaken if required.
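The combinatorial logic running through these factors (no single indicator is decisive; concrete planning or collaboration warrants immediate attention) can be illustrated with a toy sketch. All of the field names, the threshold, and the tier labels below are my own illustrative assumptions for demonstration; they are not part of the model as published, and real threat assessment requires trained human judgement, not keyword counting.

```python
from dataclasses import dataclass

@dataclass
class CommentIndicators:
    """Hypothetical per-comment flags mirroring the factors listed above."""
    exonerates_shooter: bool = False       # blame placed on victims/school/society
    revenge_bullying_frame: bool = False   # 'victims deserved it' stance
    personal_bullying_account: bool = False
    expresses_attack_desire: bool = False
    detailed_planning: bool = False        # weapons, named targets, times, dates
    parallels_past_shooters: bool = False
    narcissistic_statements: bool = False
    collaborative_planning: bool = False   # users encouraging/planning together

def assessment_level(c: CommentIndicators) -> str:
    # Concrete planning detail or collaboration is flagged straight away,
    # reflecting the point that detail distinguishes threats from trolling.
    if c.collaborative_planning or c.detailed_planning:
        return "flag for monitoring/intervention"
    # Otherwise, single factors alone are not treated as threats;
    # only combinations of factors raise the level of concern.
    score = sum([c.exonerates_shooter, c.revenge_bullying_frame,
                 c.personal_bullying_account, c.expresses_attack_desire,
                 c.parallels_past_shooters, c.narcissistic_statements])
    return "elevated concern" if score >= 3 else "low concern"
```

For example, a comment that only exonerates the shooter comes out as "low concern" on its own, while the bullying frame combined with a personal bullying account and an expressed desire to attack crosses the illustrative threshold.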
[This blog was put together by analysing comments on YouTube videos about school shooters. It builds upon the threat assessment material detailed in previous blog postings and is best used in conjunction with the offline behaviour threat assessment model proposed.]