Submission to DPP consultation on social media prosecutions
In December 2012 the Director of Public Prosecutions launched a consultation on “interim guidelines on prosecuting cases involving communications sent via social media”. The guidelines “set out the approach that prosecutors should take when making decisions in relation to cases where it is alleged that criminal offences have been committed by the sending of a communication via social media.” This was in response to a number of cases in which people were subjected to overzealous prosecutions for their comments on social networks, often under section 127 of the Communications Act 2003. For more information see our wiki page.
Our response to the consultation is based on our concern that current laws create a chilling effect on freedom of expression online.
The interim guidelines on which we are commenting are available at the CPS website.
For more information please contact Peter Bradwell: peter@openrightsgroup.org
Response to the consultation
Question 1. Do you agree with the approach set out in paragraph 12 above for initially assessing offences which may have been committed using social media?
Open Rights Group welcomes the effort to improve on the guidance regarding prosecutions involving communications sent using social media, in the light of the many cases where individuals have been prosecuted inappropriately for their online speech.
Whilst acknowledging that legislative change is not within the remit of the CPS, we would like to note at the outset that there are fundamental problems with the law that guidelines such as these cannot adequately address.
Guidelines could serve as mild and sporadically effective pain relief, but will not treat the cause of the pain. These guidelines cannot be a long term solution and will not be sufficient to create a sustainable and acceptable legal environment for the promotion of freedom of expression online – even if they are helpful in the short term until a proper legislative solution is found.
Open Rights Group believes, for example, that repeal of section 127 is necessary, followed by a full review of speech laws in the digital age. Without this, there will be continued uncertainty for users of social media, alongside inconsistent, arbitrary and illiberal prosecutions. This situation is undermining the role the Internet could play as a supportive environment for freedom of expression.
With regard to this specific question, and this guidance, the ‘initial assessment’ criteria are helpful, although we believe they require clarification to ensure freedom of expression is safeguarded appropriately. Otherwise further arbitrary and inconsistent prosecutions will happen, and users of social media will be left unable to foresee whether their communications via social media may lead to their prosecution.
This will not only undermine freedom of expression, but could bring the justice system into disrepute and place unnecessary financial burdens upon it. As noted above, we do not expect the guidelines to fully address this problem but further clarification would help.
Broadly, we agree that it is useful to distinguish between the four different categories specified in paragraph 12. However, the guidance could further clarify the definition of the key terms, and could give a clearer and more defined role to the consideration of the context of the message.
The guidance on ‘credible threats’ could be clearer and more precise about what would constitute a credible, ‘menacing’ threat.
The reference in paragraph 17 of the draft guidance to the comments of the Lord Chief Justice in Chambers v DPP [2012] is welcome – that a message that “does not create fear or apprehension in those to whom it is communicated, or may reasonably be expected to see it, falls outside…for the simple reason that the message lacks any menace.” The guidance suggests prosecutors should ‘heed’ these words.
However, we are concerned that this is unlikely to be sufficient to narrow down the scope of ‘menacing’. This is not strong enough or specific enough. First, we suggest that the word “heed” in paragraph 17 be replaced with the word “follow”.
The guidance could go further, providing a specific definition of ‘credible’, perhaps along the lines of “the statement is such as to cause a reasonable person of normal fortitude to believe that the maker of the statement is likely to carry out their stated intention.”
Second, the criterion ‘may reasonably be expected to see it’ is unclear. If someone sends a message over a social network such as Twitter, it is extremely difficult to second-guess who ultimately may see that message. It would be unsatisfactory if people had to confine themselves to a tone that they could reasonably expect such a broad audience to understand.
The context in which a message is sent is mentioned more explicitly in relation to communications defined in 12(4), but is likely to play an equally important role in communications falling under 12(1). We would suggest that the ‘credible threat’ assessment for ‘menacing’ communications include more detail on context and scale.
We do not agree with the conclusions relating to the ‘publicness’ of social media communications set out in paragraph 27:
“In Chambers v DPP [2012] EWHC 2157 (Admin), the Divisional Court held that because a message sent by Twitter is accessible to all who have access to the internet, it is a message sent via a “public electronic communications network”. Since many communications sent via social media are similarly accessible to all those who have access to the internet, the same applies to any such communications.”
It is not true that a message sent via social media is necessarily accessible to all those who have access to the Internet. For example, many Twitter users permit access to their feed only to approved followers. Similarly, it is possible to create ‘closed’ groups and discussions on Facebook and Google Plus. So we do not believe it is helpful for those undertaking an initial assessment to assume that all such messages are accessible to all who have access to the Internet.
This paragraph should clarify that prosecutors should take into account the actual audience in context rather than assuming that all social media messages are visible to all who use the Internet.
Whilst acknowledging that context is dealt with in some additional detail in paragraphs 28 to 36, we believe the initial assessment section for ‘communications which are grossly offensive, indecent, obscene or false’ should include a more nuanced account of the context in which a message was posted.
Question 2. Do you agree with the threshold, as explained above, in bringing a prosecution under section 127 of the Communications Act 2003 or section 1 of the Malicious Communications Act 1988?
We are happy to see the guidance related to the threshold grounded in a consideration of freedom of expression.
We also welcome that the guidance highlights that the Communications Act 2003 prohibits ‘grossly’ offensive, as opposed to simply offensive, communications, as noted in paragraph 34.
However, we feel that not enough is done to define what differentiates the two – that is, what pushes a communication over the line from offensive to grossly offensive.
We are concerned about the reliance on the reference to Lord Bingham in paragraph 34:
“There can be no yardstick of gross offensiveness otherwise than by the application of reasonably enlightened, but not perfectionist, contemporary standards to the particular message sent in its particular context.”
Specifically, we are concerned with the attempt to protect what will often be very unpopular, controversial or ‘bad taste’ speech by reference to ‘reasonably enlightened, but not perfectionist, contemporary standards’.
The very nature of some of the more controversial speech that may be posted online is that it will test boundaries of contemporary standards – even reasonably enlightened standards. The great majority of that speech should not attract prosecution. There is a danger that the appeal for ‘contemporary standards’ to be applied to a particular message in its particular context may lead to a situation in which the expression of outrage or concern – by an individual or a group – may be considered sufficient to qualify a communication as ‘grossly offensive’.
Related to the note above on paragraph 27, the treatment of context in paragraph 35 is insufficient. It requires more detail on how the nature of social media communication should influence a prosecutor’s decision about whether it qualifies as ‘grossly’ offensive, for example.
Currently, there is little to guide a prosecutor expected to make a better decision through the process defined in paragraph 36 – i.e. that they should be satisfied that the communication in question is *more than* offensive, shocking or disturbing, etc.
As a consequence, we are concerned that there is still significant scope for overzealous prosecutions of messages sent via social media, and it seems clear that users will not be able to foresee with any certainty what will be considered prosecutable. Examples of what may or may not be considered ‘grossly offensive’ may help here.
We recommend more detail on the threshold that qualifies a communication for prosecution, including additional detail on the contextual issues of scale and intended audience, for example.
The higher threshold should be higher still.
Question 3. Do you agree with the public interest factors set out in paragraph 39 above?
There are two problems with paragraph 39(B) as drafted.
First, this could encourage overactive ‘private’ policing of content, with social media services making more decisions about what is or may be illegal on their service, potentially for the good of their users or reputation. This would have a significant detrimental effect on the environment for freedom of expression, for example where it encourages these services to be risk averse.
Second, what happens in respect of paragraph 39(B) is entirely outside the control of the user. While it may be an appropriate factor when assessing harm at sentencing, it is inappropriate to consider at the stage of a decision to prosecute because it risks rendering prosecution decisions arbitrary from the point of view of the defendant – which in turn risks undermining public confidence in the justice system.
With regard to paragraph 39(C), it is hard to imagine situations in which it is *obvious* that a communication will reach a wider audience via social media than was intended. Content posted publicly on sites like Twitter, for a variety of reasons, may reach far beyond the intended audience. It is most often effectively impossible to judge when that would be the case. Even messages sent to deliberately provoke at sensitive moments may fail to attract any notoriety.
We would like to see contextual questions such as this dealt with in more detail earlier in the guidance – so that context plays a greater part in considering whether a communication is actually grossly offensive. Context is a more material factor than the guidance makes out in determining whether a message qualifies under paragraph 12(4).
Question 4. Are there any other public interest factors that you think should also be included?
Other relevant factors could include whether the communication was humorous, or whether any other approach could be taken to remedy the situation. The use of section 127 of the Communications Act should be a last resort at most.
Question 5. Do you have any further comments on the interim policy on prosecuting cases involving social media?
As noted in the opening remarks in answer to question 1, Open Rights Group ultimately would like to see the law itself amended to better safeguard freedom of expression online, with the repeal of section 127 of the Communications Act, for example, and a full review of speech laws in the digital age. In the meantime this guidance could prove useful if further clarifications as noted above are made.
The protection of freedom of expression online requires a clearer treatment of contextual considerations and more work to clarify the definitions of menacing, grossly offensive, indecent or false communications. Otherwise we are concerned that users will remain unclear about what may be prosecuted, inappropriate prosecutions will continue, the law will continue to chill freedom of expression online and confidence in the justice system will decline.