The right to freedom of expression – is there a better way to begin this topic? What could be perceived as a written embodiment of one of the holy pillars of modern human rights also appears to be in an unspoken conflict with other fundamental norms. National security and the protection of judicial authority are among the permissible restrictions on free speech, but we are interested here in something else – the protection of health, morals, and the reputation or rights of others, with the greatest emphasis placed on protecting your online reputation. It hardly needs explaining that the tension between freedom of expression and these competing interests is exacerbated by the constant, instantaneous interconnectedness of the world we live in today.
Source: Cyber Libel in Canada by Zvulony and Co. P.C.
This article does not aim to tip the scales one way or another; it lets the reader decide what seems more important: the right to freedom of expression on the Internet or the protection of online reputation. Defamation is a false statement that significantly denigrates the reputation of an individual, business, product, government, religion, group, or nation. The reputation and privacy of individuals are protected by defamation laws, which, as already mentioned, are implicitly recognized by many human rights instruments. Corporate bodies can sue for defamation, provided that they can prove the statement has caused them, or is likely to cause them, financial loss. Common law jurisdictions also distinguish between spoken defamation, i.e. slander, and defamation in other forms such as images or printed words, called libel. What can be considered defamatory on the Internet? Most countries do not have specific laws addressing cyber libel, so national defamation norms or international agreements are adapted to the situation on a case-by-case basis. While there are differences from state to state, we can still point out common characteristics of when someone should be considered defamed:
- a statement is directed towards one or more persons;
- at least one person, other than the one who is being defamed, has noticed the false information;
- the average person familiar with the statement would think less of the victim based on what is being posted or said, in effect lowering the reputation of the victim.
Typical instances of such a statement comprise:
- When you state that someone is corrupt, dishonest, or disloyal;
- When you state that someone is alleged to have committed, or is suspected of committing, something illegal;
- When you ridicule someone;
- When you state that someone carries a contagious disease, suffers from a mental illness, or anything else out of line that is likely to cause this person to be avoided.
Yet, in almost every legislation there are usually grounds that exclude liability:
- the statement is substantially true;
- the statement is published with the consent of the person being defamed;
- the statement is unimportant and unlikely to cause real reputational damage to the person being defamed;
- "honest opinion" or "fair comment": someone is critical of another person, and that may even harm his reputation, but it is nothing more than a personal stance. It consists of three elements: it is a mere comment, not an assertion of fact; it is based on provable facts pointed out or referred to in the story; and it is honestly believed.
Some speech is recognized by courts and parliaments as important to society and is protected against liability for defamation. This category of statements is called "privilege". Examples would be court hearings, public meetings, parliamentary debates, and more obscure cases such as a caller who reports in good faith that a person has committed a crime, even though that person turns out to be innocent.
Source: Cyber Libel in Canada by Zvulony and Co. P.C.
Note: Do not confuse cyber libel with cyberbullying.
Criminal Defamation
In many EU countries defamation is not only subject to civil action but is also a criminal offence. Although the harsher penalties are rarely applied in most of these countries, in others they are used frequently. For instance, between 2002 and 2004 more than 100 people were incarcerated in the Netherlands for criminal defamation, insult, and libel. There were such convictions of journalists in Denmark, Belgium, Finland, Norway, Switzerland, Malta, and Italy. On the other hand, punishing defamation too readily may be undesirable, because it has a chilling effect on free expression. Ordinary citizens and even respectable journalists should be free to openly voice their opinion of powerful public figures. The European Court of Human Rights (ECtHR) overturned a conviction in France, where a man had been found guilty of defaming the Mayor of Sens on a website. According to the court, public officials and politicians must tolerate even tart commentary or criticism because they discharge a public mandate (a vox populi commission of sorts), and penalizing other citizens would be "disproportionate to the legitimate aim of protecting the reputation and rights of others." Perhaps more adequate protection against defamation can be achieved if such claims are "a private matter between two individuals, with which the State should not concern itself."
Source: How do you fight back against online defamation? by Mayer, A.
III. Defamation Laws in Australia, Europe & UK, Canada and USA
Australia
Larger corporations in Australia were barred from suing for defamation in 2006. They can still pursue defamation claims on behalf of officers and employees who are known as the public face of the corporation. Alternatively, large corporations can claim damage to business reputation through other torts such as deceit, injurious falsehood, negligent misstatement, or deceptive conduct. In addition, companies can face claims if defamatory tweets or comments appear on their corporate Twitter account or Facebook page, regardless of who actually posted them. Also interesting is the near-universal jurisdiction Australian courts may assert in cases of cyber libel. Jason Bosland, Senior Lecturer in Media Law at the University of Melbourne, takes the view that no matter from which point of the world you made your comments, you could still be held accountable in Australia, provided that you have a reputation there. So, dear New Zealanders, be careful with the sheep jokes about your neighbours. And if a really nasty one has managed to slip past your 'moral censor', remove the material immediately and try to set things right if the other side has complained of being defamed. You can offer to publish an apology or a reasonable correction, or to pay the expenses the complainant has incurred up to that point. A 28-day period is given for these redemptive actions, and even though a trial is still possible, they will have a mitigating effect on the court's final decision.
European Union
2.1 "Hosting Defence" and "Notice and Takedown" Mechanism
Fig. 1 "Notice and Takedown"
Article 14 of the E-Commerce Directive (2000/31/EC) limits the liability of information service providers where their services amount to mere "storage of information" provided by users of the services. This is the so-called "hosting defence." A related term, "notice and takedown," refers to the expeditious removal of the offending information by service providers upon notification. The operating principle is to react quickly and remove defamatory content once a user has complained. Hence, service providers are not compelled to proactively monitor everything they host and are only expected to take action once a complaint has been made. In addition, as you will see later in the article, for the defence to apply, service providers must neither control the content nor have knowledge of the illegal information.
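To make the conditional structure of the hosting defence easier to follow, here is a minimal, purely illustrative Python sketch of the decision just described. It is not legal advice and nothing in it comes from the Directive's text: the HostedItem class, the hosting_defence_available function, and the 48-hour reading of "expeditiously" are all assumptions made for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class HostedItem:
    """One piece of user-generated content held by an intermediary (illustrative)."""
    content: str
    notified_at: Optional[datetime] = None  # when a complaint arrived, if ever
    removed_at: Optional[datetime] = None   # when the item was taken down, if ever


# Assumption: the Directive does not define "expeditiously", so this 48-hour
# window is an arbitrary illustrative value, not a legal standard.
EXPEDITIOUS_WINDOW = timedelta(hours=48)


def hosting_defence_available(item: HostedItem) -> bool:
    """Rough, simplified reading of the Article 14 conditions:
    the host keeps the defence if it had no notice of the content,
    or if it removed the content promptly after notice."""
    if item.notified_at is None:
        # No complaint received: no duty to monitor proactively.
        return True
    if item.removed_at is None:
        # Notified but never removed: the defence is lost.
        return False
    # Notified and removed: defence survives only if removal was prompt.
    return item.removed_at - item.notified_at <= EXPEDITIOUS_WINDOW
```

Under these assumptions, an item removed the day after notification keeps the defence, while one left online after a complaint does not.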
A new defamation law, the Defamation Act 2013, came into force in the UK at the start of 2014. Some of its most prominent elements are:
- A company can sue for defamation only if it can prove that it has suffered, or is likely to suffer, "serious financial loss."
- After receiving a notification, authors of comments have five days to respond and say whether they agree to the comments being deleted. If they fail to respond, the comments must be removed within a further 48 hours for the website to avoid exposing itself to liability.
- The websites hosting the material at issue generally relay messages between the authors of allegedly defamatory comments and the complainants, and have a duty to conceal the identity of these parties if anonymity is sought.
- If an author refuses to take down his post, he should give his name and address to the web operator. Failing to provide this information, for whatever reason, authorizes the website operator to remove the comments within two days.
- If there are "no means of contacting the poster" with a "private electronic communication," websites should delete the comments within two days of receiving the complaint.
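The timing rules summarised above lend themselves to a small worked example. The sketch below simply computes the deadlines implied by the list (five days for the author to respond, 48 hours for removal if there is no response, two days where the poster cannot be contacted); the function name and dictionary keys are invented for illustration and do not reflect the statute's wording.

```python
from datetime import datetime, timedelta


def takedown_deadlines(complaint_received: datetime,
                       poster_reachable: bool) -> dict:
    """Compute the response windows described above (illustrative only)."""
    deadlines = {}
    if not poster_reachable:
        # "No means of contacting the poster": remove within two days
        # of receiving the complaint.
        deadlines["remove_by"] = complaint_received + timedelta(days=2)
        return deadlines

    # The author of the comment has five days to respond to the notification.
    deadlines["author_response_by"] = complaint_received + timedelta(days=5)
    # If no response arrives, removal is expected within a further 48 hours
    # for the operator to stay clear of liability.
    deadlines["remove_if_no_response_by"] = (
        deadlines["author_response_by"] + timedelta(hours=48)
    )
    return deadlines


# Example: a complaint received on 1 March 2014 about a reachable poster.
print(takedown_deadlines(datetime(2014, 3, 1), poster_reachable=True))
```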
According to Lord McNally, "where the poster has not consented to the release of his or her contact details to the complainant, it will be a matter for the complainant to consider what further action he may wish to take."
2.2 Does Any Form of Editorial Control Go Beyond Mere Storage of Information?
Kaschke v Gray & Hilton (2010) is the key case law answering this question. One of the points in Ms. Kaschke's argument was that the website as a whole should be taken into account, not just the page where the defamatory material appears. The judge, referring to the case of Imran Karim v Newsquest Media Group Limited, considered that the host can still be exempted from liability for the user-generated content in question, in spite of content it has posted elsewhere on the website. However, Mr. Hilton, the defendant and website owner, occasionally wrote summaries of users' posts in order to promote them, and the judge took the view that this activity "went beyond mere storage so that the hosting defence would not be available to Mr. Hilton if he had promoted the offending post." Even editing as slight as the "fixing of spelling or grammar in a post could cost the provider the protection of the hosting defence." Therefore, when hosts actively edit user comments, they may expose themselves to liability for the part that was edited.
2.3 Failing to Remove the Content after Notification
Tamiz v Google is another UK case, from 2013, in which the court ruled that "a blog (Google Blogger) could, in principle, be liable for defamatory comments that are hosted after Google had received a complaint but then failed to remove the comments." So, once notified about the content, Google must act upon the signal in order to avoid liability. However, the question of what level of "notice" is needed to impose on a service provider a duty to take down users' defamatory materials remains somewhat ill-defined. Perhaps, at a minimum, a formal letter of complaint would be sufficient to set the machinery in motion and urge the intermediary to remove all allegedly defamatory comments.
2.4 Delfi AS v Estonia – A Turn for the Worse?
In the 2013 case Delfi AS v Estonia before the European Court of Human Rights, the defendant Delfi – the largest online news portal in Estonia – was sued over numerous defamatory comments made by anonymous readers on its website. Interestingly, the claimant, a ferry company, did not sue Delfi over its article; rather, it held Delfi responsible for hosting defamatory user-generated content (UGC). Many were struck by the decision of the European Court of Human Rights to impute liability to Delfi, because the company applies fairly strict measures to eradicate the grotesque remarks users sometimes leave, operating a working 'notice and takedown' system that allows it to remove offending content once notified. Clearly, Delfi's efforts to stay out of trouble were not convincing enough for the court, which decided in favour of the plaintiff. The reasoning was that the publication of a highly controversial and provocative article presupposed a reasonably foreseeable reaction from Delfi's readers; in other words, the news provider should have anticipated this outburst of defamatory comments in advance. Instead, the offending comments lingered for six weeks after publication before a complaint was made. Notwithstanding their immediate removal upon notification, the court deemed that the system at hand had allowed these comments to remain accessible to the public for too long.
Consequently, this decision is at odds with the widely accepted 'notice and takedown' mechanism.
Fig. 2 Timeline of Delfi AS v Estonia
Of course, the decision attracted a significant volume of criticism. One curious point was made by Gabrielle Guillemin in a post on the Inforrm blog: "[T]he Court conveniently overlooked that any damage suffered as a result of the defamatory comments being accessible for 6 weeks was the direct result of the complainant's own failure to notify the material to Delfi despite the availability of a swift and easy-to-use reporting procedure. More generally, the Court seems to have naively assumed that every injured party acts in good faith, the reported content is always in fact unlawful and the Internet filters are the silver bullet to deal with the different types of illegal content that circulate on the Internet." Furthermore, given the tremendous volume of comments and the unlikely places in which the unlawful ones tend to crop up, intermediaries and other businesses in this field fear that from now on they will be forced to pre-moderate material and even, inevitably, to block access to otherwise lawful content. Apparently, the court's willingness to impose damages on well-capitalised service providers instead of on the real authors of libellous comments does not please everyone in Delfi's position.
Canada
In comparison to US defamation law, Canadian law is more plaintiff-friendly; in fact, it is the most plaintiff-friendly defamation law in the English-speaking world. It is not necessary to prove the intent of the defendant, because intent is presumed under Canadian defamation legislation, and the burden of proof rests on the defendant. Journalists are also better protected than average Internet users, in that the stricter limitation periods allow them to post an apology that mitigates the negative effects. Believe it or not, American citizens do not have to pay defamation damages awarded in a foreign jurisdiction, Canada in this case, whose standards are less friendly to free speech than US law.
USA
For web defamation case law, it all perhaps started in the early '90s with cases like Cubby, Inc. v. CompuServe Inc. and Stratton Oakmont, Inc. v. Prodigy Services Co. While in the first case the court found that the defendant acted only as a distributor of information (and was not liable as such), conditions were not so favourable for Prodigy in the second. The company was known for its stringent threefold editorial control over messages appearing on its bulletin boards. Thus, apart from being a re-publisher, Prodigy was liable as a publisher as well. In the court's general argument, "Prodigy's conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice." In the wake of this decision, many legal advisers recommended that their clients avoid censoring discussion groups as a precaution against defamation liability.
4.1 Immunity of Online Service Providers
In 1996, Section 230 of the Communications Decency Act (CDA § 230) granted providers of so-called interactive computer services immunity as publishers and distributors of content provided by an independent party and intended for the public at large. As a general rule, they cannot be held liable for "Good Samaritan" attempts to sieve out questionable content. Therefore, American service providers can actively monitor UGC without jeopardising their immunity. This immunity stretches far enough to cover liability for defamation and invasion of privacy. In filtering incoming content, intermediaries are allowed to edit it, but the line of acceptable editing has not been clarified, nor has the turning point at which they would become "information content providers." One can deduce that if they edit comments heavily, changing the overall meaning of the information, and that new meaning is defamatory, they will lose their Section 230 protection.
4.2 What about the Rights of Injured Parties? Obstacles to Suing for Defamation
Well, there is a small but bright spark of hope for victims of defamation. A website named Roommates.com opted to filter the results displayed to users by preselecting content according to preferences chosen unilaterally by the service provider. Users were asked to fill out mandatory fields with information about their sexual lives. The Ninth Circuit found that this activity violated the Fair Housing Act, an anti-discrimination law. Furthermore, such a "collaborative effort" goes beyond the regular role of a mere "re-publisher," and a "content provider" does not enjoy immunity under CDA § 230.
Fig. 3
In another similar case, Jones v. Dirty World Entertainment Recordings LLC (2012), the court, after examining the facts, concluded that the defendant "specifically encouraged development of what is offensive about the content," and thus did not deserve exoneration from liability under Section 230. The court based its decision on three factors: 1) the very name of the site provokes visitors to throw in defamatory remarks; 2) the site itself picked which posts to publish, adding provocative taglines; 3) the site itself generated comments on many threads to stimulate the output of defamatory content. The ruling also took into consideration two other similar cases: Fair Housing Council of San Fernando Valley v. Roommates.com and Federal Trade Commission v. Accusearch. Obviously, these cases deviate from the mainstream defamation case law with respect to CDA § 230, yet they may serve as a guidebook to what can really work in the courtroom. Despite this optimistic ray, the light is dim because other substantial obstacles remain. On the one side, there is the practical issue of anonymous posting, which creates uncertainty about the real identity of the person behind the libellous words. On the other, plaintiffs frequently face enormous difficulties establishing the right jurisdiction and defending against tedious, labour-intensive anti-SLAPP motions. Let's have a more detailed look at the problems:
Anonymity of Posters
The anonymity of online posters is well protected, and a victim of defamation has to go through a lot of bureaucratic hurdles before finding out the name of the verbal bully. The injured party is overburdened by requirements that make the whole battle pricey, prolonged, and nerve-wracking – so even if you somehow end up in the winning corner, a nagging feeling might eat at you from inside that you have just attained a Pyrrhic victory.
Jurisdictional Barriers
If the defamatory material is delivered online from another state, the plaintiff may have trouble claiming jurisdiction in his home state. The claim must pass the U.S. Supreme Court's "Calder test," which places on the plaintiff the burden of proving that the opposing party "purposefully directed" the publication toward the forum state with "intent to cause harm in the forum" (see Calder v. Jones, 465 U.S. 783 (1984)).
The Anti-SLAPP Statute
An anti-SLAPP statute (e.g. Cal. Civ. Proc. Code § 425.16) can delay a meritorious defamation action for years if the defendant files an early motion to dismiss on the ground that "the challenged activity is constitutionally protected speech." Yes, it's a sham plea, but it works.
Injunction Barriers
Injunctive relief compelling a service provider to take down the insulting content will not necessarily produce the desired effect, because the provider may feel untouchable behind the broad umbrella of the CDA. So claimants are usually left at the mercy of service providers to remove the post voluntarily.
Suing Abroad
Bringing your case to a foreign court is a possible alternative. American courts place the burden of proof on the plaintiff, while some foreign courts reverse it. The problem with this path is that foreign judgments are not easily enforceable in American courts; they have to meet American legal standards before they can be recognised locally (see 28 U.S.C. §§ 4101-4105).
4.3 Perhaps a New Legislative Remedy Is Needed
As you can see, victims of defamation encounter many difficulties when they attempt to defend their reputation. Over the years lawyers have pleaded a very broad spectrum of torts in a somewhat blind effort to overcome the nearly impregnable CDA immunity: claims for declaratory and injunctive relief, negligence, negligent misrepresentation, contractual liability, premises liability, interference with business expectancy, nuisance, and misuse of funds. Nevertheless, courts have rejected all of them. Maybe the right answer to the problem is different. As Neville L. Johnson, a lawyer and the author of the article Remedies for Web Defamation, suggests: "One possible solution to the problem of Internet defamation is to amend section 230 to more closely resemble the Digital Millennium Copyright Act, with its system of notice and take-down procedures to regulate copyrighted material online. (See 17 U.S.C. § 101.) If ISPs can be forced to take down copyrighted material, proponents assert, why can't defamation victims enjoy similar protection, especially since the harm is more intimately felt?"
Reference List
Bitlaw. ISP Liability. Retrieved on 23/10/2014 from http://www.bitlaw.com/internet/isp.html#defamation
Electronic Frontier Foundation. Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008). Retrieved on 23/10/2014 from https://www.eff.org/issues/cda230/cases/fair-housing-council-san-fernando-valley-v-roommatescom
Goldman, E. (2009). Roommates.com Infects the Tenth Circuit–FTC v. Accusearch. Retrieved on 23/10/2014 from http://blog.ericgoldman.org/archives/2009/06/roommatescom_in.htm
Guillemin, G. Case Law, Strasbourg: Delfi AS v Estonia: Court Strikes Serious Blow to Free Speech Online. Retrieved on 23/10/2014 from http://inforrm.wordpress.com/2013/10/15/case-law-strasbourg-delfi-as-v-estonia-court-strikes-serious-blow-to-free-speech-online-gabrielle-guillemin/
Harris, M. (2014). The EU's commitments to free expression: Libel and privacy. Retrieved on 23/10/2014 from http://www.indexoncensorship.org/2014/01/eus-commitments-free-expression-libel-privacy/
Harvard Journal on Sports & Entertainment Law (2014). Defamation, Celebrities, and the Internet. Retrieved on 23/10/2014 from http://harvardjsel.com/2014/04/defamation-internet/
Johnson, N. (2013). Remedies for Web Defamation. Retrieved on 23/10/2014 from http://www.callawyer.com/clstory.cfm?eid=928446&wteid=928446_Remedies_for_Web_Defamation
Lexology (2010). Kaschke v Gray [2010] EWHC 690 (QB): online blogs and the hosting safe harbour. Retrieved on 23/10/2014 from http://www.lexology.com/library/detail.aspx?g=89d59a65-82e9-47e8-8cf1-345c343c9519
Linklaters (2010). EU – How robust is the hosting defence? Retrieved on 23/10/2014 from http://www.linklaters.com/Insights/Publication1403Newsletter/20100705/Pages/hostingdefence.aspx
Mayer, A. (2013). How do you fight back against online defamation? Retrieved on 23/10/2014 from http://www.cbc.ca/news/technology/how-do-you-fight-back-against-online-defamation-1.1314609
McCarthy, H. (2013). Internet defamation: who is legally responsible for online comments? Retrieved on 23/10/2014 from http://www.irishexaminer.com/analysis/internet-defamation-who-is-legally-responsible-for-online-comments-266171.html
OUT-LAW.COM (2013). UK defamation law reforms take effect from start of 2014. Retrieved on 23/10/2014 from http://www.theregister.co.uk/2013/11/21/uk_defamation_law_reforms_take_effect_from_start_of_2014/
Wikipedia. Stratton Oakmont, Inc. v. Prodigy Services Co. Retrieved on 23/10/2014 from http://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prodigy_Services_Co.
Wikipedia. Cubby, Inc. v. CompuServe Inc. Retrieved on 23/10/2014 from http://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.
Zvulony and Co. P.C. (2014). Cyber Libel in Canada. Retrieved on 23/10/2014 from http://zvulony.ca/2012/articles/defamation-articles/cyber-libel-canada/