

Part of my career has been devoted to helping telecom regulators in Europe justify the imposition of new regulatory measures on telecommunications operators. The task is difficult because the European directives on electronic communications impose a strict methodology that regulators must follow before they can propose new measures. Regulators must conduct a market analysis, identify one or more actors holding a dominant position, and demonstrate that a market failure creates durable barriers to entry for competitors, that technological and market evolutions are not likely to cure the problem without regulatory intervention, and that competition law is not sufficient. Regulators must also show that the measure they propose is proportionate, causing the least intrusion possible while achieving the desired outcome. In some countries, regulators must present several alternatives and choose the one that is the least burdensome while still attaining the desired objective. The regulatory proposal must take into account the principles set forth in the Framework Directive1 on electronic communications: encouraging competition and investment, and respecting technology neutrality. Regulators must submit the proposed solution to public consultation and then to a special task force at the European Commission, which has the power to request changes and in some cases veto the measure. Finally, the measure must be reevaluated regularly to make sure it is still fit for purpose. Regulations that are no longer needed must be withdrawn.
This methodology contrasts with the total lack of methodology for regulatory measures designed to limit access to content on the internet. Measures adopted to fight online copyright infringement, hate speech, child pornography, and privacy violations often involve internet intermediaries, including telecom operators. Yet those measures are adopted without the analytical rigor that applies to European telecommunications regulation or to health, safety and environmental regulation in the United States. The potential adverse effects of the proposed measures are not studied in detail. The impact assessments omit major considerations relating to effects on fundamental rights and adverse effects on the internet ecosystem. There is no peer review system, and no system to remove regulations that are no longer fit for purpose. If these measures were presented in a context of telecom regulation, many of them would fail the strict scrutiny imposed by the European directives.
The study of telecommunications regulation and the literature surrounding cost-benefit analysis in the United States led me to the idea of this thesis. Would it be possible to take the analytical tools applicable to European telecommunication regulation and United States cost-benefit analyses and transpose those tools to the field of regulating access to internet content?
The question of limiting access to content is more complex than telecommunication regulation because the objectives pursued by policymakers include protection of fundamental rights, protection of children, and national security. The objectives of telecommunication regulation consist principally of achieving effective competition. The question of access to illegal content on the internet is also more complex because of the number of different internet intermediaries involved. In addition to telecom operators, search engines, app stores, social media, advertising networks, and payment providers may be called on to assist in enforcing a given content policy. However, the complexity of the problem is all the more reason to use analytical tools. The European methodology for telecommunication regulation is not perfect, but it requires regulators to ask key questions before they act: does this market failure really require a regulatory response, or is the market likely to deal with the problem on its own? Is the regulatory measure we propose the least intrusive among the available alternatives? What are the potential side effects of our proposed measure on competition, innovation and investment? These questions must be seriously analyzed in any proposed regulation of telecommunications in Europe. Cost-benefit analysis in United States regulatory policy requires that policy makers precisely define the desired outcome and develop tools to measure the outcome. In this thesis I will show that the same questions (and others) should be asked and analyzed in the context of regulations designed to limit access to harmful content on the internet.


Publishers of content and services on the internet are often located beyond the reach of national courts and police. Because the publishers are beyond reach, lawmakers and courts tend to look to internet intermediaries located within national borders as proxies to help apply content rules (Noam, 2006; Lichtman & Posner, 2004; Lescure, 2013). The internet intermediaries may be ISPs, payment providers, search engines, social media, app stores, domain name registries, browser publishers, or advertising networks (OECD, 2011).
To illustrate the difficulty facing regulators, let us use the example of hate speech: French law prohibits content that promotes racial hatred or anti-Semitism. In the United States, many forms of hate speech are protected by the First Amendment of the United States Constitution. Hate content that is published on a website based in the United States is instantly accessible by citizens of France. French victims of hate speech can bring an action before French courts and then seek enforcement of the resulting judgment in the United States. However, enforcement of a French judgment in the United States will be long and costly, and a United States court will not necessarily enforce a French decision that potentially interferes with United States constitutional principles. Even if a United States court were to grant enforcement, the publisher of the content could easily move to another country and start its website again.
As this example shows, going after the source of the offending content is difficult and in some cases impossible. That leaves the option of seeking enforcement against technical intermediaries located in the country where the harm is caused. Take our example of the hate speech sites based in the United States. A number of options are available in order to make the hate speech sites less available to French citizens. Internet access providers in France could block access through various kinds of filtering. A search engine could make the site less visible on search results for French users. If the site collects payments from French users, payment providers could be called on to block payments. Browser software could be configured to make access to the site difficult. Advertising networks could be called on to block advertising. If the site uses a .fr domain name, the domain name could be seized in France.
All of these techniques can potentially be used to limit French citizens’ access to an offshore hate speech site. However, all of these measures have potentially grave side effects. Technical filtering can block the targeted hate speech but can easily block other perfectly legal content. Filtering can create significant costs for internet intermediaries, can interfere with principles of net neutrality and threaten privacy. Moreover, many measures may prove ineffective, and/or encourage users to use encryption and dark networks to avoid detection, which creates other problems for law enforcement authorities.
National regulators may even be tempted to make their blocking measure apply to the entire world. In Europe, individuals have the right to request search engines to block certain harmful search results, even though the content revealed by the search is not illegal. Some European privacy regulators ask search engines to apply the blocking measure to search results worldwide, arguing that the victim is entitled to effective protection even if the search is conducted outside Europe. If applied to search results in the United States, the blocking measure would block content protected by the First Amendment of the United States Constitution. The measure would also set a precedent for other governments who may want to silence dissent by asking a major search engine to apply global censorship. Like a potent medicine, measures taken by internet intermediaries, whether on their own or under government constraint, can have dangerous side effects.
The principles of the open internet and net neutrality prohibit interference with the free flow of content, applications and services on the internet. The open internet creates numerous economic and social benefits (OECD, 2016). Actions by internet intermediaries, whether government-imposed or voluntary, will necessarily affect the open internet. Interferences with the open internet are tolerated if they are part of « reasonable network management, » that is, if the measure is intended to address a legitimate objective, and the means used to attain the objective are proportionate (Sieradzki & Maxwell, 2008). Net neutrality so far applies only to ISPs, but its principles can in theory be extended to any kind of internet intermediary (ARCEP, 2012).
Net neutrality has at least three objectives: to prevent anti-competitive conduct by last-mile ISPs, to protect freedom of expression and to protect the borderless and end-to-end character of the internet (Curien & Maxwell, 2011). Measures that erect gateways designed to protect national content rules on the internet constitute a serious threat to the open, global character of the internet. Advocates of net neutrality also fear that if democratic countries begin to impose non-neutrality to achieve content objectives, other less democratic countries will follow suit, with measures to enforce political censorship or religious doctrine (OECD, 2011).


Today many systems exist to enforce content rules on the internet, but they are developed on an ad hoc basis to deal with specific problems (Mann & Belzley, 2005).
Courts, regulators and lawmakers adopt different approaches for online copyright infringement, illegal gambling, hate speech, child pornography, right to be forgotten, and terrorism cases. There is no reference methodology against which to measure these initiatives.
To illustrate the point, below are examples of French measures adopted to address different content issues. These examples show the diversity of approaches used even within a single country.

Right to be forgotten

The right to be forgotten permits an individual to ask a search engine to remove certain search results that appear when someone conducts a search using the individual’s name. The underlying content is not illegal. If it were, the individual could ask for removal of the content at its source, using legal claims such as defamation or invasion of privacy. In the case of the right to be forgotten, the original content (e.g. a newspaper article) is legitimate, protected by law, and remains available on the internet. The individual simply wants the content to be less easy to find in search results because the content is old and harms the individual’s current life.
The right to be forgotten flows from a court decision interpreting the broad provisions of the European Data Protection Directive 95/46/EC, and in particular the provisions that guarantee each individual a right to object to processing of his or her personal data.
For the so-called « right to be forgotten, » the French data protection authority, the CNIL, has assumed the role of dispute resolution body in situations where claimants are not satisfied with the solution proposed by a search engine. In its dispute resolution capacity, the CNIL relies solely on the European Court of Justice’s decision in Google Spain v. AEPD2 (the « Costeja » decision). The CNIL applies the principles of the Costeja decision to France’s Data Protection Act3, and then issues an individual decision ordering the search engine to remove a certain search result from searches made using the individual’s name. As indicated in the Costeja decision, the claimant’s right to be forgotten request must be balanced against the public’s right to have free access to information. If the information is outdated, no longer relevant, and harmful to the individual, the CNIL would grant the request unless the public has a legitimate interest in having access to the information. This would be the case if the claimant were a public figure, for example. If the CNIL is satisfied that the balancing test comes out in favor of the claimant, the CNIL will order the search engine to remove the search results whenever an internet user conducts a search using the relevant individual’s name. The CNIL’s position is that the search engine should eliminate the search results from all searches worldwide, regardless of the country from which the search was initiated. The CNIL’s decision therefore would have extraterritorial effect, limiting the information that would be seen by an internet user in the United States, for example. In rendering its orders, the CNIL currently does not take into account in its balancing test the fact that the search engine’s action would likely impede access to content protected by the First Amendment of the United States Constitution. Similarly, the CNIL does not take into consideration in its balancing the precedential effect that a global order against the search engine could have for other countries that may also wish to apply their own content policies worldwide.
The current « right to be forgotten » only applies to search engines. Other internet intermediaries are not affected. The publisher of the original content, and the hosting provider, are under no obligation to remove the relevant content because the content itself is not illegal. The right to be forgotten is therefore unique in that the objective sought is not to remove or block access to the original content, but instead to make the original content more difficult to find using certain search terms and a certain kind of internet intermediary.

Online copyright infringement

France was the first country in the world to adopt a regulatory framework for fighting online copyright infringement using the so-called « graduated response » approach.
Under the HADOPI4 graduated response regime, right holder organizations collect IP addresses of suspected infringers using peer-to-peer networks. The evidence is then transmitted to the HADOPI regulatory authority, which then asks the internet access providers to communicate the names of the subscribers corresponding to the IP addresses. According to HADOPI’s activity report for 2013 – 2014, 12,265,004 identification requests were sent in total to the internet access providers. Once the HADOPI receives the names of the subscribers, HADOPI can take three steps. First, the HADOPI sends an initial e-mail to the relevant subscribers informing them of their duty to ensure that their internet access is not used for infringing purposes, and reminding the subscriber of the existence of legal online offers. According to its activity report for 2013 – 2014, the HADOPI has sent out 3,249,481 first warnings. Second, repeat infringers then receive a registered letter from the HADOPI stating that the subscriber has been identified again as the source of infringing content, and that if the conduct does not cease the HADOPI may transmit the file to the public prosecutor for sanctions, which may include suspension of internet access. According to the last figures published by the HADOPI in 2014, 333,723 registered letters of this type have been sent. For subscribers that continue to show evidence of infringing activity, the HADOPI then selects, in the third step, the files to be reviewed and may ask the relevant subscriber to participate in a hearing. The HADOPI can send the files to the public prosecutor if infringement continues.
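The three-step escalation described above can be sketched as a simple state machine (purely illustrative; the step labels and the idea of a per-subscriber strike counter are simplifications, not HADOPI's actual system):

```python
# Sketch of the graduated response described above. Each observed
# infringement escalates a subscriber's file one step: e-mail warning,
# then registered letter, then possible referral to the prosecutor.
# This is an illustration only, not HADOPI's real procedure or code.

STEPS = ["first e-mail warning", "registered letter", "referral to prosecutor"]

def next_action(prior_warnings: int) -> str:
    """Return the next step for a subscriber with the given strike count."""
    # Escalation caps at the final step (prosecutor referral).
    step = min(prior_warnings, len(STEPS) - 1)
    return STEPS[step]

print(next_action(0))  # first e-mail warning
print(next_action(1))  # registered letter
print(next_action(5))  # referral to prosecutor
```

The key design point is that sanctions are deferred: the first two steps are warnings, and only persistent infringement reaches the prosecutor.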
In addition to relying on the HADOPI graduated response system, victims of copyright infringement have successfully obtained French court orders to block access to streaming sites.
Finally, the French Minister of Culture has nudged the principal French internet advertising players to agree to refuse to purchase advertising space from internet sites that manifestly promote illegal copyright infringement.5 The list of the sites affected by this measure will be put together by an industry coordination committee based in part on a list provided by the French police authorities. The code of conduct does not provide for any sanction against advertisers that violate the code. The French Minister of Culture also hopes that a similar code will be signed shortly by banks and payment service providers. The code would prohibit payment service providers from knowingly providing service to sites that promote copyright infringement.

Illegal online gambling

France allows online gambling, but only with gambling service providers that have obtained a license. The licensing conditions are intended to protect individuals against the harms associated with addictive gambling, as well as to protect society against the development of organized crime around online gambling activities. The French regulations on online gambling are administered by a specialized regulatory authority called the ARJEL.6 French law gives the ARJEL authority to take measures to limit access by French users to unlicensed gambling sites. The ARJEL has authority to draw up lists of unlicensed gambling sites to which access should be blocked. The ARJEL then submits the list to a court in an ex parte proceeding. The court then issues an order requiring that French ISPs block access to the relevant sites by inserting an erroneous IP address for the site in the access provider’s local DNS server. The decree relating to ARJEL’s blocking authority specifically provides for DNS blocking.7
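The DNS blocking technique described above can be sketched as follows. This is a simplified model of what an ISP's resolver does when a court order requires DNS blocking; the domain names and IP addresses are hypothetical examples (drawn from the documentation ranges), not real blocked sites:

```python
# Simplified sketch of DNS-based blocking: the ISP's local resolver
# returns a "sinkhole" address for blocked domains instead of the
# site's real IP. All names and addresses below are hypothetical.

BLOCKLIST = {"unlicensed-casino.example"}
SINKHOLE_IP = "192.0.2.1"  # could point to an information page

REAL_DNS = {  # stand-in for a genuine upstream DNS lookup
    "unlicensed-casino.example": "203.0.113.50",
    "legitimate-site.example": "198.51.100.7",
}

def resolve(domain: str) -> str:
    """Return the sinkhole IP for blocked domains, else the real record."""
    if domain in BLOCKLIST:
        return SINKHOLE_IP
    return REAL_DNS.get(domain, "0.0.0.0")  # stand-in for NXDOMAIN

print(resolve("unlicensed-casino.example"))  # 192.0.2.1
print(resolve("legitimate-site.example"))    # 198.51.100.7
```

The sketch also illustrates why DNS blocking is easy to circumvent: a user who switches to a resolver outside the ISP's control bypasses the blocklist entirely.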

Child pornography and terrorist propaganda

For internet content involving either child pornography or incitement to commit terrorist acts, the French police authorities are able to send blocking requests directly to ISPs without first obtaining a judge’s approval.8 The police must first attempt to obtain removal of content at its source through a request to the publisher and hosting provider, but if unsuccessful after 24 hours, the police may direct their request to local ISPs. The decree relating to blocking of child pornography or terrorist sites does not specifically refer to DNS blocking.9 The decree says that ISPs should block access to addresses « by any appropriate means », and redirect visitors toward a website of the French police. The law refers here to blocking « addresses », not to blocking « sites » or « content », which suggests that ISPs would use DNS blocking rather than other more intrusive forms of blocking such as deep packet inspection. The French government must reimburse ISPs for the cost associated with the blocking measures.
To compensate for the fact that no judge is involved in blocking decisions, the law provides that a person named by the French data protection authority will receive copies of blocking requests and can issue recommendations to the police authorities, or ask a court to intervene.
The law relating to child pornography and terrorist site blocking also authorizes the police to address removal requests to search engines and directories.
In addition to these regulatory measures, many internet intermediaries apply self-regulatory measures to facilitate the reporting and blocking of child pornography sites. This is done through an international reporting network called « INHOPE ».


Hate speech

French law prohibits content that incites racial hatred or anti-Semitism, as well as content that incites discrimination or hatred based on sex, sexual orientation or handicap. As for any illegal content, hosting providers must remove hate speech content promptly upon receiving notice, under the « notice and takedown » regime. Otherwise, victims may apply for blocking orders before courts. The court can then order internet access providers to block access to the relevant sites.

French audiovisual policy

Both French and European law impose « must carry » obligations on telecom operators (a term that includes internet access providers) that distribute audiovisual programs.10 The operators must distribute certain public service television channels to all subscribers, generally free of charge. The must carry obligation is limited to certain qualifying television channels that serve a general interest.
Audiovisual policy is also promoted via an obligation on providers of « on demand audiovisual media services » to invest a certain amount of their revenues in French and European production, to include in their catalog a majority of European works, and to present French and European works on the service’s home page. Under the country of origin rule established by the Audiovisual Media Services Directive11, these obligations only apply to providers of on-demand audiovisual media services established in France.

Apparent lack of coherence

The foregoing summary of measures applied in France to advance content policies on the internet is a great simplification. The purpose of the summary is to illustrate the diversity of mechanisms currently used within a single country, and the lack of any apparent methodology to explain the differences in approaches. An invisible methodology may be at work: each measure adopted by regulators takes into account prior experience, constitutional constraints, political realities, and international benchmarks. But there is no visible roadmap. This leads to questions: Why is an independent regulatory authority used for some content (e.g. illegal gambling) but not for other content (e.g. hate speech)? Why is a court order required for some forms of blocking (e.g. copyright infringement), but not for others (DNS blocking for child pornography, or CNIL right to be forgotten orders)? Where has self-regulation been the most successful? Why are internet access providers targeted for certain kinds of actions, and search engines targeted for others?
There are no doubt good reasons why different solutions apply to different content problems. However, policy makers approach each problem in isolation and can give the impression of reinventing the wheel each time. Without a baseline methodology against which to measure regulatory proposals, the solutions appear inconsistent and uncoordinated. Moreover, the measures cannot easily be compared to gather knowledge on what works, and what doesn’t.


Another factor is at work, which is the diminishing role of audiovisual regulation in advancing national content policies. Historically, the licensing of broadcasting spectrum was the best way to ensure compliance with a wide range of national content policies. In addition to prohibiting illegal content, broadcasting licenses include rules to promote media diversity and plurality of opinions, and to protect national security (via foreign ownership limitations), minors, public health, culture, language and national cinema. In some cases, broadcasting rules are designed to protect other economic sectors. The range of content policies contained in broadcasting licenses goes from matters of great national importance, such as rules protecting the proper functioning of democracy, to matters involving narrow economic interests, such as the protection of advertising revenues for regional newspapers. For decades, the broadcasting license has been a convenient basket in which politicians could throw numerous content rules designed to satisfy various stakeholders.
The license to use spectrum is a convenient tool. Spectrum is part of the public domain. It belongs to the government, so the government can legitimately impose conditions in connection with its use. Just as the government can impose building rules for beachfront property designed to protect cultural and environmental aesthetics, it can impose usage rules on spectrum designed to promote French culture. In France at least, broadcasting spectrum is licensed free of charge, whereas mobile broadband spectrum is licensed in exchange for a hefty license fee. The government imposes content rules on broadcasters in lieu of a license fee.
Over-the-air broadcasting still commands a large share of audience and viewing time in France. But its influence is diminishing, and will one day disappear. Some day in the future, most viewers will consume content online, using connected TVs, smartphones or tablets. Over-the-air channels received via a rooftop antenna will become the exception. Most broadcasters will make their content available via broadband (fiber, DSL, 4G/5G). When content providers no longer need broadcasting spectrum, governments will no longer have an easy « hook » through which to impose content policies. Governments will have to look elsewhere, and will turn to telecom operators and other internet intermediaries to fill the regulatory void. In 2006 Eli Noam predicted that telecommunication regulation would « become » broadcasting regulation, and that telecom operators would be asked to enforce national content policies because they are the only entities that regulators can reach within their jurisdictions (Noam, 2006). In fact, Noam’s prophecy can be applied not just to telecom operators, but to any internet intermediary that falls within a regulator’s jurisdictional reach. The decline of broadcasting regulation as a tool to regulate access to content explains why regulators look increasingly to internet intermediaries for solutions.


As the examples above show, lawmakers adopt measures to address a particular problem. Sunstein (1996) refers to this as the « pollutant of the month » syndrome. The measures create controversy and are often challenged in court. The measures are often accused of disproportionately harming fundamental rights, being costly and ineffective, and/or threatening the internet ecosystem (Haber, 2010). There is today no theoretical benchmark against which to test the relevant measures (whether imposed by government or voluntary) to ensure that they are as efficient as possible, and harm fundamental rights as little as possible.

Technical measures are inevitable

We will start from the premise that national measures to involve internet intermediaries in the enforcement of content policies are inevitable (Noam, 2006). The question is not whether they will emerge, but how they should be built (Mann & Belzley, 2005).
As noted above, measures implemented by internet intermediaries to block or limit access to illegal content can create negative externalities, including adverse effects for fundamental rights, net neutrality, and the internet ecosystem. Like a potent medicine, measures affecting internet intermediaries must be prescribed with care in order to avoid dangerous side effects. A blocking measure targeting illegal content can also block legal content. Some technical measures create privacy risks for users, and/or significant costs for internet intermediaries. Some measures may simply be ineffective. A government-imposed measure may also set a precedent for other countries, thereby creating an international arms race in regulation that could threaten or destroy the open character of the internet.

A reference methodology will help avoid mistakes

If we accept as a given that certain technical measures are necessary, we need to consider what form those measures should take, and who should be in charge of administering them so as to minimize their harmful side effects.
I propose in this thesis a reference methodology that could be used to evaluate regulatory proposals that affect internet intermediaries. The methodology would involve a questionnaire designed to help policy makers define the problem to be addressed, the alternatives available to address the problem and the direct and indirect costs generated by each alternative. The questionnaire would be followed by a cost-benefit analysis (CBA), and the application of certain constraints. The questionnaire and cost-benefit analysis would then be subject to public consultation and to an institutional peer-review process. Finally, the regulatory measure would be subject to periodic ex post reviews to ensure that the measure is delivering its expected benefits and is not creating unexpected side effects.
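In spirit, the cost-benefit step of the proposed methodology amounts to comparing regulatory alternatives on their expected benefits against their direct and indirect costs. The sketch below is purely schematic: the alternatives, the cost categories and all numbers are invented for illustration, not estimates of any real measure:

```python
# Schematic cost-benefit comparison of regulatory alternatives.
# All figures are invented; the point is the structure of the
# comparison, not the values.

from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    expected_benefit: float   # value of the content policy enforced
    direct_cost: float        # compliance costs for intermediaries
    indirect_cost: float      # side effects: rights, ecosystem, precedent

    @property
    def net_benefit(self) -> float:
        return self.expected_benefit - self.direct_cost - self.indirect_cost

alternatives = [
    Alternative("DNS blocking by ISPs", 100.0, 10.0, 40.0),
    Alternative("search result delisting", 80.0, 5.0, 20.0),
    Alternative("advertising cut-off", 60.0, 2.0, 5.0),
]

# Choose the alternative with the highest net benefit.
best = max(alternatives, key=lambda a: a.net_benefit)
print(best.name, best.net_benefit)  # search result delisting 55.0
```

Note how the measure with the largest gross benefit (DNS blocking) is not the one selected once indirect costs, such as harm to fundamental rights, are counted. That is precisely the discipline the methodology is meant to impose.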


The proposal builds on five categories of existing literature:
i. Law and economics literature dealing with the most efficient level of law enforcement (Shavell, 1993) and most efficient level of ISP involvement to fight copyright infringement and other forms of illegal content (Lichtman & Landes, 2003; Lichtman & Posner, 2006; Mann & Belzley, 2005);
ii. Law and economics literature on net neutrality (Wu, 2003; Yoo, 2005; Curien & Maxwell, 2011);
iii. Law and economics literature on better regulation, the new public management and cost benefit analyses, with particular emphasis on environmental, health and safety regulation (Breyer, 1982; Sunstein, 1996; Hahn, 2004; Hahn & Litan, 2005; Posner, 2002; Ogus, 1998; Hancher, Larouche & Lavrijssen, 2003; Renda et al., 2013);
iv. Literature on institutional alternatives for internet governance, including self-regulatory and co-regulatory structures (Marsden, 2011; Brousseau, 2007; Weiser, 2009; OECD, 2011);
v. Literature on the principle of proportionality, and the balancing of fundamental rights (Hickman, 2008; Tranberg, 2011; Sauter, 2013; Monaghan, 1970; Lemley & Volokh, 1998; Callanan et al., 2009).
The remainder of this thesis is organized as follows:
Chapter 2 will make an inventory of the factors that must be weighed when developing regulatory proposals. It will list the national content policies for which regulators may be tempted to enact regulation; enforcement of those content policies is the expected benefit of regulation. It will then establish a list of technical intermediaries and technical measures that can potentially be used to implement content policies; these are the technical tools that regulators might consider using. Finally, it will mention institutional alternatives and list the negative side effects that government-imposed technical measures might cause; these are the potential costs of regulation.

Table of contents :

1. Why most politicians and regulators will hate this thesis
2. Bringing smart telecommunications regulation to the internet
3. The challenges of regulating access to content on the Internet
4. Current regulatory approaches are uncoordinated, with no guiding methodology
4.1 Right to be forgotten
4.2 Online copyright infringement
4.3 Illegal online gambling
4.4 Child pornography and terrorist propaganda
4.5 Hate speech
4.6 French audiovisual policy
4.7 Apparent lack of coherence
5. Diminishing role of broadcasting regulation
6. A methodology against which to measure regulatory proposals
6.1 Technical measures are inevitable
6.2 Technical measures create harmful side-effects
6.3 A reference methodology will help avoid mistakes
7. Existing literature
8. How the remaining chapters are divided
2. Content policies
2.1 Cybersecurity threats
2.2 Spam and phishing
2.3 Cookies and other forms of tracking software
2.4 « Right to be forgotten » content
2.5 Illegal online gambling
2.6 Sale of cigarettes and alcohol
2.7 Counterfeit drugs, illegal drugs
2.8 Intellectual property infringement
2.9 Defamation and the protection of privacy
2.10 Websites promoting racial, ethnic or religious hatred
2.11 Regulations designed to protect local culture and language
2.12 Advertising laws
2.13 Protection of minors against adult content
2.14 Child pornography
2.15 Content promoting terrorism
3. Assisting law enforcement
4. Valuing content policies and their enforcement
5. Internet intermediaries
5.1 Search engines
5.2 Hosting providers
5.3 Internet access providers
5.4 Internet domain name registrars
5.5 Payment service providers
5.6 Internet advertising networks
5.7 Application stores
5.8 Content delivery networks (CDNs)
5.9 Internet backbone providers
5.10 End-user software
5.11 Set-top boxes or modems
5.12 Device operating systems
6. The institutional framework
7. Negative externalities caused by Internet intermediary actions
7.1 Adverse effects on fundamental rights
7.2 Internet-specific harms
7.3 Unintended effects linked to user behavior
7.4 International effects
8. Solving the problem so as to maximize social benefits
2. What are fundamental rights?
2.1 Characteristics of fundamental rights
2.2 The cost of fundamental rights
2.3 Economic vs. non-economic rights
2.4 The expressive value of fundamental rights
3. Freedom of expression
3.1 General limitations to freedom of expression
3.2 Is the Internet like television?
3.3 The nature of harms to freedom of expression
3.4 Internet intermediary liability and free speech
3.5 The Dennis formula and its limits
3.6 Law and economics explanations for the high protection given to freedom of expression
3.7 Freedom of expression and self-regulatory measures
4. Privacy
4.1 Privacy and data protection as fundamental rights
4.2 Privacy rights in law and economics literature
4.3 Behavioral economics and privacy
4.4 Cost-benefit analysis applied to data protection
4.5 How to measure costs and benefits in privacy
5. Fundamental rights and proportionality
5.1 The three-criteria test of the European Court of Human Rights
5.2 Should a court give deference to lawmakers’ balancing?
5.3 Identification of the conflicting rights and interests
5.4 Balancing the relevant interests
5.5 Absolute versus relative proportionality, cost-benefit analysis
5.6 Proportionality and the « least injurious means » test
5.7 Robert Alexy’s balancing test
5.8 Nussbaum’s ethical filter
5.9 Fundamental rights and the Hand formula
Chapter 4 – Institutional alternatives for regulating access to Internet content
2. General liability or property rules enforced by the courts
2.1 Advantages and disadvantages of regulation by courts
2.2 Summary table
3. Administrative Regulation
3.1 Division of responsibilities between the lawmaker and the regulator
3.2 General versus detailed legislation
3.3 Regulatory authorities have better access to information and industry expertise
3.4 Risk of industry capture
3.5 Risk of regulatory creep
3.6 Territorial limitations to regulators’ powers
3.7 Example of administrative regulation: the FTC’s regulation of privacy
3.8 Summary table
4. Self Regulation
4.1 Self-regulation and the Internet
4.2 Self-regulation works well in groups with stable membership
4.3 Self-regulation works well where the self-regulatory organization (SRO) controls access to a scarce resource
4.4 The difference between unilateral self-regulation and multilateral self-regulation
4.5 Unilateral self-regulation by Internet platforms
4.6 Multilateral self-regulation and SROs
4.7 Conflicts of interest in SRO enforcement
4.8 Self-regulatory rules may not represent the public interest
4.9 Self-regulation and legislative threat
4.10 Example of multilateral self-regulation: the advertising industry
4.11 Summary table
5. Co-Regulation
5.1 The role of the state in co-regulation
5.2 Co-regulation and accountability
5.3 Preservation of public interest objectives
5.4 Enhanced legitimacy of the rules
5.5 Co-regulation in telecommunications regulation
5.6 Co-regulation in data privacy
5.7 Summary table
6. Brousseau’s multilevel approach to governance
7. Internet requires a « racket and strings » regulatory approach
2.1 Early scholarship: Breyer, Morrall, Hahn, and Sunstein
2.2 Dieter Helm examines the meaning of « good regulation »
3.1 OECD 2012 Recommendation on Regulatory Policy
3.2 OECD 2011 recommendations on internet policy making
4. Better regulation methodology in the United States
4.1 Peer review by OIRA
4.2 Creating a baseline scenario
4.3 Identifying the relevant harm
4.4 Identifying regulatory options
4.5 Applying cost-benefit analysis to each alternative
4.6 How to quantify costs and benefits
4.7 Benefits and costs that are difficult to monetize
5. Better regulation methodology in Europe
5.1 European better regulation guidelines
5.2 European Commission guidelines assessing fundamental rights in impact assessments
5.3 2015 European toolbox for better regulation
5.4 The Renda study on cost-benefit analyses
5.5 Areas requiring further study
6. Impacts on innovation
6.1 Why Internet firms innovate
6.2 Knut Blind explains link between regulation and innovation
7. Adaptive or experimental regulation
8. Criticisms of cost-benefit analyses in regulatory decisions
8.1 Robert Baldwin asserts that impact assessments are ill-adapted to political realities
8.2 Radaelli and De Francesco compare U.S. and E.U. approaches
8.3 Robert Hahn and Paul Tetlock evaluate the costs of regulatory impact assessments in the U.S.
8.4 Ackerman and Heinzerling criticize CBAs that attempt to « price the priceless »
8.5 Greenstone: cost-benefit analyses require experimentation
9. Why conduct a cost-benefit analysis?
2. Elements of the methodology
3. The questionnaire
3.1 Analysis of the underlying content policy that needs to be enforced
3.2 The range of Internet intermediaries and actions they could take to help enforce the content policy, ranging from the least intrusive to the most intrusive
3.3 Remedies used in other countries
3.4 The institutional alternatives, including liability and property rules, self-regulation, co-regulation, and/or full-fledged administrative regulation
3.5 The fundamental rights affected by each proposed measure
3.6 Internet ecosystem
3.7 Behavioral economics and « nudges »
3.8 Adaptive and experimental regulation
4. A cost-benefit analysis under constraint
4.1 How to deal with hard-to-quantify benefits and costs?
4.2 Prepare a baseline scenario of no regulatory intervention, taking into account dynamic aspects such as possible evolutions of Internet technology and markets, and the application of existing laws and self-regulatory policies
4.3 Measuring benefits compared to the baseline scenario
4.4 The direct costs resulting from each proposal, including direct costs for Internet intermediaries, their customers, and taxpayers
4.5 The indirect costs resulting from each proposal, including relative impacts on fundamental rights, the Internet ecosystem, and innovation
4.6 How to rank proposals
4.7 Applying additional constraints
4.8 Conclusion on how to rank proposals
5. Public consultation
6. Institutional peer review
7. Periodic review of the measure
8. Conclusion
2. Weaknesses of the methodology, and possible responses
3. Quantifying the unquantifiable
