How will the EC's plans to reboot rules for digital services impact startups?

A framework for ensuring fairness in digital marketplaces and tackling abusive behavior online is brewing in Europe, fed by a smorgasbord of issues and ideas, from online safety and the spread of disinformation, to platform accountability, data portability and the fair functioning of digital markets.

European Commission lawmakers are even turning their eye to labor rights, spurred by regional concern over unfair conditions for platform workers.

On the content side, the core question is how to balance individual freedom of expression online against threats to public discourse, safety and democracy from illegal or junk content that can be deployed cheaply, anonymously and at massive scale to pollute genuine public debate.

The age-old conviction that the cure for bad speech is more speech can stumble in the face of such scale. And while illegal or harmful content can be a money spinner, the economic incentive of outrage-driven engagement often gets overlooked or edited out of this policy debate.

Certainly the platform giants — whose business models depend on background data-mining of internet users in order to program their content-sorting and behavioral ad-targeting (activity that, notably, remains under regulatory scrutiny in relation to EU data protection law) — prefer to frame what’s at stake as a matter of free speech, rather than bad business models.

But with EU lawmakers opening a wide-ranging consultation about the future of digital regulation, there’s a chance for broader perspectives on platform power to shape the next decades online, and much more besides.

In search of cutting-edge standards

For the past two decades, the EU's legal framework for regulating digital services has been the e-commerce Directive — a cornerstone law that harmonizes basic principles and bakes in liability exemptions, greasing the wheels of cross-border e-commerce.

In recent years, the Commission has supplemented this by applying pressure on big platforms to self-regulate certain types of content, via a voluntary Code of Conduct on illegal hate speech takedowns — and another on disinformation. However, the codes lack legal bite and lawmakers continue to chastise platforms for neither doing enough nor being transparent enough about what they are doing.

Hence the Commission has decided it’s time to revisit the rules around digital services to ensure the legal frameworks are taking account of the vast societal power platforms now wield.

“Various forces have led us to this place,” says Mark Owen, partner and head of the technology, media and communication sector group for law firm Taylor Wessing. “One is a sense that these platforms have got very big and established and is this balance still fair? That’s the fundamental basis of a lot of the consultation. It mirrors what’s going on in the U.S. — things like the similar safe harbors under the Copyright Act there are being looked at… to re-examine some of the balances that have been in place for the same level of time there to say are they still fair? Is this still what society wants?

“There’s also the whole ‘techlash’ thing, which I think continues. And, possibly, is easier for European politicians to adopt and get behind — because very few of these ‘hyperscalers’ are European. So there’s more of an imperative for them to rein it in, if it can be. The other bit is the whole online harms piece… So it’s partly about fairness of commerce but it’s also about harms and safety of people online — and, given this is the environment we’re all in now, is this what society should look like?”

The EU’s executive body, headed by Commission president Ursula von der Leyen, made the drafting of a Digital Services Act (DSA) a flagship policy pledge as it took up its five-year mandate at the back end of last year — with no let-up in momentum since then, despite the huge disruption that is the coronavirus pandemic.

The backdrop to the DSA is rising public attention to ugly online problems like hate speech; cyberbullying; election interference; and misinformation. Some EU Member States have also already independently passed their own laws targeted at tackling certain issues (like France and Germany regulating how quickly platforms must remove hate speech, for example) — while the U.K., still technically an EU member until December, has proposed regulating a whole range of online harms — so momentum within the region is toward greater regulation of internet platforms.

On the economic side, the Commission will be hoping to fashion a consensus that works to harmonize rules for digital services, discouraging further Balkanisation of the Digital Single Market.

Simultaneously, it wants to keep flying an EU policy flag on the global stage. This follows an update to long-standing data protection rules (aka, the GDPR), which began being applied in 2018, training a global spotlight on Europe as a leader in digital regulation.

If the EU can similarly be first to reframe rules for platform liability and fairness, there’s an even bigger opportunity to influence a space that has, to date, been dominated by U.S. standards and platform giants.

Set and enforce European standards on internet giants, and homegrown startups may finally gain an edge — or so this line of thought runs.

“It’s very interesting the idea that the EU not having its own tech industry, really — not to the same extent as either China or the U.S. — wants to be the leader in standards,” says Taylor Wessing’s Owen. “I’m not sure how that benefits us from a business point of view but it’s really interesting — and it might benefit us… That seems to be their ambition. To at least lead in regulation — and having a modern approach.”

In a speech on June 2, commissioner Thierry Breton, who is heading up the DSA package, expressed the mission thus: “We want to propose clear rules before the end of the year to define the responsibilities of platforms in protecting our citizens and values, without making them liable for all content. Certain fundamental rules must apply to everyone, from the smallest online shop to the major platforms. We are also thinking about specific rules for gatekeeper platforms.”

While the exact shape of the legislation remains to be seen, a public consultation kicked off earlier this month — revealing the expansive scope of issues under consideration. It’s the start of what could be a multi-year process of trying to achieve consensus in areas where opinions are often strongly held on either side. Whatever the Commission proposes, only an amended version of it will pass — assuming the DSA is not derailed entirely.

Any legislation will need to gain the backing of the EU parliament and Member States’ representatives in the EU Council before it can become pan-EU law, so the debate is really only just getting started.

That’s not all, either. Two other related planks of the Commission’s digital-economic policy have already been laid out in some detail — one piece aimed at encouraging the reuse of (non-personal) data; and another proposing a framework for regulating risks related to AI.

It is also consulting on a number of updates to competition law — including proposing a new tool that would allow regulators to pro-actively intervene to prevent markets tipping (a particular competitive problem in the digital realm, given network effects and the associated “winner takes all” phenomenon); and asking whether there should be specific rules for gatekeeper platforms on account of disproportionate market power versus those businesses whose route to market is mediated by powerful intermediaries.

The ambition is truly big-picture. EU lawmakers are backing connected technology as the engine to power a digital-economic transformation and drive regional growth for the coming decades.

This, perhaps, raises the question of whether they’re biting off more than they can chew by trying to tackle so much simultaneously. Yet, at the same time, the legislative pieces are interlinked, and more piecemeal updates would carry their own risks. A major transformation mission may well demand a big-bang approach.

The limits of platform responsibility

Among contested questions the Commission wants the DSA to address — as voiced by Breton in his speech earlier this month — are: “What role do platforms play in avoiding misinformation during an election or health crisis? How do we avoid hate speech from spreading online? How do we protect our children against being bullied on social media?”

“Should speaking time in democratic debate be limited online as we do offline?” he also wondered. “How do we achieve all this without choking off the freedom of expression of platform users? Are public interest notices the right tool to avoid censorship while advising users to check other sources?”

Breton has also claimed personal reluctance to regulate internet platforms and AI in a number of public remarks, although this mostly seems intended to nudge U.S. tech giants to get with the EU’s program voluntarily, while perhaps also trying to manage the expectations of EU businesses and keep various sectors on side.

“I have always said that it is up to the platforms to adapt to us, and not the other way around,” Breton has also said, without specifying exactly what this “adaptation” means in practice — talking instead about striking “the right balance” between preserving freedom of expression and protecting citizens against illegal content, hate speech and disinformation. (On balance, Twitter’s recent “public interest notices” — slapped on rule-breaking Trump tweets — do seem to have turned appreciative heads in Brussels.)

All these problems are, to a greater or lesser degree, fuzzy, subjective concepts — lacking even a pan-EU definition — which means any associated rules risk lacking both clarity and consensus.

So by shooting so very big, the Commission may end up falling short of its legislative ambitions — and be left having to propose a more bare-bones update that feels more like fudge than fresh start.

“Something as ambitious as this, covering so many areas, you can imagine how long that could get stuck in consultations and arguments,” says Owen. “These things can take a huge amount of time because you’ve got so many lobbyists and laws.”

“They clearly want to move quickly,” he adds. “They’ve got lots of other consultations going on and initiatives which sounds as though they’re all trying to reach a peak by the end of the year… So I think we’ll have a much better idea by then — I suspect there’ll be some immediate things they try to do quickly and some other things which they maybe hive off into a separate initiative so it doesn’t hold up the other stuff.”

What might a bare-bones DSA package look like — if the Commission decides to cherry-pick lower-hanging fruit where it believes regulatory consensus may be built more quickly?

“Look at something like data; I think they will probably be able to get to the point where they can have clear rules about what people can and cannot do with data outside the privacy aspects, in terms of the ownership, the rights of it — so IP,” says Owen. “They’ve got to come up with some rules about how AI can be used — because it’s moving so many areas, and some including around autonomous vehicles where there’s high degrees of potential risk, so I think they’ll have to come up with rules around things like that.”

On the liability question, he predicts the Commission will zero in on technical requirements. “I think there will definitely be a move towards saying to the extent that better safeguards are technically possible that those should be adopted,” he tells TechCrunch. “There’s been a slippery slope going on — ‘if we don’t know about it, then we don’t have to do anything about it; and we only have to do something about it once we know about it’ — through to, we’ve seen this in several cases over the last few years, where courts are saying ‘yes but you could tell that this was going on and that this was the same person that you’d previously blocked.’ ”

“So I think there is a growing sense that if there’s technology there you can’t just hide behind safe harbor and say ‘well I’m not doing anything about it’ — there should be more of an obligation on you to adopt the technology which enables you to control things. I think it’ll definitely move in that direction.”

“Whether it’ll then go further and change the fundamental balance — and say you have got to take more of an overview on what’s going on on your platform; you can’t just wait for people to tell you — I think that’s possible, but it’s going to be massively resisted,” he adds. “Past experience with these sorts of things would suggest that it takes years and years to move these sorts of changes even slightly.”

Taylor Wessing’s Louise Popple, a senior professional support lawyer in the IP & media group, suggests one possible halfway house the Commission could adopt is to limit controversy by narrowing scope — say by focusing requirements only on “more serious criminal harms.”

“Even in the U.K.’s Online Harms proposal one of the strands of that was to have a positive monitoring obligation for certain tightly defined categories of content — [such as] terrorism — so it could be that they introduce some sort of halfway house of that nature,” she suggests. “It’s definitely a focus and they have said that they’re expecting this to lead to new and revised legislation. But it’s difficult to know yet what that will be, and how extensive. There are no clues, really, in the consultation questions themselves as to the direction of travel.”

There is a high risk of controversy and political conflict should the DSA proposal get the balance wrong by misjudging the public mood or if it ends up alienating swathes of EU businesses.

Nonetheless, the Commission has tasked itself with drafting some limits. And its appetite to grapple with tricky digital topics is not in doubt (see, for example, the divisive digital copyright reform; or the reform of ePrivacy rules, which stalled after the Commission’s proposal, down the negotiating line in the EU Council). Though it may well seek a path of least resistance.

On the content front, Twitter’s recent decision to apply warning labels to certain of U.S. president Donald Trump’s tweets does seem to have hit on the sort of “balanced compromise” EU lawmakers are actively eyeing when they consider how to define speech standards for online platforms.

“I have been saying for a long time that I want platforms to become more responsible, therefore I support Twitter’s action to implement transparent and consistent moderation policy,” said Vera Jourova, the EU commissioner who heads up a parallel European Democracy Action Plan that aims to combat online disinformation. “This is not about censorship. Everyone can still see the tweets. But it is about having some limits and taking some responsibility of what is happening in the digital world.”

Even more recently she signaled that the DSA will approach disinformation carefully, with legal limits likely reserved for illegal content.

“I do not foresee that we will come with hard regulation on that,” she said this month. “It is too sensitive to assess this information and have some rules — it is playing with the freedom of speech and I really want to come with a balanced proposal. So in the Digital Services Act you will see regulatory action very probably against illegal content — because what’s illegal offline must be clearly illegal online and the platforms have to proactively work in this direction. But for disinformation we will have to consider the efficient way how to decrease harmful impact.”

Market power in the digital age

The largest tech platforms, meanwhile, have been busy lobbying for what they couch as the “right” regulation for their business empires — with Facebook’s CEO, Mark Zuckerberg, seeking to skate over the notion of a third European way for internet speech rules. He talks instead of a choice between the U.S. “free” standard and China’s censorship — albeit an imperialist (and binary) world view that won’t earn brownie points with Europe’s commissioners.

Google’s Sundar Pichai, for his part, has been pressing EU regulators for a diluted enabling framework for AI.

Likewise EDiMA — a regional trade association that advocates for pro-platform policy and counts the likes of Airbnb, Amazon, Apple, eBay, Facebook, Google, Microsoft, Twitter and Verizon (TechCrunch’s parent company) among its members — is pushing for limited liability to be reaffirmed under any updated legislation.

It also wants the directive’s current prohibition against a general monitoring obligation to be retained. This foundational principle of EU e-commerce law is already being eroded, per some commentators — who point to the EU copyright reform package and the Commission’s proposal for regulating terrorist content, as encouraging platforms to monitor uploads.

Last year, Europe’s top court also ruled that platforms can be ordered to hunt for and remove illegal speech, including speech that’s deemed “equivalent” to content already judged illegal, without such action contravening the prohibition on general content monitoring.

The CJEU judgement talked about platforms having “recourse to automated search tools and technologies” to power targeted identification of equivalent speech — enabling more expansive takedowns of illegal content without breaching the general prohibition rule. It’s a legal logic that might offer a path for EU law to impose partial filtering requirements for particular types of illegal content.

Though such a step would still undoubtedly attract major controversy — given the rights risks to freedom of expression should any such automated filtering get it wrong (as AI is wont to). 

In a paper proposing what it describes as an “Online Responsibility Framework,” EDiMA can be seen pushing for the DSA to make a clear distinction between “the principles of responsibility and liability” — allowing for more narrowly defined “illegal” content to be linked to the latter (under the same “notice and action” regime as currently exists, rather than requiring a more proactive standard).

While fuzzier “harmful” content would be left to “self- and co-regulatory initiatives,” and a gradual build up of what EDiMA characterizes as “best practices on these complex issues.”

So, in other words, tech giants want very little to change — beyond the setting of a pan-EU response standard for a narrow slice of “illegal” content. Or, put another way, the “right” regulation for the current crop of platform giants is a narrow rulebook and a broad enabling framework that doesn’t stir the pond of business as usual.

The Commission seems to have other ideas, though. It’s no accident that it’s simultaneously looking at reforming the bloc’s competition rules to allow regulators to intervene more swiftly in instances where digital markets have become captured and tipped by so-called “gatekeepers.”

This suggests it has tech giants in its sights and is actively looking for new ways to clip their wings. (Last year EU institutions agreed a first pass at regulating fairness and transparency in online platform trading — but more looks set to come.)

Platforms gaining so much market power that they’re able to unfairly crush competition is another type of abusive behavior the Commission is setting itself the task of tackling — looking in parallel at whether there should be ex ante platform regulation in the form of specific requirements for so-called “gatekeeper” platforms that control marketplace parameters and have full visibility of competitive data.

It’s interesting that the possibility of ex ante platform regulation — which is very clearly forming a part of the Commission’s thinking on ensuring functionally competitive digital markets — has been included as part of the DSA consultation, suggesting EU lawmakers see platform monopolies as a key component of the overall abuse problem they’re gunning to rectify; and view effective competition enforcement as a key piece of the wider puzzle related to reining in platform giants.

Above all this, there’s also the “Holy Grail” of the Commission’s digital policy: effective digital taxation — a long-running complaint it has quickly repurposed as a stick to beat platforms with as they lobby to try to influence the substance of the DSA.

It’s also set itself the far-reaching — and some might say “pipedream” — goal of achieving European “technological sovereignty.”

“European technological sovereignty starts from ensuring the integrity and resilience of our data infrastructure, networks and communications,” the Commission wrote in a communication in February. “It requires creating the right conditions for Europe to develop and deploy its own key capacities, thereby reducing our dependency on other parts of the globe for the most crucial technologies.

“Europe’s ability to define its own rules and values in the digital age will be reinforced by such capacities. European technological sovereignty is not defined against anyone else, but by focusing on the needs of Europeans and of the European social model. The EU will remain open to anyone willing to play by European rules and meet European standards, regardless of where they are based.”

The appetite for regional regulators to fashion a uniquely European set of guardrails for internet platforms is therefore being fueled by geopolitical ambition as much as safety-related consideration, for all the Commission’s talk of “European values.”

The techlash has also created an opportunity for EU digital policymaking which can command broad public support. This in turn means there’s less space for lawmakers to reverse course; dropping the gambit would mean dropping out of the race to influence essential global trade and societal parameters for the coming decades — and that’s not an option any EU lawmaker would willingly entertain.

Startup perspectives on the Digital Services Act

What does all this mean for European startups? It means that how online business is done in the region is no longer just a matter of neutral or technical practicalities — such as ensuring cross-border payments flow smoothly or applying limits on geoblocking — it’s being seen as far more important than that.

What’s at stake is that startups’ processes are being bound up with a game of 3D geopolitical chess, and their high-tech businesses cast as conduits in the construction of a greater regional transformation.

At the same time, for startups that have found their businesses hamstrung by opaque platform policies and sudden unilateral changes with crushingly negative impacts — that require huge one-sided effort or expense to rectify — clearer limits on market-deforming “hyperscalers” are likely to be very welcome.

“Gatekeepers and non-EU online players — where access to data is way more flexible — are a threat to EU competitiveness worldwide. It’s in all EU companies’ interests to quickly find a solution so that the EU stays competitive,” says Olivier Plante, CEO of a Barcelona-based startup that develops an AI keyboard app, called Fleksy.

Last year his company had a run-in with Google — after the Android maker decided to apply a higher age rating to Fleksy’s app on the Play Store than to its own Gboard keyboard. (The stated reason was entirely spurious; the same standard had not been simultaneously applied to Google’s own keyboard app.)

“We need to have access to the right regulations where gatekeepers can’t keep having their ‘executioner power’ over EU companies,” adds Plante. “Fleksy is a proud EU company but we often see roadblocks where outside companies thrive faster simply because some regulations are too limiting in Europe — finding the right balance between global competitiveness versus limited power for gatekeepers is a challenge, but necessary for our market’s prosperity.”

Berlin-based tree-planting search engine Ecosia is another EU startup that’s faced visibility challenges from trying to do business in a market so thoroughly dominated by Google. It also told us it wants to see action from the Commission to boost competition in digital markets.

“Ecosia has repeatedly raised concerns about the anti-competitive behaviour of dominant platforms in Europe and will continue to do so,” said CEO Christian Kroll. “If the European Commission is to truly provide the conditions for a ‘Europe fit for the digital age,’ it must take action now to increase competition in digital markets. We therefore welcome the DSA public consultation, as it provides a much-needed opportunity for the European Commission to develop true ex ante regulation that addresses the dominance and reach of large platforms.”

There are also likely to be growing opportunities for European entrepreneurs to build technologies that help platforms navigate a new regime of tighter regulations and accountability — or which take advantage of any regulatory requirements that enforce platform data portability or service interoperability.

L1ght, an Israeli startup that sells AI-fueled content-monitoring tools, strongly supports the EU’s move. “I’m not a big fan of governments and I believe that governments should not be involved that much in technology. But I believe that in this case the big companies — the internet giants — just abdicated their responsibility and created a need for the government to step in,” says CEO Zohar Levkovitz.

“Google are in the market for 20 years, Facebook for 15. They had many, many opportunities to take care of the problem — and they just neglected it. Because of this I believe that someone should step in. Because kids are dying, day after day. CSAM is spreading all over the world and they’re not doing enough, or not doing, actually, almost nothing. Because of that we are in favor of the regulation to step in — at least to start a discussion.”

“It is not about our technology. Our hands are full anyway — we have more than enough clients today because there are so many good companies that want to do it just because they decide to do it, regardless of the regulator. But I believe that for the sake of the world it is better that there will be some kind of regulation — to tell companies to stop doing it,” he adds. “Today violence and racism and child abuse and all of this stuff is increasing, day after day, and the internet giants are not doing anything to stop it.”

“So yes there is our technology and some other technology that can help them but it is not specifically about technology, it is about attitude. And once the giants will decide they want to solve it, they can do it. It’s just attitude.”

We also spoke to Allied for Startups, a startup member association that represents more than 40 regional and national trade associations and not-for-profits across Europe, Asia and the Americas, for its views on the risks and opportunities coming down the pipe for startups as the Commission looks at updating the digital services rulebook.

It is more circumspect about the looming legislative package, given the breadth of the consultation — combined with what it dubbed a “supercharged” political focus — and so is directing its energy at encouraging members to respond to the consultation, and running a series of webinars to garner feedback, using the hashtag #DSA4Startups.

“We want to encourage platform startups who are slap bang in the scope of this to respond,” says Benedikt Blomeyer, director of EU Policy at Allied for Startups. “For instance, we would want the SoundCloud of the European platform economy to be responding to this.” (In one of these webinars, SoundCloud suggested increased platform liability could triple its costs — as it said it would need to staff up to run 24/7 monitoring.)

It also said it’s putting together a toolkit to help members and startups respond to the parts of the consultation that matter to them.

“They won’t have to respond to everything but it is still not the kind of user-friendly design that some of the startups themselves would be proud of,” Blomeyer adds, saying part of the reason it set up a base in Europe a few years ago is that there were “no — or far too few — startups replying to GDPR and Copyright Directives.”

“That’s why we want to make sure startups and startup communities are on the record now,” he adds. “When we’ve seen all-encompassing legislation with political pressure coming from several sides, frankly speaking this can be concerning… We will be having to have a rational, fact-based debate about tech and it’ll not be easier if you add more variables and politicization.”

Blomeyer also argues that the more complex the legislation the greater the risk the biggest platform players will be the only ones able to comfortably navigate it — making it harder for startups to meaningfully participate (with the associated risk of a “regulatory moat” cementing the position of incumbent platforms).

That said, he told us he also believes there are opportunities in the resetting of Europe’s e-commerce rules that could be positive for the startup sector — such as if regulation succeeds in clarifying relevant rules and giving entrepreneurs legal certainty on thorny issues like intermediary liability.

The best-case scenario would be if all the various stakeholders achieve “a new political consensus around what kind of structure we want to have in the digital economy,” he suggests.

“When we go out on these webinars, oftentimes we’re explaining what the eCommerce Directive is about — even though we would say it’s like the invisible backbone of our economy. We find ourselves relying on this eCommerce Directive even though there’s far too little knowledge about it. So I think there’s an opportunity to have this conversation — and find a consensus together.”

In terms of specifics, Blomeyer argues the current distinction between passive and active liability for e-commerce is outdated, suggesting fresh thinking here could help regional startups manage risk while simultaneously supporting the Commission’s goal of squashing online toxicity.

“Most if not all of the 10,000 platform startups in Europe are going to be ‘active’ and we want to find ways to incentivize them to tackle some of these challenges put forward without facing increased liability for doing so,” he says. “So we think there should be a different way to deal with these challenges.”

Rebuilding the eCommerce Directive’s “country of origin principle” would be another way to help regional startups, according to Blomeyer — who points to legislation already passed by Member States (such as Germany’s NetzDG law) that he says has chipped away at the foundational principle, injecting friction into the Digital Single Market and making it harder for startups to scale within Europe.

“The reality is that the Single Market is being cut apart in this way. NetzDG has not made life easier for our platforms in Germany. And a tweet — even if it’s not from a German account and it’s infringing the NetzDG — has to be taken down in Germany. Twitter can deal with that but, again, if it’s the SoundCloud or so of our economy, that’ll force them to scale up 27x in Europe,” says Blomeyer, arguing there’s now an opportunity “to lay a common foundation again” via a consensus update to e-commerce rules.

“Of course not everything will be harmonizable,” he adds. “Member States will be [asking] what is hate speech? What is disinformation? It will be something where there is national sensitivities. That’s not something I see changing any time soon. But there is the dynamic right now that the country of origin principle is being undercut or undermined. And there’s a chance here to reaffirm it to the largest extent that we can. The more we can do it, the better it is for the ability of startups to scale up across Europe.”

What about if the DSA package were to include rules that actively seek to treat startups differently to hyperscaled platforms? Such tiered regulation would be a “delicate” issue, in Blomeyer’s view, though he supports the idea of proportionality around enforcement (aka “focus on the problem and focus on the problem maker,” as he puts it).

“If we’re talking about regulatory exemptions that’s something that we’re not generally in favor of because any exemption is a glass ceiling in the future for a startup,” he says. “That might work more for an SME which doesn’t have the same ambition… But if it’s more getting to the point of proportionality — then I think, yes, we would be in favor. If there’s an issue with the digital economy and the platform economy, focus on the problem and focus on the problem maker. If you think it through reversely that means if there’s a small platform that has nothing to do with hate speech discussions online… then they also shouldn’t fuel the legislative sledgehammer in the same way that a bigger platform should.”

Overall, the big hope is for the DSA to deliver a “clear framework” for founders to achieve limited liability.

“To do that we want to give them the right incentives and tools to lead and go out and proactively tackle illegal content,” he says, adding that many of the EU platform startups Allied for Startups has talked to are already trying to use tools and algorithms to detect illegal speech or fake products on their platforms.

“They’re all about saying we want to do it the right way, we’re trying to implement technology, we’re trying to be smart about it because we don’t have the same amount of resources as the big guys, but the thing we cannot afford is to be made liable at the end of the day — so a pathway for limited liability. That’s what’s being discussed in Brussels right now: What else can these platform startups do? And that’s something we’re very curious to understand ourselves, as we learn more and understand more about the platform economy we also want to know. So EU policymakers, if you see there’s a problem — please tell us what else you think we should be doing.”

What if users were in the driving seat?

On the platform user side, perhaps the most interesting possibility for radical change — that’s at least being entertained via the scope of the Commission’s DSA consultation — relates to economic incentives that underpin the things platforms show us.

Which is to say the commercial imperatives that invisibly power platforms’ content-sorting algorithms and generate a proprietary hierarchy of harms — dumping the lion’s share of toxicity clean-up costs on society.

In an essay considering the debate around how speech is regulated online, Harvard academic Elettra Bietti criticizes the dualism of the current discourse — what she describes as a simplistic focus on the question of "government versus platform censorship" — going on to argue that speech regulation in the modern era should not be left exclusively to either of these players, as neither is trustworthy nor entirely aligned with the public interest.

“The real battle is how to ensure that the speech and fundamental rights of the least powerful and the least loud in society are protected and not curtailed and abused by the concerted efforts of online platforms and vote-hungry politicians,” she writes. “The question, therefore, is not how to secure the right to speak freely without interferences, but rather how to ensure that the combination of pre-existing and novel interferences do not disproportionately favor the powerful actors at the expense of the powerless.”

“Instead of asking whether or not Twitter did the right thing with Trump’s tweets, we should think of structural regulation that goes beyond mere cosmetic voluntary interventions of platforms and that instead tackles the profit motives and capitalist infrastructure that foundationally shape online speech today,” she adds.

“The task is to imagine a different platform ecosystem that does not assume privatized control as the default but instead envisages and enables the possible reconfiguration of speech and communication as a public service that is not primarily enabled, driven and shaped by profit considerations.”

Bietti suggests such a transformation would require treating some platforms as “utilities, common carriers, or essential facilities,” as well as “re-envisioning infrastructure ownership and control so that more power and control over data and content is distributed to persons.”

"It may also require diversification and the co-existence of a plural ecosystem of platforms of varying shapes, sizes, regional and topical relevance. It will require further democratizing platform governance, creating public charters, and ensuring accountability to users. Finally, it will require re-imagining the platform ecosystem as a space that is not designed to maximize profits, but that instead is designed to enhance human interaction, social and cultural fulfillment, and political empowerment," she adds, dubbing this mission an urgent one as demagogue politicians have learned to "strategically weaponize" platforms to hack democracy.

While achieving such root and branch reform of the platform ecosystem might seem a tall order for any single piece of legislation, MEP Patrick Breyer (of the Pirate Party), who is a rapporteur for the EU parliament's Legal Affairs Committee, has put out an interesting proposal for regulating digital services that suggests tackling toxicity by empowering end users with agency over platforms' commercial priorities.

The six-point plan proposes to combat the spread of false and racist information on social media by giving users control over the content they see — instead of, as now, letting algorithms define the parameters of speech unchallenged, by opaquely curating content to maximize profit (via user engagement and data).

If platforms are curating what users see based on tracking their actions, that should require active consent, per the plan. It also proposes that users have a right to see their timeline in chronological order (and which Twitter user, annoyed at the algorithm silently flipping them back to a curated view, would not want that?).
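The distinction the plan draws is simple to express in code. Here's a minimal, purely illustrative sketch of the two orderings — an engagement-ranked feed versus the chronological view the proposal would guarantee as a user right (the `Post` record and its fields are invented for this example, not any real platform's schema):

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical post record; field names are illustrative only.
@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    engagement_score: float  # stand-in for clicks, replies, dwell time, etc.

def curated_timeline(posts: list[Post]) -> list[Post]:
    """The platform default: rank by predicted engagement."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def chronological_timeline(posts: list[Post]) -> list[Post]:
    """The user-chosen view the plan would guarantee: newest first."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)
```

The point of the proposal is that the second function — not the first — becomes the default unless the user actively consents otherwise.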

Another suggestion is for dominant platforms to have to provide an API so users can have content curated by software or services of their choice — an idea that could open up a marketplace around content curation.
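In rough terms, such an API would mean the platform hands over uncurated feed items and a user-chosen program decides what to show. The sketch below is a hypothetical illustration of that separation of concerns — none of these names correspond to a real API, and the example curator is entirely invented:

```python
from typing import Callable

# A curator is any user-chosen function that reorders/filters raw feed items.
Curator = Callable[[list[dict]], list[dict]]

def apply_curator(raw_feed: list[dict], curator: Curator) -> list[dict]:
    """The platform supplies raw items; user-chosen software curates them."""
    return curator(raw_feed)

# Example curator a user might install from a third-party marketplace:
# hide items flagged as ads and sort the rest oldest-first. Illustrative only.
def no_ads_chronological(items: list[dict]) -> list[dict]:
    organic = [i for i in items if not i.get("is_ad", False)]
    return sorted(organic, key=lambda i: i["timestamp"])
```

Because the curator is just a pluggable function, competing curation services — commercial or non-commercial — could be swapped in by the user, which is the marketplace dynamic the proposal hints at.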

To overcome the "lock-in effect" of centralised networks, the plan suggests giving users of dominant social media and messaging services a right to cross-platform interaction via open interfaces — another intriguing idea that could punch holes in walled gardens.

“The free exchange of opinions on the internet, consumer choice, the right to privacy and the basic principles of a global internet should be at the heart of any regulation of digital services,” Breyer argues. “This is about our digital living space. When regulating global technology corporations, the internet community expects Europe to assert freedom of expression instead of censorship machines; and privacy instead of surveillance capitalism. Disinformation and hate messages are spreading so quickly on the internet because ad-financed internet platforms push sensationalist content without asking what their users want to see. Users should have a right to decide which content is proposed in timelines. They should be able to have their timelines sorted by external, possibly non-commercial services. User control is key to tackling problematic content bubbles.”