BILETA Annual Conference 2021

Wednesday, 14th April, 2021 – Friday, 16th April, 2021
Online

Abstracts

Connected and Vulnerable: Cybersecurity in Vehicles

Session: Cyber crime and cyber security

Nynke Vellinga, University of Groningen, Groningen, Netherlands

Abstract

Back in 2015, two hackers were able to take control of a Jeep Cherokee, wirelessly gaining access to the vehicle’s controls through its entertainment system and slowing the vehicle down on a highway. Remarkably, this did not result in any accidents. It did, however, illustrate the existing cybersecurity risks of vehicles and their threat to road safety, making legislators aware of these dangers.

Recently, several legislative steps have been taken to improve cybersecurity in both conventional and automated vehicles. A great number of legislators and organisations, from the United Nations to the European Union to standards organisations, are committed to improving the cybersecurity of vehicles. However, as this contribution argues, the focus of the current legislative measures is only on the ‘first line of defence’. These measures aim to prevent unauthorised access to the vehicle’s systems, but fail to identify the steps necessary to limit the damage that can be done if this first line of defence is breached and unauthorised access is gained. There are, for instance, currently no safeguards in place to prevent a hacker from gaining access to the vehicle’s safety-critical controls once they have accessed other systems of the vehicle.

As cybersecurity enters the realm of road safety, the focus should not only be on preventing unauthorised access to the vehicle’s systems, but also on limiting the damage a hacker can do once they have gained unauthorised access. Therefore, this contribution will propose the legislative steps necessary to limit that damage.


Set for surprise?: clashes and alignment of rights and regimes

Session: Intellectual property law and technology

Abbe Brown University of Aberdeen, Aberdeen, United Kingdom

Abstract

In times of crisis, information and technology can be key to identifying needs, developing solutions and assessing their effectiveness. This applies in specific crises, such as responding to COVID, but also in more ongoing crises such as delivering energy security, managing climate change and addressing health needs.

Diverse strands of law and governance can be relevant to this. All sit in their own landscapes with their own underpinning values (sharing, control, reward, protection, precaution, dignity) and enforcement or compliance frameworks. Each has the balance between private and public interests at its heart; each can struggle to engage with other sets of values and frameworks, at a time when more porous sharing could be of benefit. In times of crisis the extent to which these strands intersect effectively can become more evident and more relevant.

This paper will explore this fundamental question and the extent to which taking one field as a case study, health, can provide a useful wider set of solutions. It will consider intellectual property, regulation of technologies, re-use of health information and the national and wider legal, governance and enforcement frameworks.  Can new approaches make us more ready to deliver the critical?


Fake News in Strasbourg: Balancing the Regulation of Electoral Disinformation with Freedom of Expression

Session: Human rights and technology

Ethan Shattock, Maynooth University, Kildare, Ireland

Abstract

The need to protect freedom of expression while regulating electoral disinformation has been widely acknowledged in European legislative debates. However, such debates largely neglect to outline the manner in which this delicate balancing act should occur. This paper argues that the regulation of electoral disinformation should be understood through case law on Article 10 of the European Convention on Human Rights (ECHR). As the judicial body tasked with interpreting freedom of expression under Article 10, the European Court of Human Rights (ECtHR) has developed an extensive track record in which legal interferences with freedom of expression have been carefully scrutinised. In numerous cases, the Court has addressed how the dissemination of false information should be reconciled with Article 10 protections. Case law in this area demonstrates a strict protection of political expression, while underscoring distinctions between statements of alleged fact and value judgments. Moreover, case law on the dissemination of false information highlights the Court’s tendency to weigh freedom of expression against other Convention rights. In particular, the Court has invoked the right to free elections and the right to privacy when determining whether the dissemination of false information should receive protection under Article 10. This paper outlines how case law on Article 10 should inform the regulation of electoral disinformation in line with this jurisprudence. An assessment of the tests, patterns, and rationale that underpin the ECtHR’s decisions in this area shows how urgently needed regulation can be achieved in a manner that ensures harmony with the provisions and spirit of the Convention.


The Purge: The missing element in the ongoing battle against Botnets

Session: Cyber crime and cyber security

Iain Nash, Centre for Commercial Law Studies, Queen Mary University, London, United Kingdom

Abstract

In recent years, there have been a number of high-profile announcements of botnet and other criminal infrastructure ‘takedowns’. These takedowns are usually the result of coordinated efforts by law enforcement, cybersecurity and internet infrastructure agencies, which seek both to arrest the operators of these criminal networks and to disrupt the networks through the seizure of network administration tools, although recent examples (such as those championed by Microsoft) have focused solely on the seizure and skipped the criminal law elements.

This presentation examines the efficacy of such takedowns and concludes that while such outcomes are welcome as societal goods, the impact of such ‘takedowns’ on criminal enterprise is limited, even with regard to the specific assets of the network which has been ‘taken down’.

Following this analysis, an alternative methodology is proposed, which targets instead both the nodes of the criminal networks and the devices which are most commonly employed to create and expand such networks. This methodology does not include applications of criminal law, and does not target the operators of such networks, nor does it require co-ordinated working groups.

This presentation will outline the proposed methodology both as a stand-alone theoretical concept and as a potential part of a cybersecurity regulator’s toolkit.


Remote Justice: Information Rights as a Tool of Empowerment

Session: Human rights and technology

Mo Egan, University of Stirling, Stirling, United Kingdom

Abstract

The coronavirus pandemic has resulted in a compulsory retreat from public spaces. While, for some, this displacement has brought about engagement with digital technologies in new and interesting ways, for others, digital technologies have proved to be the site of technology-facilitated abuse (TFA). Notoriously problematic before the pandemic (Barker, 2016; Barker & Jurasz, 2018), TFA has reportedly increased during lockdown (Powell & Flynn, 2020; Bracewell et al, 2020). Since a dominant strand of literature examines TFA with a sexual element, this frames much of the academic debate. The resulting inference is a misplaced expectation that criminalisation of conduct will provide a solution (McGlynn et al 2017). However, the scope of behaviour perpetrated with, or through, digital technologies is much broader and demands a range of responses that offer access to justice. Information rights offer significant potential to enable survivors to gain control over personal information. This is important because being able to control information could help survivors to feel empowered and improve their mental health and wellbeing (McGlynn et al 2017; Lloyd et al 2017). This paper explores information rights as a tool of empowerment. To do this, it will define information rights and how they are accessed. It will apply the literature establishing the relationship between legal rights and empowerment to this context. And, lastly, it will reflect on whether, and how, information rights have been used within the UK, to provide reflections on harnessing their potential in this area.


Privacy-Enhancing Technologies in the Financial Services Sector: Of Data Privacy Laws and Regulation

Session: Digital, cloud and Internet regulation and governance

Asma Vranaki, University of Bristol, Bristol, United Kingdom

Abstract

The recent reform of the European data protection law framework has heralded a new data protection dawn, premised on stronger and new compliance obligations, data subjects’ rights, principles and concepts. Many sectors, including the financial services industry, have had to recalibrate their approach to compliance in areas such as fraud detection, know-your-customer and anti-money laundering obligations, as well as marketing and advertising analytics. Concurrently, in today’s data-driven age, privacy-enhancing technologies (‘PETs’) have recently regained traction at policy and regulatory levels as appropriate tools to safeguard fundamental rights and freedoms. This has reinvigorated old debates about the roles of PETs in data privacy regulation, with many sectors, like financial services, leading the way in using PETs to tackle compliance on the ground.

This innovative empirical project aims to understand the use of PETs in the financial services sector to protect the individual’s data privacy rights in particular use cases. It engages with key stakeholders including data protection authorities, financial services organisations, PET developers and policy actors to generate unique and rich insights into the advantages and challenges of PETs to achieve or maintain compliance with data privacy laws. The project also aims to broaden existing theoretical understandings about the roles of PETs in the data protection regulatory arsenal.

The paper will explore some of the project’s preliminary findings, including the current use cases for achieving data privacy compliance in practice; the advantages and challenges of using PETs for compliance; and data protection regulators’ roles in PETs’ development and use.


Disinformation and the Future of Conflict

Session: Cyber crime and cyber security

Natasha Gooden, Subhajit Basu, University of Leeds, Leeds, United Kingdom

Abstract

Disinformation is not new: it has merely adapted to meet today’s technical and political realities. It is a strategy deployed by many countries, aimed at destabilising democracies to promote their geopolitical interests by spreading falsehoods to undermine and disorient our sense of truth. The United Kingdom has recognised the potential national security threat, with foreign actors seeking to influence UK citizens and the possibility of democracy being undermined at both the domestic and international level. However, is disinformation part of information warfare?

While there is currently no legally accepted definition of information warfare, it is typically conceptualised as the use and management of information to pursue a competitive advantage, including offensive and defensive efforts. States refrain from clearly labelling such operations as internationally wrongful acts under international law, and the operations are characterised as taking place below the threshold of an ‘armed conflict’. They are thus left within a ‘grey zone’ which lacks certainty and effective regulation and avoids international condemnation.

The paper focuses on the inherent legal problems within international law and seeks to highlight the need to develop a unified multi-stakeholder response that is based upon shared ‘norms’ and includes the social media platforms that ultimately facilitate how information is disseminated. The paper addresses the need for effective domestic regulation and explores individual countries’ regulatory challenges. However, persistent challenges remain: how can we ensure that both domestic and international regulation will be functional, now that information is regarded as a high-value commodity that can be deployed as a ‘weapon’?


A commons for online learning: Supporting civic data protection education in the digital classroom

Session: Legal education, regulation and technology-enhanced learning

Janis Wong, Tristan Henderson, Kirstie Ball, University of St Andrews, St Andrews, United Kingdom

Abstract

COVID-19 meant that UK higher education institutions had to implement online teaching models overnight, without time for due consideration of appropriate data protection practices or impact assessments. This not only affects students’ and staff’s university experience, but also how their institution processes their personal data as data subjects under the General Data Protection Regulation. As lecture capture, tutorial recording, and educational surveillance become implemented and, in many cases, mandatory, students and staff may less willingly participate in classes, creating a chilling effect on education. The GDPR offers solutions such as applying fairness and transparency principles and exercising data subject rights. However, these solutions are insufficient given that data subjects have no choice but to accept their institutions’ terms or be locked out of academia, that group rights receive little consideration under the GDPR, and that power imbalances exist between data subjects and their institutions.

Given data subjects’ lack of autonomy over their online learning personal data, our paper introduces a data protection-focused data commons, a collaborative space for resource sharing and contribution to support data subjects’ personal data preferences. We develop an application for existing educational platforms that accounts for best practices for tutorial recording, learning analytics, and educational technology. Further, we test the application on students to measure its effectiveness for encouraging the co-creation of better data protection solutions. Our paper demonstrates how a data protection-focused commons can balance the implementation of educational technologies with the need to deliver online learning to benefit students’ and staff’s academic experience.


Unwrapping the protection for celebrities in Passing off

Session: Digital cultures

Hiroko Onishi, Kingston University, Kingston Upon Thames, United Kingdom

Abstract

This paper examines, through the tort of passing off, the mechanism of providing legal protection for the ‘status’ of ‘celebrities’.

The author argues that providing adequate protection for the ‘status’ of celebrities has become significant on the following grounds: (i) the commercial and marketing value of celebrities’ names, images, fame and likeness has come to play an integral part in business; (ii) the means by which the commercial attractiveness attached to ‘celebrities’ can be misappropriated have dramatically increased as a result of technological advancement; (iii) the platforms that can help achieve the ‘status’ of ‘celebrity’ have also drastically increased and diversified as a result of the rapid expansion of fast-growing video-sharing social networking sites, such as YouTube and TikTok, and photo- and video-sharing social networking sites such as Instagram.

The paper argues that, as misappropriation can now be achieved very easily and the likelihood of non-celebrities obtaining the ‘status’ of ‘celebrity’ has increased tremendously, it is time for us to revisit how the law can provide appropriate protection for ‘celebrities’ in both the traditional and non-traditional senses when their status has been misappropriated by a third party.


Fascinating policing technologies & how to oversee them: a three-pillar approach to achieving trustworthy use of emerging technologies in UK policing

Session: Human rights and technology

Marion Oswald, Northumbria University, Newcastle, United Kingdom

Abstract

There is no shortage of ethical standards for AI and data analytics; but most have yet to be translated into operational guidance (Babuta, Oswald and Janjeva, 2020).  Frameworks tend to focus upon project methodology in an attempt to make it ‘more ethical’ (Data Ethics Framework), potentially obviating personal responsibility for assessing the validity and wider consequences of AI objectively using all relevant evidence and factors.  But no amount of ethical discussion can alleviate the inconvenient truth that policing technology innovation must be within the boundaries of the law and good science before further consideration of ethics is valuable.

The policing technology landscape has been described as ‘very confused’ with ‘no single regulatory voice’ and much uncertainty as to whether the police have ‘sufficient public trust to take things forward on their own without being accused of marking their own homework’. Themes emerging from proceedings of the West Midlands Police data ethics committee (which the author chairs) demonstrate that the technical and statistical aspects of data analytics should not be isolated from the legal, contextual, operational and ethical considerations, as each will influence the other. 

This paper proposes a three-pillar approach to achieving trustworthy use of emerging technologies in UK policing: first, governing law plus guidance/policy interpreted for the relevant context; secondly, standards, both ethical standards attached to personal responsibility and scientific standards; and thirdly, the need for people at all levels within the policing body who are committed to accountability; all of which should be subject to rolling independent oversight.


Tackling Tech Through ‘Trust’: Regulation Overload?

Session: Digital, cloud and Internet regulation and governance

Kim Barker, Open University, Milton Keynes, United Kingdom

Abstract

Virality and ‘trending topics’ are two of the modes by which social media and online content become part of common conversation. But that does not mean that the content is accurate, correct, or trustworthy. In the era of user-generated content and misleading information, our ability to make rapid assessments of what is believable and verifiable has never been more important. Lawyers, politicians, reporters, and influencers are all grappling with notions of regulation and trust. Public perceptions are being challenged more now than ever, in part due to information overload, but also due to disingenuous actions and content by users, by platforms, and by their self-regulatory mechanisms. Do we trust platforms and platform providers to regulate their spaces and users? Should we trust platforms to regulate? Do the EU Digital Services Act proposals or the UK Online Harms proposals address issues of trust and regulation?

This paper critiques the developments within the UK and the EU to assess whether it is possible to tame our digital overlords, the social media platforms. In light of the content crises, and the lack of trust in both content and the self-regulation of online platforms, the legislative and policy agendas have seen a raft of proposals come to fruition. In the debates surrounding platforms, regulation, self-regulation of technology, and online safety, little attention has been paid to ‘trust’. This paper assesses whether these proposals go far enough in addressing the missing element of ‘trust’. Will these proposals – which represent a significant shift in online regulation – allow us to tame our digital overlords?


Reform Article 22 of the GDPR to Make Way for Ethical Artificial Intelligence

Session: Future technologies and law

Chloe Haden, University of Hertfordshire, Hatfield, United Kingdom

Abstract

Artificial Intelligence (AI) has brought significant and unprecedented challenges to human rights, contesting existing data protection principles and legislation. It is imperative that AI be developed in an ethical manner, and concern has been raised that current protections are insufficient. Currently, the GDPR refers explicitly only to automated decision-making and profiling; however, it can be interpreted to cover other aspects and systems of AI. Whilst the literature on interpreting the extent of the ‘right to explanation’ contained within Articles 13-15 GDPR is vast, this paper contributes by suggesting that the best approach to encouraging ethical AI is reform of Article 22 GDPR, in addition to the introduction of a separate legislative framework for AI and data protection. In this respect, it is argued that the GDPR is insufficient in regulating AI, and that the specific provisions included within Article 22 are too vague or do not work in practice. Reform of the GDPR could therefore provide not only stronger protections but also clarity, encouraging the ethical progression of AI. The unique challenges brought by AI are complex and ultimately require specific rules and strict regimes to address the distinctive issues this technology brings to society. Decisions should be made in light of the rapid progression and invasion of AI into daily life, ideally through the adjustment and introduction of legislation, to address and encourage the progression of ethical AI, rather than leaving it to conform only to the wide and vague scope of general data protection legislation.


Constitutionalising the prohibition of mass surveillance in the EU: Judicial impulses from the data retention saga

Session: Privacy and surveillance

Edoardo Celeste, Giulia Formici, Dublin City University, Dublin, Ireland; University of Parma, Parma, Italy

Abstract

Mass surveillance is one of the most feared prophecies of the twentieth century that has unfortunately become reality. In 2013, Snowden revealed that in the aftermath of 9/11 mass surveillance practices were common among the law enforcement and intelligence agencies of many powerful countries. These revelations contributed to triggering a series of legal actions in Europe which aimed to contest such practices. However, neither the national and supranational constitutional texts adopted after WWII nor more recently adopted instruments that enshrine an autonomous right to data protection explicitly prohibit the adoption of bulk and indiscriminate surveillance methods to protect interests such as national security. This paper reconstructs the EU judicial saga on data retention and posits that through these judicial impulses we are witnessing a progressive constitutionalisation of the prohibition of mass surveillance in the EU. We argue that this principle and its derogations are gradually acquiring more defined contours and are becoming generally applicable across the commercial and national security sectors. This argument will also be supported by an empirical analysis of a series of recently emerged Internet bills of rights, non-binding civil society declarations that explicitly articulate the right to data protection as encompassing the principle of prohibition of mass surveillance. This paper will then conclude by contextualising this phenomenon in the framework of digital constitutionalism, arguing that the constitutionalisation of the prohibition of mass surveillance represents a necessary step to adapt the constitutional ecosystem to the new challenges of the digital society.


Framing Accountability in B2G Sharing Beyond Data Protection: Businesses’ Corporate Digital Responsibility

Session: Sustainability, energy, technologies and law

Giulia Schneider, Sant’Anna School of Advanced Studies, Pisa, Italy

Abstract

The pandemic has shown the vital nature of business-to-government (B2G) data transfers, not only in respect to health data but also to mobility and pollution data. This contribution starts from the acknowledgment that the emphasis on the European Commission’s objective of unlocking private datasets for the “public good” has not been followed by adequate consideration of the subsequent moment of the performance and execution of B2G sharing agreements in a manner that is transparent and accountable to the public that should benefit from them.
In light of some of the principles newly proposed by the High Level Expert Group on Business-to-Government Data Sharing, the analysis demonstrates that in the context of the “systematic use” of private-sector data by governments, an integrated approach to accountability is needed in order to design private-public collaborations established for the attraction of private data resources to the public sector. This integrated approach suggests the need for “accountability shifts” from the public to the private stakeholders that are party to B2G sharing agreements. Accordingly, the particular nature of sharing’s recipients and the purposes for which data are pooled should push businesses to heighten their “corporate digital responsibility” tasks. As this study demonstrates, these responsibility tasks directly stem from businesses’ social accountability duties under the more general corporate social responsibility (CSR) framework and the more specific data protection accountability obligations.


Complementing competition law for the digital economy: The Digital Markets Act

Session: Digital, cloud and Internet regulation and governance

Aysem Diker Vanberg, University of Greenwich, London, United Kingdom

Abstract

Digital platforms such as Amazon, Apple, Facebook and Google have become an intrinsic part of our lives and are amongst the top 10 global companies in terms of market capitalisation. However, there is unease as to the regional and global concentration of power with a limited number of large firms. In this regard Article 102 of the Treaty on the Functioning of the European Union (TFEU), which deals with the abuse of dominant position, plays a significant role. 

Competition regulators around the globe are increasingly calling for new approaches to the regulation of digital markets. The European Commission has been proactive in relation to the regulation of digital markets.  In December 2020 the Commission adopted its long-awaited proposal for the Digital Markets Act (DMA), which will change how digital platforms are allowed to operate in the EU. 

Against this backdrop, this paper will analyse whether traditional competition law tools and practices, in particular Article 102 TFEU, are capable of effective application in the digital era. The paper adopts the view that the current competition law framework is flexible enough to adapt to the new digital environment; however, it needs to be complemented by other forms of legislation. Arguably, the DMA will complement the enforcement of competition law at EU and national level by tackling unfair practices by gatekeepers that either fall outside the scope of the existing competition rules or cannot be effectively addressed by EU competition rules.


There and Back Again: How Target Market Determination Obligations May Incentivise Consumer Data Profiling

Session: Human rights and technology

Zofia Bednarz, Leo Wu, University of New South Wales, Sydney, Australia

Abstract

Digitalisation has had a profound impact on financial services, with increasingly precise data profiling of consumers being one of the drivers of profit for the industry in the digital age. However, data profiling brings about harms to consumers, ranging from privacy breaches to unfair pricing, discrimination, and exclusion of vulnerable consumers. It can be particularly problematic in financial services due to the consequences it has on consumer access to financial products.

In recent years there has been a paradigm shift in the area of consumer protection in financial services, moving beyond disclosure as the sole means of protecting weaker parties. Financial services providers now have an obligation to target consumers for whom the financial products offered will be appropriate, according to consumers’ needs and their ability to bear the financial risk. However, there is a risk that the obligation to determine the target market for the products will further incentivise data profiling by financial services providers.

Against this background, this paper focuses on the EU MiFID II product governance requirements and the Australian product design and distribution obligations. The purpose of the obligation to determine the target market for financial products is consumer protection, but at the same time it promotes the use of big data and algorithmic decision-making. If these frameworks fail to strike a balance between (surprisingly) competing interests of consumer protection regarding the provision of appropriate financial products and the use of consumers’ data in digital profiling, they may backfire, just as disclosure duties have previously.


Competing ideas of the individual under the General Data Protection Regulation

Session: Privacy and surveillance

Katherine Nolan, London School of Economics and Political Science, London, United Kingdom

Abstract

The General Data Protection Regulation (GDPR) sets out to secure the rights and freedoms of the individual, the same individual who is said to be at the heart of the European Union’s fundamental rights mission. But who is that individual? This paper offers an account of some of the historic, legal and conceptual factors which inform the central position of the individual under the GDPR, and the way in which that individual is conceived. In particular, this paper addresses two influences upon the GDPR. First, it considers the shaping influence that diverse notions of privacy have had upon the GDPR, and how the power disparities which early privacy rights sought to safeguard against have been translated into questions of individual autonomy and fairness under the GDPR. Second, this paper considers the place of the GDPR within the wider European Union context—its centrality to both the European Union’s economic order and its fundamental rights mission, and the tension between the two. By tracing these influences upon the conception and role of the individual under the GDPR, this paper seeks to make explicit some of the implicit assumptions and choices underlying the regime. I suggest that only with the acknowledgment of such assumptions and choices can we question them, and engage critically with the capacity of the GDPR to meet its objectives and to address structural and systemic abuses in today’s digital environment.


Breaching the ‘Dam’ of Information Governance? The Potential Legacy of Rapid Data-Driven Responses to Covid-19

Session: Future technologies and law

Rachel Allsopp, Marion Oswald, Northumbria University, Newcastle upon Tyne, United Kingdom

Abstract

The past year has witnessed rapid development in the use of data-driven approaches to Covid-19. Indeed, the pandemic has necessitated that organisations – both public and private – move at pace, and the role that data has come to play in this emergency response cannot be overstated. We have seen a spectrum of data-driven responses, ranging from purpose-built technological solutions to, more fundamentally, the use of mass datasets to inform public policy decision-making, as illustrated in the government’s ‘data not dates’ approach to easing lockdown. Of necessity, organisations have rapidly adapted priorities, overcoming pre-existing ‘barriers’ in relation to data-sharing; information governance processes have been fast-tracked in the interests of responding swiftly to this emergency. As one participant observed, a ‘dam has been breached’ regarding the perceived barriers of data protection.

In this paper, we pause to reflect on the legacy of this rapid development and ask what it means for future information governance, by reference to initial findings of the AHRC-funded Observatory for Monitoring Data-Driven Approaches to Covid-19. Undoubtedly, many benefits have been realised by organisations who, naturally, are eager to maintain momentum. However, we question what the long-term impact of this rapid innovation will be, outside an emergency context, from a legal, ethical and social perspective. What role will information governance serve in a post-Covid-19 data-driven landscape if the ‘dam’ has been breached? As norms regarding sharing and use of, often sensitive, data appear to be shifting, the need for mature information governance frameworks is perhaps more pressing than ever.


Not so innocent: Conglomerate mergers as a means of solidifying data ecosystem market power

Session: Digital, cloud and Internet regulation and governance

Peter van de Waerdt University of Groningen, Groningen, Netherlands

Abstract

Conglomerate mergers, mergers involving companies which operate on different markets, have long been thought to be ‘innocent’. Under the European Commission’s Non-horizontal Merger Guidelines, they are generally considered unlikely to significantly impede effective competition. They do not eliminate any direct competitors, and may in fact give rise to new efficiencies to the consumer’s benefit. Only when such mergers could lead to foreclosure or coordinated effects among the remaining market players should they be handled with greater care.

However, this analysis does not hold true for online data-driven conglomerates: “data ecosystems”. Such data ecosystems derive market power from the collection, combination, analysis, and monetization of personal data from as many divergent sources as possible. Accordingly, conglomerate mergers are an essential strategy to solidify and strengthen such market power: new users are brought into the ecosystem through new entry points, and with them more of their personal data is integrated into the ecosystem. Ultimately, the growing reach of the ecosystem may even lock in consumers and their data.

Since personal data functions as a shared resource of all the branches of ‘digital conglomerates’, the (non-)horizontality of a merger between online companies becomes essentially irrelevant. In fact, since expansion of the ecosystem is such a central pillar of its market power, it is submitted that online conglomerate mergers actually deserve higher scrutiny rather than lower. New theories of harm must be envisioned to catch the full effects which online conglomerate mergers can have, both on competition and data protection.


Narratives of privacy expectations in the age of platforms

Session: Digital cultures

Petros Terzis, Emma Nottingham, Martina Hutton University of Winchester, Winchester, United Kingdom

Abstract

The reasonable expectations of privacy test has been one of the most prevalent concepts in data protection and privacy jurisprudence. Although the test was developed in the pre-Internet era, due to non-Internet related technology (i.e. the camera) and for non-Internet related purposes (i.e. the protection of celebrities), it found its way into modern privacy law. However, little attention has been paid to the qualitative differences between the factual background that gave rise to the concept on the one hand, and the modern challenges for privacy on the other. For it is by no means self-evident that in the age of platforms, people’s expectations of privacy are contextually similar to what judges and lawyers had in mind when deliberating on this test.

By building on these discourses and on pertinent case-law, this socio-legal project brought together groups of research participants in an attempt to shape the picture of ‘reasonable expectations of privacy’ today and examine its applicability in the platform era. Funded by the HDI Network (UK), we organised focus groups to collect and combine narratives on people’s expectations of privacy. Our findings suggest a picture much more complicated and mystifying than the current theoretical debate and legal reality seem to accept. The final project report is due to be published in May, but by the time of the conference we will be in a position to present our preliminary findings.


The Invisible Hand/Mind within the Black Box

Session: E-commerce, m-commerce and e-governance

Israel Cedillo Lazcano Universidad de las Américas Puebla, San Andrés Cholula, Mexico

Abstract

Each financial crisis has been human-made, not an outcome of a metaphysical invisible hand, and this fact will not disappear even if one relies on artificial interpretations of Homo oeconomicus. Most of the time we do not stop to think about the risks of AI playing chess, selecting genetic algorithms or turning on the lights in domestic environments; however, if AI is employed to help make decisions in portfolio structuring or monetary policy, then we need to understand how it reaches those decisions and take different views of Calabresi’s cathedral. Furthermore, we are assuming that the parameters for our financial black boxes are human financial intelligences, but can we be certain about it? The answer to this question is rather relevant given that the bias behind financial algorithms (and, consequently, behind the Intellectual Property Rights (IPRs) underlying financial infrastructures) could set the cornerstone for the next great financial crisis. Risks and their contagion will not depend on Eugene Fama’s rationality, but on the interaction among infrastructures, which are based on “irrational” coded algorithms and which, in turn, will not react in a coordinated way. Building upon this fact, my aim in this research is to highlight the risks behind the financial black boxes that could give form to the next generation of financial infrastructures and their systemic relevance, and to propose a set of standards to harmonise these structures so as to coordinate actions and avoid contagion in Machine-to-Machine (M2M) interactions.


Remote Teaching Today and Tomorrow: an empirical study on copyright perception by educators in Italy, the Netherlands, and the UK

Session: Intellectual property law and technology

Giulia Priora1, Guido Noto La Diega2, Bernd Justin Jütte3, Guido Salza4 1Sant’Anna School of Advanced Studies, Pisa, Italy. 2University of Stirling, Stirling, United Kingdom. 3University College Dublin, Dublin, Ireland. 4University of Trento, Trento, Italy

Abstract

It may sound trite to say that in understanding today’s reality of remote education lies the key to unleashing its potential and ensuring its sustainability for the future. Yet, the nexus between education and digital technologies opens a Pandora’s box of cross-disciplinary questions and dilemmas. The legal implications of remote teaching, especially its private law aspects such as copyright law, might – more often than not – be perceived as a secondary concern by educators, students, and society at large. However, the impact of property rights in digital assets should not be underestimated, especially during the epochal transition schools and universities are experiencing. In fact, copyright law not only directly addresses the creation and use of educational materials, but also profoundly determines the functioning of digital platforms, tools, and practices. Building on a previous study on copyright issues in the early stage of the COVID-19 pandemic, we present an empirical analysis of educators’ copyright perception in the consolidating reality of remote education. A representative sample of universities in Italy, the Netherlands, and the UK has been selected, and a structured survey circulated among their faculties. Adopting a socio-legal methodology, the study explores the digital use of protected content by educators, their awareness of legal aspects and potential liabilities, and their experiences with copyright in the digital classroom. The study sheds light on the impact of copyright law on current and future remote teaching practices, exposing possible discrepancies between copyright’s purposes and its effects in building the post-pandemic university.


Data protection by design: lessons from the pandemic

Session: Privacy and surveillance

Julija Kalpokiene Vytautas Magnus University, Kaunas, Lithuania

Abstract

In order to combat the spread of the coronavirus, many countries raced to introduce effective measures against Covid-19. Among such measures were various methods of contact tracing (including track and trace apps and other ways of registering individuals and their whereabouts or movements). All this has meant that vast amounts of personal data were collected and processed.

This paper will look at the tensions between, on the one hand, the right to privacy and the principles of data protection, which must be applied despite the pandemic, and, on the other hand, the public interest in collecting personal data during a pandemic, and will highlight what lessons should be learnt for the future.

The paper will examine the measures applied during the pandemic and the implications these could have, or did have, for the protection of personal data. As a case study, the measures of compulsory customer registration at bars and restaurants adopted by the Republic of Lithuania and the Lithuanian contact tracing application “Karantinas” will be analysed. Notably, the developers of the app and the Lithuanian National Authority for Public Health have recently been fined by the Data Protection Authority for data protection breaches in collecting and processing personal data via the app.

This paper aims to highlight the issues that arose during the pandemic before offering a neat summary of the lessons to be learnt. The focus will be on showing the importance of data protection by design, offering insights from the experience gained during the pandemic.


Better cybersecurity, better democracy? The public interest case for amending national and international legal instruments criminalising cybercrime.

Session: Human rights and technology

Audrey Guinchard University of Essex, Colchester, United Kingdom

Abstract

Criminalisation is an exercise in balancing conflicting principles and values: protecting society against harm vs complying with human rights so that the criminal law does not become an instrument of oppression.

When the UK Computer Misuse Act (CMA) was enacted in 1990, the digital world was very different to ours. Since then, the speed and scale of innovation have transformed our world. It is time to reassess the balancing exercise underlying the UK CMA. Has it achieved its aim of protecting society against harm while still ensuring compliance with human rights?

I argue that it is failing. The conflation of hacking with crime, notably unauthorised access, has led to the criminalisation of a wide range of legitimate security practices and newsgathering practices. Little protection is available to the actors implicated: security researchers, whistle-blowers, journalists, and those disrupting news gathering (Wikileaks).

The UK CMA has become a threat to security and democracy, for reasons hardly foreseen in 1990, and largely ignored in 2006 and 2015 when the CMA was amended. The same assessment can be made of the Convention on Cybercrime No. 185, which has de facto become the international treaty on cybercrime, and of Directive 2013/40/EU on attacks against information systems.

To reinstate the balance, I propose adding a public interest defence to the computer-focused offences of the UK CMA, Directive 2013/40/EU, and Convention No. 185. Acting at both national and international levels would remedy the fragmented governance of cybercrime and its negative impact on cybersecurity and democracy.


Towards Evolutionary Regulation of Post-Crisis Surveillance

Session: Privacy and surveillance

Ivo Emanuilov, Katerina Yordanova KU Leuven, Leuven, Belgium

Abstract

The crisis narrative used to describe the societal and legal implications of Covid-19 has shown the deficiencies of the institutional design and resilience of the post-industrial society. Prominent technologies, once expected to solve a crisis quite like the one we are currently facing, have failed miserably to live up to these expectations. What is more, governmental and industrial actors alike have been quick to deploy these very same technologies for purposes of facial recognition, crowd control and continuous surveillance under the, in principle, valid heading of public health safety and security. This paper explores the fundamental question of how this re-purposed infrastructure can be rolled back to its original state, and whether its architecture can accommodate such evolution. We argue that the institutional and technical design of the digital infrastructure we use to mitigate the effects of (global) crises is ill-suited to meet the challenges of long-term evolution. Furthermore, returning to the pre-Covid status quo could prove challenging also because it is almost impossible to determine what exactly this status quo was for each individual State. Essentially, we are facing infrastructure that is by design difficult to remove and whose utility is conveniently justified and reinforced by crisis narratives.


What’s up with digital markets?: Discussion from the perspective of economic regulation

Session: Digital, cloud and Internet regulation and governance

Mehmet Unver University of Hertfordshire, Hatfield, United Kingdom

Abstract

It has recently been acknowledged that digital markets bring about various risks to consumer welfare alongside promoting it. Debates on the regulation of digital markets have resulted in recent proposals in the EU and the UK: respectively, the Digital Markets Act (DMA) Proposal and the Competition and Markets Authority’s advice of December 2020.

This study aims to investigate each proposal from the perspective of economic regulation and its application to digital markets. In so doing, a three-pronged analysis is made, examining how digital markets regulation is contextualised with respect to digital services and service providers; what criteria will apply to designate the behaviours that need to be addressed; and what tools and remedies are contained to correct such behaviours.

Overall, while the principles of economic regulation seem to have been distilled for digital markets with substantial room left to the regulator in the UK, this flexibility is replaced with greater clarity as to the regulatory roadmap to be followed in the EU. The three-pronged analysis made through this paper demonstrates the unsubstantiated links between ‘context’, ‘criteria’ and ‘containment’ within the latter.

The paper concludes that the pigeonholes in the EU’s approach potentially cause a fragility that would result in further regulatory engineering, which would not serve well the regulation of ever-evolving digital markets. It is therefore proposed that the DMA Proposal be amended to include a process of ‘confinement’ with a view to refining the contained remedies towards the identified sources of the problem.


Cryptocurrency According to Zimbabwe: Examining Cryptocurrency Regulation through a non-Western Lens

Session: Development and rural challenges

Emelie Pepito Queen’s University Belfast, Belfast, United Kingdom

Abstract

Many nations are beginning to understand, use and regulate emerging blockchain-based cryptocurrencies. In sub-Saharan Africa, economic instability led to the adoption of mobile money in Kenya as early as 2007. In 2018, a United Nations publication claimed ‘Africa Could Be the Next Frontier for Cryptocurrency’. Recently, PayPal’s acceptance of cryptocurrencies, together with the company’s history of facilitating remittance payments into developing nations, will contribute to cryptocurrency growth across sub-Saharan Africa. However, the primary influences shaping future global regulatory frameworks remain West-centric. As a result, issues unique to sub-Saharan developing nations are under-represented in the technology’s regulation and subsequent development and use. This paper analyses Zimbabwe as a case study to understand what cryptocurrency regulation should consider, from the perspective of a developing nation. Zimbabwe is used because its recent national currency failed and it briefly banned the US Dollar, which drove up cryptocurrency use. In a Zimbabwean context, the paper analyses development issues tied to Jean-Jacques Laffont’s work on institutional differences between developed and developing nations; blockchain-based cryptocurrency’s “immutable record-keeping” and its potential to reduce misappropriated development funds; financial inclusion; and the entrepreneurial development positively associated with cryptocurrency. Due to country differences, these issues are not treated the same or given the same weight in Western approaches. Also, developmental goals such as financial inclusion and regulatory AML/KYC requirements form an ideological clash between regulation and development, which could blunt cryptocurrencies’ function in reaching the unbanked in Zimbabwe.


Online lectures from (EU) legal perspective – new challenges for copyright and personal data protection during the COVID-19 pandemic

Session: Legal education, regulation and technology-enhanced learning

Martin Samek, Jana Soukupová Charles University, Prague, Czech Republic

Abstract

In 2020, the world saw a rise in online lectures as schools in many countries were closed for most of the year due to Covid-19 pandemic measures. However, providing online education brings its own set of legal questions and challenges, mostly in the area of copyright and personal data protection. This paper therefore aims to answer questions such as: Can students record and upload their lectures? To what extent can teachers process students’ data during online courses? Can the school force its teachers and students to use webcams during online classes? Can a lecture be a copyrighted work, and what are the legal implications of that? The issues concerning personal data processing are even more pressing, given that many students are minors.

In its analysis, the paper focuses on relevant EU and Czech legislation, mainly the General Data Protection Regulation, Directive on copyright and related rights in the Digital Single Market, Directive 2001/29/EC and the Czech Copyright Act. The outcome of the paper is a risk analysis of the issues mentioned above. First, the paper discusses the copyright concerns of online lectures, and then it goes into detail regarding the processing of personal data and the use of copyrighted works.


Patents, pandemics, and people: on the role of patent flexibilities in innovation and equitable access to healthcare

Session: Intellectual property law and technology

Joanna Al-Qaddoumi University of Sussex, Falmer, United Kingdom

Abstract

With the growing research around COVID-19 and the highly anticipated development of a vaccine, questions concerning patent rights, innovation, and access to healthcare are inevitable. Will pharmaceutical companies voluntarily license their products? Would under-developed countries be able to access these vaccines and medicines? Will the new norm be government-funded research into health-related matters that pose a threat to the welfare of society? What role do patent ‘flexibilities’ play in the healthcare innovation realm? 

This research paper sets the context in which patent law operates and the theoretical justifications that underpin intellectual property (the common ‘driver of innovation’ and ‘natural rights’ debates). In its discussion of patent right justifications, it draws from political economy theories to set the foundation for arguing for a universal socialist approach to the production and distribution of vaccines. It critically examines the patent flexibilities set out in the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) and the Doha Ministerial Declaration on the TRIPS Agreement and Public Health (Doha Declaration). It adopts a socialist theoretical perspective to analyse patent flexibilities while maintaining the view that a socialist approach will enhance the equitable distribution of vaccines and medicines. Examining patent flexibilities through this critical evaluative lens allows for a more nuanced understanding of their role in securing equitable healthcare access for the public. It aims to evaluate how the current system attempts to strike a balance between access and innovation – yet it remains to be seen how this will play out in the real world.


The Security Union Ecosystem: An Overview of Information Management in the Area of Freedom, Security and Justice – 10 Years On

Session: Human rights and technology

Lauren E. Elrick University of Groningen, Groningen, Netherlands

Abstract

The European Commission’s 2020-2025 Security Strategy (COM-2020-605) was released in mid-2020. Noting that the COVID-19 pandemic has ‘reshaped our notion of safety and security threats’ (p.1), it seeks to develop a ‘strong European security ecosystem,’ (p.20) implementing a ‘whole-of-society approach’ (p.2) to security. This paper therefore seeks to identify how this ‘security ecosystem’ concept is understood. To work successfully, the ‘security ecosystem’ requires effective cooperation and information exchange (p.21), not only between Member States, but also third countries, EU institutions/agencies, and the private sector. Law enforcement and migration management are two fields where this is essential. In 2010, an ‘Overview of Information Management in the AFSJ’ (COM-2010-385) was conducted, identifying how personal information was collected, stored and exchanged at EU-level for these fields. It informed citizens: (a) what personal data was processed and exchanged, (b) by whom and (c) for what purposes, while also establishing a set of guiding principles for future measures. Ten years later, information management within the AFSJ has undergone extensive changes due to legislative revisions (SIS/VIS) and new systems being developed (EES/ETIAS). This paper provides an updated overview, identifying how the past decade has changed the legislative regime and considering whether these guiding principles (safeguarding fundamental rights/necessity etc.) were respected. Such an update is essential in order for citizens to adequately understand who is using their personal data, and how this affects their rights (privacy/data protection). Consequently, it seeks to identify how these measures fall within the ‘security ecosystem.’


Learning and Education in a Datafied world. Do children have a right to opt-out?

Session: Legal education, regulation and technology-enhanced learning

Emma Nottingham, Caroline Stockman University of Winchester, Winchester, United Kingdom

Abstract

The influx of digital technologies in the modern-day school environment has led to vast amounts of data being collected about children through the process of learning. Simply being enrolled in school exposes children to data collection. Digital education is increasingly pervasive, underpinned by data collection and data processing, often to the financial advantage of third-party companies. The national lockdowns during the Covid-19 pandemic exacerbated the need to rely on digital forms of learning. This digital dependency also meant that there was no alternative to learning through digital platforms that were potentially collecting and harvesting the data of their users. Although the right to opt out is available in principle, a child who opts out of digital learning is at risk of not getting an education at all. This paper explores the notion of the legal ‘right to opt out’ of digital platforms in the context of the school learning environment. It suggests that a right to opt out already exists in the law and that children, especially 16- and 17-year-olds, should be able to opt out of digital learning if there are legitimate and justified concerns about the platform being used, such as concerns relating to their privacy or how the platform collects, stores and uses data. This paper further suggests that support should be offered to a child who wishes to exercise the right to opt out, which should include the availability of a sufficient alternative such as a safer digital platform or a non-digital alternative.


On the path to the future: Mapping the notion of transparency in the upcoming EU regulatory framework for artificial intelligence

Session: Future technologies and law

Ida Varošanec University of Groningen, Groningen, Netherlands

Abstract

Since 2018, the European Union [EU] has initiated a process for the regulation of artificial intelligence [AI] through the development of a legislative and ethical framework. The latter features soft-law language (guidelines). We should consider the potential implications of such a division.

The European Commission [EC] will soon publish a legislative proposal for AI. This article will map out current initiatives leading up to this stage and identify potential approaches therein by identifying key elements in each document chronologically and observing how they have evolved. One of the aims of this mapping analysis is to identify how the policy-maker is viewing issues of transparency.  

This paper will consider the value of transparency as essential to promote trust in AI-driven systems vis-à-vis the barrier of intellectual property [IP] rights in underlying processes. The regulatory disconnect between the public interest of promoting transparent AI systems – envisaged as a part of the ethical framework – and the traditional narrative of secrecy posited by IP law – featuring in the legislative framework – will be examined. An analysis will be conducted against the backdrop of the evolving EU AI regulatory framework to see whether this disconnect is already addressed in the EC’s suggested strategy. The paper will argue that the dangers of keeping ethical considerations as values without a legal force justify further reflection. The transparency-IP dichotomy might be a perfect example of this since the potential enforcement of rights between the individuals with an interest in transparency of AI and companies with IP rights might be disproportionate.


Applicable law to infringement disputes of the right to trade names

Session: Intellectual property law and technology

Helena Pullmannová Masaryk University, Brno, Czech Republic

Abstract

When an entrepreneur’s right to his trade name is violated, it is important to know under the law of which state the claims of the injured entrepreneur (i.e., claims arising from a non-contractual obligation) will be assessed, and for which related questions the law thus determined is also decisive.

These issues are analysed mainly in the light of Regulation (EC) No 864/2007 of the European Parliament and of the Council of 11 July 2007 on the law applicable to non‑contractual obligations (Rome II). Among other things, attention is also paid to the mosaic approach to determining the applicable law(s), especially the question of how to decide whether an infringement committed online was committed on the territory of a particular state.

Subsequently, the criteria formulated in Rome II are compared with the criteria listed in the CLIP Principles (soft law) in order to determine how far the European Union’s point of view differs from the (European) academic point of view.

For the sake of completeness, the presentation mentions situations where Rome II is not used to ascertain the applicable law. The rules for determining the applicable law under Czech law are then analysed in detail.

Moreover, the presentation opens with a general consideration of the importance of classifying the “examined conduct” under the lex fori as a legal act giving rise to a non‑contractual obligation arising from the infringement of the right to a trade name.


Data, socially distanced: cumulative data disclosure and data narratives during Covid

Session: Privacy and surveillance

Burkhard Schafer1, Emma Nicol2, Wendy Moncur2, Leif Azzopardi2, Amal Htait2, Jo Briggs3, Daniel Carey3 1The University of Edinburgh, Edinburgh, United Kingdom. 2University of Strathclyde, Glasgow, United Kingdom. 3Northumbria University, Newcastle, United Kingdom

Abstract

The paper discusses methodological issues of, and some first results from, a research project into legal and technological responses to the risk of cumulative disclosure of data. Pieces of personal data shared and generated across multiple platforms can collectively pose risks to reputation and security, for individuals, their employers, and their nation. As part of this project, we conducted qualitative interviews with 26 participants, which examined their personal digital ecosystems. Crucially, the beginning of the study coincided with the outbreak of the COVID pandemic. This not only required changes to interviewing modality; it also meant that the interviewees were experiencing an unprecedented need to adjust their own online presence and working patterns. The theoretical lens through which we analyze the interviews is that of “data narratives” as a sense-making activity. But how can people construct such narratives in the face of unprecedented disruption of their lives? What value is there in concepts such as “context collapse” when it is no longer a hidden feature of social media use, but becomes the most visible aspect of daily experience, as the home also becomes the office? We discuss themes associated with digital traces, Lockdown-specific changes in practice, and the metaphors we may want to use to make sense of them – including those from the pandemic discourse itself. We conclude by discussing the implications of cumulative revelations for individuals and employers and the intersection of COVID-related homeworking with online practices and behaviours.


Too many moving parts: autonomous vehicles and the modeling of complex technico-legal systems

Session: Future technologies and law

Scott McLachlan, Burkhard Schafer The University of Edinburgh, Edinburgh, United Kingdom

Abstract

The increased integration of autonomous or semi-autonomous capabilities into everyday objects poses significant regulatory challenges. Autonomous vehicles are an excellent test case of such a complex system, one that plays a prominent role in the current discussion around trustworthy autonomous systems: a large number of very different sub-systems need to be integrated, complex (often international) supply chains contribute the component parts, and a use-scenario would require them to function in another complex environment, modern traffic infrastructures populated by other autonomous cars, traditional cars, pedestrians, and the physical environment. One approach to reducing this complexity is the certification of component parts. If it were possible to connect these formal properties directly to legal characteristics, the resulting system would be “law-abiding by design”. One problem with this regulatory atomism is that it may fail to account for the problems that only emerge at the integration stage. But equally, it assumes a degree of legal atomism that defies the way legal systems are built. Legal systems are at least as complex as the technology they regulate. For autonomous vehicles, we find a multitude of legal initiatives trying to respond to the technology. Based on a systematic literature review combined with a mapping exercise of the regulatory environment, we explore the potential of bringing together recent approaches in legal AI to depict the complex interactions within legal systems and Complex System Governance (CSG) for a more holistic approach to design-based regulation.


Criticisms of Copyright in a Fandemic: How Copyright Gets Online Fan Creative Behaviour Wrong

Session: Intellectual property law and technology

Ruth Flaherty University of East Anglia, Norwich, United Kingdom. University of Suffolk, Ipswich, United Kingdom

Abstract

Fanfiction is a type of increasingly popular user-generated content (UGC) produced mostly online for free on websites such as Fanfiction.Net, used as a method of escapism and communication. Amateur writers reuse characters, locations and plotlines from commercially successful works to bring alternative viewpoints and storylines to life. This raises issues in relation to copyright in a digital market – specifically whether any of the existing fair dealing exceptions should cover this type of behaviour, and how the CDSM Directive will apply to creators in a post-Brexit disUnited Kingdom. This is increasingly vital during a lockdown, when more activities are being undertaken online, and standard methods of income generation are blocked to creators. This paper adopts a distinctive approach, applying doctrinal and quantitative methods together to test the economic biases within copyright law. It makes several important contributions to knowledge – it analyses the fair dealing exceptions as they stand in UK law after the recent Pelham/Funke Medien cases, and suggests a potential test for the as-yet undefined s30A CDPA 1988 pastiche fair dealing exception. By using a dataset of user posts from the world’s largest online fanfiction archive (Fanfiction.Net) and sales data (Nielsen), this paper further suggests that Article 17 of the CDSM Directive contains serious misapprehensions regarding culture in the digital age. This research suggests that existing theories of copyright harm are incomplete, and there may be important social incentives and welfare benefits to permitting this type of use.


The IoT and the new EU cybersecurity landscape

Session: Cyber crime and cyber security

Pier Giorgio Chiara University of Luxembourg, Luxembourg, Luxembourg. University of Bologna, Bologna, Italy

Abstract

The article aims to cast light on how the fast-evolving European cybersecurity regulatory framework will impact the Internet of Things (IoT) domain. Taking into account the new cybersecurity strategy of the EU Commission for the digital decade, two EU policy initiatives are under scrutiny: the proposal for a NIS2 Directive, concerning network security at a higher layer of abstraction, and the forthcoming delegated acts pursuant to the Radio Equipment Directive (RED), as regards product security.

The legal analysis will investigate whether and to what extent these initiatives are adequate to cope with the manifold challenges of securing the IoT and its supply chain. Against this backdrop, the article will further discuss the benefits of a possible future horizontal legislation on cybersecurity, as opposed to the non-regulatory, voluntary approaches endorsed by many governments afraid that regulatory intervention could stifle innovation and competition. Invoked by both the EU Council and the EU Commission, the horizontal policy option on cybersecurity would consist in a combined set of ex-ante and ex-post requirements applied to producers and vendors across all IoT sectors and product categories, in order to avoid ambiguity and fragmentation in the EU’s Single Market.


Ready, Steady, Go! A techno-legal analysis of Software Maturity Assessment

Session: Cyber crime and cyber security

Archana Kumari, Sandra Schmitz, Stefan Schiffner University of Luxembourg, Esch sur Alzette, Luxembourg

Abstract

“State of the art” is generally understood as the highest level of general development (as of a device, technique, or scientific field) achieved at a particular time.

The notion is also widely used in legislation. Most recently, the General Data Protection Regulation and the Network and Information Systems Directive refer to the state of the art when determining the appropriate level of protection measures. To date, EU legislation lacks a definition of state of the art, although the notion is commonly referred to, in particular in relation to cybersecurity. This leaves software designers with the open question of how to measure whether a technology constitutes “state of the art”.

We will approach the matter of “state of the art” from a technical perspective and outline how the so-called Technology Readiness Level (TRL) assessment has been deployed in order to determine the state of the art. TRL is a means to mitigate production risks, with NASA’s TRLs being, in fact, a de facto industry standard. While TRL assessments provide a systematic evaluation process resulting in meaningful metrics, they leave an inherent gap in their definition, i.e. they do not allow for individual interpretation of maturity levels, which would include the IT product’s quality. We will show how this may result in a mismatch between legal aim and technological reality that not only poses a risk in terms of legal compliance but, as a consequence, also leads to more weakly protected systems than otherwise possible.


The New Spam: Regulating Annoyingly Persistent Notifications in Social Media, Mobile and IoT Ecosystems

Session: Digital, cloud and Internet regulation and governance

Wenlong Li University of Birmingham, Birmingham, United Kingdom

Abstract

Annoyance is deemed to be a settled marketing problem as the advertising industry embraces the ‘digital turn’. Traditional marketing tools such as spamming are disfavoured by marketers, who are now fonder of behavioural tracking as an effective medium to reach their customers. This transition was prompted by legal reforms in the EU and elsewhere that set a higher standard for unsolicited marketing. However, the medium, pattern and techno-legal landscape have changed dramatically since the anti-spam regulations were put in place. As such, annoying or aggressive marketing has never ended but has morphed into subtler forms. This paper considers how the persistent notifications prevalent in social media, mobile and IoT ecosystems – what this paper calls the New Spam (NS) – should be regulated. It contextualises the NS in the transitions from direct, remote marketing to closed, ‘appified’ ecosystems, and from unsolicited marketing to behavioural targeting. It then reviews the anti-spam law in the EU, UK and US, and analyses why these rules are less effective in capturing new forms of spam. The paper examines three avenues for regulating the NS: unsolicited communications, persistent marketing and aggressive advertising. It is argued that the NS poses unique regulatory challenges and that the existing anti-spam framework, due to its lack of technology neutrality, should be updated to accommodate new forms of ill-mannered marketing. As marketing techniques are no longer ‘remote’ to users as a result of mass appification, it is critical to reflect on the concepts, such as direct marketing and remote media, on which the existing anti-spam rules hinge.


Algorithms Patrolling Content: Where’s the Harm? An empirical examination of Facebook shadow bans and their impact on users

Session: Digital, cloud and Internet regulation and governance

Monica Horten Independent scholar, United Kingdom

Abstract

This paper reveals ways in which algorithmic decision-making on the Facebook platform has the effect of suppressing content distribution without specifically targeting it for removal, resulting in the stifling of users’ speech. At the heart of the paper is an examination of the colloquial concept of a ‘shadow ban’. This term refers to a specific scenario where users’ content is hidden or deprioritised without informing them. The paper reveals how the Facebook shadow ban works by blocking dissemination in News Feed (Facebook’s recommender system and also the name of the algorithm that encodes the process). The decision-making criteria are automated and are based on the ‘behaviour’ (activity) of the page. This technique is rooted in computer security, and raises questions about the balance between security and freedom of expression.

The paper works through the lens of the user, whose position is assessed under human rights standards. It draws on an empirical study of the user experience, conducted from November 2019 to January 2021. The study benefitted from a corpus of evidence comprising data, emails and screenshots from 20 Facebook Pages in the UK. Data from Facebook Insights was analysed to produce a comparative metric. The paper concludes with a recommendation for quality controls, potentially with a form of triage, as a vital step towards safeguarding online platforms as a forum for public discourse.


Filtering Out Expressions: The Rise of User-Generated Content and Upload Filters

Session: Intellectual property law and technology

Sevra Guler Guzel University of Hertfordshire, Hertfordshire, United Kingdom

Abstract

This paper conducts an in-depth analysis of primary sources such as EU legislation and CJEU and ECtHR case law, together with recent secondary sources, to propose an implementation of Article 17 that ensures a fertile online environment for user-generated content (UGC). As people searched for new ways to connect, stay informed and be entertained during the COVID-19 pandemic, the use of user-generated content platforms surged to record levels. These platforms became the primary venue for users to exercise their right to freedom of expression and the right to receive and impart information and ideas enshrined in Article 11 of the Charter of Fundamental Rights of the European Union (Charter). However, Article 17 of the Directive on Copyright in the Digital Single Market replaces the traditional notice-and-takedown regime with an obligation on online content-sharing service providers to employ upload filters and notice-and-staydown mechanisms. This results in upload filters automatically blocking or removing UGC without a request from rightholders and without notifying the users, thereby interfering with the users’ freedom of expression and information. Thus, to fill the gap in the literature, this paper suggests a user-generated content copyright exception to protect the non-commercial uploads of end-users, and an implementation of Article 17 that includes a filtering system respecting the fundamental rights exercised via UGC. Moreover, this paper extends the knowledge regarding the problems with online platforms to contribute to the discussions of the Digital Services Act.


Disinformation and Content Moderation in Times of Pandemics: The Lesson of European Digital Constitutionalism

Session: Human rights and technology

Giovanni De Gregorio Bocconi University, Milan, Italy

Abstract

The spread of disinformation online has not stopped in times of pandemic. Conspiracy theories around 5G and false information about Covid-19 treatments are only two examples of the health disinformation flowing through social media spaces. Despite the cooperative efforts of platforms to fight disinformation during a global pandemic, the use of artificial intelligence to moderate content has amplified the dissemination of false content at a time when reliance on good health information has been critical. The decision of Google and Facebook to limit the process of human moderation has led to the suspension and removal of accounts, but also to the spread of false content.

This troubling situation is the result of the discretion social media enjoy in deciding how to moderate content. Social media companies can select which information to maintain or delete according to standards driven by their interest in avoiding monetary penalties or reputational damage. To some extent, the result of this private-driven activity mirrors the exercise of judicial balancing and public enforcement carried out by state actors.

This work focuses on how to address the spread of disinformation during the global pandemic under a European constitutional perspective. The first part describes the role of content moderation in the spreading of false content online during the global pandemic. The second part underlines the Union’s efforts to limit platforms’ discretion while the third part underlines the role of European digital constitutionalism in providing a normative perspective addressing the challenges of the spread of health disinformation.


SOCMINT and the “effet utile” of data protection laws

Session: Human rights and technology

Jonida Milaj University of Groningen, Groningen, Netherlands

Abstract

The new motto of the European Union regarding open data in the framework of the Digital Single Market Strategy is “as open as possible, as closed as necessary”. Compliance with the GDPR is required for the re-use of public sector information when this qualifies as personal data. In contrast to the public sector, the use of open data in the private sector, especially relating to social media, is not regulated. This creates the mistaken understanding that there is no need to comply with data protection rules when processing these data, since data subjects have chosen by themselves not to protect their privacy online. This paper challenges the mainstream view by addressing the use of open data in social media in light of the principle of “effet utile” of European law. Decisions from the ECtHR and the CJEU regarding the protection of privacy and personal data in public spaces are used to understand the notions of “expectation of privacy” and “data manifestly made public”, and to contrast these with the unlimited use of open data. It is argued that data protection laws would lose their “effet utile” if there were no protection for open data published on social media.


Virtual Worlds: User Perspectives on Terms and Conditions, Community, and Copyright

Session: Intellectual property law and technology

Megan Rae Blakely Lancaster University Law School, Lancaster, United Kingdom

Abstract

This article presents the findings from a BILETA-funded study on user perspectives on virtual worlds. The study focused on a massively multiplayer online game (MMO), the Elder Scrolls Online. User perspectives were gathered through 22 semi-structured interviews and 124 survey responses. Responses came from across the globe, including the Netherlands, the United Kingdom, South Africa, Portugal, France, Ireland, the United States, and Germany. The questions covered a range of issues related to virtual worlds. The users shared their reasoning for their engagement (or non-engagement) with the Terms and Conditions. Views on copyright and ownership in a collaborative community were explored, especially for users who create content utilising other platforms like Twitch and users who engage in related creative activities such as crafting and community hosting. Long-term users also discussed the introduction of loot boxes and the role of microtransactions, of current interest to the Government in light of gambling and the protection of minor users. Additionally, users explained how relationships are built in virtual worlds; these insights are compelling during the global pandemic, when in-person socialisation is severely limited. This data demonstrates that MMO users are not a homogeneous hive mind but a diverse cross-section of the population, with a variety of methods of engagement and views, all of legal interest. The data nonetheless reveals common threads throughout that show how users of this technology interact distinctly, from their perspective, with this collaborative, creative, social environment.


Whither data portability? Obstacles to the GDPR’s newest data subject right

Session: Digital, cloud and Internet regulation and governance

Lea Racine, Karl Sykes, Janis Wong, Tristan Henderson University of St Andrews, St Andrews, United Kingdom

Abstract

The General Data Protection Regulation bolsters a number of the rights of data subjects but also introduces one brand new right – Article 20’s right to data portability. This provides data subjects with the right to request their data in a structured, commonly used and machine-readable format, but also to have these data transmitted directly from one controller to another.

In this paper we present an empirical study of Article 20(2) to show how direct transmission between controllers is poorly supported in a number of domains. First, we study the privacy notices of a large set of retailers and social network sites. We find that although portability is often mentioned, it is infeasible to directly transmit data between controllers.

Secondly, we study portability amongst academic research repositories, a domain where interoperable standards for machine-readable data exist, and so one might expect direct transmission to be feasible. We find that again there is no mechanism to support direct transfers, despite their technical feasibility. We demonstrate this technical feasibility through the development of a tool for porting data.

We suggest some possible mechanisms for strengthening portability through tools and standards. Our findings may be useful as portability becomes more popular with legislators (e.g. through the forthcoming European Digital Markets Act).


“Every Student Can Learn, just not on the same Day” – Data Protection and Cybersecurity Challenges for E-Learning Platforms

Session: Digital, cloud and Internet regulation and governance

Sandra Schmitz-Berndt, Stefan Schiffner Université du Luxembourg, Esch-sur-Alzette, Luxembourg

Abstract

When George Evans stated that every student can learn, just not on the same day, he probably did not have in mind the despair of pupils trying to access an e-learning platform.

With the COVID-19 crisis, online learning became an everyday commodity almost overnight; however, not all schools were prepared to swiftly switch from in-class to remote teaching. Concerns were raised with regard to data protection and cybersecurity, which in some cases led to the implementation of “home-made” solutions.

Taking the example of the federal state of Germany, where education is within the sole competence of the Länder, this paper will explore the functioning and technical implementation of a variety of e-learning platforms before addressing data protection concerns. We will then explore whether the NIS Directive, which foresees similar security requirements as the GDPR, is applicable to the diverse models, and outline the consequences.

In light of the acceleration of the revision of the NIS Directive due to the COVID-19 crisis, we take the example of learning platforms to outline the flaws of the 2016 Directive before we critically evaluate some aspects of the NIS 2.0 proposal of December 2020.


Article 17 of the EU Copyright Directive: aligning AI-based notice and staydown systems with monitoring obligations

Session: Intellectual property law and technology

Dr Felipe Romero-Moreno Hertfordshire University, Hertfordshire, United Kingdom

Abstract

This paper critically evaluates to what extent Article 17 of the EU Copyright Directive could be implemented in a way that would not lead to a general monitoring obligation being imposed on OCSSPs, thus respecting Article 15(1) of the E-Commerce Directive. The analysis draws on AI-related material, the case law of the Court of Justice of the EU and the European Court of Human Rights, as well as academic literature. Building on the author’s previous research, the paper argues that for AI-based notice-and-staydown systems not to result in OCSSPs’ general monitoring of user-uploaded content, these systems must be targeted specifically at online infringement of copyright on a commercial scale. Following the EDPS Opinion on the Digital Services Act, it critically examines the compliance of Article 17 with the data minimisation, data protection by design and data protection by default principles. It proposes that for general monitoring obligations to become lawful ‘duties of care’ and specific enough to observe Recitals 47 and 48 of the E-Commerce Directive, OCSSPs’ processing of uploaders’ personal data should be adequate, relevant and necessary to specifically detect and prevent commercial-scale online copyright infringement. Moreover, the implementation of Article 17 should comply with the GDPR principles, and it should be an essential requirement that OCSSPs carry out a Data Protection Impact Assessment before adopting any kind of automated system. The paper concludes that, unless the procedural safeguards suggested are heeded, the adoption of Article 17 is likely to lead to general monitoring obligations being imposed on OCSSPs.


Processing personal data on the grounds of public interest by nonpublic entities in public health emergencies

Session: Privacy and surveillance

Michal Koscik Masaryk university, Brno, Czech Republic

Abstract

The COVID-19 pandemic showed that businesses and non-governmental organizations have an important role in responding to health emergencies. They can serve as sources of crucial information (e.g. mobile network operators, insurance companies, healthcare providers) and can directly perform tasks in the public interest (e.g. healthcare, social care, preventive measures in the workforce). Some technology companies even initiated actions that served as an extension of, or alternative to, governmental actions in public health (exposure notification systems).

The GDPR acknowledges that nonpublic entities may be beneficiaries of public interest exceptions (Recital 45). However, the mere potential to be useful in public health action is not enough to justify personal data processing based on public interest (PI) or public health (PH). The paper aims to analyze the processing of personal data based on PI/PH [Articles 6(1)(e) and 9(2)(g) GDPR] by nonpublic institutions. It will analyze the requirements for the application of the PI/PH basis.

The outcomes of the analysis are demonstrated in practical scenarios, namely: (i) secondary use of customer/user/patient data for public health action; (ii) cooperation of public and nonpublic institutions in pandemic response; and (iii) public health actions initiated by nonpublic entities.


Hitting the Target but Missing the Point? Assessing the Adequacy of the European Data Protection Legal Framework for Biometric Technologies.

Session: Privacy and surveillance

Monique Kalsi University of Groningen, Groningen, Netherlands

Abstract

The General Data Protection Regulation (GDPR) divides data pertaining to human body attributes into three categories: (i) physical, physiological or behavioral characteristics of a person without any technical processing, e.g. photographs; (ii) the above characteristics gathered through specific technical processing, i.e. biometric data, which allow or confirm the unique identification of an individual; and (iii) biometric data processed for the sole purpose of uniquely identifying a person. While the GDPR only takes into account the means of processing and the purpose, serious risks could result from potential data reuse and its storage in a centralized database.

We contend that the approach followed by the GDPR results in an artificial distinction that does not account for the intrusiveness of different processing operations. First, we analyze the intention of the European legislator by looking at the history of the GDPR. Second, we study the definition of biometric data through the underlying concepts of ‘human body’ and ‘identity’ and explore the difference with other data relating to the human body – genetic and health data. Third, this paper examines the emergence of second-generation biometrics based on behavioral features like gait, voice, etc. Their use poses significant legal challenges which are inadequately apprehended by the current data protection framework. Finally, we argue that the current rules need to be adapted to technological advancements. In the absence of such a drastic solution, however, we propose that strengthening the transparency and data protection by design and by default obligations could offer intermediate and immediate solutions.


Zero wrongs can’t make it right: Code, cities and the foreclosure of wrongdoing

Session: Future technologies and law

Matthew Jewell University of Edinburgh, Edinburgh, United Kingdom

Abstract

Cities are being made and remade in the idiom of smartness, using technologies that offer to guide, shape and control behaviour in ways that plausibly impact the practical fulfilment of rights. Taking into account the historic importance of the city as a site where social practice, private ordering and government confront each other, and with an awareness of the changing nature of cyber-physical objects and the practice of law, the case for legal scholarship to engage in the critical discourse of smart cities is easily made.

Here, I take Lefebvre’s characterisation of urban space as a site of political contest (the ‘right to the city’, and the notion of spatial justice within) as a lens in order to discuss an arising tension: As we introduce smart, data-driven technologies with new power geometries into our urban space, they become mixed with the ‘disciplinary strategy’ (Vanolo, 2014) and prevailing logics of smart city initiatives. “Inefficient” behaviours risk being disfavoured or disenabled in these environments, abridging the opportunity to “do wrong” and challenging exercises of moral conscience such as those Thoreau calls for when asking us to be disobedient, to ‘be a counter-friction to stop the machine’.

I explore this ecological shift and how it challenges the legal accommodation of lawful protest, stresses the conceptual foundations of civil disobedience, and highlights the need to equip designers to think normatively about how a space that allows for less wrongdoing is a less just one.


The impact of 3D printing in the Covid-19 pandemic and the challenges of product liability in non-professional context

Session: Future technologies and law

Maria Samantha Esposito Politecnico di Torino, Torino, Italy

Abstract

During the COVID-19 pandemic, 3D printing technology has shown its current and potential impact, playing an important role in addressing the shortage of some medical supplies. The rapid growth of this technology and its relatively low access cost have facilitated a ‘democratisation’ of production which, in the pandemic context, has played an important complementary role in areas where the supply chain of medical devices is still inadequate. 

Against this background, critical issues have emerged about product liability and its allocation. 3D printing enables non-professionals to become producers and sellers, even in the case of complex goods. This increases the risk related to the circulation of products affected by a lack of quality and safety controls, and lowers the chances of obtaining compensation for damages. Moreover, the 3D printing production chain is non-linear: not only prosumers but also online platforms, fab labs and 3D printing services play an important role in it.

This paper will discuss how the existing legal framework addresses these challenges regarding 3D printing product liability. In this context, the paper stresses the need for an adequate level of legal protection, based on new transparency obligations regarding non-professional production and non-compliance with safety standards. Specific requirements and appropriate technologies for product traceability throughout the supply chain will also play an important role. Finally, the paper elaborates on the existing notion of ‘product’ by emphasising the need to extend it to digital goods, to properly address damage resulting from defective CAD files used in 3D printing.


Measuring the Brussels Effect through Access Requests

Session: Digital, cloud and Internet regulation and governance

René Mahieu (1), Hadi Asghari (2), Christopher Parsons (3), Joris van Hoboken (1), Masashi Crete-Nishihata (3), Andrew Hilts (3), Siena Anstis (3). (1) VUB (LSTS), Brussels, Belgium; (2) HIIG, Berlin, Germany; (3) Citizen Lab, Toronto, Canada

Abstract

The introduction of the GDPR reheated the ongoing debate about the extraterritorial effect of European data protection law. In this debate, Anu Bradford argued that European data protection law affects global markets through the so-called “Brussels Effect”, according to which policies diffuse primarily through market mechanisms. Specifically, this phenomenon operates even when the laws of non-EU countries, which set the rules for companies operating in those markets, have not changed to adopt provisions which equal those of EU law.

In this paper we investigate empirically whether the introduction of the GDPR has initiated a “Brussels Effect”, improving compliance with data protection law and exporting GDPR standards outside of Europe. By measuring compliance with the right of access for residents of the EU and Canada, we find that this is indeed the case. We suggest that the GDPR’s stronger enforcement provisions are the key driver of this effect, which allows the EU to de facto unilaterally affect companies’ behavior globally.


Human Rights Law Reasoning 2.0: Accommodating the technical features of the Internet into human rights law?

Session: Human rights and technology

Mando Rachovitsa University of Groningen, Groningen, Netherlands

Abstract

The paper raises one main question: should human rights law appreciate the impact that a given measure (legislative, regulatory, technical) has on the general operation of the Internet, and if so, how?

The analysis aims at answering this question by focusing on the case law of the European Court of Human Rights (ECtHR). Comparative insights will be drawn with international bodies, especially the Inter-American Special Rapporteur on Freedom of Expression. 

The first part of the discussion discerns how the existing case law accommodates considerations regarding the well-functioning (integrity and interoperability) of the Internet in human rights law reasoning. The analysis focuses on the concepts of collateral effect, Internet archives, and data localization and geo-blocking. It demonstrates how these concepts are employed in different stages of legal reasoning as criteria to acknowledge victim status, and to assess the lawfulness of a measure and/or its necessity and proportionality against human rights law standards.

The second part of the paper turns to critically reflect upon the potential and limitations of human rights law to “digest” and accommodate the impact of various measures on the well-functioning of the Internet. Two main points are discussed: first, the divergence between the underlying values and rationale of the well-functioning of the Internet and those of human rights law; second, the challenge for human rights law of appreciating communal interests and large-scale effects within the confines of a mostly individualistic human rights law paradigm and its admissibility requirements.


Campus Surveillance in COVID Times: a Preliminary UK Perspective

Session: Legal education, regulation and technology-enhanced learning

Lilian Edwards, Hannah Smethurst Newcastle University, Newcastle, United Kingdom

Abstract

COVID-19 has had an unprecedented impact on education. By April 2020, most universities had been forced to transition to online-only learning in order to control the spread of the virus, and remain so almost a year later. This paper argues that COVID has driven an expansion of data-related surveillance in the educational context, especially in the US but increasingly also in the UK, and examines whether law is fit for purpose in this area. The authors undertook a short project to examine these issues via a literature review and informal key informant interviews.

HE providers have found themselves balancing duties to ensure physical health and safety with duties to respect privacy and data protection rights, as well as other fundamental rights. Most campuses have made use of data-driven techniques during COVID, e.g. collecting data via COVID PCR and LFT tests and via state and local tracing apps, with more advanced uses in some cases to monitor occupancy and movement. Furthermore, the use of “EdTech” such as data analytics, online plagiarism detection and “e-proctoring” has also incrementally expanded, and raises substantial concerns about legal bases for processing, sharing, retention and solely automated decisions.

While we may hopefully be effective at controlling the spread of COVID-19 and at least partially return to campus soon, it is likely that these data-driven on- and off-campus innovations will persist. We intend to propose good practice guidelines with safeguards for all at the end of the project and welcome input from the BILETA audience.


A full house: applying the GDPR to vocal assistants

Session: Privacy and surveillance

Silvia De Conca Vrije Universiteit Amsterdam, Amsterdam, Netherlands

Abstract

Vocal assistants such as Alexa or Google Assistant are popular, easy to use, and so convenient they are almost addictive. They are placed inside the home, the sacred precinct of the private sphere, although their functionality depends entirely on data collection, processing, and profiling. This article analyses vocal assistants from two angles. On the one hand, it shows how vocal assistants create new apertures in the private sphere, making it more permeable and transparent to corporate surveillance. They do so by leveraging their sensors and seamless vocal interaction, but also persuasive design techniques that prompt individuals to use them more, sharing more data. On the other hand, the article discusses how selected provisions of the GDPR can be applied to vocal assistants to mitigate the effects Alexa & company have on the opacity and permeability of the private sphere. Particular attention is given in this regard to, among others, the role of consent, the application of privacy by design, and the provisions concerning automated decisions and profiling. The inherent features of vocal assistants appear difficult to reconcile with these provisions, and new problematic aspects emerge with regard to all of them. Possible solutions to these new challenges are, however, on the horizon, and are discussed in the conclusions of the article.


Examining the Concept of Piracy in the Film Industry in Nigeria

Session: Intellectual property law and technology

Nkem Itanyi Queen’s University, Belfast, Belfast, United Kingdom

Abstract

Nigeria’s home-grown film industry, dubbed “Nollywood”, is reported to be the world’s third-largest film industry in terms of the number of films produced. There is, however, insufficient revenue to show for it. Some of the reasons adduced for this include piracy and the poor enforcement of the laws against piracy. The impact of the COVID-19 pandemic brought with it lockdown restrictions worldwide. Numerous people were confined to their homes for weeks on end. The creative industries witnessed a surge in activities globally as many turned to television and to the Internet for entertainment to ease the boredom associated with the lockdown. This new development raises questions regarding the type of content being viewed. What kind of content are people accessing? Are they accessing paid-for services, or are they accessing content from unauthorised or unlicensed sites? This paper seeks to establish that piracy is economically harmful to the Nigerian film industry. To substantiate this argument, I will engage with the body of literature on piracy to understand what piracy is and how it works. An attempt will be made to measure piracy as a statistical phenomenon. Several excellent studies on piracy have been carried out in the United Kingdom, the United States, and elsewhere in the world. The trends and lessons learned from these studies will be interpreted in the Nigerian context. Finally, workable solutions that could aid in curbing piracy in the industry will be discussed.


Data Protection and the Right to Private Life as Constitutional Rights within the European Union Member States: A Comparison

Session: Privacy and surveillance

David Erdos University of Cambridge, Cambridge, United Kingdom

Abstract

Although both data protection and the right to private life are recognised within the EU Charter, they are often seen as having very different constitutional histories, the former being extremely recent and the latter long-standing. Based on a comprehensive analysis of rights within EU Member State constitutions, it is found that this distinction is somewhat overdrawn. Even a general constitutional right to privacy or private life is of very recent origin. Only five States recognised this right within their constitutional law prior to 1990, although approximately three quarters do so today. Nevertheless, discrete constitutional rights to domicile and correspondence (although not so much reputation), which now sit within the right to private life, have much longer histories and are even more ubiquitously recognised. A formal constitutional right to data protection emerged roughly contemporaneously with that of an abstract right to private life, although today it is found in only approximately fifty percent of EU Member States. There is also far less of a genealogy or agreed elaboration of this right’s particular nature. Nevertheless, around half of the States which recognise it elaborate specific transparency rights, and a slightly lower number elaborate rights in relation to consent or another legal basis in law. This could suggest that data subject empowerment is an important emerging particularity tied to data protection as a constitutional right.


Data Protection in the design process of eHealth: the Dutch contact tracing app CoronaMelder as an example

Session: Human rights and technology

Jessica Hof1,2, Petra Oden2 1University of Groningen, Groningen, Netherlands. 2Hanze University of Applied Sciences, Groningen, Netherlands

Abstract

Privacy is an important value in the design of eHealth. When privacy is not well secured in the design of eHealth, there can be consequences for its acceptance and use. This paper reflects on the design process of CoronaMelder – the Dutch contact tracing app.

Following the announcement by the Dutch Minister of Public Health, Welfare and Sports that a contact tracing app would be developed as part of the fight against COVID-19, concerns about privacy were expressed by several human rights organisations and users alike.

Seeing this push for privacy by users, it is important to include users’ needs in the design. This paper considers whether developers can use the methods of Value Sensitive Design (VSD) to achieve this aim of including users and their privacy in the design process. VSD, developed by Friedman, can be used to account for values (like privacy) in a principled and systematic manner throughout the design process.

This paper shows that privacy by design, in combination with the VSD methodology, requires from the very start of eHealth development an identification of 1) who the stakeholders are, 2) which values, including privacy, need to be considered in the process, and 3) what legal values are at stake. These values are the starting point for the rest of the design process. In this way, adopting the values of stakeholders at an early stage of the design process can contribute to the acceptance and use of eHealth in the future.


Deep Nostalgia and Spooky AI – legal and ethical considerations surrounding post-mortem deep fakes

Session: Future technologies and law

Edina Harbinja1, Lilian Edwards2, Marisa McVey1 1Aston University, Birmingham, United Kingdom. 2Newcastle University, Newcastle, United Kingdom

Abstract

MyHeritage has introduced a new feature to their service, which animates photos of late relatives. While the company is asking people not to use the tool to create deep fakes of the living, nothing is preventing the creation of deep fakes of the deceased.

The service recalls the Black Mirror episode ‘Be Right Back’, where the protagonist recreates their dead partner, initially as a chatbot based on social media data and communications, and then as a humanoid reincarnation. A recently approved chatbot patent filed by Microsoft is a reminder that what we leave behind digitally may continue to live on through sophisticated technology. The patent contains provisions that would allow for deceased users’ data to be used to create a chatbot in their likeness.

Both services raise numerous ethical and legal challenges regarding living individuals, e.g.:

•            Tort of misuse of private information,

•            The protection of one’s image,

•            Data protection,

•     IP rights.

In terms of the deceased, there is no clarity in the MyHeritage terms, and the creation of “deep fakes” of the deceased seems to be allowed. In England, sharing these would not violate the privacy of the deceased or data protection law, as these do not extend post-mortem. Nor would this violate the ECHR. It could, however, violate the right to privacy of certain photographs and films under s. 85 CDPA 1988, provided that the person who shares the photographs does not own the copyright. 

We explore these and other issues, questioning the need to regulate deceased’s deepfakes and chatbots.  


Scientific Research using Health Data: the case for utilising the research exception

Session: Privacy and surveillance

Mireille Caruana, Claude Bajada University of Malta, Msida, Malta

Abstract

The goal of scientific research is lofty: to expand human knowledge. Medical research, in particular, promises to decrease human suffering and increase overall wellbeing. In this context, it is clear that such goals have a public interest.

Unsurprisingly, medically useful research is often carried out by processing biometric, genetic, and other forms of health related data. In the European Union, such data processing is regulated by data privacy laws, in particular the General Data Protection Regulation (GDPR).

There is a long tradition in medical research, codified in the World Medical Association Declaration of Helsinki (1964), that all medical research must be consensual. However, we argue that, within the framework of the GDPR, data subject rights and interests are not best upheld by processing scientific research data on the basis of the individual’s consent; rather, the preferable legal bases for the processing of personal data for scientific research purposes lie in Article 6(1)(e) ‘performance of a task carried out in the public interest’, and Article 9(2)(j) when the processing involves ‘special categories of personal data’.

The complexities of data management on the scale needed to carry out potentially ground-breaking medical research, coupled with the all-encompassing definition of personal data in the GDPR, are so far beyond what one would normally encounter in day-to-day data management that both the public interest and the individual research participant are best served by the implementation of appropriate derogations and safeguards as contemplated in the GDPR.


Who should govern cyberspace? A socio-legal analysis of digital sovereignty in the race towards developing sixth generation wireless technologies.

Session: Digital, cloud and Internet regulation and governance

Jordan Leinster Queen’s University Belfast, Belfast, United Kingdom

Abstract

Cyberspace governance and digital sovereignty are becoming increasingly difficult for states to manage in the 5G world. This has resulted in a growing number of jurisdictions employing data localisation laws and virtual borders with the ethos of protecting their data from outside influence. However, myopic forms of governance relying on the enclosure of cloud computing hinder the investigation of online harms whilst foreshadowing the reality of a splinternet.

This presentation will engage with current debates in cyberspace governance by pushing academic debate beyond the scope of 5G networks to signal potential problems caused by the advent of 6G and associated technologies. Building on the premise that 6G networks will span the globe using non-terrestrial networks, it will use the island of Ireland as a model to highlight how attempts at domestic cyberspace governance in a 6G world threaten to fragment cyberspace, shifting current notions of geopolitical and economic power. The conclusions of this presentation will show that neither libertarian nor domestic systems represent the type of governance necessary to effectively regulate 6G. Furthermore, international alignment on extraterritorial data transfer laws is necessary to enable successful multijurisdictional investigations against future online harms.

Legal, social, economic and technological arguments will be utilised in this presentation alongside legal instruments and recent judgments from the UK, EU and USA. Forward looking findings on 6G which have yet to be considered within the context of socio-legal research will be offered to promote legal exploration into this oncoming area of scholarship.


War Never Changes: Privacy as a Casualty in the Encryption Wars

Session: Privacy and surveillance

Shane P McNamee Avast, Dublin, Ireland

Abstract

From the early days of cryptography and munitions-grade export control on encryption technology, through to the development of and need for commercial applications of encryption, and the most recent debates about limiting end-to-end encrypted messaging services, privacy and data protection rights have been frequent casualties of the flashpoints in this particular theatre of war.

Despite the fact that many of these issues have been debated in a multitude of contexts over the years, we often see the same questions and arguments raised whenever the conflict flares up again. With the current proposals from the European Council calling for “security through encryption and security despite encryption” (which many decry as a contradiction in terms), and other international moves to compel technology companies to grant law enforcement and security agencies access to encrypted messages, the debate is fraught with conflicting interests and goals, and both novel and familiar arguments.

In addition to the perennial debate about the need to put security and privacy in the hands of the public, versus the interests and goals of security, surveillance, and law enforcement agencies, there are particularities of the current technological, market, and legal ecosystem which make the most recent developments in the encryption wars particularly consequential for privacy and data protection. Indeed, much of the current privacy debate and landscape can be seen as a direct result of the changes in attitudes brought about by the last decade of revelations regarding US surveillance practices, in particular the NSA’s weakening and circumventing of encryption.


The Marrakesh VIP Treaty, Accessibility, and (E)U.

Session: Human rights and technology

Liam Sunner Maynooth University, Maynooth, Ireland

Abstract

The global ‘book famine’ for individuals with visual impairments, whereby over 90% of the world’s printed works were unavailable to them in accessible forms, led to the development and ratification of the Marrakesh Treaty to Facilitate Access to Published Works by Visually Impaired Persons and Persons with Print Disabilities (Marrakesh VIP Treaty). The treaty’s purpose was to provide new and improved methods of accessibility while not infringing the protections afforded by copyright to the printed works. As digitalisation and improvements to text-to-speech narration advanced in line with other technologies, it arguably became easier and more cost-effective to adapt material for those with visual impairments. However, in doing so, the adaptation ran afoul of copyright protection. The Marrakesh VIP Treaty thus facilitates this adaptation while not infringing or preventing future commercial exploitation of the work in question. 

This paper seeks to address how the Marrakesh VIP Treaty facilitates the innovation, creation, and distribution of copyrighted work to people with visual impairments through digital technology while still adequately protecting the copyright of the rightsholder. In doing so, this is examined through the perspective of the EU’s progressive inclusion of such provisions within its external trade policy, the limitations of such inclusion, and the impact such inclusion has in addressing the global ‘book famine’.


How to inform “concise … clear and plain language” cf. Article 12(1) GDPR? Case study on data brokers and apps

Session: Privacy and surveillance

Arno R. Lodder ALTI – Amsterdam Centre for Law and Technology, Amsterdam, Netherlands

Abstract

Around the summer of 2020 we (VU Amsterdam, Leiden, Leuven, and Namur) studied the privacy policies of 10 prominent data brokers (e.g., Acxiom, Equifax) and 25 apps across seven sectors (e.g., games, travel, health). In addition to looking into the privacy policies, all 25 apps were also installed. The focus of the analysis was on what information about data processing was provided by the data brokers and apps.

Most apps, and definitely data brokers, process huge amounts of personal data. It will come as no surprise that the privacy policies generally do not meet the requirements Articles 13 and 14 set for the provision of information, in particular in the light of the language required cf. Article 12(1) GDPR.

In the first part of the paper I will briefly discuss the findings of our analysis. The second part concerns Article 12(1) GDPR. The data brokers and apps do not comply, but how can they do better? Should data controllers mention all data that is being processed and, for all data processed, the purpose, the legal basis, the period for which the data are retained, and the parties that receive the data? And if so, how should this information referred to in Articles 13 and 14 GDPR be described “in a concise, transparent, intelligible and easily accessible form, using clear and plain language”? I will explore how the practice of data controllers in terms of information provision about data processing could improve and whether amendment of the GDPR is needed.


The fallacy of ‘it belongs to the internet’ (and how copyright law can help)

Session: Intellectual property law and technology

Rachel Maguire Royal Holloway, University of London, Egham, United Kingdom

Abstract

Every day, vast amounts of creative user-generated content are shared on online platforms. While previous copyright scholarship on this topic has often focused on the remix phenomenon, this research looks to provide an understanding of the broader reality of the online creative environment, where in many cases amateur creators are not just users, but copyright holders themselves. This paper therefore explores the regulation of the use of amateur creative works, and specifically the commonly held viewpoint that they are in the public domain once they are shared online.

Part of a wider project exploring the interrelationship between copyright law, creativity and anonymity online, this research is situated in the context of long-running and unresolved debates over the proper scope and content of copyright law, particularly in relation to the internet and digital technologies. It also contributes to calls for a better understanding of the reality of the relationship between creativity and copyright in order to properly interrogate the narratives and assumptions that have shaped the law.

Supported by qualitative findings, this paper argues that users often believe that creative works shared online ‘belong to the internet’, and this is caused in part by the incompatibility of copyright law with the anonymous online environment. However, this viewpoint does not accurately align with creator wishes and expectations and this leads to negative consequences for amateur creators and for copyright law itself. In response, this paper puts forward a suggestion for how copyright law could adapt to mitigate some of these issues.


Network effects in merger cases: irrelevant toy or serious contender?

Session: Digital, cloud and Internet regulation and governance

Martin Herz University of Groningen, Groningen, Netherlands

Abstract

At the heart of understanding the online platform business lies the economic theory of network effects. For competition law enforcement, network effects present a challenge in terms of their possible dominance-creating or dominance-enhancing consequences. In that sense, network effects operate more adversarially in competition law than clarifyingly, as they do in economic theory. In both disciplines, other usual suspects are often signalled when assessing the role of network effects, such as lock-in and tipping, path-dependence, consumer inertia and switching costs, and multi-homing and interoperability. No specific rules have been issued at the European level to enshrine network effects and related elements in the substantive competition law framework. The European Commission has used network effect analyses in a relatively small number of decisions, justifying this as a “case-by-case” approach. However, in the past ten years, most merger cases concerning platform businesses have had no analysis or account of network effects, where economic theory would have acknowledged them. Instead, the Commission resorted to general economic analyses without mentioning network effects. Hence, the treatment of network effects in merger cases appears to be a matter of relevance. This is problematic in terms of legal certainty for platform businesses, as the threshold at which network-effects reasoning becomes relevant is not made known. This research clarifies this situation from a legal perspective. By researching the extent of Commission discretion in merger cases, the rights and obligations of parties in merger procedures, and the role that the pre-notification phase may play, this research comes to more definitive terms with network effects in EU merger cases.


The GSR v the GDPR: A tangled web

Session: Future technologies and law

Nynke Vellinga, Gerard Jan Ritsema van Eck University of Groningen, Groningen, Netherlands

Abstract

In 2018 a self-driving Uber test vehicle collided with a pedestrian, who died from her injuries. Camera footage from inside and outside the Uber, together with data on the vehicle’s driving behaviour, proved essential in reconstructing the accident. Although the vast amount of information available on this crash might have seemed unusual, it could soon become the norm. 

From 2022 onwards, vehicles will need to be equipped with an Event Data Recorder or EDR (art. 6 General Safety Regulation (EU) 2019/2144; hereafter: GSR). Such an EDR has to collect data on, for instance, the vehicle’s speed and braking from a period just before, during and after a collision. These data could be of interest to the public prosecutor, parties in civil law cases and national authorities (e.g. road authorities). However, EDRs give rise to significant data protection concerns. 

At the moment, the requirements for the data which EDRs need to collect are vague and non-exhaustive. We propose that a clear limit should be set on the types of data to be collected. Furthermore, the period for which data should be stored should be made explicit.

More striking is that the GSR is internally inconsistent: it strives for complete anonymity, but also encourages the collection of all available data. In addition, the relationship between the GSR and the General Data Protection Regulation (Regulation (EU) 2016/679) is unclear. In conclusion, the GSR is a tangled web which poses challenges both to itself and to the EU data protection framework as a whole.


Beyond Zoom: Building a simulation ecosystem online

Session: Legal education, regulation and technology-enhanced learning

Paul Maharg1,2, Angela Yenssen2 1Newcastle University, Newcastle, United Kingdom. 2Osgoode Hall Law School, Toronto, Canada

Abstract

Legal education and regulation in Canada tend to be siloed between provinces and within provinces. Recently, however, Canadian legal educators have been collaborating to increase simulation learning in the legal field. Osgoode Hall Law School, York University, in the Province of Ontario, started incorporating Simulated Client (SC) interviews into the first-year curriculum of the JD, the university degree required to enter legal practice in Ontario. Osgoode also included SCs in continuing legal education programs in family law and in advanced workplace investigations.

Simultaneously, the Canadian Centre for Professional Legal Education (CPLED) in the Province of Alberta piloted a new bar admission program called PREP (Practice Readiness Education Program), in which law students learn professional lawyering skills, values and attitudes. The program was fully implemented at scale in 2020-2021. For the interviewing component, around 800 students each conducted three interviews with SCs. Osgoode SCs were used in CPLED’s pilot program, and SCs initially trained by CPLED worked in Osgoode’s SC programs. The two institutions shared SC trainers and training materials, as well as ideas for the human resources management of SCs. 

Technology was an important element of outreach, training and performance before the pandemic; during it, technology became essential. Our paper will address the multiple enablers of and challenges to cross-jurisdictional cooperation among legal educators and regulators, the ways in which video technology can be used to enhance learning by simulation, and the lessons we have learned from our experience within the broader context of digital cultures and practices.


Sociotechnical Change and Its Place within Law’s Model of Reality: Exploring a New Analytical Lens for Law and Technology

Session: Future technologies and law

Mara Paun Tilburg University, Tilburg, Netherlands

Abstract

Oftentimes, law is considered to ‘fall behind’ technology because changes in law are slower than changes in technology. A panacea for dealing with this difference in pace has not yet been identified. The aim of this paper is to use the theory of autopoiesis to render new theoretical insights into how law as a system relates to sociotechnical reality. The theory could act as a coherent conceptual playing field in which to situate existing law and technology theories (e.g. regulatory disconnection) and develop a new lens for analysing the challenges brought by the difference in pace between law and technology.

The potential of autopoiesis as an analytical lens ranges from a new perspective on understanding the pacing problem to identifying and addressing regulatory disconnections both in abstracto and in concreto. For instance, one may use it in a specific legal domain such as data protection, to assess the level of structural compatibility when regulating matters of a new technology, e.g. artificial intelligence. While data protection law is the ‘go to’ legal domain for regulating data-processing related issues of AI, it seems it will not address the full extent of regulatory challenges in this rather new technological context. The analytical lens this paper proposes would help identify with more precision where the link between the law’s internal model of reality (assumptions about the world) and sociotechnical reality appears to be broken, and enable a more targeted update of the regulatory environment where necessary.


Balancing the Rights and Safety of Children in the Digital World: A Conundrum

Session: Privacy and surveillance

Cansu Caglar, Abhilash Nair Aston University, Birmingham, United Kingdom

Abstract

The Internet has become a significant part of modern life for children as much as for adults. Although various laws have been adopted in the EU concerning children, two primary instruments that set the legal framework and establish safeguards to protect children online are the AVMSD and the GDPR. However, where policies rest heavily on protecting children through age verification mechanisms and parental consent, there is a possibility that these measures could deprive children of the full potential of the Internet. Whilst child safety and protection should remain paramount considerations for states, it is also important that this is achieved in such a manner that regulation does not jeopardise children’s fundamental rights, particularly their rights to freedom of expression and privacy.

In light of the privacy and data protection implications of age verification requirements, and the limitations of consent as a control mechanism, this paper critically assesses the current EU regime of mandatory age verification and parental consent for child protection. The paper will analyse the limitations of parental consent as a means of ‘competent supervision’ from the perspective of children’s rights in the digital context. It will then go on to analyse the obligations of platforms to provide age appropriate content and what it means for children’s rights. The paper will share research findings on these issues, and will argue that the positive rights of children must be the predominant consideration in the wider debate on access-focused and consent-focused regulation for child protection.


Human rights and sustainability implications of smart urban agricultural production

Session: Sustainability, energy, technologies and law

Fernando Barrio Queen Mary University of London, London, United Kingdom

Abstract

The planet’s population will grow from 7.7 billion today to 9.7 billion in 2050 and almost 11 billion by 2100, while the number of city dwellers is expected to increase by 2.5 billion people by 2050. The majority of the world’s population lives in cities, and in some European countries the percentage exceeds 70%. The cold data reveals a need to adapt the organisation of cities so that they are functional and ensure quality of life for citizens, which includes material, economic and cultural issues, as well as the full enjoyment of rights. The increasing urbanisation of cities is accompanied by network support systems and a greater demand for food and food transport associated with long supply chains, generating indirect environmental impacts. In this context, urban agriculture is seen as a potential strategy to enhance a city’s sustainability, and through the use of Information and Communication Technologies, e.g. smart farming, sustainability can be achieved alongside fulfilling food-supply goals. However, the expansion of smart urban farming implies the deployment of an array of technologies in settings for which there is no proper regulatory framework, or for which the existing framework was developed with other aims in mind. This paper analyses the development of so-called smart cities in response to the challenges posed and, within them, the use of information and communication technologies for urban agricultural production, coupled with an analysis of the legal issues that arise. 


Legal and pedagogical issues with online exam proctoring

Session: Legal education, regulation and technology-enhanced learning

Fernando Barrio Queen Mary University of London, London, United Kingdom

Abstract

The global COVID-19 pandemic meant that Higher Education institutions had to rapidly adapt to the circumstances to ensure that students were able to receive an adequate education. This change required the deployment of information and communication technologies for teaching and practice, and included the need to conduct different forms of assessment online. 

In that context, exams are one of the most used forms of assessment, on the understanding that they represent a reliable, valid, cost-effective and accepted form of evaluating students’ fulfilment of learning objectives, with an appropriate educational effect. Conducting them online poses several issues, the integrity of the process being one that has received much attention.

To deal with potential instances of a lack of integrity in the assessment process, organisations turned to proctoring technologies that, in order to prevent exam-takers from using resources not allowed during the exam, take control of the students’ devices and monitor their physical activity.

While it is difficult to summarise in an abstract the vast array of legal issues and potential infringements of fundamental rights that proctoring technologies encompass, the paper first analyses them in the light of English law, followed by some considerations on the pedagogical implications of such an invasive technology and on the breaking of the bond of trust between teacher and learner that the learning process requires.

It ends by offering some forms of assessment that properly fulfil the assessment functions while respecting learners’ rights and enhancing trust among the actors of the system.


Big Brother: invited guest or unwelcome intruder?

Session: Privacy and surveillance

Christine Rinik University of Winchester, Winchester, United Kingdom

Abstract

In the twenty-first century, the public has welcomed technological advances to streamline and potentially enhance daily activities. Commercial entities develop products aimed at this market which are eagerly embraced by consumers. For example, Alexa, the brainchild of Amazon, “helps you feel connected.” Talking to this robot has become part of the normal daily routine for many.

There have also been advances in the area of facial recognition. Surveillance cameras abound in public places. This has led to cases such as Bridges v Chief Constable of South Wales Police, where a member of the public appealed against the dismissal of his claim challenging the police use of live facial recognition technology in public places.

Despite the public embracing many new technological tools, a number of significant personal data breaches continue to be reported. In January 2020 a £500,000 fine was levied by the ICO against DSG Retail after its system was compromised, disgorging the personal data of 14 million individuals.

Despite the imposition of fines, both the government and commercial entities are eager to obtain “personal data” from individuals. The public are seemingly willing to provide their data but may be unaware of the uses and risks of its collection.

When personal data is harvested without informed consent, used in violation of the law, or insufficiently protected, there is an erosion of privacy. It is argued that the existing controls and deterrents to commercially driven data harvesting are insufficient, and this paper will consider possible alternatives.


Blockchain for the music industry: a threat or an opportunity for the collective management of copyright?

Session: Intellectual property law and technology

Nicolas Jondet1, Amélie Favreau2
1Edinburgh Law School, Edinburgh, United Kingdom. 2Grenoble Law School, Grenoble, France

Abstract

Blockchain technology has been presented as the next frontier of the music industry’s digital mutation. The technology, designed to create open, decentralised, and encrypted digital ledgers, can be used in a wide range of scenarios such as crypto-currencies, financial transactions or smart contracts.  It promises direct, more secure, more transparent and cheaper transactions between individuals, removing the need for many of the intermediaries in traditional industries. As such, blockchain technology has been touted as an opportunity to disintermediate much of the copyright industries. Over the past five years, the applicability of blockchain technology to the music industry has been analysed but has not yet materialised. However, the new-found popularity of non-fungible tokens (NFTs) for artworks has reignited interest in the blockchain’s relevance to the copyright industries.

We focus on the applicability of blockchain technology to the collective management of copyright. In this significant, and increasingly regulated, part of the industry, collective management organisations (CMOs) play a central yet controversial role as intermediaries between rightholders and commercial users of copyrighted works. We explore whether blockchain technology, notably through the use of NFTs, could replace CMOs or whether the CMOs themselves will capitalise on their interest in the technology and deploy it to improve their practices and consolidate their place in the industry. More fundamentally, we discuss whether disintermediation, in the context of collective management, is such a desirable outcome for rightholders and commercial users alike.


Digital Platforms and Cyber Tax Governance: Setting a New Governance Model

Session: Digital, cloud and Internet regulation and governance

Vasiliki Koukoulioti Queen Mary University of London, London, United Kingdom

Abstract

Digitisation is transforming the global economy and reshaping global value chains. As a result, the international tax law principles, originally founded on physical presence requirements, are being challenged and fundamentally reconsidered in the context of tax policy discussions. Most tax policy proposals, including the OECD work on taxation of the digital economy, focus on the increasing role of digital platform users in economic value creation and the need to attribute more taxing rights to market jurisdictions. Additionally, the potential of digital platforms to function as compliance and enforcement intermediaries and to contribute to enhancing the collection of VAT/GST on digital trade has been repeatedly highlighted by both the OECD and the EU. Digital platforms have thus become both subjects and means of tax regulation.

Nevertheless, their role as regulatory actors has thus far been disregarded. This article uses the theories of decentred post-regulatory state regulation and network communitarianism to model the international tax regulatory landscape, which is currently led by the OECD and thus lacking in inclusivity, transparency and legitimacy. This new model accounts for the role of digital platforms as macro-intermediaries and information gatekeepers where users create communities, interact with internal and external actors and function as communicative nodes carrying regulatory gravity in the network. Users are not isolated but actively participate in the international tax norm-setting and decision-making process. This analysis also suggests a new form of regulatory intervention, the Cyber Tax Governance model, with the potential to enhance input legitimacy and decentralise authority.


Sentencing data-driven cybercrimes. An examination of English and Welsh sentencing remarks through the lenses of the cybercrime cascade effect

Session: Cyber crime and cyber security

Maria Grazia Porcedda School of Law, Trinity College, Dublin, Ireland

Abstract

This article examines how courts tackle the changing nature of cybercrime caused by cloud computing and big data, as conceptualised through the lenses of data crime and the cascade effect. Part of a broader research agenda (Porcedda and Wall, 2018; 2019), this work aims to understand whether, and how, the cascade effect of data crime could help the criminal justice system in tackling cybercrime. It contains an original analysis of sentencing remarks from cases decided in English and Welsh courts between 2012 and 2019. The analysis unveils the difficulty of protecting the public from cybercrime while reaching fair sentences, particularly for young cyber offenders. It also points to the ways in which the cascade effect could help the criminal justice system by providing a mechanism for appraising the seriousness of offences. This includes reliance on all relevant legislation, not just the statute allowing for the most punitive sentence. Relatedly, there is a need to debate ‘what works’ in cybercrime, and whether deterrence is the best instrument to address it. The analysis also suggests that refined guidelines for s.6 and s.7 of the Fraud Act 2006, as well as brand-new sentencing guidelines for the Computer Misuse Act 1990, may be in order.