Welcome to the Official Schedule for RightsCon 2019, the world’s leading summit on human rights in the digital age.

Together at RightsCon Tunis, our first summit hosted in the Middle East and North Africa, more than 2,500 expert practitioners will come together across over 400 sessions to shape, contribute to, and drive forward the global agenda for the future of our human rights.

Important note: Whether you’re a session organizer, speaker, or participant, you’ll need to log in to Sched or create an account in order to get the most out of the program (including creating a profile and building your own customized RightsCon schedule).

Be sure to get your ticket to RightsCon first. You can visit rightscon.org for more information.

RightsCon is brought to you by Access Now.
Artificial Intelligence and Automation and Algorithmic Accountability
Wednesday, June 12
 

9:00am BST

AI Explainability, Explained
Algorithmic decision-making has become synonymous with inexplicable decision-making, often described as a black box. The panel will unpack and discuss AI explainability from both a technical and a legal angle. Speakers will touch on what makes algorithms so difficult to explain, look into the technical and legal solutions being proposed to mitigate this problem, and discuss the intuitive appeal of explainable machines.

Moderators
avatar for Fanny Hidvégi

Fanny Hidvégi

European Policy Manager, Access Now
Fanny (@infofannny) is Access Now’s European Policy Manager based in Brussels. Previously, Fanny was International Privacy Fellow at the Electronic Privacy Information Center in Washington, D.C. where she focused on E.U.-U.S. data transfers. For three years Fanny led the Freedom…

Speakers
AW

Anne Weber

Office of the Commissioner for Human Rights, Council of Europe


Wednesday June 12, 2019 9:00am - 10:15am BST
Adean (Palais)

10:30am BST

Beyond the Hype: AI innovation and human rights in the telecoms sector
Artificial Intelligence (AI) refers to a category of computer systems that perform tasks and make decisions, with a degree of autonomy, which would normally require human intelligence. There is currently much discussion of “ethical AI”, which has led to the recent publication of several papers, principles, and studies on the ethical use of AI. The purpose of this panel is to take a pragmatic perspective on how AI technology is being deployed in the telecommunications sector. We will explore the current safeguarding mechanisms for the ethical and rights-respecting use of AI in telecommunications, aiming to draw out best practice and evaluate whether the current mechanisms are seen as sufficient from stakeholders' perspectives. Questions to be addressed: How is AI used inside telecoms companies and in the customer interface? How can the UNGPs be applied to the design and use of AI? How should corporate transparency be defined in the use of AI? Are there overarching themes in current corporate principles on the use of AI which could be combined into industry principles?

Moderators
avatar for Natasha Jackson

Natasha Jackson

Head of Public Policy & Consumer Affairs, GSMA

Speakers
LB

Lisl Brunner

Director, Global Public Policy, AT&T
NM

Nathalie Maréchal

Senior research analyst, Ranking Digital Rights
Corporate transparency & accountability; surveillance capitalism; targeted advertising business models; artificial intelligence & human rights
avatar for Laura Okkonen

Laura Okkonen

Senior Human Rights Manager, Vodafone
A Business & Human Rights subject matter expert working in the ICT sector. A Finn based in the UK.
avatar for Moira Oliver

Moira Oliver

Head of Policy & Chief Counsel, Human/Digital Rights, BT
I'm BT's lawyer and head of policy for human rights - responsible for our programme to implement the UNGPs in our business.


Wednesday June 12, 2019 10:30am - 11:45am BST
Oya 1 (Laico)

3:45pm BST

The Future of Human Rights in the Governance of Artificial Intelligence
This session will explore how human rights frameworks can be integrated into the governance of artificial intelligence and machine learning technologies. Building on international human rights law and the UN Guiding Principles on Business and Human Rights, it will discuss the value the existing human rights framework brings to the conversation about “ethical AI.” With a diverse panel drawn from international organisations, civil society, academia, and the private sector, the discussion will explore the varying impact of these technologies in different countries and highlight both recent accomplishments and where further research is needed to inform policy development. The rapid development and deployment of artificial intelligence has many stakeholders concerned, and in the past several years we have witnessed a proliferation of principles documents that attempt to guide the ethical and rights-respecting development and deployment of these technologies. These declarations may each have individual value, but do they represent an emerging consensus about the proper use of AI/ML tools? If a consensus is emerging, does it reflect the reality of how AI tools are being deployed globally and cross-culturally, in non-democratic states as well as democratic ones? Do current principles reflect international human rights norms, and how do they anticipate and address governance challenges?

Moderators
avatar for Jessica Fjeld

Jessica Fjeld

Assistant Director, Cyberlaw Clinic, Harvard
avatar for Megan Metzger

Megan Metzger

Research Scholar and Associate Director for Research, Stanford University, Global Digital Policy Incubator
I work on human rights and AI, creative approaches to managing the challenges of online content, and multistakeholder approaches to solving the problems of the digital age. I have also conducted research on social media and protest in Ukraine and Turkey, and on the Russian state’s…
avatar for David Reichel

David Reichel

Project Manager, European Union Agency for Fundamental Rights

Speakers
avatar for Malavika Jayaram

Malavika Jayaram

Executive Director, Digital Asia Hub
@MalJayaram Malavika is the inaugural Executive Director of the Digital Asia Hub, a Hong Kong-based independent research think-tank incubated by the Berkman Klein Center for Internet and Society at Harvard University, where she is also a Faculty Associate. A technology lawyer for…


Wednesday June 12, 2019 3:45pm - 5:00pm BST
Carthage 1 (Laico)
 
Thursday, June 13
 

9:00am BST

Do Moderators Dream of Electric Sheep? The potential for AI in regulating online content
Online platforms are increasingly facing calls from lawmakers and regulators to address various types of harmful content, and frequently also wish to moderate content under their own terms of use and policies. The scale of content on many platforms makes some form of algorithmic function a necessary part of content moderation, and interest has recently grown in using AI technologies such as machine learning to play a part as well. The goal of this session is to exchange views between expert panelists, and with the public, on the potential for AI to be used in content moderation while respecting fundamental rights.

Moderators
avatar for Richard Wingfield

Richard Wingfield

Head of Legal, Global Partners Digital

Speakers
avatar for Jake Lucchi

Jake Lucchi

Head of AI and Data Value, Google Asia Pacific
AI, data, content
avatar for Oli Bird

Oli Bird

Head of International Internet Policy, Ofcom
Oli leads on international internet issues for Ofcom, the UK's communications regulator.
avatar for Charlotte Slaiman

Charlotte Slaiman

Senior Policy Counsel, Public Knowledge
Charlotte is Senior Policy Counsel for competition policy at Public Knowledge, advocating for ways to increase competition against today's dominant digital platforms. Prior to joining Public Knowledge, Charlotte worked in the Anticompetitive Practices Division of the Federal Trade…
avatar for Sally Epstein

Sally Epstein

Sally Epstein is a Senior Machine Learning Engineer for Cambridge Consultants, where she drives core R&D into state-of-the-art AI. Based at the company’s Digital Greenhouse AI research lab, Sally is focused on developing novel approaches to deep learning and working with major clients…


Thursday June 13, 2019 9:00am - 10:15am BST
L'Escale (Laico)

10:30am BST

Do Our Faces Deserve the Same Protection as Our Phones? Regulation and governance of facial recognition technology
In July 2018, Microsoft shared its call for government regulation and responsible industry governance of facial recognition technology. This technology clearly brings important societal benefits, but we recognize the risks, which need broader study and discussion. We’ve been pursuing these issues with technologists, civil society, academics, and policymakers to expand our understanding of the risks, including in the contexts of bias, privacy, and surveillance, and to develop our six principles to manage our development and use of facial recognition. We believe there are three problems that governments need to address: 1. Certain uses of facial recognition technology increase the risk of biased decisions and outcomes. 2. The widespread use of this technology can lead to new intrusions into privacy. 3. The use of facial recognition technology by a government or law enforcement for mass surveillance can harm democratic freedoms. All three of these problems should be addressed through legislation by requiring transparency, enabling third-party testing, ensuring meaningful human review, avoiding use for unlawful discrimination, ensuring notice, clarifying consent, and limiting ongoing government surveillance of specified individuals.

Moderators
avatar for Steve Crown

Steve Crown

Deputy General Counsel, Human Rights, Microsoft
UNGPs. HRIAs. Artificial Intelligence and Human Rights.

Speakers

Thursday June 13, 2019 10:30am - 11:45am BST
Oya 1 (Laico)

2:15pm BST

Artificial Benevolence: Updating informed consent in the age of AI
Informed consent for survey and broader population data is critical to ethical humanitarian intervention, from household surveys to national population data. However, how much of the data we collect is actually collected with consent, and what does "informed" mean in the age of AI? While Artificial Intelligence may allow humanitarians to make truly meaningful advances -- to predict and prescribe interventions before the famine or before the outbreak -- is the data behind those future successes ethical? Is data collected for one study but repurposed to train algorithms what those original subjects agreed to? If the platforms we store this data on today are upgraded to incorporate deep learning functions, is that original data fair game? This session aims to explore how data on individuals represents virtual bodies, subject to an interpretation of international law that forbids experimentation, and how even the examination of this data in aggregate may not fulfill our fundamental "do no harm" policies. We ask: what is missing to bring truly informed consent to AI?

Moderators
avatar for Rob Baker

Rob Baker

Director, Signal Program, Harvard Humanitarian Initiative
My professional focus is on effective innovation campaigns within larger institutions and the role of emerging technologies in humanitarian response and international development. Particular focuses are components of AI (i.e., deep learning, machine learning, predictive modeling…

Speakers
avatar for Jessica Fjeld

Jessica Fjeld

Assistant Director, Cyberlaw Clinic, Harvard
avatar for Vincent Graf Narbel

Vincent Graf Narbel

Strategic Technology Advisor, International Committee of the Red Cross
avatar for Tomiwa Ilori

Tomiwa Ilori

Researcher, Centre for Human Rights, Faculty of Law, University of Pretoria
I'm keen on conversations that link international human rights law on ICTs and emerging tech to national human rights policies and am also interested in how digital technologies impact human rights and democratic development in Africa and across the globe.


Thursday June 13, 2019 2:15pm - 3:30pm BST
Celtic (Palais)

2:15pm BST

Before the ship sails on AI governance, let's talk about trade
Artificial intelligence systems already play a major role in how we live our lives on and off the internet. As data collection has fueled the internet economy, the demand to apply algorithmic systems to make sense of, and profit from, this information has surged. This has made algorithmic systems a powerful tool in targeted advertising, content curation, automation, and scientific research. The use of AI, however, has been fraught with problems, often arising from the bias and discrimination that are too easily embedded into algorithms. There is a growing demand for fairness and accountability in these systems. However, many of these efforts could be constrained by emerging global rules on AI in trade agreements, which restrict regulators from protecting citizens from algorithmic bias and misuse. This panel will discuss why and how trade agreements contain AI governance rules that will define and constrain policy making at the domestic level. We will exchange ideas on the way forward for advocating for algorithmic transparency in future trade agreements, and on what CSOs can do.

Moderators
avatar for Burcu Kilic

Burcu Kilic

Director, Digital Rights Program, Public Citizen

Speakers
avatar for Arthur Gwagwa

Arthur Gwagwa

Senior Research Fellow, Strathmore University (CIPIT)
Arthur is currently working on a project funded by the Open Technology Fund. The project will detect, document, and analyze current and emerging cyber threats with a long-term goal to mitigate their impact on users at risk in specific Sub-Saharan African countries, especially around…
avatar for Javier Ruiz

Javier Ruiz

Policy Director, javier@openrightsgroup.org
I work on a broad range of digital rights policy, from privacy and surveillance to copyright and technology ethics.
avatar for Francisco Vera

Francisco Vera

Advocacy officer, Privacy International


Thursday June 13, 2019 2:15pm - 3:30pm BST
Cyrene (Laico)
 

