Schedule
Monday, April 15
Georgetown Law, 9th floor
500 First St NW, Washington, D.C. 20001
Please note that this event will be recorded (both video & photography).
-
Welcome from the organizers
Katherine Lee (The GenLaw Center), Hoda Heidari (K&L Gates Initiative at Carnegie Mellon University), Alexandra Givens (Center for Democracy and Technology), Paul Ohm (Georgetown Law Center)
-
Bio: Zachary Lipton is the Chief Technology Officer and Chief Scientist of Abridge and the Raj Reddy Associate Professor of Machine Learning at Carnegie Mellon University. At CMU, he directs the Approximately Correct Machine Intelligence (ACMI) lab, whose research focus areas include the theoretical and engineering foundations of robust and adaptive machine learning algorithms, applications to prediction and decision-making problems in clinical medicine and natural language processing, and the impact of machine learning systems on society. A key theme in his current work is to take advantage of the causal structure underlying observed data while producing algorithms that are compatible with the modern deep learning power tools that dominate practical applications. He is the founder of the Approximately Correct blog (approximatelycorrect.com) and a co-author of Dive into Deep Learning, an interactive open-source book drafted entirely in Jupyter notebooks that has reached millions of readers. He can be found on Twitter (@zacharylipton), GitHub (@zackchase), or his lab's website (acmilab.org).
-
Catered by TATTE
-
Bio: Emily Lanza is Senior Counsel for Policy & International Affairs at the U.S. Copyright Office, where her portfolio includes work on the Office’s AI Initiative and international copyright developments in the Asia Pacific region. She earned a JD from Georgetown University Law Center, an MPhil in Archaeological Heritage from the University of Cambridge, and a BA in Anthropology from the University of Pennsylvania. Prior to her legal career, she held several curatorial and public service positions at libraries and museums in the United States and the United Kingdom.
-
Abstract: Today’s generative AI models wield considerable power and have been built on original content created by artists and creators. To build an ethical framework around generative AI, it is imperative to prioritize transparency, accountability, and inclusivity throughout the development, deployment, and utilization phases to ensure that the technology serves the collective good while respecting fundamental human rights and values. This presentation will cover the practices that Shutterstock has put in place, provide examples of other types of shared success models, and contemplate quality of content as the next priority in artist compensation.
Bio: Sejal Amin is the Chief Technology Officer at Shutterstock. She is a senior technology executive and product development expert with over 25 years of experience leading large teams through cultural, operational and technology transformation for SaaS initiatives. In her role as CTO, Sejal is responsible for delivering a technology vision and strategy that transforms Shutterstock's technology platform to deliver a new and unparalleled experience to customers and contributors.
Recently, she was Chief Product and Technology Officer for Khoros, a Vista portfolio company, where she integrated a distributed Product and Technology organization while defining a product and operational strategy to execute on the company's vision and growth goals. Just prior to that, she was CTO for the Thomson Reuters Tax and Accounting Tax Professionals Business. She has a wide range of technology leadership experience across several business units at Thomson Reuters, having managed global product development teams and portfolios of growing size and complexity for more than 15 years. Over the years, she led several enterprise-wide transformation initiatives focused on project-to-product transitions, organizational transformations, technology portfolio redesigns, and building high-performance product development cultures to keep pace with a rapidly changing technology environment.
Sejal was recognized on the BT150 Transformational Executive List in 2021 and holds a number of professional affiliations, including serving as a Board Director for the Value Stream Consortium.
-
Preliminary abstract: Unlearning is the problem of removing knowledge from an AI system after it has been trained, without retraining the whole AI from scratch. Unlearning is a major technical challenge that remains an open subject of research: we will survey the state-of-the-art in unlearning research, including the major approaches to unlearning, the ways that researchers measure the quality of unlearning methods, and the long-term challenges facing scientists and practitioners who need to erase specific knowledge from an AI.
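For context on the technique the abstract describes, here is a minimal sketch of one common unlearning baseline (gradient ascent on a "forget" set, balanced by ordinary training on a "retain" set). The model, data, and hyperparameters are illustrative assumptions and are not drawn from the talk.

```python
# Minimal unlearning sketch: make the model worse on a forget set while
# preserving performance on a retain set. All data and settings are toy values.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Toy stand-ins for the examples to forget vs. the examples to retain.
x_forget, y_forget = torch.randn(64, 16), torch.randint(0, 2, (64,))
x_retain, y_retain = torch.randn(256, 16), torch.randint(0, 2, (256,))

for step in range(100):
    opt.zero_grad()
    # Ascend on the forget loss (note the minus sign) so the model unlearns those
    # examples, while descending on the retain loss to preserve overall utility.
    loss = -loss_fn(model(x_forget), y_forget) + loss_fn(model(x_retain), y_retain)
    loss.backward()
    opt.step()
```

Measuring whether such a procedure truly erases the targeted knowledge, rather than merely hiding it, is one of the evaluation questions the survey covers.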
Bio: David Bau is an assistant professor at the Northeastern University Khoury College of Computer Sciences. He is a pioneer of deep network interpretability and model editing methods for large-scale AI such as large language models and image synthesis diffusion models. He is leading an effort to create a National Deep Inference Fabric.
-
Bio: Professor Solow-Niederman’s scholarship sits at the intersection of law and technology. Her research focuses on how to regulate emerging technologies, such as artificial intelligence, in a way that reckons with social, economic, and political power. With an emphasis on algorithmic accountability, data governance, and information privacy, Professor Solow-Niederman explores how digital technologies can both challenge longstanding regulatory approaches and expose underlying legal values.
Professor Solow-Niederman’s work has been published or is forthcoming in the Harvard Journal on Law & Technology, the Northwestern University Law Review, the Southern California Law Review, and the Berkeley Technology Law Journal, among other law reviews and peer-reviewed journals. Her piece on data breaches was selected as a winner of the 2017 Yale Law Journal Student Essay Competition. Professor Solow-Niederman is a member of the Electronic Privacy Information Center (EPIC) Advisory Board. She is also a faculty affiliate at Harvard University’s Berkman Klein Center for Internet & Society and a visiting fellow at the Yale Law School Information Society Project, where she has worked with the Media Freedom and Information Access Clinic on a series of FOIA requests concerning state government use of AI.
-
Abstract: Differential privacy is the gold standard for computing statistics on private tabular data, creating plausible deniability for each record contributor. But what is differential privacy not suitable for, and what could go wrong when it is applied in generative AI? In this talk we will shed light on what differential privacy is, and what it is not, by clarifying common misconceptions. We will then discuss how differential privacy comes at a cost to utility and fairness, and how applying it to different data modalities can affect the guarantees it provides.
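As a concrete illustration of the kind of guarantee being discussed, here is a minimal sketch of the classic Laplace mechanism for a counting query; the epsilon value and data are illustrative assumptions, not material from the talk.

```python
# Laplace mechanism sketch for a counting query. A count changes by at most 1
# when one record is added or removed (sensitivity 1), so Laplace(1/epsilon)
# noise gives epsilon-differential privacy for this single query.
import numpy as np

def dp_count(values, predicate, epsilon, rng=np.random.default_rng(0)):
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 62, 55, 19]
# Noisy count of records with age >= 40; smaller epsilon means more noise.
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))
```

The "plausible deniability" in the abstract comes from this noise: the released count is nearly as likely under a dataset that includes any given individual as under one that excludes them.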
Bio: Niloofar Mireshghallah is a post-doctoral scholar at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She received her Ph.D. from the CSE department of UC San Diego in 2023. Her research interests are trustworthy machine learning and natural language processing. She is a recipient of the National Center for Women & IT (NCWIT) Collegiate Award in 2020 for her work on privacy-preserving inference, a finalist for the Qualcomm Innovation Fellowship in 2021, and a recipient of the 2022 Rising Star in Adversarial ML Award.
-
Panelists: Sejal Amin, David Bau, Alicia Solow-Niederman, Niloofar Mireshghallah, Kat Walsh
12:45 - 2:00 PM • Lunch
On your own; nearby lunch spots:
TATTE, A Baked Joint, DAIKAYA Ramen, Bistro Bis, The Dubliner, HipCityVeg, Love Makoto/Love on the Run, SUNdeVICH, The Ministry
-
Bio: Andreas is a software engineer at Google Research, where he heads a group working on privacy and security for ML systems. Before joining Google, he was an associate professor in the Department of Computer Science at the Johns Hopkins University, where he headed the Hopkins InterNetworking Research (HiNRG) Group. There, he worked on computer networks with an emphasis on low-power and sensor networks.
[website]
-
Abstract: While today's generative AI systems are novel in many ways, they still ultimately produce content: writing, audio, images, and video. As such, there are a number of straightforward ways to usefully map our existing trust and safety mitigations onto these systems. Similarly, there are ongoing efforts to enhance those existing mitigation techniques using the power of generative models. In addition to these existing approaches, the way generative models work also opens up new possibilities for mitigation within the models themselves, while simultaneously creating significant new difficulties, especially for unsecured models.
Bio: Dave Willner started his career at Facebook helping users reset their passwords in 2008. He went on to join the company’s original team of moderators, write Facebook’s first systematic content policies, and build the team that maintains those rules to this day. After leaving Facebook in 2013, he consulted for several startups before joining Airbnb in 2015 to build the Community Policy team. While there he also took on responsibility for Quality and Training for the Trust team. After leaving Airbnb in 2021, he began working with OpenAI, first as a consultant and then as the company’s first Head of Trust and Safety. He is currently a Non-Resident Fellow in the Program on Governance of Emerging Technologies at the Stanford Cyber Policy Center.
-
Abstract: A "watermark" is a hidden message sent along
with a piece of media, typically either an image or text.
Watermarking generated media has been proposed as
a solution to many problems, ranging from ensuring
models don't train on their own outputs to providing
reliable methods to detect mis- and dis-information.
In this talk I will briefly discuss the threat models around
watermarking, and comment on what is, and is not,
possible to achieve by watermarking generated media.
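To make the idea concrete, here is a minimal sketch of one well-known style of text watermark detection (a "green list" of tokens keyed by the previous token, in the spirit of published schemes such as Kirchenbauer et al., 2023); it is an illustrative assumption about how such schemes work, not a description of any specific deployed system.

```python
# Toy green-list watermark detector: a generator that biases sampling toward
# "green" tokens leaves a statistical trace that this detector measures.
import hashlib
import math

def is_green(prev_token: str, token: str, fraction: float = 0.5) -> bool:
    """Deterministically mark roughly `fraction` of tokens as green, keyed on the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < fraction

def watermark_z_score(tokens: list[str], fraction: float = 0.5) -> float:
    """Z-score of the green-token count vs. chance; large values suggest watermarked text."""
    n = len(tokens) - 1
    greens = sum(is_green(tokens[i], tokens[i + 1], fraction) for i in range(n))
    return (greens - fraction * n) / math.sqrt(n * fraction * (1 - fraction))

# Ordinary text should score near 0; text generated with a green-token bias scores much higher.
print(watermark_z_score("the quick brown fox jumps over the lazy dog".split()))
```

Whether such a signal survives paraphrasing, translation, or adversarial editing is exactly the kind of threat-model question the talk takes up.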
Bio: Nicholas Carlini is a research scientist at Google DeepMind studying the security and privacy of machine learning, for which he has received best paper awards at ICML, USENIX Security, and IEEE S&P. He received his PhD from UC Berkeley in 2018.
-
Bio: Raquel is a lawyer specializing in audiovisual media in conflict and human rights crises. At WITNESS, she leads a team that critically examines the impact of emerging technologies, especially generative AI and deepfakes, on our trust in audiovisual media. Her policy portfolio also focuses on the operational and regulatory challenges associated with the retention and disclosure of social media content that may be probative of international crimes.
Raquel leads multi-stakeholder engagement with governments, civil society, and tech companies to understand the risks and opportunities of AI and synthetic media, focusing on global perspectives and their implications for those defending democracy and human rights. Her aim is to bridge the gap between policymakers and grassroots groups by developing inclusive AI policies, legislation, and technical solutions centered on the real needs and harms experienced by diverse communities worldwide.
She has extensive experience advising technology companies and nonprofits on legal, operational, and policy matters, particularly in deploying technology and data collection in humanitarian and high-risk settings. With over a decade of experience in issues related to the verification and provenance of digital content, Raquel played a pivotal role in building eyeWitness, an award-winning organization that developed technology to authenticate audiovisual evidence for legal use. Her work facilitated the adoption of new technologies in Central America, the Middle East, Africa, and Eastern Europe, contributing to the success of the first court case that employed controlled-capture authentication technology—marking a significant advancement in trials for mass atrocities.
Raquel is admitted to the Madrid Bar (Spain). She serves on the Board of The Guardian Foundation and on the Advisory Board of TRUE, a project that studies the impact of deepfakes on trust in user-generated evidence in accountability processes for human rights violations. She is also a member of the Partnership on AI (PAI) Policy Steering Committee, a body considering pressing questions in AI governance. Raquel is a rostered expert at Justice Rapid Response, an organization that deploys specialists to complex international investigations, and was a member of the Technology Advisory Board of the International Criminal Court. She has held visiting research positions on technology and digital evidence at Oxford University and UC Berkeley School of Law (2019-20). In 2020, Raquel was selected by the Obama Foundation as part of their inaugural class of European Leaders.
Raquel holds an MSc in International Strategy and Diplomacy from the London School of Economics and Political Science (LSE), and an Advanced Degree in Law and Business Administration from Universidad Carlos III de Madrid.
She can be found on LinkedIn and X @vazquezllorente.
-
Catered by TATTE
-
Bio: Erie Meyer is an American technologist and federal government executive. She currently serves as Chief Technologist of the Consumer Financial Protection Bureau and previously served as Chief Technologist of the Federal Trade Commission under FTC Chair Lina Khan in 2021.
-
Bio: Elham Tabassi is NIST’s chief AI advisor and the associate director for emerging technologies in NIST’s Information Technology Laboratory. She also leads NIST’s Trustworthy and Responsible AI program, which cultivates the development and deployment of safe, secure, and trustworthy AI systems. She was named one of Time’s 100 Most Influential People in Artificial Intelligence in September 2023. Tabassi has been working on various machine learning and computer vision research projects with applications in biometrics evaluation and standards since she joined NIST in 1999.
-
Bio: Nitarshan Rajkumar is Senior Policy Adviser to the UK Secretary of State for Science, Innovation and Technology, and a co-founder of the UK AI Safety Institute. He also recently co-created the AI Safety Summit and the UK AI Research Resource. He has a background in AI research at the University of Cambridge and Mila, and in software engineering at the University of Waterloo.
[website]
-
Panelists: Andreas Terzis, Dave Willner, Raquel Vazquez Llorente, Nicholas Carlini, Elham Tabassi, Nitarshan Rajkumar, Amba Kak
-
Thank yous and final thoughts from the organizers