Partnership on AI’s Terah Lyons talks ethics washing, moonshots, and power

Partners in AI Meeting on November 14, 2018 at Fort Mason in San Francisco, CA. ©2018 Photo by Erin Lubin

There’s no organization in the world quite like the Partnership on AI.

Formed in September 2016 by a coalition of the largest tech companies in AI — Apple, Amazon, Facebook, Google, IBM, and Microsoft — it is a nonprofit organization that advises corporations and governments on AI policy and seeks to answer big questions about the future, like how AI will influence the economy and society and how best to make safety-critical or transparent AI systems. Of the more than 100 notable organizations active on five continents that compose the Partnership, more than half are human rights groups like Amnesty International, Future of Life Institute, and GLAAD. They sit alongside some of the world’s most influential tech companies, think tanks, and other organizations.

The Partnership will mark its third year with an annual gathering of member organizations in London in September. But if you haven’t heard of Partnership on AI, that’s understandable, because the group hasn’t done much since launch, or at least not as much as you might expect from such a powerful cohort.

In April, the organization released an analysis warning that AI-driven risk assessment tools are not yet ready to replace cash bail systems and calling for a suspension of their use. The group also opened draft zero of ABOUT ML (an acronym for Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles) to public comment. From October to December, the Partnership will collect public input on ABOUT ML from groups typically underrepresented in technology, following the Diverse Voices method devised by the University of Washington’s Tech Policy Lab.


Leading the Partnership is executive director Terah Lyons, the organization’s first hire. Lyons was an architect of AI policy for the Obama Administration, working with the White House’s Office of Science and Technology Policy at a time when government began to more seriously consider unmanned aircraft in the sky; autonomous vehicles on roads; and a series of breakthroughs, like deep learning, that have led to the modern resurgence of machine learning.

That work culminated with the release of an AI research and development strategy for federal agencies, as well as a report with recommendations about the future of AI.

“We’re really trying to push companies to behave in ways that they’re not naturally inclined toward, and likewise, we’re trying to enable and empower civil society organizations and academic institutions to interact with them in a way that they’re not used to interacting with them,” Lyons told VentureBeat in an interview. “Hopefully, the result of that will be a productive set of collaborations that actually result in material influence over the way that technology is developed and deployed, whether in the context of corporate policy and practice or in the context of public policymaking.”

Lyons recently spoke with VentureBeat at the Partnership’s San Francisco office, a short distance from many of its tech giant partners, to discuss AI ethics, power, and the organization’s plans for the future.

She also spoke frankly about the potential for the Partnership to be perceived as an ethics washing operation for tech giants, and the consequences of inaction when convening such a formidable group.

This interview has been edited for brevity and clarity.

VentureBeat: The National Institute of Standards and Technology (NIST) recently delivered recommendations to President Trump about how the federal government should play a role in establishing AI standards. What role do you think the federal government should be playing in the creation of standards among AI practitioners?

Lyons: We actually submitted a comment in response to that request for information, which you can read for our formal perspective, but I think in general, it’s useful for the government to be thinking about these questions. I still think there’s a lot of uncertainty and a lot of work to be done to provide a basis of knowledge for thoughtful standards to be developed, and that’s part and parcel of what we’re here to do as an organization: to support government innovation and policy contributions on all the topics that we approach as an organization.

This is my personal perspective, given my background: I think that almost no government is prepared to make policy in the right ways on a lot of the questions that are confronting us as a result of AI development, and I think that it’s mostly because government lacks the capacity — the technical capacity — to understand what’s happening in the field effectively and translate that into effective policy measures.

So the more that technologists can converge with policymakers and that sort of cross-talk is developed, I think the better off the policymaking ecosystem will be. And it’s a circular relationship that also benefits the technology sector in ways that are really material, as well, because ultimately the rules that are written are motivated by what’s actually happening in the field, which I think is really helpful.

VentureBeat: “Ethics washing” is a term that’s been thrown around a lot this year, and it’s often associated with big tech companies. What would your response be to someone who says that the Partnership on AI is doing good research and bringing people together, but it’s also helping tech giants engage in some form of ethics washing?

Lyons: Yeah, I mean there’s definitely the risk of an organization like this being seen as a fig leaf for industry. I think the response I’d have to that is just that all you really have to do is look at our work. It speaks for itself insofar as our inclination as a community is toward challenging the biases of the technology industry and being probing, considered, and thoughtful about being in partnership with the institutions that have the greatest influence on the AI products and services deployed today.

I think part of our theory of change is consideration for the fact that effective behavior change, and change made at scale on these questions, can only really happen when you have those individuals and institutions who are most empowered sitting at the table with those who are least empowered.

It really is about making sure that tech companies are empowering the people within them who are best positioned to make change — because they’re engineers or researchers, and they’re making decisions about these tools every single day — to be leaders within those organizations.

It harkens back to the very founding of the institution. Partnership on AI was not created by the corporate communications arms of these companies. It was a group of AI research leaders from the field who had in certain ways grown up together in the academic research field.

VentureBeat: Yeah, small community.

Lyons: It’s a really tight-knit community. I have found it in my personal experience to have a lot of moral clarity and a lot of introspection.

We are doing a lot of work to try to identify and empower people who are well-intentioned and interested in doing the right thing, and part of our work is giving them the tools, resources, and information necessary to make those decisions effectively.

The interests they represent are just so crucial — existentially crucial — to these questions, and so we never convene groups without them, although a lot of power right now is held by the tech industry. The way that that power plays out in the rooms that we convene as an organization is such that it is equalized in the conversations that we’re having, so companies can’t veto our work on the basis that it might make them look less than great. 

It’s really crucial to interrogate that for organizations like ours and many others that work with industry. 

VentureBeat: In following and occasionally participating in ethics discussions this year, ethics is the broad word that tends to be used, but I keep coming back to: yeah, ethics, but actually power. What role do you think power plays in AI ethics and in the deployment of AI in society today?

Lyons: Oh my gosh, a huge one. I mean, I agree that power is one of, if not the, central questions. Thinking about power in representation is really important. You look at the field, and a big topic of conversation right now is the lack of diversity within it, which I think is a fundamental challenge that needs to be solved as a sort of step zero, because it’s power-related. That problem has to do with who’s in the room making decisions and what those decisions reflect, and so that’s one example of where power really comes into play. The discrepancy in levels of power between organizations, and in their influence over the way that systems and tools are built and deployed right now, is another. We think a lot about that one too because of the nature of our work as an organization.

So part of how we’ve tried to ameliorate that issue is what we call “capacity building” internally, which encompasses a broad range of work and activities. As it pertains to our specific program work, we have felt very strongly from the beginning that we cannot expect civil society organizations, especially under-resourced nonprofits, to show up without appropriate compensation, financial support, and other types of support. Without that, they just really can’t show up and be present in the ways that we might hope.

Compared to these large and especially well-resourced tech companies, there’s just a difference in power and resourcing in many ways, and so empowering them more effectively is, I think, a big piece of how you start to level the playing field for effective collaboration.

And power at the individual level is a really big issue too. There has been a long set of conversations, especially this year and last, about worker power in the tech industry and the power and influence of individuals organizing within their institutions, so that’s another big theme to think about and be watchful for in understanding the trajectory of the field. So yeah, I think it’s central and crucial, and we think in terms of power a lot in our work, because it’s an impoverished conversation unless you’re talking about it.

VentureBeat: Do you feel like there are any specific actions that the Partnership can take that no [other] organization can accomplish? Are you trying to aim for that as your target, to sort of shoot for the moon because you’ve got everyone involved?

Lyons: We actually use that as an internal calibration for what work we decide to take on, the measure by which we decide to do anything. The types of questions we ask ourselves internally are: Where can PAI be a force multiplier on this issue? How can we contribute something that no other institution is currently in a position to contribute to this debate, this question, or this set of challenges that organizations are having?

I think a lot of the projects we’re taking on are at a scale where no other organization may be in a position to conduct them effectively, because of all the stakeholders they require. One example is the issue of AI-generated synthetic media and mis- or disinformation. Over a year ago, we launched a working group on that topic. We did so pretty quietly, but it’s now bearing a lot of fruit: we’ve been able to pull social media platform companies into the same room as massive media companies on the front line of these concerns, including the BBC and the New York Times, and to broker conversations that just aren’t happening in other places.

We’ll be able to say more publicly on that work soon, but a lot of it has to do with coordinating institutionally scaled concerns that have big impacts on society-scale challenges in a way that isn’t really happening anywhere else.

I think similar things can be said on even questions associated with ABOUT ML. If you look at the program committee for that work, it really is a lot of the heavy hitters in the industry that you would want sitting around a table talking about that in collaboration with deep experts on issue areas that also need to be represented, and that’s kind of a moonshot in and of itself in certain ways.

VentureBeat: Those conversations?

Lyons: Yeah, those conversations and what they’ll produce. I mean, we also exist to produce outputs that have an impact, so we believe strongly in the power of conversation for the sake of coordination and relationship-building, both as an end in itself and as a means toward actually producing behavior change and positive impact for the field writ large.

VentureBeat: There’s nothing else out there quite like the Partnership on AI. Is there a risk that, without some bold moves by the Partnership, people would see an organization like this as either toothless or incapable of delivering the major steps forward that people feel are urgently needed, with AI going everywhere in people’s lives?

Lyons: Yeah, there’s absolutely that risk. I mean, we took a long time to get off the ground originally, because the six fiercest competitors in the sector had to converge in some way to create a memorandum of understanding with one another to found this organization and then bring all these other very different types of institutions to the table.

We’ve barely been operational for a year, I would say, but I think that hard, impactful things take some time, right? And it might not be time that we have the luxury of in this instance, given the urgency that you and I have talked about. But I do think that we’re in a position to make a really big dent in the universe, just because of the uniqueness of our position as a community, in a way that perhaps no other organization can.

The accountability question is a really important one. The way that we talk about our work at PAI is as a learning community, so we’re an organization that is attempting to incite more vulnerability from the tech industry than it’s used to providing, and asking companies to bring their hardest questions to a collective environment in which they’re subjected to a lot of scrutiny.

I think that process is one we will ultimately need to endure in order to produce something worthwhile at the sort of scale that you’re talking about.

The whole idea behind this work is that organizations will feel compelled to enact these standards they’re setting because they’re helping to set them, and that provides a measure of accountability in and of itself to a certain extent, and so hopefully there’s power in that among the other mechanisms that we’re trying to develop.

VentureBeat: Outside of the Partnership, I’m curious what kind of developments in the AI space give you hope for the future?

Lyons: I think AI has an ability to hold a mirror up to ourselves as a society and ask: Where have we experienced discrimination or marginalization, not just in this era of technological development but for generations, frankly?

Where are we in the greatest danger of scaling and replicating those behaviors? That presents both a challenge and our greatest opportunity to actually remediate those issues.

And I think it’s a clarion call in many ways for all of us to step up and think about these as sociotechnical problems and not just technical problems.

I think we’ve started to see that shift in the conversation happening around industry, and in the way the field has come to be thought of as more expansive than just computer science, encompassing ethicists, anthropologists, sociologists, and all sorts of other disciplines that are required for this to be a truly holistic conversation. That piece gives me a lot of hope, and that’s balancing pessimism with optimism, I guess.

There are all sorts of use cases, too, that have to be sensitively developed, but if we do it in the right ways [they] can make a real difference for a huge range of problems that we’ve had for a very long time.

Reducing the rate of medical accidents by using these tools is a huge area of potential that I’m personally very excited by. There’s also personalized learning, and environmental and energy applications to help us understand where to put our time and energy as a species to ensure that we’re being appropriately responsive to climate change.

Those are just a few examples, but I think that in addition to AI exposing societal biases in the way that I described, and helping us actually adjust human behaviors around them, there are all these other proactive, positive use cases that are really exciting as well.