How to ensure society benefits from the most impactful technology being developed today
As COO of one of the world’s leading AI labs, I spend a lot of time thinking about how our technologies impact people’s lives – and how we can ensure our efforts have a positive impact. This is the focus of my work and the central message I bring when I meet global leaders and key figures in our industry. For example, it was at the forefront of the panel discussion on “Equality Through Technology” that I hosted this week at the World Economic Forum in Davos, Switzerland.
Inspired by the important discussions taking place at Davos to build a greener, fairer, better world, I wanted to share some thoughts on my own journey as a technology leader, along with some insights into how we at DeepMind are approaching the challenge of building technology that truly benefits the global community.
In 2000, I took a leave of absence from my job at Intel to visit the orphanage in Lebanon where my father grew up. For two months, I worked to install 20 PCs in the orphanage’s first computer lab and train students and teachers to use them. The trip started as a way to honor my dad. But being in a place with such limited technical infrastructure also gave me a new perspective on my work. I realized that without a real effort from the tech community, many of the products I was building at Intel would be out of reach for millions of people. I became acutely aware of how this gap in access exacerbated inequality. Even as computers solved problems and accelerated progress in some parts of the world, others lagged behind.
After that first trip to Lebanon, I began to reevaluate my career priorities. I had always wanted to be part of building cutting-edge technology. But when I returned to the US, my focus narrowed to helping build technology that could have a positive and lasting impact on society. This led me to a variety of roles at the intersection of education and technology, including co-founding Team4Tech, a non-profit organization working to improve access to technology for students in developing countries.
When I joined DeepMind as COO in 2018, I did so in large part because I could tell that the founders and team had the same focus on positive social impact. In fact, at DeepMind, we now champion a term that perfectly captures my own values and hopes for integrating technology into people’s everyday lives: pioneering responsibly.
I believe pioneering responsibly should be a priority for anyone working in technology. But I also recognize that it is especially important for powerful, far-reaching technologies like artificial intelligence. AI is arguably the most impactful technology being developed today. It has the potential to benefit humanity in countless ways – from fighting climate change to preventing and treating disease. But it is important to consider both positive and negative downstream effects. For example, we need to design AI systems carefully and deliberately to avoid amplifying human biases, such as in recruitment and policing contexts.
The good news is that if we continually challenge our own assumptions about how AI can and should be built and used, we can create this technology in a way that truly benefits everyone. This requires inviting discussion and debate, iterating as we learn, developing social and technical safeguards, and seeking out diverse perspectives. At DeepMind, everything we do stems from our company’s mission to solve intelligence to advance science and benefit humanity, and building a culture of pioneering responsibly is essential to making that mission a reality.
What does pioneering responsibly look like in practice? I believe it starts with creating space for open, honest discussions about responsibility within an organization. One place where we’ve done this at DeepMind is our interdisciplinary leadership group, which advises on the potential risks and social impact of our research.
Evolving our ethical governance and formalizing this group was one of my first initiatives when I joined the company – and, in a somewhat unconventional move, I didn’t give it a name or even a specific goal until we had met several times. I wanted us to focus on the functional and practical aspects of responsibility, starting from a space without expectations, where everyone could talk honestly about what pioneering responsibly meant to them. Those conversations were critical to creating a shared vision and mutual trust – which allowed us to have more candid discussions going forward.
Another element of pioneering responsibly is embracing a kaizen philosophy and approach. I was introduced to the term kaizen in the 1990s, when I moved to Tokyo to work on DVD technology standards for Intel. It’s a Japanese word that translates to “continuous improvement” – and in its simplest sense, a kaizen process is one in which small, incremental improvements, made continuously over time, lead to a more efficient and ideal system. But it’s the mindset behind the process that really matters. For kaizen to work, everyone who touches the system must be on the lookout for weaknesses and opportunities for improvement. That means everyone must have both the humility to admit that something may be broken and the optimism to believe they can change it for the better.
During my tenure as COO of the online learning company Coursera, we used a kaizen approach to optimize our course structure. When I joined Coursera in 2013, courses on the platform had strict deadlines, and each course was only offered a few times a year. We quickly learned that this didn’t provide enough flexibility, so we switched to a completely on-demand, self-paced format. Sign-ups went up, but completion rates went down – it turned out that while too much structure is stressful and frustrating, too little leads people to lose motivation. So we switched again, to a format where course sessions start several times a month and students work toward suggested weekly milestones. It took time and effort to get there, but continuous improvement eventually led to a solution that let people get the most out of their learning experience.
In the example above, our kaizen approach was largely effective because we asked our student community for feedback and listened to their concerns. This is another critical part of pioneering responsibly: recognizing that we don’t have all the answers and building relationships that allow us to continually draw on outside input.
For DeepMind, that sometimes means consulting with experts on topics like safety, privacy, bioethics and psychology. It can also mean reaching out directly to the communities of people affected by our technology and inviting them into a conversation about what they want and need. And sometimes it just means listening to the people in our lives – regardless of their technical or scientific background – when they talk about their hopes for the future of artificial intelligence.
In essence, pioneering responsibly means prioritizing initiatives that focus on ethics and social impact. A growing area of focus in our research at DeepMind is how we can make AI systems fairer and more inclusive. Over the past two years, we have published research on decolonial AI, queer fairness in AI, mitigating ethical and social risks in AI language models, and more. At the same time, we are working to increase diversity in the field of AI through dedicated scholarship programs. Internally, we recently started hosting Responsible AI Community sessions that bring together the different groups and efforts working on safety, ethics and governance – and several hundred people have signed up to get involved.
I’m inspired by the enthusiasm for this work among our employees, and I’m very proud of all my colleagues at DeepMind who keep social impact front and center. By ensuring that technology benefits those who need it most, I believe we can make real progress on the challenges facing our society today. In this sense, pioneering responsibly is a moral imperative – and personally, I can think of no better way forward.