To give women academics and others their well-deserved—and overdue—time in the spotlight, TechCrunch is publishing a series of interviews focusing on notable women who have contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting essential work that often goes unrecognized. Read more profiles here.
In today’s spotlight: Allison Cohen, the senior project manager for applied artificial intelligence at Mila, a Quebec-based community of more than 1,200 researchers specializing in artificial intelligence and machine learning. Mila works with researchers, social scientists and external partners to develop socially beneficial AI projects. Cohen’s portfolio of work includes a tool that detects misogyny, an app that tracks the online activity of suspected human trafficking victims, and an app that recommends sustainable agricultural practices in Rwanda.
Previously, Cohen co-led AI drug discovery at the Global Partnership on Artificial Intelligence, an organization that guides the responsible development and use of artificial intelligence. She has also served as an AI strategy consultant at Deloitte and a project consultant at the Center for International Digital Policy, an independent Canadian think tank.
Q&A
Briefly, how did you get started with AI? What drew you to the space?
The realization that we could mathematically model everything from facial recognition to negotiating trade deals changed the way I saw the world, which is what made AI so exciting to me. Ironically, now that I’m working in AI, I see that we can’t—and in many cases shouldn’t—capture these kinds of phenomena with algorithms.
I was exposed to the field while completing a master’s degree in global affairs at the University of Toronto. The program is designed to teach students to navigate the systems that affect the global order – everything from macroeconomics to international law and human psychology. As I learned more about AI, however, I recognized how vital it would become to world politics and how important it was to educate myself on the subject.
What got me into the field was an essay-writing competition. For the competition, I wrote a paper describing how psychedelic drugs would help people stay competitive in a job market filled with artificial intelligence, which earned me an invitation to the St. Gallen Symposium in 2018 (it was creative writing). My invitation and subsequent participation in that event gave me the confidence to continue pursuing my interest in the field.
What work in AI are you most proud of?
One of the projects I managed involved creating a dataset containing instances of subtle and overt expressions of bias against women.
For this project, staffing and managing an interdisciplinary team of natural language processing experts, linguists, and gender studies specialists throughout the project lifecycle was critical. It’s something I’m very proud of. I learned firsthand why this process is fundamental to building responsible apps, and also why it’s not done enough — it’s hard work! If you can support each of these stakeholders in effective interdisciplinary communication, you can facilitate work that combines decades of tradition in the social sciences and cutting-edge developments in computer science.
I’m also proud that this project has been well received by the community. Our work was recognized with a spotlight at the Socially Responsible Language Modeling workshop at NeurIPS, one of the leading AI conferences. This work also inspired a similar interdisciplinary process managed by AI Sweden, which adapted the dataset to Swedish concepts and expressions of misogyny.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
It’s sad that in such a cutting-edge industry, we still see problematic gender dynamics. It doesn’t just negatively affect women – we all lose. I’ve been quite inspired by a concept called ‘feminist standpoint theory’ which I learned about in Sasha Costanza-Chock’s book, Design Justice.
The theory posits that marginalized communities, whose knowledge and experiences do not benefit from the same privileges as others, have an awareness of the world that can bring about just and inclusive change. Of course, not all marginalized communities are the same, and neither are the experiences of individuals within those communities.
That said, a diversity of views from these groups is vital to help us navigate, challenge, and dismantle all kinds of structural challenges and inequities. This is why failing to include women can make the field of artificial intelligence exclusionary for an even wider segment of the population, reinforcing power dynamics outside the field as well.
In terms of how I navigated a male-dominated industry, I found allies to be quite important. These allies are the product of strong and trusted relationships. For example, I’ve been very fortunate to have friends like Peter Kurzwelly, who shared his podcast expertise to support me in creating a women-focused podcast called “The World We’re Building.” This podcast allows us to elevate the work of even more women and non-binary people in the field of artificial intelligence.
What advice would you give to women looking to enter the AI field?
Find an open door. It doesn’t have to be paid, it doesn’t have to be a career, and it doesn’t even have to align with your background or experience. If you can find an opening, you can use it to hone your voice in the space and build from there. If you’re a volunteer, give it your all — it’ll allow you to stand out and hopefully get paid for your work as soon as possible.
Of course, being in a position to volunteer at all is a privilege, which I also want to acknowledge.
When I lost my job during the pandemic and unemployment was at an all-time high in Canada, very few companies were looking to hire AI talent, and the ones that were hiring weren’t looking for global affairs students with eight months of consulting experience. While applying for jobs, I started volunteering at an ethical AI organization.
One of the projects I worked on while volunteering was about whether there should be copyright protection for art produced by artificial intelligence. I reached out to a lawyer at a Canadian AI law firm to better understand the space. He connected me to someone at CIFAR, who connected me with Benjamin Prud’homme, the executive director of Mila’s AI for Humanity team. It’s amazing to think that through a series of exchanges about the art of artificial intelligence, I learned about a career opportunity that has since changed my life.
What are some of the most pressing issues facing artificial intelligence as it evolves?
I have three answers to this question that are somewhat interrelated. I think we need to understand:
- How to reconcile the fact that AI is built to scale while ensuring that the tools we build are tailored to fit local knowledge, experience and needs.
- If we want to create tools that are tailored to the local context, we will need to integrate anthropologists and sociologists into the AI design process. However, there are a multitude of incentive structures and other barriers that prevent meaningful interdisciplinary collaboration. How can we overcome this?
- How can we influence the design process even more deeply than simply incorporating multidisciplinary expertise? Specifically, how can we shift incentives so that we design tools built for those who need it most urgently, rather than those whose data or business is most profitable?
What are some issues AI users should be aware of?
Labor exploitation is one of the topics that I don’t think is covered enough. Many artificial intelligence models learn from labeled data using supervised learning methods. When a model depends on labeled data, there are people who need to do that labeling (i.e., someone adds the label “cat” to an image of a cat). These people (annotators) are often subject to exploitative practices. For models that do not require data to be labeled during the training process (as is the case with some generative AI models and other foundation models), datasets can still be created exploitatively, given that developers often do not obtain consent from, or provide compensation or credit to, data creators.
I would recommend checking out the work of Krystal Kauffman, who I was very happy to see featured in this TechCrunch series. She is making progress in defending annotators’ labor rights, including a living wage, an end to “mass dismissal” practices, and data practices aligned with fundamental human rights (in response to developments such as intrusive surveillance).
What’s the best way to build responsible AI?
People often point to ethical principles of artificial intelligence in order to claim that their technology is responsible. Unfortunately, ethical reflection can begin only after certain decisions have already been made, including but not limited to:
- What are you building?
- How do you build it?
- How will it be deployed?
If you wait until these decisions are made, you will have missed countless opportunities to create responsible technology.
In my experience, the best way to build responsible AI is to know — from the earliest stages of your process — how your problem is defined and whose interests it serves; how that framing supports or challenges pre-existing power dynamics; and which communities will be empowered or disempowered through the use of AI.
If you want to build meaningful solutions, you need to navigate these power systems carefully.
How can investors best push for responsible AI?
Ask about team values. If values are defined, at least in part, by the local community and there is a degree of accountability within that community, the group is more likely to incorporate responsible practices.