For today’s “Five minutes with” we caught up with Gemma Jennings, Product Manager on the Applied team, who led a session on vision language models at AI Summit – one of the world’s largest AI events for business.
At DeepMind…
I’m part of the Applied team, which helps bring DeepMind technology to the outside world through Alphabet and Google products and solutions, such as WaveNet and Google Assistant, Maps and Search. As a product manager, I act as a bridge between the two organizations, working very closely with both teams to understand the research and how people can use it. Ultimately, we want to be able to answer the question: How can we use this technology to improve the lives of people around the world?
I am particularly excited about the sustainability portfolio. We’ve already helped reduce the amount of energy needed to cool Google’s data centers, but there’s much more we can do to make a bigger, transformative impact on sustainability.
Before DeepMind…
I worked at the John Lewis Partnership, a UK department store that has a strong sense of purpose built into its DNA. I’ve always loved being part of a company with a sense of social purpose, so DeepMind’s mission to solve intelligence to advance science and benefit humanity really resonated with me. I was interested to know how this ethos would play out in a research-led organization – and at Google, one of the largest companies in the world. Adding this to my academic background in experimental psychology, neuroscience and statistics, DeepMind ticked all the boxes.
The AI Summit…
It’s my first in-person conference in almost three years, so I really want to meet people in the same industry as me and hear what other organizations are working on.
I’m also looking forward to attending talks from the quantum computing track. Quantum computing has the potential to drive the next big paradigm shift in computing power, unlocking new use cases for applying AI to the world and allowing us to work on bigger, more complex problems.
My work involves a lot of deep learning methods, and it’s always exciting to hear about the different ways people are using this technology. Currently, these types of models require training on large amounts of data – which can be expensive, time-consuming and resource-intensive given the amount of computation required. So where do we go from here? And what does the future of deep learning look like? These are the kinds of questions I’m looking to answer.
I presented…
Our recently published research on visual language models (VLMs), which apply deep neural networks to image recognition. In my presentation, I discussed recent developments in combining large language models (LLMs) with powerful visual representations to advance the state of the art in image recognition.
This exciting research has so many potential uses in the real world. It could, one day, act as an assistant to support classroom and informal learning in schools, or help people with blindness or low vision see the world around them, transforming their everyday lives.
I want people to leave the session…
With a better understanding of what happens after a research discovery is announced. So much amazing research is being done, but we have to think about what’s next: what global problems could we help solve? And how can we use our research to create products and services that have a purpose?
The future is bright and I’m excited to discover new ways to apply our groundbreaking research to help millions of people around the world.