In today’s “5 Minutes” we meet Gemma Jennings, Product Manager on the Applied team, who led a session on vision language models at the AI Summit – one of the world’s largest AI events for business.
At DeepMind…
I’m part of the Applied team that helps expose DeepMind technology to the outside world through Alphabet and Google products and solutions like WaveNet, Google Assistant, Maps, and Search. As a product manager, I act as a bridge between the two organizations and work very closely with both teams to understand the research and how people can use it. Ultimately, we want to be able to answer the question: how can we use this technology to improve the lives of people around the world?
I am particularly excited about our portfolio of sustainability work. We’ve already helped reduce the amount of energy needed to cool Google’s data centers. But there is much more we can do to have a bigger, more transformative impact within sustainability.
Before DeepMind…
I worked at the John Lewis Partnership, a British department store group with a strong sense of purpose in its DNA. I’ve always loved being part of companies with a social purpose, so DeepMind’s mission of solving intelligence to advance science and benefit humanity really resonated with me. I wanted to know how this spirit would manifest itself in a research-led organization and within Google, one of the world’s largest companies. Combined with my academic background in experimental psychology, neuroscience, and statistics, DeepMind ticked all the boxes.
AI Summit…
This is my first in-person conference in almost three years, so I’m looking forward to meeting people in the same industry as me and hearing what other organizations are working on.
I’m looking forward to attending some of the talks in the quantum computing track to learn more. Quantum computing has the potential to drive the next paradigm shift in computing power, open up new use cases for applying AI around the world, and help solve larger, more complex problems.
My work involves a lot of deep learning methods, and it’s always interesting to hear about the different ways people use this technology. Currently, these types of models require training on large amounts of data, which can be expensive, time-consuming, and resource-intensive given the amount of computing required. So where do we go from here? What does the future of deep learning look like? These are the types of questions I’d like to see answered.
I presented…
A recently published study on visual language models (VLMs) for image recognition using deep neural networks. In my presentation, I discussed recent advances in fusing large language models (LLMs) with powerful visual representations to advance the state of the art in image recognition.
This fascinating research has so many potential real-world uses. One day, it could serve as an aid in school classrooms, support informal learning, or even make a difference in the daily lives of people with visual impairments by helping them interpret the world around them.
I want people to leave the session…
…with a better understanding of what happens after research breakthroughs are published. There is so much amazing research going on, but we need to think about what comes next: what global problems it can help solve, and how we can use our research to create purposeful products and services.
The future is bright. We’re excited to discover new ways to apply our groundbreaking research to benefit millions of people around the world.