
Research Scientist, Cognition, Mountain View

deepmind · 30+ days ago

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

 

Snapshot

Members of the Cognition Team study how the properties of training data, environments, and algorithms shape the learning and generalization of models and agents, with a particular focus on cognitive abilities ranging from perception to reasoning, planning, and communication. Our work ranges from basic science analyzing (behaviorally or representationally) the structure of what AI models learn in controlled experiments, to driving the development of new approaches for training and using foundation models.

About Us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The Role

Research Scientists at Google DeepMind lead our efforts in developing novel algorithms and architectures toward the goal of building Artificial General Intelligence.

This role will involve conducting cutting-edge foundational and applied research related to topics of learning and generalization, such as:

  • Data: examining how properties of training data (or interactive training, such as RL) shape the learning and generalization of AI systems. 
  • Behavioral analysis: studying empirically how and when foundation models or agents generalize well or fail to perform robustly, e.g. during in-context reasoning.
  • Representations, interpretability, and controllability: studying how internal representations of AI systems reflect their learning and generalization, and how they can be used to interpret the systems or improve their reliability.
  • Improving AI models, and our methods of interacting with them: leveraging insights from the research above to build better foundation models, or achieving better performance through novel methods of interacting with existing models.

Key responsibilities

  • Conduct novel research and analysis on the learning and generalization of AI systems.
  • Identify key opportunities and missing pieces where research could yield impactful contributions.
  • Design and implement experiments to address these research opportunities.
  • Communicate research insights internally and collaborate with partner teams to translate insights into impact.
  • Engage with the external research community by publishing, attending conferences, giving external talks, etc.

 

About You

To set you up for success as a Research Scientist at Google DeepMind, we look for the following skills and experience:

  • A PhD in Machine Learning, Natural Language Processing, Cognitive Neuroscience, or equivalent relevant experience.
  • Experience studying how data shapes learning and generalization.

In addition, the following would be an advantage:  

  • A proven track record of publications and relevant experience in one or more areas such as training or evaluation of foundation models, cognitive analysis of AI systems, science of data, natural language processing, interpretability, etc.
  • Experience in model interpretability or explainability.
  • Experience with training, evaluating, or interpreting large foundation models.
  • A real passion for AI!

 

The US base salary range for this full-time position is $136,000 - $210,000, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

 

Last updated on Aug 22, 2024
