At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
Snapshot

Our team is responsible for enabling AI systems to work reliably as intended, including identifying potential risks from current and future AI systems and conducting technical research to mitigate them. On this team, you will discover and evaluate vulnerabilities in our frontier AI systems, enabling other teams to implement approaches that mitigate the risks.
About us

Artificial intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

Conducting research into any transformative technology comes with a responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at Google DeepMind investigates questions related to evaluations, reward learning, fairness, interpretability, robustness, and generalization in machine learning systems. Proactive research in these areas is essential to Google DeepMind's long-term goal: to build safe and socially beneficial AI systems.
The role

This team works at the forefront of technical approaches to designing state-of-the-art LLM attack simulations, identifying unknown vulnerabilities and threats that can bypass implemented guardrails. We're seeking to build a team of creative problem solvers who can provide the most complete picture of AI risk. Working with Google DeepMind's existing AI systems, these experts will analyze, prompt, and optimize internal models to accelerate the future of AGI safely and responsibly.

Key responsibilities:
About you

We seek out individuals who thrive in ambiguity and who are willing to help with whatever moves prototypes forward. We regularly need to invent novel solutions to problems, and often change course when our ideas don't work, so flexibility and adaptability are a must. To set you up for success in this role at Google DeepMind, we are looking for the following skills and experience:

In addition, the following would be an advantage:
What we offer

At Google DeepMind, we want employees and their families to live happier and healthier lives, both in and out of work, and our benefits reflect that. Select benefits include enhanced maternity, paternity, adoption, and shared parental leave; private medical and dental insurance for you and any dependents; and flexible working options. We strive to continually improve our working environment and provide excellent facilities such as healthy food, an on-site gym, faith rooms, and terraces. We are also open to relocating candidates to a core GDM location and offer a bespoke relocation service and immigration support to make the move as easy as possible (depending on eligibility).

The US base salary range for this full-time position is $136,000–$245,000, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

Application deadline: 12pm GMT Thursday 7th September 16 2024

Note: In the event your application is successful and an offer of employment is made to you, any offer will be conditional on the results of a background check performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.
Last updated on Aug 23, 2024