AI Evaluation Engineer (i.AI)
| Posting date | 19 February 2025 |
|---|---|
| Salary | £55,403 to £70,219 per year |
| Additional salary information | National £55,403 - £61,793. London £61,005 - £70,219. Offers above the band minimum are subject to our assessment of your skills and experience as demonstrated at interview. Salaries over the band minimum will be paid as a non-pensionable allowance. |
| Hours | Full time |
| Closing date | 09 March 2025 |
| Location | Manchester |
| Company | Government Recruitment Service |
| Job type | Permanent |
| Job reference | 391416/3 |
Summary
Government must seize the opportunity of AI to drive outcomes in the public interest, and it must do so now. Government needs innovation, and must not sleep on the opportunity AI presents to deliver better services for taxpayers and citizens.
In November 2023, the creation of the incubator for AI (i.AI) was announced, and following early successes, the team was expanded in March 2024. Our mission is to harness the opportunity of AI to improve lives, drive growth, and deliver better public services. This is an AI product team that focuses on delivery of technical solutions to public service challenges, responding to ministerial steers about priorities and driving impact out into departments from the centre.
i.AI is mission-led, delivering high-impact products, value and innovation within government. We are able to move fast and build things: we are set up specifically to pivot quickly towards priority use cases and to re-use technologies for future impact.
You can see more about our work on ai.gov.uk and on LinkedIn. We work in the open and our code can be found here: https://github.com/i-dot-ai
The Incubator for Artificial Intelligence (i.AI) will be moving to the Department for Science, Innovation and Technology (DSIT) to form part of the new digital centre of government. If offered the position, you will be onboarded to, and initially employed by, the Cabinet Office, but will move with us to DSIT under the machinery of government change. This is expected to happen on 1st June 2025, but this date is subject to change. If shortlisted, the hiring panel will be happy to answer any questions you might have. You're also welcome to reach out to us at i-dot-ai-recruitment@cabinetoffice.gov.uk
Our team is based across Bristol, Manchester and London, and we work in a hybrid manner by default. A minimum of 60% of your working time should be spent at your principal workplace, although requirements to attend other locations for official business will also count towards this level of attendance. We will consider part-time and flexible working arrangements - we encourage you to discuss your needs with the hiring manager if you are offered the role.
About the role
i.AI’s Impact and Evaluation team builds its work into each stage of product development, from scoping, to incubation, and onwards to scaling. Team members embed into product development teams, designing and delivering robust evaluations at each stage of the product journey to inform product design and, ultimately, understand each product's impact on the quality and efficiency of public service delivery.
The team is now recruiting someone with a technical engineering background to help incorporate evaluation into the back-end of models and to support technical tasks. This will include model testing to assess factors such as a model's precision, accuracy and bias. This person will be part of the wider Impact and Evaluation team and will be expected to up-skill on impact evaluation methods, but previous experience of these is not a requirement.
Information session
To give you an idea of working in i.AI and to answer any questions you might have, we encourage you to attend our information session on Tuesday 25th February, 11:00-12:00. You can join the session using this link: meet.google.com/ify-xksp-sfm
Role Responsibilities
- Embedding impact and evaluation into products from the very start and throughout the development cycle.
- Designing and maintaining software tools and scripts to facilitate model testing, safety assessments, and evaluation processes, ensuring robustness and accuracy.
- Conducting data analysis, drawing insights and recommendations and presenting them to stakeholders.
- Collaborating extensively with engineering teams to ensure that evaluation protocols are seamlessly integrated into the software development life-cycle.
- Applying your skills - with our support and training - to deliver broader evaluation of AI tools throughout the development cycle. These are new and emerging methods, and you would be key in forging their practical delivery: safety assessment, algorithmic transparency reporting, and red teaming to assess AI bias and safety.
Proud member of the Disability Confident employer scheme