Research Fellow
Posting date: 21 May 2025
Salary: £35,116.00 to £45,413.00 per year
Hours: Full time
Closing date: 15 June 2025
Location: Warwick, Warwickshire
Remote working: On-site only
Company: University of Warwick
Job type: Temporary
Job reference: 110530-0525
Summary
For informal enquiries, please contact Markus Brill (Associate Professor) at markus.brill@warwick.ac.uk.
We are seeking a Postdoctoral Research Fellow (PDRF) to join our interdisciplinary research project "Aggregating Safety Preferences for AI Systems: A Social Choice Approach" (ASPAI) [1], funded by the Advanced Research + Invention Agency (ARIA) under their Safeguarded AI programme [2].
The ASPAI project aims to develop formal methods for eliciting and aggregating safety specifications and risk thresholds for AI systems, with a particular focus on mechanisms that provide axiomatic guarantees, are algorithmically tractable, and show good performance in computational experiments. The project's research outcomes are expected to contribute to the design of safeguarded AI systems that reflect diverse stakeholder preferences.
The project is led by the University of Warwick in collaboration with the University of Oxford, HPI Potsdam, University of Groningen, and UC Berkeley. You will join a dynamic team of researchers across multiple institutions, working on cutting-edge problems at the intersection of computational social choice and AI safety.
Our research focuses on developing mathematical frameworks for aggregating human attitudes toward safety specifications for AI systems. The project addresses key challenges in collective decision-making about AI safety, exploring how to combine diverse stakeholder views into coherent safety criteria and how to handle uncertainty in both the elicitation and representation of safety preferences. The mathematical focus of the project includes the formal modelling of stakeholder preferences in the form of approval sets over probability distributions, developing axiomatic frameworks for aggregating these preferences, and addressing challenges of incomplete information and ambiguity. The research draws upon and combines expertise from (computational) social choice, judgment aggregation, (algorithmic) game theory, logic, decision theory, and probability theory.
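To give a flavour of the kind of aggregation question studied here, the following is a minimal, purely illustrative Python sketch. It assumes stakeholder preferences have been reduced to approval ballots over a finite set of candidate risk thresholds, a drastic simplification of the approval-sets-over-probability-distributions model described above; the function and the tie-breaking rule are hypothetical examples, not the project's actual methods.

```python
# Toy illustration: aggregate approval-style safety preferences over a
# discretized set of candidate risk thresholds. This is a simple approval
# rule, not the (more general) aggregation mechanisms studied in ASPAI.

from collections import Counter

def aggregate_risk_thresholds(approval_sets: list[set[float]]) -> float:
    """Return the candidate threshold approved by the most stakeholders.

    Each element of `approval_sets` is the set of risk thresholds (e.g.
    maximum acceptable failure probabilities) that one stakeholder deems
    acceptable. Ties are broken in favour of the safer (lower) threshold.
    """
    # Count, for each threshold, how many stakeholders approve it.
    scores = Counter(t for approved in approval_sets for t in approved)
    best = max(scores.values())
    return min(t for t, s in scores.items() if s == best)

# Example: three stakeholders with overlapping approval sets.
ballots = [
    {0.001, 0.01},      # risk-averse stakeholder
    {0.01, 0.05},       # moderate stakeholder
    {0.01, 0.05, 0.1},  # risk-tolerant stakeholder
]
print(aggregate_risk_thresholds(ballots))  # -> 0.01 (approved by all three)
```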
The position is for 12 months and is expected to begin around October 2025, with some flexibility in the start date.
Suitable candidates will have (or soon obtain) a PhD in Computer Science, Mathematics, Economics, or a related field. We are looking for candidates with a strong background in one or more of the following areas: (computational) social choice, judgment aggregation, mechanism design, (algorithmic) game theory, logic, or decision theory.
For further information regarding the skills required for this role, please see the person specification section of the attached job description.