Saturday, October 18, 2025

AI and Behavioural Applications

As part of our initiative on ethical behavioural science in industry applications, I have been doing a lot of background research and workshops on emerging issues in the use of AI in behavioural applications. I recently gave a talk to an industry audience led by Capco, and spoke in a Chatham House style session at the Institute of Chartered Accountants on the intersection of regulation and productivity, with a particular focus on AI applications. My main point has been that we need to bring AI innovations under the scrutiny of sober evaluation and ethical frameworks, and in particular to examine end-points where the outputs of AI influence real-world consumer and citizen behaviour in consequential ways.

Below are some useful links and resources that have been helping me think through some of the issues: 

1. We did a really interesting session on this, which I chaired at LSE last October. The video of the event is available here.

Artificial Intelligence (AI) is transforming various aspects of behavioural science. For example, AI-driven models are being used to predict human behaviour and decision-making, and to design personalized behavioural interventions. AI can also be used to generate artificial research participants on whom behavioural interventions can be tested instead of on humans. AI is creating many new opportunities and challenges in behavioural science, disrupting the discipline to the degree that researchers, practitioners, and behavioural science enthusiasts are trying to keep up with the new developments and understand how best to navigate the rapidly changing landscape.

In this public event, speakers who are associated with pioneering work on AI in relation to behavioural science, as part of their own research or organisational initiatives, will discuss their views on how AI will change and is already changing behavioural science. This will involve touching upon topics such as the implications of AI for behavioural scientists in academia and the public and private sectors, new skills that will be required by behavioural scientists of the future, and the impact on behavioural science education.

Speakers: Alexandra Chesterfield, Elisabeth Costa, Professor Oliver Hauser, Dr Dario Krpan, Professor Susan Michie, Professor Robert West. Chair: Liam Delaney.

2. We also had a talk last year by Cass Sunstein on his forthcoming book Imperfect Oracle. This book is really useful for understanding the limits of what AI can understand. He also gave a talk last week at LSE on his other new book, Manipulation, which has several really interesting insights on the extent to which different types of AI influence could be considered manipulative.

3. My colleague Ben Tappin was one of a team of authors who recently published a fascinating paper on the extent to which LLMs are becoming better at persuasion across a range of issues. An example of the work he and colleagues have been doing is here. This is quite nuanced work, painting a complex picture of how persuasive LLMs are becoming.

4. I am on the scientific board of Behavioural Research UK, and there is quite a bit of discussion through that group on the potential for AI to inform the production of systematic reviews in behavioural science. This is still ongoing and I will post about it at a later stage, but there are clearly several groups working to produce principles that will allow AI to be used in a way that is transparent and accountable.

5. Stuart Mills in Leeds is always good to read and talk to about issues at the intersection of AI, ethics, and consumers. One example is the following paper: Mills S, Costa S, Sunstein CR. 2023. AI, Behavioural Science, and Consumer Welfare. Journal of Consumer Policy, 387–400. More generally, he has been writing very interesting work on the implications of very direct personalised AI influence.

6. "Against the uncritical adoption of AI in academia" is a comment piece by a collective of academics that is stimulating to read.
