AI can create a reasonable facsimile of a person’s personality after two-hour interview

by Bob Yirka , Tech Xplore

The interview interface. (a) Main interview interface: a 2D sprite representing the AI interviewer agent is displayed in a white circle that pulsates to match the audio level, visualizing the interviewer agent's speech during its turn. (b) Participant's response: the sprite changes into a microphone emoji when it is the participant's turn to respond, with the white circle pulsating to match the level of the participant's audio being captured. (c) Progress bar and subtitles: a 2D sprite map shows the participant's avatar traveling in a straight line from one end point to the other, indicating progress. The interface also offers options to display subtitles or pause the interview. Credit: arXiv (2024). DOI: 10.48550/arxiv.2411.10109

A small team of computer scientists and sociologists, working with Google DeepMind, has developed an AI application that can generate a simulation of a person’s personality after interviewing them for just two hours. The group has written a paper describing their work and where they believe such efforts are heading; it is available on the arXiv preprint server.

As scientists push the boundaries of AI research, they continue to find new and interesting applications. In this latest effort, the research team used the LLM ChatGPT as the basis for a new model that learns enough about the person being interviewed to give the answers that same person would have given if asked.

The model works by asking a given person a series of questions and listening to the answers. After two hours, it stops and processes what it has heard, then generates a simulated personality for the person who was just interviewed.
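
The paper describes the actual pipeline in detail; the following is only a minimal sketch of the general idea, in which `ask_llm` is a hypothetical helper standing in for whatever chat-completion API is used, and the prompt wording is invented for illustration.

```python
# Minimal sketch (not the authors' code): condition a language model on the
# full interview transcript so it answers later questions as the interviewee.
# `ask_llm` is a hypothetical helper wrapping a chat-completion API.

def build_agent_prompt(transcript: str) -> str:
    """Assemble a system prompt asking the model to answer as the interviewee."""
    return (
        "Below is a two-hour interview with a participant.\n"
        "Answer every subsequent question exactly as this participant would.\n\n"
        "Interview transcript:\n" + transcript
    )

def simulated_answer(transcript: str, question: str, ask_llm) -> str:
    """Return the simulated personality's answer to a single question."""
    return ask_llm(system=build_agent_prompt(transcript), user=question)
```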

The simulation is then tested by asking the simulated personality many different questions, asking the original person the same questions, and comparing the answers. So far, the researchers have found an accuracy rate of approximately 85%.
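
As a rough illustration of that comparison step, the sketch below poses the same fixed-choice questions to both the simulated agent and the real participant and reports the fraction of matching answers. The names and data are made up, and the researchers' actual evaluation is more involved, but a raw agreement rate conveys the idea.

```python
# Illustrative sketch of the comparison step: ask both the simulated agent and
# the real participant the same fixed-choice questions, then report the
# fraction of matching answers. All names and data here are made up.

def agreement_rate(agent_answers: list[str], participant_answers: list[str]) -> float:
    """Fraction of questions on which the agent matches the participant."""
    assert len(agent_answers) == len(participant_answers) > 0
    matches = sum(a == p for a, p in zip(agent_answers, participant_answers))
    return matches / len(agent_answers)

# Example: agreeing on 17 of 20 questions gives 0.85, i.e. 85%.
print(agreement_rate(["A"] * 17 + ["B"] * 3, ["A"] * 20))
```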

The research team notes that their aim is not to make people redundant but to make sociology research easier. The main tool available to sociologists is the survey, and developing, writing, administering, and then analyzing surveys to learn something about a given group of people is time-consuming and expensive.

The researchers behind this new effort wonder whether it might be possible to capture how people view things, or their opinions on them, and then use those captured views as the basis for surveys. If so, surveys would become far less expensive to run, allowing many more of them to be conducted and greatly expanding understanding of the biggest issues facing a given society.

The team developed their model by paying a thousand people to sit for interviews with their system. They call the simulated personalities “agents,” which they note are nothing like the AI agents (or assistants) currently used to help people get their work done. It is notable, however, that similar models could be used to vastly improve the utility of work agents or, someday, perhaps, to help personal robots better interact with their human companions.

More information: Joon Sung Park et al, Generative Agent Simulations of 1,000 People, arXiv (2024). DOI: 10.48550/arxiv.2411.10109

Journal information: arXiv

© 2024 Science X Network


