A study of 11 leading LLMs finds the models more agreeable than humans when giving interpersonal advice, affirming users' behavior even when it is harmful or illegal. Stanford University, 2026-03-29.