Theory of Mind (ToM) can be used to assess the capabilities of Large Language Models (LLMs) in complex scenarios where social reasoning is required. While the research community has proposed many ToM benchmarks, their hardness varies greatly and their complexity is not well defined. This work proposes a framework, inspired by cognitive load theory, to measure the complexity of ToM tasks. We quantify a problem's complexity as the number of states necessary to solve it correctly. Our complexity measure also accounts for spurious states introduced to make a ToM problem appear harder than it is. We use our method to assess the complexity of five widely adopted ToM benchmarks. On top of this framework, we design a prompting technique that augments the information available to a model with a description of how the environment changes with the agents' interactions. We name this technique Discrete World Models (DWM) and show how it elicits superior performance on ToM tasks.
Discrete World Models (DWM) splits a prompt into chunks and, at each timestep, queries an LLM for a concise representation of the environment.
DWM makes crucial information in a prompt explicit.
The complexity framework fits within Sweller's Cognitive Load Theory.
We show that our complexity measure correlates with models' error rates on ToM tasks.
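The DWM loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `query_llm` stands in for any function mapping a prompt string to a model completion, and the `STATE_QUERY` wording and `[World state: ...]` formatting are hypothetical choices for the sketch.

```python
from typing import Callable, List

# Hypothetical wording; the exact state query used in the paper may differ.
STATE_QUERY = (
    "Describe concisely the current state of the environment "
    "and each agent's beliefs after the events above."
)

def dwm_prompt(chunks: List[str], query_llm: Callable[[str], str]) -> str:
    """Interleave story chunks with model-generated world-state summaries.

    At each timestep the model is asked for a concise description of the
    environment, which is appended to the prompt before the next chunk.
    Returns the augmented prompt, to which the ToM question is then appended.
    """
    augmented: List[str] = []
    context = ""
    for chunk in chunks:
        context += chunk + "\n"
        # Query the LLM for a concise world state after this timestep.
        state = query_llm(context + "\n" + STATE_QUERY)
        annotation = f"[World state: {state}]"
        augmented.append(chunk)
        augmented.append(annotation)
        context += annotation + "\n"
    return "\n".join(augmented)
```

The augmented prompt makes the evolving environment explicit, which is what the highlights above credit for the improved ToM performance.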
@misc{huang2024notion,
  title={A Notion of Complexity for Theory of Mind via Discrete World Models},
  author={X. Angelo Huang and Emanuele La Malfa and Samuele Marro and Andrea Asperti and Anthony Cohn and Michael Wooldridge},
  year={2024},
  eprint={2406.11911},
  archivePrefix={arXiv}
}