Modern large language models (LLMs) show strong performance in zero-shot and few-shot learning. This ability to perform well even on tasks for which the models have not been trained is due in part to the fact that, by learning from internet-scale textual data, such models build a semblance of a model of the world. However, the question of whether the entities on which a model operates are concepts in the psychological sense remains open. Relying on conceptual reasoning schemes can increase the safety of models when solving complex problems. To address this question, we propose using standard psychodiagnostic techniques to assess the quality of conceptual thinking in models. We test this hypothesis by conducting experiments on a dataset adapted for LLMs from the psychological techniques of Kettel and Rubinstein and comparing the effectiveness of each. In this paper, we show that it is possible to distinguish several types of model errors in incorrect answers to standard tasks on conceptual thinking and to classify them according to the taxonomies of conceptual-thinking distortions adopted in cultural-historical approaches in psychology. This makes it possible to use psychodiagnostic techniques not only to evaluate the effectiveness of models but also to develop training procedures based on such tasks.
Chuganskaya, A. A., Kovalev, A. K., Panov, A. (2023). The Problem of Concept Learning and Goals of Reasoning in Large Language Models. In: García Bringas, P., et al. (eds.) Hybrid Artificial Intelligent Systems. HAIS 2023. Lecture Notes in Computer Science, vol. 14001, pp. 661–672.