In the rapidly developing field of artificial intelligence, the explainability of proposed hypotheses and confidence in the resulting solutions remain important problem areas. The article discusses various approaches to making computer-generated recommendations explainable to the users who receive them. The differences between the concepts of transparency and explainability are pointed out, and formal interpretation of results is contrasted with meaningful explanation. Particular attention is paid to the need for explanations targeted at users at different levels of decision-making. The problem of trust in artificial intelligence systems is examined from several perspectives, which together should form users' integral trust in the solutions obtained. Promising directions for the development of artificial intelligence discussed at a Russian conference on artificial intelligence are briefly indicated.
Kobrinskii, B. A.: Artificial Intelligence: Problems, Solutions, and Prospects. Pattern Recognition and Image Analysis 33, 217–220 (2023).