Can computers think? Can AI models be aware? Questions like these often appear in discussions about AI's recent progress, driven by GPT-3, LaMDA, and other transformer-based natural language models. Yet they remain controversial and border on paradox, because the discussions usually rest on hidden assumptions and misunderstandings about how the brain works and what thinking means. There is no way forward other than stating these assumptions clearly and then examining how far human information processing can be reproduced by machines.
Recently, a team of AI researchers undertook an interesting experiment. Using OpenAI's popular GPT-3 model, they fine-tuned it on the full corpus of writings by Daniel Dennett, an American philosopher, writer, and cognitive scientist whose research focuses on the philosophy of mind and the philosophy of science. The goal, as the researchers put it, was to check whether the AI model could answer philosophical questions the way the philosopher himself would. Dennett took part in the experiment and answered ten philosophical questions, which were then posed to the fine-tuned GPT-3 model.
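The write-up does not spell out the exact fine-tuning pipeline, but at the time OpenAI's fine-tuning service expected training data as JSONL prompt/completion pairs. Here is a minimal sketch of how a corpus of Dennett's texts might be converted into that format; the file paths, chunk size, and splitting heuristic are our illustrative assumptions, not the study's procedure:

```python
import json
from pathlib import Path

# Hypothetical input: plain-text files with Dennett's writings, one per file.
CORPUS_DIR = Path("dennett_corpus")
OUT_FILE = Path("dennett_finetune.jsonl")
MAX_CHARS = 2000  # rough chunk size; an illustrative heuristic

def chunks(text: str, size: int):
    """Split a document into fixed-size character chunks (naive splitting)."""
    for i in range(0, len(text), size):
        yield text[i:i + size]

with OUT_FILE.open("w", encoding="utf-8") as out:
    for doc in sorted(CORPUS_DIR.glob("*.txt")):
        text = doc.read_text(encoding="utf-8")
        for chunk in chunks(text, MAX_CHARS):
            # OpenAI's (now legacy) fine-tuning format: prompt/completion pairs.
            out.write(json.dumps({"prompt": "", "completion": " " + chunk}) + "\n")

# The resulting file could then be submitted with the legacy CLI, roughly:
#   openai api fine_tunes.create -t dennett_finetune.jsonl -m davinci
```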
The setup of the experiment was simple. Ten questions were posed to both the philosopher and the computer. An example question: “Do human beings have free will? What kinds of freedom are worth having?” The AI was given the same questions embedded in a longer context that framed them as coming from an interview with Dennett. The computer's answers were then filtered with the following algorithm: 1) each answer was truncated to approximately the same length as the human answer; 2) answers containing giveaway words (such as “interview”) were discarded. For each question, four AI-generated answers were kept, with no cherry-picking or editing; a sketch of this filtering step follows.
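A minimal sketch of that filtering step, assuming simple data structures; the function names, the sentence-boundary truncation rule, and the character-based length comparison are illustrative assumptions, not details from the paper:

```python
BANNED_WORDS = {"interview"}  # giveaway words, per the description above

def truncate_to_length(answer: str, target_len: int) -> str:
    """Cut an AI answer to roughly the length of the human answer,
    breaking at the last sentence boundary before the limit (assumed rule)."""
    if len(answer) <= target_len:
        return answer
    cut = answer[:target_len]
    last_period = cut.rfind(".")
    return cut[: last_period + 1] if last_period != -1 else cut

def filter_answers(candidates: list[str], human_answer: str, needed: int = 4) -> list[str]:
    """Keep the first `needed` AI answers that pass the filters,
    with no selection or editing beyond the two stated rules."""
    kept = []
    for ans in candidates:
        if any(word in ans.lower() for word in BANNED_WORDS):
            continue  # rule 2: drop answers containing giveaway words
        kept.append(truncate_to_length(ans, len(human_answer)))  # rule 1
        if len(kept) == needed:
            break
    return kept
```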
How were the results assessed? Reviewers were presented with a quiz in which the goal was to pick the “correct” answer out of five, one written by the philosopher and the remaining four produced by the AI. The quiz is available online, so anyone can try their detective skills, and we recommend taking it to see whether you can do better than the experts:
https://ucriverside.az1.qualtrics.com/jfe/form/sv_9hme3gzwiVsstk
The result was not entirely unexpected. “Even competent philosophers who are experts on Dan Dennett's work have significant difficulty distinguishing the answers produced by this language generation program from Dennett's own answers,” said the lead researcher. Participants' accuracy was barely above random guessing on some questions and only slightly better on others.
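To put “barely above random guessing” in perspective: with one human answer among five options, chance accuracy is 1/5 = 20%. A quick way to check whether an observed accuracy beats that baseline is a one-sided binomial test; the counts below are made up for illustration and are not the study's data:

```python
from scipy.stats import binomtest

# Hypothetical example: 25 correct picks out of 100 quiz responses.
# Chance level for a one-in-five forced choice is p = 0.2.
result = binomtest(k=25, n=100, p=0.2, alternative="greater")
print(f"observed accuracy: {25 / 100:.0%}, p-value vs. chance: {result.pvalue:.3f}")
```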
What can we take away from these tests? Does this mean that GPT-like models will soon be able to replace people in many fields? Does it tell us anything about thinking, natural language understanding, and artificial general intelligence? Will machine learning reach human-level performance, and if so, when? These are important and interesting questions, and we are still far from final answers.