In his argument against 'strong AI,' John Searle presented what he called 'The Chinese Room Thought Experiment.' In this experiment, a monolingual English speaker is secluded in a room and given a batch of Chinese writing, a second batch of Chinese script, and a set of rules in English for correlating the two batches with each other. The speaker is then given a third batch of Chinese writing along with further English instructions for deciphering it, which enable him to compose a response to the questions posed in the script.
An analogy may make the experiment easier to grasp: imagine being secluded and presented with two sets of Ancient Egyptian hieroglyphics, along with an English key that allows one to relate the two sets and then use the key to answer a third set of hieroglyphics posed as a question. The analogy maps well onto the Chinese scripts because most people have encountered a similar, relatable exercise with hieroglyphics in school.
After the secluded monolingual English speaker uses all of the scripts and guides to answer the questions and passes his answers out of the room, they are read by native Chinese speakers. The person inside the room becomes so well versed in following the instructions that he responds to the questions seamlessly, and the answers he produces are, in Searle's words, "indistinguishable from those of Chinese speakers."
Reading the answers the English speaker provides, the Chinese speakers would be unable to conclude that the person inside the room is not a native Chinese speaker. Because he produces these answers merely by decoding uninterpreted symbols according to a code, the person following the instructions is, as Searle puts it, "simply behaving like a computer." Searle uses this comparison to address the "Script Applier Mechanism" (SAM), the story-understanding program created by Schank and Abelson in 1977.
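The kind of purely syntactic rule-following Searle describes can be sketched in a few lines of code. The example below is an illustrative toy, not Searle's setup or Schank and Abelson's SAM; the rule book and its entries are invented for illustration. The point it makes is the one in the passage above: the program pairs input symbols with output symbols by shape alone, with no representation of meaning anywhere in it.

```python
# Illustrative sketch of a "Chinese Room" as pure symbol manipulation.
# The rule book is hypothetical: it pairs strings of uninterpreted input
# symbols with strings of output symbols. The program never "understands"
# either side; it only matches shapes, like the man following English rules.
RULE_BOOK = {
    "什么是苹果": "苹果是一种水果",      # "What is an apple?" -> "An apple is a fruit."
    "天空是什么颜色": "天空是蓝色的",    # "What color is the sky?" -> "The sky is blue."
}

def room_respond(symbols: str) -> str:
    """Return the output symbols the rule book pairs with the input symbols.

    An empty string is returned when the rule book has no matching entry,
    just as the man in the room could produce nothing for an input his
    instructions do not cover.
    """
    return RULE_BOOK.get(symbols, "")
```

To a questioner outside, `room_respond("什么是苹果")` yields a fluent-looking Chinese answer, yet nothing in the program attaches meaning to any symbol; this is the distinction between syntax and semantics on which Searle's argument turns.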
To reach his conclusion in "The Chinese Room Thought Experiment," Searle considers the situation as if he himself were the monolingual English speaker placed in the secluded room. From that perspective, he holds that it is quite clear he understands none of the Chinese stories. He receives the same content in the writings that a native Chinese speaker would understand effortlessly, yet no matter how extensive the deciphering codes become, he, as a monolingual English speaker, understands nothing in the end. From this conclusion Searle draws a further one: Schank's computer does not understand any of the stories either.
The computer, like the well-trained English speaker, can use all three batches of the Chinese writing together with the English deciphering codes to produce a response in Chinese. Since Searle can "understand nothing" and still produce answers to the questions, he argues that Schank and Abelson's computer likewise understands nothing, for it merely reproduces exactly what the English speaker was able to do. Extending the point further, Searle states that the computer's ability to follow the set of rules in order to answer the questions is not inherently special or unique to that particular computer.
The same ability can be programmed into any computer or taught to any human being, so it is not unique, and the argument therefore applies to any such simulation. This supports Searle's refutation of strong AI: however intelligent the computer's deciphering of the three batches of Chinese script with the English codes may seem, it does not count as genuine intelligence.
The programming that causes the computer to process the symbols is not intelligent; it merely executes the functions it is told to execute, the symbols are meaningless to it, and the computer itself does nothing that could be considered intelligent. Lacking semantics and thought, it cannot be said to have any meaningful mental states, which further supports Searle's argument.