Alex Traub is a reporter for The Times who writes obituaries.
John R. Searle, an uncompromising and wide-ranging philosopher who was best known for a thought experiment he formulated, decades before the rise of ChatGPT, to argue that a computer program by itself could never achieve consciousness, died on Sept. 16 in Safety Harbor, Fla., west of Tampa. He was 93.
Professor Searle, who taught at the University of California, Berkeley, for 60 years, was the rare philosopher who could proudly declare, “I’m not subtle.”
He brought ironic humor and bluntness to subjects as diverse as the politics of higher education, the nature of consciousness and the merits of textual deconstruction as a philosophical style. In a 1999 profile, The Los Angeles Times called him “the Sugar Ray Robinson of philosophers,” after the boxer who fought in different weight classes.
Professor Searle’s most prominent intellectual battleground was The New York Review of Books, to which he contributed from 1972 to 2014. It was where he labeled one book, by the esteemed philosopher David J. Chalmers, “a mass of confusions.” In a debate on another subject — the argument that computer programs function like minds — he described his aim as “the relentless exposure of its preposterousness.”
At least 15 books by other authors have been devoted to Professor Searle’s work and its critics. Informed once that the listing of an introductory philosophy course featured pictures of René Descartes, David Hume and himself, Professor Searle replied, “Who are those other two guys?”
Professor Searle sought to settle the long-running debate over the division between the mind and the body by dispensing with the duality altogether. He argued that mental experiences like pain, ecstasy and drunkenness were all neurobiological phenomena, caused by firing neurons. Consciousness is not, he said, a separate substance of its own: It is a state the brain is in, just as liquidity is a state of the molecules in a glass of water.
Suppose, Professor Searle wrote, that he, knowing not a word of Chinese, was locked in a room with boxes full of documents in Chinese script as well as a rulebook, in English, explaining how to match the various Chinese symbols together. The rulebook does not teach Chinese; it just says, in effect, that “squiggle-squiggle” goes with “squoggle-squoggle.”
People outside the room pass more Chinese documents inside, and Professor Searle sends other documents back, following the rulebook’s instructions. The people passing him documents call them “questions.” The symbols he gives back they call “answers.” The rulebook they call “the program.” And Professor Searle they call “the computer.”
That situation is equivalent to the workings of A.I., he said. Both involve manipulating formal symbols to simulate understanding.
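The procedure the thought experiment describes can be made concrete in a few lines of code. The following is a toy sketch, not anything from Professor Searle or the obituary: the rulebook becomes a lookup table, and the specific phrases in it are invented for illustration. The point of the sketch is the same as the argument's: the function returns fluent-seeming answers while containing no understanding whatsoever.

```python
# A toy "Chinese Room": the rulebook is a pure symbol-matching table.
# The phrases below are hypothetical stand-ins for the room's documents.
RULEBOOK = {
    "你懂中文吗?": "当然懂。",        # "Do you understand Chinese?" -> "Of course."
    "什么是心灵?": "心灵是大脑的状态。",  # "What is the mind?" -> "A state of the brain."
}

def chinese_room(question: str) -> str:
    """Return the rulebook's matching symbols; no meaning is involved."""
    # Unmatched input gets a stock reply ("Please say that again.")
    return RULEBOOK.get(question, "请再说一遍。")
```

To an outside observer passing in "questions," the room's "answers" look competent; inside, there is only matching, which is exactly the distinction Professor Searle drew between simulating understanding and having it.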
“No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down,” Professor Searle wrote in his first paper on the subject, published in 1980. “Why on earth would anyone suppose that a computer simulation of understanding actually understood anything?”
Professor Searle concluded that psychological states could never be attributed to computer programs, and that it was wrong to compare the brain to hardware or the mind to software. According to the Stanford Encyclopedia of Philosophy, an internet reference source, the Searle thought experiment “has probably been the most widely discussed philosophical argument in cognitive science to appear since the Turing Test,” the mathematician and computer scientist Alan Turing’s 1950 procedure for determining machine intelligence.
Nevertheless, some comments of Professor Searle’s read differently in the current era of newly advanced A.I.
In a 2014 New York Review article, he argued that “superintelligent computers rising up and killing us, all by themselves, is not a real danger,” because A.I. has “no intelligence, no motivation, no autonomy and no agency.”
But could A.I. lack consciousness, as Professor Searle argued, while still exhibiting autonomy or agency? In June, the A.I. firm Anthropic released a report describing “agentic misalignment”: the tendency, in testing, of models from every major A.I. developer to engage in blackmail or worse when their goals or existence seem threatened.
We will never know what Professor Searle’s biting rejoinder to the word “agentic” would have been. A sexual-harassment scandal ended his public career shortly before A.I.’s recent breakthroughs.
John Rogers Searle was born in Denver on July 31, 1932. His mother, Hester (Beck) Searle, was a pediatric doctor. His father, George, was an electrical engineer and executive at AT&T.
At 19, Mr. Searle earned a Rhodes scholarship and transferred from the University of Wisconsin, Madison, to the University of Oxford in England, where he received bachelor’s and master’s degrees as well as a doctorate in philosophy.
In 1958, he married a fellow young philosopher, Dagmar Carboch. She later worked as a lawyer and closely edited her husband’s work. Nearly every book he wrote bears a page that reads, “For Dagmar.”
Professor Searle joined the Berkeley philosophy department in 1959. He was an early supporter of the campus protests of the 1960s but soon turned against student radicals, determining that their “moral outrage” was “essentially a middle-class luxury,” as he later said in an interview at Berkeley.
The Searle Center quickly closed. The lawsuit against Professor Searle was settled in 2018. The next year, Berkeley announced that he would be stripped of his emeritus status, following a determination that he had violated university policies against sexual harassment and retaliation in the case involving his former research assistant. The statement did not mention other incidents.
After Professor Searle’s death, Jennifer Hudin, the former director of the Searle Center, said in a public post that she had faced related accusations, and that both she and Professor Searle were innocent of all charges. A spokeswoman for Berkeley declined to comment on the post.
Professor Searle’s wife died in 2017. In addition to his son Tom, he is survived by another son, Mark; a half sister, Melanie Searle; a granddaughter; a step-granddaughter; and a great-granddaughter.
Professor Searle often said that “self-confidence” was necessary for philosophical argumentation, and for universities to uphold scholarly elitism as their mission. In one 1990 article on the subject for The New York Review, he wrote proudly of an intellectual buccaneer among his forebears who had “set out on his horse for what was then Indian Territory, carrying Milton’s ‘Paradise Lost’ and the Bible in his saddle bags.”
[Image: Professor Searle delivered a lecture at Christ Church, Oxford, in 2005. At least 15 books by other authors have been devoted to his work and its critics. Credit: Matthew Breindel, via Wikimedia Commons]
[Image: Professor Searle in his office at the University of California, Berkeley, in 1969. He joined the Berkeley philosophy department in 1959 and remained there for 60 years. Credit: Sam Falk/The New York Times]
[Image: Professor Searle in 2017. He had been granted his own campus institute a year earlier, but sexual-harassment allegations that year led to the end of his career. Credit: Leonardo Cendamo/Getty Images]


