
好英语网 (HaoEnglish) - www.HaoEnglish.com
HaoEnglish is an English-learning site offering English reading, bilingual reading, and bilingual news.


Google Sidelines Engineer Who Claims Its A.I. Is Sentient

SAN FRANCISCO — Google placed an engineer on paid leave recently after dismissing his claim that its artificial intelligence is sentient, surfacing yet another fracas about the company’s most advanced technology.


Blake Lemoine, a senior software engineer in Google’s Responsible A.I. organization, said in an interview that he was put on leave Monday. The company’s human resources department said he had violated Google’s confidentiality policy. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.



Google said that its systems imitated conversational exchanges and could riff on different topics, but did not have consciousness. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our A.I. Principles and have informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, said in a statement. “Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” The Washington Post first reported Mr. Lemoine’s suspension.


For months, Mr. Lemoine had tussled with Google managers, executives and human resources over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. Most A.I. experts believe the industry is a very long way from computing sentience.


Some A.I. researchers have long made optimistic claims about these technologies soon reaching sentience, but many others are extremely quick to dismiss these claims. “If you used these systems, you would never say such things,” said Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco, who is exploring similar technologies.


While chasing the A.I. vanguard, Google’s research organization has spent the last few years mired in scandal and controversy. The division’s scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena. In March, Google fired a researcher who had sought to publicly disagree with two of his colleagues’ published work. And the dismissals of two A.I. ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized Google language models, have continued to cast a shadow on the group.


Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an A.I. researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against.


“They have repeatedly questioned my sanity,” Mr. Lemoine said. “They said, ‘Have you been checked out by a psychiatrist recently?’” In the months before he was placed on administrative leave, the company had suggested he take a mental health leave.


Yann LeCun, the head of A.I. research at Meta and a key figure in the rise of neural networks, said in an interview this week that these types of systems are not powerful enough to attain true intelligence.


Google’s technology is what scientists call a neural network, which is a mathematical system that learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.

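The idea described above can be sketched in a few lines of code. This is a toy single-neuron classifier trained on made-up 2-D points (a hypothetical stand-in for image features); it only illustrates "learning patterns by adjusting weights" and is in no way representative of Google's actual systems.

```python
import math

def train_neuron(data, lr=0.1, epochs=200):
    """Fit weights (w1, w2) and bias b by gradient descent on logistic loss."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            z = w1 * x1 + w2 * x2 + b
            pred = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = pred - label                 # gradient of the loss w.r.t. z
            # Nudge each parameter against the error gradient.
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

def predict(params, x1, x2):
    """Classify a point with the learned linear decision boundary."""
    w1, w2, b = params
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Toy training set: upper-right points are labeled "cat" (1),
# lower-left points "not cat" (0).
data = [((2.0, 2.0), 1), ((1.5, 2.5), 1), ((-2.0, -1.0), 0), ((-1.0, -2.0), 0)]
params = train_neuron(data)
```

A real network stacks millions of such units into many layers, but the training loop — predict, measure error, adjust weights — is the same in spirit.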

Over the past several years, Google and other leading companies have designed neural networks that learned from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands. These “large language models” can be applied to many tasks. They can summarize articles, answer questions, generate tweets and even write blog posts.

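The core mechanism these models learn from prose — predicting likely next words from patterns in training text — can be illustrated with a deliberately tiny sketch. Real large language models use neural networks trained on billions of words; this bigram model merely counts which word follows which in a toy corpus.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, the words observed to follow it."""
    counts = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def generate(counts, start, n=5, seed=0):
    """Continue from `start` by repeatedly sampling an observed next word."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        options = counts.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# A hypothetical miniature "training corpus".
corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
```

Scaling this idea up — from counting word pairs to a neural network modeling long-range context over vast text collections — is what lets the large models summarize articles, answer questions, and write posts.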

But they are extremely flawed. Sometimes they generate perfect prose. Sometimes they generate nonsense. The systems are very good at recreating patterns they have seen in the past, but they cannot reason like a human.
