机器人模仿人类时表现会超越人类 (Robots Outperform Humans When They Pose as Human)

  • This is Scientific American's 60-second Science, I'm Christopher Intagliata.
  • 这里是科学美国人——60秒科学系列,我是克里斯托弗·因塔格里塔。
  • Last year, Google unveiled Duplex, its artificial-intelligence-powered assistant.
  • 去年,谷歌推出了人工智能助手Duplex。
  • "How can I help you?"
  • “有什么需要我帮忙的?”
  • "Hi, I'm calling to book a women's haircut for a client. Um, I'm looking for something on May 3rd."
  • “你好,我打电话来为一名女客户预约理发。嗯,我想订5月3日那天。”
  • That's a robot.
  • 这是机器人在说话。
  • "Sure, give me one second."
  • “好的,请稍等。”
  • "Mm-hmm."
  • “嗯,嗯。”
  • "For what time are you looking for around?"
  • “您想预约几点理发?”
  • The machine assistant never identified itself as a bot in the demo. And Google got a lot of flak for that.
  • 在演示中,机器助手从头至尾都未表明自己是机器人。谷歌因此饱受指责。
  • They later clarified that they would only launch the tech with "disclosure built in."
  • 之后,他们澄清说,只有在“内置披露”的前提下,才会发布这项技术。
  • But therein lies a dilemma, because a new study in the journal Nature Machine Intelligence
  • 但这里存在一个两难局面,因为《自然机器智能》期刊上的一项新研究表明,
  • suggests that a bot is most effective when it hides its machine identity.
  • 当机器人隐藏其机器身份时,其效率最高。
  • "That is, if it is allowed to pose as human."
  • “也就是说,它被允许模仿人类的时候。”
  • Talal Rahwan is a computational social scientist at New York University's campus in Abu Dhabi.
  • 纽约大学阿布扎比分校的计算社会科学家泰拉尔·拉万说道。
  • His team recruited nearly 700 online volunteers to play the prisoner's dilemma—
  • 他的团队招募了近700名网络志愿者,与人类或机器人玩“囚徒困境”,
  • a classic game of negotiation, trust and deception—against either humans or bots.
  • 这是涉及谈判、信任和欺骗的经典博弈。
  • Half the time, the human players were told the truth about who they were matched up against.
  • 一半时间里,志愿者被告知对手的真实信息。
  • The other half, they were told they were playing a bot when they were actually playing a human
  • 另一半时间里,对阵角色是真人时,志愿者会被告知对方是机器人;
  • or that they were battling a human when, in fact, it was only a bot.
  • 或者当他们被告知与真人对弈时,其实对面只是个机器人。
  • And the scientists found that bots actually did remarkably well in this game of negotiation—if they impersonated humans.
  • 科学家发现,如果机器人模仿人类,那它们在这场谈判博弈中的表现会非常出色。
  • "When the machine is reported to be human, it outperforms humans themselves.
  • “当机器被谎报为人类时,它的表现要优于人类。
  • It's more persuasive; it's able to induce cooperation and persuade the other opponent to cooperate more than humans themselves."
  • 机器人比人类本身更具说服力;它能够促成合作,说服对手与自己合作。”
  • But whenever the bots' true nature was disclosed, their superiority vanished.
  • 但一旦机器人的真实身份暴露,那其优越性就会消失。
  • And Rahwan says that points to a fundamental conundrum.
  • 拉万说这指向了一个根本难题。
  • We can now build really efficient bots—that perform tasks even better than we can—
  • 我们现在可以制造真正高效的机器人,执行任务能力甚至超越人类,
  • but their efficiency may be linked to their ability to hide their identity—which, you know, feels ethically problematic.
  • 但机器人的高效可能与它们隐藏身份的能力相关,这可能存在伦理问题。
  • "Those very humans who will be deceived by the machine, they are the ones who ultimately have to make that choice.
  • “那些会被机器人欺骗的人,正是最终不得不做出选择的人。
  • Otherwise it would violate fundamental values of autonomy, respect and dignity for humans."
  • 否则,就会违反人类自主、尊重和尊严的基本价值。”
  • It's not realistic to ask people for consent before every bot-human interaction. That would, of course, reveal the bots' true identity.
  • 在机器人与人类每次互动前征求人类的同意,这并不现实。当然,这会暴露机器人的真实身份。
  • So we, as a society, will have to figure out if making our lives a bit easier is worth interacting with bots that pretend to be human.
  • 因此,我们作为一个社会必须弄清楚:为了让生活更轻松一点,与假装成人类的机器人互动是否值得。
  • "Mm-hmm."
  • “嗯,嗯。”
  • Thanks for listening for Scientific American's 60-second Science. I'm Christopher Intagliata.
  • 谢谢大家收听科学美国人——60秒科学。我是克里斯托弗·因塔格里塔。
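
For readers who want a more concrete picture of the experiment described above, the sketch below lays out a standard one-shot prisoner's dilemma payoff table together with the study's four partner/disclosure conditions. The payoff values, move names, and condition labels are conventional textbook choices used only for illustration; the transcript does not specify the payoffs, interface, or number of rounds the researchers actually used.

from itertools import product

# Standard prisoner's dilemma payoffs, written as (my_points, opponent_points).
# "C" = cooperate, "D" = defect. These numbers are the usual textbook values,
# NOT the ones used in the Nature Machine Intelligence study.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, the opponent defects
    ("D", "C"): (5, 0),  # I defect, the opponent cooperates
    ("D", "D"): (1, 1),  # mutual defection
}

def play_round(my_move, opponent_move):
    """Score one round of the prisoner's dilemma."""
    return PAYOFFS[(my_move, opponent_move)]

# The 2x2 design described in the transcript: each volunteer actually plays
# either a human or a bot, and is told (truthfully or not) that the partner
# is a human or a bot.
ACTUAL_PARTNER = ("human", "bot")
DISCLOSED_PARTNER = ("human", "bot")

if __name__ == "__main__":
    for actual, disclosed in product(ACTUAL_PARTNER, DISCLOSED_PARTNER):
        framing = "truthful" if actual == disclosed else "deceptive"
        print(f"actually playing: {actual:5s}  told: {disclosed:5s}  framing: {framing}")
    print("mutual cooperation pays:", play_round("C", "C"))

The comparison the study turns on is how often players cooperate when a bot carries the "told: human" label versus when its machine identity is disclosed.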


重点单词 (Key Vocabulary)

induce [in'dju:s] vt. 引起,引诱,导致 (to bring about; to induce)
deception [di'sepʃən] n. 骗局,诡计,欺诈 (deception; trickery; fraud)
flak [flæk] n. 高射炮,对空炮火;抨击,指责 (anti-aircraft fire; heavy criticism) (= flack)
fundamental [.fʌndə'mentl] adj. 基本的,根本的,重要的 (basic, essential); n. 基本原理 (fundamental principle)
persuasive [pə'sweisiv] adj. 有说服力的,令人信服的 (persuasive, convincing)
figure ['figə] n. 图形,数字,形状;人物,外形,体型 (figure, number, shape; person, build)
dignity ['digniti] n. 尊严,高贵,端庄 (dignity, nobility, poise)
persuade [pə'sweid] vt. 说服,劝说 (to persuade, convince)
identity [ai'dentiti] n. 身份,一致,特征 (identity; sameness; characteristic)
violate ['vaiəleit] vt. 违犯,亵渎,干扰,侵犯 (to violate, desecrate, disturb, infringe)
