
无人驾驶汽车如何处置险情

来源:可可英语 编辑:shaun

Today I have been both murderous and merciful.

今天,我既凶残又仁慈。

I have deliberately mown down pensioners and a pack of dogs.

我蓄意碾倒了领取养老金者和一群狗。

I have ploughed into the homeless, slain a couple of athletes and run over the obese.

我撞了无家可归者,杀死了两名运动员,轧过了肥胖者。

But I have always tried to save the children.

但是,我始终努力救孩子。

As I finish my session on the Moral Machine — a public experiment being run by the Massachusetts Institute of Technology — I learn that my moral outlook is not universally shared.

我在道德机器(Moral Machine)——麻省理工学院(MIT)运行的一项公开实验——上完成测试后发现,我的道德观跟很多人不一样。

Some argue that aggregating public opinions on ethical dilemmas is an effective way to endow intelligent machines, such as driverless cars, with limited moral reasoning capacity.

有些人辩称,在道德困境上把公众意见汇集到一起,是向无人驾驶汽车等智能机器赋予有限道德推理能力的有效手段。
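The aggregation idea described above can be caricatured as a simple majority vote across respondents, one verdict per dilemma. The sketch below is purely illustrative: the dilemma names, the responses, and the `aggregate` helper are all invented here and are not the Moral Machine's actual method.

```python
# Illustrative sketch: crowdsourcing a "moral norm" by majority vote.
# All dilemma IDs and responses are invented for illustration.
from collections import Counter

def aggregate(responses):
    """Map each dilemma to the choice most respondents made."""
    return {dilemma: Counter(votes).most_common(1)[0][0]
            for dilemma, votes in responses.items()}

responses = {
    "children_vs_pets": ["save_children", "save_children", "save_pets"],
    "passengers_vs_pedestrians": ["save_pedestrians", "save_passengers",
                                  "save_pedestrians"],
}

norm = aggregate(responses)
print(norm["children_vs_pets"])            # save_children
print(norm["passengers_vs_pedestrians"])   # save_pedestrians
```

As the author goes on to note, anyone who voted with the minority (here, whoever chose `save_pets`) simply finds that the resulting norm is not their norm.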

Yet after my experience, I am not convinced that crowdsourcing is the best way to develop what is essentially the ethics of killing people.

然而,在测试之后,我不相信众包是形成杀戮道德(本质上就是这么回事)的最佳途径。

The question is not purely academic: Tesla is being sued in China over the death of a driver of a car equipped with its semi-autonomous autopilot.

这个问题并不单纯是学术层面的:一辆配备半自动式Autopilot的特斯拉(Tesla)汽车的驾车者死亡,导致该公司在中国被起诉。

Tesla denies the technology was at fault.

特斯拉否认那起事故的过错在于该项技术。

Anyone with a computer and a coffee break can contribute to MIT’s mass experiment, which imagines the brakes failing on a fully autonomous vehicle.

任何人只要有台电脑,利用咖啡时间就可以参加麻省理工学院的大众实验。该实验想象一辆全自动驾驶汽车的刹车失灵。

The vehicle is packed with passengers, and heading towards pedestrians.

这辆车载满了乘客,正朝行人开过去。

The experiment depicts 13 variations of the trolley problem — a classic dilemma in ethics that involves deciding who will die under the wheels of a runaway tram.

实验给出了"电车难题"的13个版本。这是一个经典的道德难题,需要决定谁将死于一辆失控电车的车轮之下。

In MIT’s reformulation, the runaway is a self-driving car that can keep to its path or swerve; both mean death and destruction.

在麻省理工学院的重新设计中,失控的是一辆自动驾驶汽车,它既可以按原来路线行驶,也可以急转弯;两种情形都会造成死亡和破坏。

The choice can be between passengers and pedestrians, or two sets of pedestrians.

抉择可能发生在乘客与行人之间,也可能发生在两组行人之间。

Calculating who should perish involves pitting more lives against less, young against old, professionals against the homeless, pregnant women against athletes, humans against pets.

计算谁应送命,需要在较多生命和较少生命之间、年轻人和老年人之间、专业人士和无家可归者之间、怀孕女性和运动员之间,以及人类和宠物之间做出抉择。
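The weighing just described — more lives against fewer, young against old, law-abiding against jaywalking — can be sketched as a toy utilitarian scoring function. Everything below is an invented assumption for illustration: the categories, the weights, and the `group_value` helper do not come from MIT's experiment.

```python
# Toy sketch of a utilitarian "who should perish" calculation for one
# trolley-problem variant. All weights are invented for illustration;
# this is NOT the Moral Machine's actual model.
from dataclasses import dataclass

@dataclass
class Person:
    age_group: str        # "child", "adult", or "elderly"
    role: str             # "passenger" or "pedestrian"
    jaywalking: bool = False

def group_value(people, weights):
    """Sum an (invented) moral weight over a group of people."""
    total = 0.0
    for p in people:
        w = weights.get(p.age_group, 1.0)
        if p.jaywalking:
            w *= 0.8      # arbitrary penalty for breaking the law
        total += w
    return total

# One dilemma: stay on course (kill pedestrians) or swerve (kill passengers).
weights = {"child": 2.0, "adult": 1.0, "elderly": 0.7}   # invented
pedestrians = [Person("elderly", "pedestrian"),
               Person("adult", "pedestrian", jaywalking=True)]
passengers = [Person("child", "passenger"), Person("adult", "passenger")]

# A purely utilitarian controller spares the higher-valued group.
decision = ("swerve" if group_value(pedestrians, weights) >
            group_value(passengers, weights) else "stay")
print(decision)  # here: pedestrians score 1.5 < passengers 3.0, so "stay"
```

Changing any one weight can flip the verdict — which is exactly why, as the article argues, consensus on such numbers is so hard to reach.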

At heart, the trolley problem is about deciding who lives, who dies — the kind of judgment that truly autonomous vehicles may eventually make.

电车难题的核心是决定谁生、谁死——这正是真正自动驾驶的汽车最终或许要做出的那种判断。

My preferences are revealed afterwards: I mostly save children and sacrifice pets.

我的偏好在实验后被披露出来:基本上,我会救孩子,牺牲宠物。

Pedestrians who are not jaywalking are spared and passengers expended.

没有乱穿马路的行人得以幸免,而乘客被牺牲了。

It is obvious: by choosing to climb into a driverless car, they should shoulder the burden of risk.

很明显:选择上一辆无人驾驶汽车的人,应当分担一部分风险。

As for my aversion to swerving, should caution not dictate that driverless cars are generally programmed to follow the road?

至于我不愿急转弯:出于谨慎,难道不该把无人驾驶汽车的程序通常设定为沿原路行驶吗?

It is illuminating — until you see how your preferences stack up against everyone else.

这很有启发意义——直到你看到自己的偏好跟其他所有人有多么不同。

In the business of life-saving, I fall short — especially when it comes to protecting car occupants.

我在救命这件事上做得不够好——尤其是在保护汽车乘员方面。

Upholding the law and not swerving seem more important to me than to others; the social status of my intended victims much less so.

与其他人相比,守法和不急转弯似乎对我更重要;而我所选受害者的社会地位对我远没有那么重要。

We could argue over the technical aspects of dishing out death judiciously.

我们可以就如何审慎地"分配死亡"的技术细节争论不休。

For example, if we are to condemn car occupants, would we go ahead regardless of whether the passengers are children or criminals?

例如,如果我们宣判汽车乘员死刑,那么无论乘客是孩子还是罪犯,我们都会照做不误吗?

But to fret over such details would be pointless.

但是,为此类细节烦恼将是毫无意义的。

If anything, this experiment demonstrates the extreme difficulty of reaching a consensus on the ethics of driverless cars.

如果说有任何收获的话,那就是这个实验证明,要在无人驾驶汽车的道德上达成共识是极其困难的。

Similar surveys show that the utilitarian ideal of saving the greatest number of lives works pretty well for most people as long as they are not the roadkill.

类似调查显示,对大多数人而言,救下最多条命这个功利主义观念合情合理——只要他们自己不在车轮下丧生。

I am pessimistic that we can simply pool our morality and subscribe to a norm — because, at least for me, the norm is not normal.

我对于只是把大家的道德集合到一起、然后遵守一个规范感到很悲观,因为,至少在我看来,这个规范不是正常的。

This is the hurdle faced by makers of self-driving cars, which promise safer roads overall by reducing human error: who will buy a vehicle run on murderous algorithms they do not agree with, let alone a car programmed to sacrifice its occupants?

这是自动驾驶汽车厂商面临的障碍。他们承诺通过减少人类过错来提高整体道路安全,但是谁会购买一辆由他本人并不认可的杀戮算法操控的汽车呢?更别提程序设定牺牲车上乘客的汽车了。

It is the idea of premeditated killing that is most troubling.

最令人不安的正是这种预谋杀戮的构想。

That sensibility renders the death penalty widely unpalatable, and ensures abortion and euthanasia remain contentious areas of regulation.

那种敏感性让死刑普遍难以接受,并确保堕胎和安乐死仍是引起争议的监管领域。

Most of us, though, grudgingly accept that accidents happen.

不过,我们大多数人咬牙接受事故可能发生。

Even with autonomous cars, there may be room for leaving some things to chance.

即便有了自动驾驶汽车,或许仍有让某些事情听天由命的余地。

重点单词

pessimistic [.pesi'mistik] adj. 悲观的,悲观主义的
ethical ['eθikəl] adj. 道德的,伦理的
obvious ['ɔbviəs] adj. 明显的,显然的
dictate [dik'teit] vi. 听写 vt. 口述,口授
swerve [swə:v] vi. 突然转向,转弯,偏离方向
unpalatable [ʌn'pælətəbl] adj. 不适口的,不好吃的,让人不快的
technical ['teknikəl] adj. 技术的,工艺的
aversion [ə'və:ʃən] n. 嫌恶,憎恨
dilemma [di'lemə] n. 困境,进退两难
effective [i'fektiv] adj. 有效的,有影响的

