
Producing disinformation is becoming ever easier

Source: Kekenet (可可英语) | Editor: Alisa
  


When it comes to disinformation, "social media took the cost of distribution to zero, and generative AI takes the cost of generation to zero," says Renee DiResta of the Stanford Internet Observatory.

Large language models such as GPT-4 make it easy to produce misleading news articles or social-media posts in huge quantities.

And AI can produce more than text. Cloning a voice using AI used to require minutes, or even hours, of sample audio.

Last year, however, researchers at Microsoft unveiled VALL-E, an AI model that is able to clone a person's voice from just a three-second clip of them speaking, and make it say any given text.

OpenAI, the American company behind GPT-4, has developed a similar tool, Voice Engine, which can convincingly clone any voice from a 15-second clip.

OpenAI has not yet released the tool, recognising "serious risks, which are especially top of mind in an election year".

Similarly, Sora, from OpenAI, can produce surprisingly realistic synthetic videos of up to a minute in length in response to text prompts.

OpenAI has yet to release Sora to the public, partly on the grounds that it could be used to create disinformation.

As well as providing new ways to discredit or misrepresent politicians, AI tools also raise the spectre of personalised disinformation, generated to appeal to small groups (think soccer moms in a specific town).

It may even be possible to "microtarget" individuals with disinformation, based on knowledge of their preferences, biases and concerns.

Though all of this is worrying, it is worth remembering that not all aspects of the technology are negative.

AI, it turns out, can be used for fighting disinformation as well as producing it.

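The closing point, that the same statistical machinery can be turned against disinformation, can be illustrated with a toy text classifier. The sketch below is a from-scratch bag-of-words Naive Bayes model trained on a handful of invented headlines; it is purely illustrative, and real fact-checking systems rely on large labelled corpora and far more capable models than this.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label). Returns per-label word and doc counts."""
    counts = {}          # label -> Counter of word frequencies
    totals = Counter()   # label -> number of training documents
    for text, label in examples:
        counts.setdefault(label, Counter()).update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label maximising log P(label) + sum of log P(word | label)."""
    vocab = set()
    for c in counts.values():
        vocab.update(c)
    n_docs = sum(totals.values())
    best_label, best_score = None, -math.inf
    for label, c in counts.items():
        score = math.log(totals[label] / n_docs)   # class prior
        n_words = sum(c.values())
        for w in tokenize(text):
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((c[w] + 1) / (n_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training data, purely for illustration.
examples = [
    ("shocking secret cure doctors hate", "disinfo"),
    ("you won't believe this miracle trick", "disinfo"),
    ("election stolen by secret cabal", "disinfo"),
    ("central bank raises interest rates", "legit"),
    ("parliament passes new budget bill", "legit"),
    ("researchers publish peer reviewed study", "legit"),
]
counts, totals = train(examples)
print(classify("secret miracle trick doctors hate", counts, totals))   # → disinfo
print(classify("bank raises rates after budget study", counts, totals))  # → legit
```

Production systems replace the bag-of-words features with learned embeddings and add signals such as source reputation, but the underlying idea, scoring content against patterns learned from labelled examples, is the same.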