And Steve and I envision that such proof checkers get built into all our compute hardware, so it just becomes impossible to run very unsafe code.
What if the AI, though, isn't able to write that AI tool for you?
Then there's another possibility.
You train an AI to first just learn to do what you want and then you use a different AI to extract out the learned algorithm and knowledge for you, like an AI neuroscientist.
This is in the spirit of the field of mechanistic interpretability, which is making really impressive rapid progress.
Provably safe systems are clearly not impossible.
Let's look at a simple example of where we first machine-learn an algorithm from data and then distill it out in the form of code that provably meets spec, OK?
Let’s do it with an algorithm that you probably learned in first grade, addition, where you loop over the digits from right to left, and sometimes you do a carry.
We'll do it in binary, as if you were counting on two fingers instead of ten.
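The grade-school algorithm just described, looping over the digits from right to left and sometimes doing a carry, can be sketched in a few lines of Python. This is only an illustration of the algorithm being learned, not the talk's actual trained network or distilled program:

```python
def binary_add(a: str, b: str) -> str:
    """Add two binary strings the grade-school way: right to left, with carries."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # pad to equal length
    carry = 0
    digits = []
    for i in range(width - 1, -1, -1):  # loop over digits from right to left
        total = int(a[i]) + int(b[i]) + carry
        digits.append(str(total % 2))   # the digit we write down
        carry = total // 2              # sometimes you do a carry
    if carry:
        digits.append("1")
    return "".join(reversed(digits))
```

For example, `binary_add("101", "11")` adds 5 and 3 in binary and returns `"1000"`, which is 8.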
And we first train a recurrent neural network, never mind the details, to nail the task.
So now you have this algorithm, but you don't understand how it works: it's a black box, defined by a bunch of tables of numbers that we, in nerd speak, call parameters.
Then we use an AI tool we built to automatically distill out from this the learned algorithm in the form of a Python program.
And then we use the formal verification tool known as Dafny to prove that this program correctly adds any numbers, not just the numbers that were in your training data.
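To make "provably meets spec" concrete: a verifier like Dafny proves the property for all inputs, which no finite test can do. As a rough illustration only, here is a hypothetical harness that checks a distilled adder against the mathematical spec over a bounded range; `distilled_add` is a stand-in, not the actual program extracted in the talk:

```python
def distilled_add(a: str, b: str) -> str:
    """Hypothetical stand-in for the Python program distilled from the
    trained network (the real one would implement the digit-by-digit loop)."""
    return bin(int(a, 2) + int(b, 2))[2:]

def meets_spec(add_fn, max_bits: int = 6) -> bool:
    """Check add_fn against the spec x + y for every input pair up to
    max_bits bits. A formal tool like Dafny proves this for ALL inputs;
    exhaustive testing over a finite range is only an approximation."""
    for x in range(2 ** max_bits):
        for y in range(2 ** max_bits):
            if int(add_fn(bin(x)[2:], bin(y)[2:]), 2) != x + y:
                return False
    return True
```

The point of the formal step is exactly the gap this harness cannot close: a proof covers inputs far outside the training data, including ones no test suite will ever enumerate.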
So in summary, provably safe AI, I'm convinced, is possible, but it's going to take time and work.
And in the meantime, let's remember that all the AI benefits that most people are excited about actually don't require super-intelligence.
We can have a long and amazing future with AI.
So let's not pause AI.
Let's just pause the reckless race to super-intelligence.
Let's stop obsessively training ever-larger models that we don't understand.
Let's heed the warning from ancient Greece and not succumb to hubris, like in the story of Icarus.
Because artificial intelligence is giving us incredible intellectual wings with which we can do things beyond our wildest dreams if we stop obsessively trying to fly to the sun.
Thank you.