The world’s leading AI scientists are urging governments to work together to regulate the technology before it’s too late.
Three winners of the Turing Award, computer science’s equivalent of the Nobel Prize, who helped spearhead the research and development of AI, joined a dozen top scientists from across the world in signing an open letter calling for better safeguards for advancing AI.
The scientists claimed that as AI technology rapidly advances, any mistake or misuse could bring grave consequences for the human race.
“Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity,” the scientists wrote in the letter. They also warned that with the rapid pace of AI development, these “catastrophic outcomes” could come any day.
The scientists outlined the following steps to begin addressing the risk of malicious AI use immediately:
Government AI safety bodies
Governments need to collaborate on AI safety precautions. Some of the scientists’ ideas included encouraging countries to develop specific AI authorities that respond to AI “incidents” and risks within their borders. Those authorities would ideally cooperate with each other, and in the long term, a new international body should be created to prevent the development of AI models that pose risks to the world.
“This body would ensure states adopt and implement a minimal set of effective safety preparedness measures, including model registration, disclosure, and tripwires,” the letter read.
Developer AI safety pledges
Another idea is to require developers to be intentional about guaranteeing the safety of their models, promising that they will not cross red lines. Developers would vow not to create AI “that can autonomously replicate, improve, seek power or deceive their creators, or those that enable building weapons of mass destruction and conducting cyberattacks,” as laid out in a statement by top scientists during a meeting in Beijing last year.
Independent research and tech checks on AI
Another proposal is to create a series of global AI safety and verification funds, bankrolled by governments, philanthropists, and corporations, which would sponsor independent research to help develop better technological checks on AI.
Among the experts imploring governments to act on AI safety were three Turing Award winners: Andrew Yao, the mentor of some of China’s most successful tech entrepreneurs; Yoshua Bengio, one of the most cited computer scientists in the world; and Geoffrey Hinton, who taught OpenAI cofounder and former chief scientist Ilya Sutskever and spent a decade working on machine learning at Google.
Cooperation and AI ethics
In the letter, the scientists lauded existing international cooperation on AI, such as a May meeting in Geneva between leaders from the U.S. and China to discuss AI risks. Yet they said more cooperation is needed.
The development of AI should come with ethical norms for engineers, similar to those that apply to doctors or lawyers, the scientists argued. Governments should think of AI less as an exciting new technology and more as a global public good.
“Collectively, we must prepare to avert the attendant catastrophic risks that could arrive at any time,” the letter read.