
AI is easily fooled: it can't tell a turtle from a rifle

Jonathan Vanian, November 13, 2017
When artificial intelligence works as intended, computers can quickly identify a cat in a photo. But when it goes wrong, it can mistake a picture of a turtle for a rifle.


Researchers from MIT’s computer science and artificial intelligence laboratory have discovered how to trick Google’s (GOOG, +0.66%) software that automatically recognizes objects in images. They created an algorithm that subtly modified a photo of a turtle so that Google’s image-recognition software thought it was a rifle. What’s especially noteworthy is that when the MIT team created a 3D printout of the turtle, Google’s software still thought it was a weapon rather than a reptile.

The confusion highlights how criminals could eventually exploit image-detecting software, especially as it becomes more ubiquitous in everyday life. Technology companies and their clients will have to consider the problem as they increasingly rely on artificial intelligence to handle vital jobs.

For example, airport scanning equipment could one day be built with technology that automatically identifies weapons in passenger luggage. But criminals could try to fool the detectors by modifying dangerous items like bombs so they are undetectable.

All the changes the MIT researchers made to the turtle image were unrecognizable to the human eye, explained Anish Athalye, an MIT researcher and PhD candidate in computer science who co-led the experiment.

After the original turtle image test, the researchers reproduced the reptile as a physical object to see if the modified image would still trick Google’s computers. The researchers then took photos and video of the 3-D printed turtle, and fed that data into Google’s image-recognition software.

Sure enough, Google’s software thought the turtles were rifles.

MIT publicized an academic paper about the experiment last week. The authors are submitting the paper, which builds on previous studies testing artificial intelligence, for further review at an upcoming AI conference.

Computers designed to automatically spot objects in images are based on neural networks, software that loosely imitates how the human brain learns. If researchers feed enough images of cats into these neural networks, they learn to recognize patterns in those images so they can eventually spot felines in photos without human help.
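The cat-spotting loop described above can be sketched in miniature. This is an illustrative toy, not Google's system: a single-layer model trained on made-up 4-pixel "images" rather than a deep convolutional network on photos, but the training recipe is the same shape — show labeled examples, nudge the weights to reduce error, repeat until the model spots the pattern on its own.

```python
import numpy as np

# Toy sketch of how a classifier learns from labeled examples.
# (Hypothetical data; real systems use deep networks on real photos.)
rng = np.random.default_rng(0)

# "Cat" images have bright left pixels; "non-cat" images bright right pixels.
cats = rng.normal(loc=[1, 1, 0, 0], scale=0.1, size=(50, 4))
non_cats = rng.normal(loc=[0, 0, 1, 1], scale=0.1, size=(50, 4))
X = np.vstack([cats, non_cats])
y = np.array([1] * 50 + [0] * 50)  # 1 = cat, 0 = not cat

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, lr = np.zeros(4), 0.0, 0.5
for _ in range(200):                      # gradient-descent training loop
    p = sigmoid(X @ w + b)                # predicted probability of "cat"
    grad_w = X.T @ (p - y) / len(y)       # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# A new, unseen "cat" image is now recognized without human help.
test_cat = np.array([0.9, 1.1, 0.05, -0.05])
print(sigmoid(test_cat @ w + b) > 0.5)
```

The key point for what follows: the model's decision is a smooth function of the input pixels, which is exactly what adversarial examples exploit.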

But these neural networks can sometimes stumble if they are fed certain types of pictures with bad lighting and obstructed objects. The way these neural networks work is still somewhat mysterious, Athalye explained, and researchers still don’t know why they may or may not accurately recognize something.

The MIT team’s algorithm created what are known as adversarial examples, essentially computer-manipulated images that were crafted to fool software that recognizes objects. While the turtle image may resemble a reptile to humans, the algorithm morphed it so that it shares unknown characteristics with an image of a rifle. The algorithm also took into account conditions like poor lighting or miscoloration that could have caused Google’s image-recognition software to misfire, Athalye said. The fact that Google’s software still mislabeled the turtle after it was 3D printed shows that the adversarial qualities embedded by the algorithm are retained in the physical world.
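The core move behind an adversarial example can be sketched with a fast-gradient-style step. This is a hedged toy, not the MIT team's actual code: their method additionally averages over random lighting, color, and viewpoint transformations so the perturbation survives 3D printing, whereas the sketch below perturbs a single input against a made-up linear classifier. The mechanism shown is the essential one: nudge the input a small amount in the direction that most increases the wrong class's score, and the label flips even though the change is tiny.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed, hypothetical classifier: score > 0.5 means "turtle", else "rifle".
w = np.array([2.0, -1.0, 1.5, -0.5])
b = 0.0

def classify(x):
    return "turtle" if sigmoid(x @ w + b) > 0.5 else "rifle"

x = np.array([0.6, 0.2, 0.5, 0.1])  # an input the model calls "turtle"
assert classify(x) == "turtle"

# Fast-gradient-style step: for this linear model, the gradient of the
# score with respect to the input is w itself, so stepping against
# sign(w) lowers the "turtle" score fastest per unit of change.
eps = 0.4                            # small perturbation budget
x_adv = x - eps * np.sign(w)

print(classify(x_adv))               # the label flips to "rifle"
```

A real attack does the same thing through a deep network's gradients, pixel by pixel, which is why the changes can be invisible to the human eye while still flipping the label.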

Although the research paper focuses on Google’s AI software, Athalye said that similar image-recognition tools from Microsoft (MSFT, +0.46%) and the University of Oxford also stumbled. Most other image-recognition software from companies like Facebook (FB, -0.40%) and Amazon (AMZN, +0.86%) would also likely blunder, he speculates, because of their similarities.

In addition to airport scanners, home security systems that rely on deep learning to recognize certain images may also be vulnerable to being fooled, Athalye explained.

Consider cameras that are increasingly set up to only record when they notice movement. To avoid being tripped by innocuous activity like cars driving by, cameras could be trained to ignore automobiles. To take advantage, however, criminals could wear t-shirts that have been specially designed to fool computers into thinking they see trucks instead of people. If so, burglars could easily bypass the security system.

Of course, this is all speculation, Athalye concedes. But, considering the frequency of hacking, it’s something worth considering. Athalye said he wants to test his idea and eventually make “adversarial t-shirts” that have the ability to “mess up a security camera.”

Google and other companies like Facebook are aware that hackers are trying to figure out ways to spoof their systems. For years, Google has been studying the kind of threats that Athalye and his MIT team produced. A Google spokesperson declined to comment on the MIT project, but pointed to two recent Google research papers that highlight the company’s work on combating the adversarial techniques.

“There are a lot of smart people working hard to make classifiers [like Google’s software] more robust,” Athalye said.
