
Fake Fingerprints Are Growing More Rampant Thanks to AI, and You Should Be on Guard


Jonathan Vanian, December 3, 2018
Fake digital fingerprints created with artificial intelligence can fool the fingerprint readers on smartphones.


Fake digital fingerprints created by artificial intelligence can fool fingerprint scanners on smartphones, according to new research, raising the risk of hackers using the vulnerability to steal from victims’ online bank accounts.

A recent paper by New York University and Michigan State University researchers detailed how deep learning technologies could be used to weaken biometric security systems. The research, supported by a United States National Science Foundation grant, won a best paper award at a conference on biometrics and cybersecurity in October.

Smartphone makers like Apple and Samsung typically use biometric technology in their phones so that people can use fingerprints to easily unlock their devices instead of entering a passcode. Hoping to add some of that convenience, major banks like Wells Fargo are increasingly letting customers access their checking accounts using their fingerprints.

But while fingerprint scanners may be convenient, researchers have found that the software that runs these systems can be fooled. The discovery is important because it underscores how criminals can potentially use cutting-edge AI technologies to do an end run around conventional cybersecurity.

The latest paper about the problem builds on previous research published last year by some of the same NYU and Michigan State researchers. The authors of that paper discovered that they could fool some fingerprint security systems by using either digitally modified or partial images of real fingerprints. These so-called MasterPrints could trick biometric security systems that only rely on verifying certain portions of a fingerprint image rather than the entire print.

One irony is that humans who inspected MasterPrints could likely tell immediately that they were fake, because they contained only partial fingerprints. Software, it turns out, could not.

In the new paper, the researchers used neural networks, software that learns from the data it is trained on, to create convincing-looking digital fingerprints that performed even better than the images used in the earlier study. Not only did the fake fingerprints look real, they also contained hidden properties, undetectable by the human eye, that could confuse some fingerprint scanners.

On the left, examples of real fingerprints; on the right, fake fingerprint images generated by AI.


Translator: Charlie

Reviewer: Xia Lin

Julian Togelius, one of the paper’s authors and an NYU associate computer science professor, said the team created the fake fingerprints, dubbed DeepMasterPrints, using a variant of neural network technology called “generative adversarial networks (GANs),” which he said “have taken the AI world by storm for the last two years.”

Researchers have used GANs to create convincing-looking but fabricated photos and videos known as “deep fakes,” which some lawmakers worry could be used to create fake videos and propaganda that the general public would think was true. For example, several researchers have described how they could use AI techniques to create fabricated videos of former President Barack Obama giving speeches that never took place, among other things.

AI-altered photos are also fooling computers, as MIT researchers showed last year when they created an image of a turtle that confused Google’s image-recognition software. The technology mistook the turtle for a rifle because it identified hidden elements embedded in the image that shared certain properties with an image of a gun, all of which were unnoticeable by the human eye.
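The turtle episode relies on what are known as adversarial perturbations. The toy sketch below is purely illustrative, not the MIT work: the six-"pixel" image, the classifier weights, and the class labels are all made up, and real attacks operate on full-size images with far subtler changes. It shows the basic idea of nudging each pixel slightly in the direction that raises the wrong class's score:

```python
# Toy adversarial perturbation against a hypothetical linear classifier.
# All numbers here are invented for illustration.

def classify(image, weights, bias):
    """Return the label a simple linear classifier assigns to an image."""
    score = sum(p * w for p, w in zip(image, weights)) + bias
    return "rifle" if score > 0 else "turtle"

# A hypothetical six-"pixel" turtle image and classifier parameters.
image = [0.2, 0.4, 0.1, 0.3, 0.5, 0.2]
weights = [1.0, -2.0, 0.5, -1.0, 1.5, -0.5]
bias = -0.5

print(classify(image, weights, bias))  # -> "turtle"

# FGSM-style attack: shift every pixel by a small epsilon in the
# direction that increases the "rifle" score.
epsilon = 0.25
sign = lambda w: (w > 0) - (w < 0)
adversarial = [p + epsilon * sign(w) for p, w in zip(image, weights)]

print(classify(adversarial, weights, bias))  # -> "rifle"
```

Each pixel moves by at most 0.25, yet the summed effect of all the small shifts is enough to push the classifier's score past its decision boundary.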

With GANs, researchers typically use a combination of two neural networks that work together to create realistic images embedded with mysterious properties that can fool image-recognition software. Using thousands of publicly available fingerprint images, the researchers trained one neural network to recognize real fingerprint images, and trained the other to create its own fake fingerprints.

They then fed the second neural network’s fake fingerprint images into the first neural network to test how effective they were, explained Philip Bontrager, an NYU PhD candidate in computer science who also worked on the paper. Over time, the second neural network learned to generate realistic-looking fingerprint images that could trick the other neural network.
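The adversarial dynamic described above can be caricatured in a few lines. The sketch below is not the researchers' implementation: it replaces fingerprint images with single numbers and gradient descent with a simple hill-climb, purely to show how a generator keeps adjusting until the network judging it accepts its output as real:

```python
import random

random.seed(0)

# Toy stand-in for the GAN dynamic: "real fingerprints" are numbers
# clustered near 5.0; the generator starts far away and gradually
# moves its samples toward the region the discriminator accepts.

class Generator:
    def __init__(self):
        self.mean = 0.0  # starts producing obviously fake samples

    def sample(self):
        return random.gauss(self.mean, 0.5)

class Discriminator:
    def __init__(self):
        self.center = 5.0  # believes real data lives near 5.0

    def is_real(self, x, tolerance=1.0):
        return abs(x - self.center) < tolerance

gen, disc = Generator(), Discriminator()

# Adversarial loop: whenever a fake is rejected, the generator nudges
# itself toward the discriminator's notion of "real" (a hill-climb
# standing in for gradient descent).
for step in range(200):
    fake = gen.sample()
    if not disc.is_real(fake):
        gen.mean += 0.05 * (disc.center - gen.mean)

fooled = sum(disc.is_real(gen.sample()) for _ in range(100))
print(f"generator mean after training: {gen.mean:.2f}")
print(f"fakes accepted as real: {fooled}/100")
```

In a real GAN both networks are trained, so the discriminator also sharpens over time, which forces the generator to produce ever more convincing fakes.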

The researchers then fed the fake fingerprint images into fingerprint-scanning software sold by tech companies like Innovatrics and Neurotechnology to see if they could be fooled. Each time a fake fingerprint image tricked one of the commercial systems, the researchers were able to improve their technology to produce more convincing fakes.

The neural network responsible for creating the bogus images embeds a random set of computer code that Bontrager referred to as “noisy data” that can fool fingerprint image recognition software. Although the researchers were able to calibrate this “noisy data” to trip the fingerprint software using what’s known as an evolutionary algorithm, it’s unclear what this code does to the image, since humans are unable to see its impact.
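The evolutionary calibration of that "noisy data" can be sketched as a simple (1+1) evolution strategy over the generator's latent input. Everything below is hypothetical: `matcher_score` is an invented stand-in for a black-box commercial matcher, and the eight-number latent vector stands in for the input the researchers evolved:

```python
import random

random.seed(1)

LATENT_DIM = 8

def matcher_score(latent):
    """Invented stand-in for a commercial matcher's response to the
    image a generator would produce from this latent vector. This toy
    matcher is secretly fooled best by latents near a hidden target."""
    hidden_target = [0.7] * LATENT_DIM
    dist = sum((a - b) ** 2 for a, b in zip(latent, hidden_target))
    return 1.0 / (1.0 + dist)

def mutate(latent, sigma=0.1):
    """Add small Gaussian noise to every latent coordinate."""
    return [x + random.gauss(0, sigma) for x in latent]

# (1+1) evolution strategy: keep a mutated latent only if it raises
# the black-box matcher's score.
best = [random.uniform(-1, 1) for _ in range(LATENT_DIM)]
best_score = matcher_score(best)
for _ in range(2000):
    child = mutate(best)
    score = matcher_score(child)
    if score > best_score:
        best, best_score = child, score

print(f"final matcher score: {best_score:.3f}")
```

The key point the sketch preserves is that the attacker never needs to understand *why* a latent vector scores well; treating the matcher as a black box and keeping whatever mutations score higher is enough.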

To be sure, criminals face a number of obstacles cracking fingerprint scanners. For one, many fingerprint systems rely on other security checks like heat sensors that are used to detect human fingers, Bontrager explained.

But these newly developed DeepMasterPrints show that AI technology can be used for nefarious purposes, which means that cybersecurity firms, banks, smartphone makers, and other companies using biometric technology must constantly improve their systems to keep up with rapid advances in AI.

Togelius said that prior to the paper, researchers didn’t consider AI-created fake images to be a “serious threat” to biometric systems. Since its publication, he said, unspecified “large companies” have been contacting him to learn more about the possible security threats posed by fake fingerprints.

Dr. Justas Kranauskas, a research and development manager for Neurotechnology, the maker of fingerprint sensor software, told Fortune in an email that the recent research paper about fooling fingerprint readers “touched” on an important point. But he pointed out that his company uses other kinds of security that the researchers did not incorporate into their study that would, as he put it, ensure a “very low false acceptance risk in real applications.”

Kranauskas also said that Neurotechnology recommends that its corporate customers set their fingerprint-scanning software at a higher security level than the levels the researchers used in their paper.

Bontrager, the researcher, noted, however, that the higher the fingerprint security level, the less convenient it is for users, because companies typically want some leeway so that customers don’t have to repeatedly press their fingers on scanners to get accurate reads.
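The tradeoff Bontrager describes can be illustrated with made-up match-score distributions: raising the acceptance threshold lowers the false accept rate (spoofs getting in) but raises the false reject rate (legitimate users forced to retry). The numbers below are invented for illustration only:

```python
import random

random.seed(2)

# Made-up match scores: genuine attempts score high, while impostor or
# spoof attempts score lower but overlap with the genuine range.
genuine = [random.gauss(0.80, 0.08) for _ in range(10_000)]
impostor = [random.gauss(0.55, 0.10) for _ in range(10_000)]

def rates(threshold):
    """False accept rate and false reject rate at a given threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)  # spoofs accepted
    frr = sum(s < threshold for s in genuine) / len(genuine)     # real users rejected
    return far, frr

for threshold in (0.60, 0.70, 0.80):
    far, frr = rates(threshold)
    print(f"threshold {threshold:.2f}: FAR {far:.1%}  FRR {frr:.1%}")
```

Sliding the threshold up shrinks FAR while FRR grows, which is exactly the security-versus-convenience dial that vendors and their customers have to set.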

“So obviously, if you choose a high security setting, [spoofing attacks] are less successful,” Bontrager said. “But then it is less convenient,” he added.
