
阻止網(wǎng)絡(luò)暴民,人工智能可以做到么?

阻止網(wǎng)絡(luò)暴民,人工智能可以做到么?

Jeff John Roberts 2017-02-22
你有沒(méi)有在社交媒體被網(wǎng)絡(luò)暴民圍攻過(guò)?戰(zhàn)勝他們或許很困難,只要有點(diǎn)仇恨和謾罵的火種,社交媒體上的人們只能棄械投降。但是,現(xiàn)在下悲觀結(jié)論可能為時(shí)過(guò)早。一項(xiàng)新策略有望解決這個(gè)問(wèn)題,恢復(fù)互聯(lián)網(wǎng)文明討論的氛圍。

Photograph courtesy of Rebecca Greenfield


你有沒(méi)有在社交媒體被網(wǎng)絡(luò)暴民圍攻過(guò)?我有過(guò)。去年12月我發(fā)了一條推特諷刺白人優(yōu)越論者大衛(wèi)·杜克,結(jié)果他的支持者一擁而上,把我的推特生生變成了骯臟的下水道,充斥著納粹般瘋狂又難聽(tīng)的人身攻擊言論,而且持續(xù)了好幾天。

沒(méi)人能戰(zhàn)勝互聯(lián)網(wǎng)上的暴民。只要有點(diǎn)仇恨和謾罵的火種,社交媒體上的人們只能棄械投降,網(wǎng)站也只好撤下評(píng)論功能。誰(shuí)想生活在全是馬屁精和瘋子的網(wǎng)絡(luò)社區(qū)里呢?

幸運(yùn)的是,現(xiàn)在下悲觀結(jié)論可能為時(shí)過(guò)早。一項(xiàng)新策略有望解決暴民問(wèn)題,恢復(fù)互聯(lián)網(wǎng)文明討論的氛圍。這個(gè)項(xiàng)目由谷歌母公司Alphabet旗下的智庫(kù)Jigsaw主導(dǎo),主要依靠人工智能手段,可以解決以往無(wú)法審核海量評(píng)論的頭疼問(wèn)題。

為了解釋Jigsaw的具體做法,首席研究科學(xué)家盧卡斯·迪克森將網(wǎng)絡(luò)暴民問(wèn)題與所謂的拒絕服務(wù)攻擊相比較,拒絕服務(wù)供給是指攻擊者故意用垃圾信息淹沒(méi)網(wǎng)站,導(dǎo)致服務(wù)器過(guò)載最后下線。

“網(wǎng)絡(luò)暴民的區(qū)別是不會(huì)用垃圾信息攻擊網(wǎng)站,而是攻擊評(píng)論區(qū)或是社交媒體賬戶或話題標(biāo)簽,結(jié)果是其他人一句話都插不上,暴民掌握全部話語(yǔ)權(quán)。”迪克森表示。

大量惡意評(píng)論不僅會(huì)對(duì)個(gè)人造成困擾,對(duì)媒體公司和零售商也是威脅,因?yàn)楝F(xiàn)在很多商業(yè)模式都圍繞著網(wǎng)絡(luò)社區(qū)展開(kāi)。Jigsaw研究網(wǎng)絡(luò)暴民時(shí),已經(jīng)開(kāi)始量化損失。舉個(gè)例子,如果有維基百科(Wikipedia)的編輯受到人身攻擊,Jigsaw會(huì)測(cè)算其后該編輯在維基百科上貢獻(xiàn)詞條頻率與受攻擊之間的關(guān)系。

要解決當(dāng)前扭曲的在線討論氛圍,根源還在于海量數(shù)據(jù)和深度學(xué)習(xí),這也是人工智能領(lǐng)域發(fā)展迅速的一塊,主要目標(biāo)是模仿人體大腦的神經(jīng)網(wǎng)絡(luò)。近來(lái)深度學(xué)習(xí)已經(jīng)在谷歌翻譯工具上實(shí)現(xiàn)了了不起的突破。

說(shuō)到評(píng)論,Jigsaw讓機(jī)器學(xué)習(xí)《紐約時(shí)報(bào)》(New York Times)和維基百科里的上千萬(wàn)條評(píng)論,學(xué)會(huì)識(shí)別言辭中的攻擊性以及文不對(duì)題的發(fā)帖。直接影響是:《紐約時(shí)報(bào)》之類(lèi)的網(wǎng)站之前只有能力處理10%的文章評(píng)論,但在采用新算法后可以實(shí)現(xiàn)100%覆蓋。

雖然每家媒體評(píng)論區(qū)的調(diào)性和詞匯差別可能很大,但Jigsaw表示可以調(diào)整審核工具,適用各種網(wǎng)站。這就意味著即便是小博主或網(wǎng)絡(luò)零售商,也能放心放開(kāi)評(píng)論功能,不用擔(dān)心被網(wǎng)絡(luò)暴民攻陷。

技術(shù)愛(ài)好者都很關(guān)注Jigsaw的動(dòng)向。最近,《連線》雜志(Wired)上的一篇文章將Jigsaw的新項(xiàng)目稱為“互聯(lián)網(wǎng)正義聯(lián)盟”,還夸贊了谷歌旗下一系列行善的項(xiàng)目。

但也有些專家表示,Jigsaw團(tuán)隊(duì)可能低估了問(wèn)題的難度。

最近比較高調(diào)的機(jī)器學(xué)習(xí)項(xiàng)目主要關(guān)注點(diǎn)在圖片識(shí)別和翻譯文本上。但互聯(lián)網(wǎng)的對(duì)話經(jīng)常很看語(yǔ)境:舉例來(lái)說(shuō),很明顯應(yīng)該讓機(jī)器學(xué)習(xí)項(xiàng)目從所有評(píng)論中屏蔽“賤貨”,但有時(shí)人們用到這個(gè)詞并無(wú)惡意,卻同樣會(huì)被算法屏蔽,比如有人會(huì)說(shuō)“生活就像個(gè)賤貨。”或“其實(shí)我本來(lái)不想抱怨工作的,但是……”想教會(huì)機(jī)器從模糊的語(yǔ)境中辨清真實(shí)意思其實(shí)并不容易。

“機(jī)器學(xué)習(xí)能學(xué)會(huì)語(yǔ)言規(guī)范,但沒(méi)法理解文字背后的語(yǔ)境和感情,尤其是像推特這么簡(jiǎn)短的文字。這是人類(lèi)終其一生才能學(xué)會(huì)的東西?!鼻肮雀柢浖こ處煷笮l(wèi)·奧爾巴哈表示。他補(bǔ)充說(shuō),Jigsaw的項(xiàng)目可以向《紐約時(shí)報(bào)》之類(lèi)的網(wǎng)站提供更好的審核工具,但到了推特和Reddit等更自由的論壇,能發(fā)揮的作用就不大了。

種種質(zhì)疑并未讓Jigsaw的迪克森退縮。他指出,網(wǎng)絡(luò)暴民跟拒絕服務(wù)攻擊一樣都是永遠(yuǎn)無(wú)法徹底解決的問(wèn)題,但其影響是可以減弱的。迪克森相信,Jigsaw利用機(jī)器學(xué)習(xí)技術(shù)方面的最新成果可以控制網(wǎng)絡(luò)暴民的威力,讓和平討論重獲優(yōu)勢(shì)。

Jigsaw的研究人員還指出,看起來(lái)像暴民團(tuán)伙的攻擊——即突然跳出來(lái)一起罵臟話的一群人經(jīng)常是個(gè)人行為,有時(shí)是某些組織設(shè)置的自動(dòng)程序模仿暴民團(tuán)伙。Jigsaw的識(shí)別工具正飛速學(xué)習(xí)迅速識(shí)別并阻止這些行為。

此外,有人質(zhì)疑道高一尺魔高一丈,網(wǎng)絡(luò)暴民會(huì)根據(jù)審核工具的特點(diǎn)調(diào)整謾罵方式,從而避開(kāi)屏蔽,迪克森對(duì)此也有解釋。

“審核工具越多,攻擊的花招必然也會(huì)越多,”迪克森表示。“理想情況是攻擊方式花哨到?jīng)]人看得懂,沒(méi)人能懂也就沒(méi)效果,那么攻擊自然會(huì)停止。”

那些被社交媒體暴民趕走的人們

2015年到2016年

從NPR到路透,越來(lái)越多的大眾媒體網(wǎng)站和博客關(guān)停了評(píng)論功能。

Have you ever been attacked by trolls on social media? I have. In December a tweet of mine mocking white supremacist David Duke led his supporters to turn my Twitter account into an unholy sewer of Nazi ravings and disturbing personal abuse. It went on for days.

We’re losing the Internet war with the trolls. Faced with a torrent of hate and abuse, people are giving up on social media, and websites are removing comment features. Who wants to be part of an online community ruled by creeps and crazies?

Fortunately, this pessimism may be premature. A new strategy promises to tame the trolls and reinvigorate civil discussion on the Internet. Hatched by Jigsaw, an in-house think tank at Google’s parent company, Alphabet, the tool relies on artificial intelligence and could solve the once-impossible task of vetting floods of online comments.

To explain what Jigsaw is up against, chief research scientist Lucas Dixon compares the troll problem to so-called denial-of-service attacks in which attackers flood a website with garbage traffic in order to knock it off-line.

“Instead of flooding your website with traffic, it’s flooding the comment section or your social media or hashtag so that no one else can have a word, and basically control the conversation,” says Dixon.

Such surges of toxic comments are a threat not only to individuals, but also to media companies and retailers—many of whose business models revolve around online communities. As part of its research on trolls, Jigsaw is beginning to quantify the damage they do. In the case of Wikipedia, for instance, Jigsaw can measure the correlation between a personal attack on a Wikipedia editor and how frequently that editor contributes to the site afterward.

The solution to today’s derailed online discourse lies in reams of data and deep learning, a fast-evolving subset of artificial intelligence that mimics the neural networks of the brain. Deep learning gave rise to recent and remarkable breakthroughs in Google’s translation tools.

In the case of comments, Jigsaw is using millions of comments from the New York Times and Wikipedia to train machines to recognize traits like aggression and irrelevancy. The implication: A site like the Times, which has the resources to moderate only about 10% of its articles for comments, could soon deploy algorithms to expand those efforts 10-fold.
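The training approach described above can be illustrated with a toy bag-of-words classifier. This is a minimal sketch, not Jigsaw's actual system (the article does not describe its architecture): a pure-Python naive Bayes scorer stands in for the deep-learning models mentioned, trained on a handful of hypothetical labeled comments.

```python
import math
from collections import Counter

def train(examples):
    """Count word occurrences per class from (text, label) pairs.
    label is 1 for toxic, 0 for acceptable."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    vocab = set(counts[0]) | set(counts[1])
    return counts, totals, vocab

def toxicity_score(text, counts, totals, vocab):
    """Log-odds that a comment is toxic, with add-one smoothing.
    Positive means the comment looks more like the toxic training data."""
    score = 0.0
    v = len(vocab)
    for word in text.lower().split():
        p_tox = (counts[1][word] + 1) / (totals[1] + v)
        p_ok = (counts[0][word] + 1) / (totals[0] + v)
        score += math.log(p_tox / p_ok)
    return score

# Hypothetical labeled comments standing in for the Times/Wikipedia corpus.
training_data = [
    ("you are an idiot and a disgrace", 1),
    ("go away no one wants you here", 1),
    ("what a thoughtful and interesting article", 0),
    ("thanks for sharing this useful analysis", 0),
]
counts, totals, vocab = train(training_data)
print(toxicity_score("you idiot", counts, totals, vocab) > 0)            # True
print(toxicity_score("thoughtful analysis", counts, totals, vocab) > 0)  # False
```

A real system would use far richer features and models, but the moderation economics are the same: once trained, the scorer runs over every comment at negligible cost, which is what lets coverage jump from 10% to all articles.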

While the tone and vocabulary on one media outlet comment section may be radically different from another’s, Jigsaw says it will be able to adapt its tools for use across a wide variety of websites. In practice, this means a small blog or online retailer will be able to turn on comments without fear of turning a site into a vortex of trolls.

Technophiles seem keen on what Jigsaw is doing. A recent Wired feature dubbed the unit the “Internet Justice League” and praised its range of do-gooder projects.

But some experts say that the Jigsaw team may be underestimating the challenge.

Recent high-profile machine learning projects focused on identifying images and translating text. But Internet conversations are highly contextual: While it might seem obvious, for example, to train a machine learning program to purge the word “bitch” from any online comment, the same algorithm might also flag posts in which people are using the term more innocuously—as in, “Life’s a bitch” or “I hate to bitch about my job, but …” Teaching a computer to reliably catch the slur won’t be easy.
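The false-positive problem is easy to reproduce. The hypothetical blocklist moderator below, written for illustration only, flags the slur but also flags the innocuous idiomatic uses quoted above, because it has no notion of context.

```python
# A naive blocklist moderator: flags any comment containing a listed word,
# with no sense of context -- exactly the failure mode described above.
BLOCKLIST = {"bitch"}

def naive_flag(comment: str) -> bool:
    """Return True if any word in the comment, stripped of surrounding
    punctuation and lowercased, appears on the blocklist."""
    words = {w.strip(".,!?'\"").lower() for w in comment.split()}
    return bool(words & BLOCKLIST)

print(naive_flag("You're a bitch"))                # True: the insult is caught...
print(naive_flag("Life's a bitch."))               # True: ...but so is the idiom
print(naive_flag("I hate to bitch about my job"))  # True: and the innocuous verb
print(naive_flag("Have a nice day"))               # False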

“Machine learning can understand style but not context or emotion behind a written statement, especially something as short as a tweet. This is stuff it takes a human a lifetime to learn,” says David Auerbach, a former Google software engineer. He adds that the Jigsaw initiative will lead to better moderation tools for sites like the New York Times but will fall short when it comes to more freewheeling forums like Twitter and Reddit.

Such skepticism doesn’t faze Jigsaw’s Dixon. He points out that, like denial-of-service attacks, trolls are a problem that will never be solved but their effect can be mitigated. Using the recent leaps in machine learning technology, Jigsaw will tame the trolls enough to let civility regain the upper hand, Dixon believes.

Jigsaw researchers also point out that gangs of trolls—the sort that pop up and spew vile comments en masse—are often a single individual or organization deploying bots to imitate a mob. And Jigsaw’s tools are rapidly growing adept at identifying and stifling such tactics.
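One heuristic for catching such imitation (an assumption for illustration; the article does not describe Jigsaw's detection methods) is to look for near-identical messages posted by many distinct accounts, since a single operator driving bots tends to reuse the same text.

```python
import re
from collections import defaultdict

def normalize(text):
    """Lowercase and strip punctuation so trivial variations collapse."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def suspected_mobs(comments, min_accounts=3):
    """comments: list of (account, text) pairs. Returns the normalized
    messages posted by at least `min_accounts` distinct accounts."""
    by_message = defaultdict(set)
    for account, text in comments:
        by_message[normalize(text)].add(account)
    return {msg for msg, accounts in by_message.items()
            if len(accounts) >= min_accounts}

# Hypothetical feed: three "accounts" post the same line with cosmetic tweaks.
feed = [
    ("user1", "Go back where you came from!!"),
    ("user2", "go back where you came from"),
    ("user3", "GO BACK where you came from!"),
    ("user4", "Interesting piece, thanks."),
]
print(suspected_mobs(feed))  # {'go back where you came from'}
```

Real detection would also weigh timing, account age, and paraphrase similarity, but the core signal is the same: apparent crowds that collapse into one source.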

Dixon also has an answer to the argument that taming trolls won’t work because the trolls will simply adapt their insults whenever a moderating tool catches on to them.

“The more we introduce tools, the more creative the attacks will be,” Dixon says. “The dream is the attacks at some level get so creative no one understands them anymore and they stop being attacks.”

***

Driven from social media by trolls

2015–16

Increasingly, popular media sites and blogs, from NPR to Reuters, are eliminating comments from their pages.


July 2015

Ellen Pao, interim CEO of Reddit, resigns in the wake of what she calls “one of the largest trolling attacks in history.”



Translated by Xia Lin


July 2016

Movie actress Leslie Jones quits Twitter after trolls send a barrage of racist and sexual images. In one of her final tweets, she writes, “You won’t believe the evil.”

***

A version of this article appears in the February 1, 2017 issue of Fortune with the headline "Troll Hunters."

掃描二維碼下載財(cái)富APP
内地欧美日韩亚洲美女激情爽爽| 一卡二卡亚洲乱码一卡二卡| 午夜欧美精品久久久久久久| 久久一二日韩欧美综合网| 满少妇高潮惨叫久久久| 国产三级精品三级男人的天堂| 色婷婷亚洲一区二区三区| 免费毛片手机在线播放| 欧美高清在线视频一区二区| 中文字幕被公侵犯的漂亮人妻| 国产免费无码一区二区视频| AV一区二区三区人妻少妇| 久久精品国产亚洲a∨麻豆| 免费网站看V片在线18禁无码| 成人爽a毛片一区二区免费| 日韩国产欧美一区二区三区| 国产精品香港三级在线| 国产av人人夜夜澡人人爽麻豆| 激情内射亚洲一区二区三区爱妻| 国产精品亚洲精品日韩已方| 国产超碰人人爽人人做人人添| 亚洲熟妇AV一区二区三区漫画| 久久天天躁狠狠躁夜夜2020| 国产亚洲蜜臀AV在线播放| 久久久国产精品萌白酱免费| 欧美最猛黑人xxxx黑人猛交| 人妻换人妻AA视频麻豆| 亚洲国产精品无码久久| 人妻在线日韩免费视频| 久久99人妻无码精品一区二区| 国产WW久久久久久久久久| 久久人爽人人爽人人片| 黄色网站在线观看视频| 久久久精品国产亚洲成人满18免费网站| 久久午夜无码鲁丝片秋霞| 久久婷婷五月综合色奶水99啪| 欧美国产在线观看综合| 亚洲一区不卡免费在线观看| 欧美日韩动漫国产在线播放| 天天天天躁天天爱天天碰2018| 中文字幕有码无码在线观看|