
Google and Facebook's biggest problem isn't controlling their platforms. It's managing public expectations.

By Christopher Koopman and Megan Hansen, December 26, 2018
While accusing these companies of bias is easy, it's also wrong.

Google CEO Sundar Pichai testifies before the House Judiciary Committee on December 11, 2018. Given the scale and complexity of content moderation, results will obviously vary widely. That doesn't mean Google and Facebook are biased. Image credit: Alex Wong—Getty Images

Google CEO Sundar Pichai’s testimony before the House Judiciary Committee on Dec. 11 is just the latest example of a tech company having to respond to accusations of bias. While Pichai obviously spent much of his time defending Google against allegations of bias in search results on Google and YouTube, he isn’t alone. Platforms like Facebook, for instance, have been accused both of “catering to conservatives” and of acting as a network of “incubators for far-left liberal ideologies.”

While accusing these companies of bias is easy, it’s also wrong.

As Rep. Zoe Lofgren (D-CA) correctly pointed out during Pichai’s testimony, “It’s not some little man sitting behind the curtain figuring out what [companies] are going to show the users.” Instead, these companies—and the people who work there—have been tasked with moderating content created by billions of users across the globe while also having to satisfy both the broader public and competing lawmakers who aren’t afraid to throw their weight around. Moreover, these companies are taking on this impossible task of moderating while also filtering content in a consistent and ideologically neutral way. And, for the most part, they are doing an admirable job.

Given the complexity and scale of the task, we shouldn’t be surprised that results vary. As Pichai noted, Google served over 3 trillion searches last year, and 15% of the searches Google sees per day have never been entered before on the platform. Do the math, and that means somewhere around 450 billion of the searches Google served last year were brand-new queries.
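
Spelling out the back-of-the-envelope arithmetic behind that figure:

    0.15 × 3,000,000,000,000 searches ≈ 450,000,000,000 never-before-seen queries a year

At that scale, even a vanishingly small error rate translates into millions of results that someone, somewhere, will consider wrong.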

Inevitably, many people will be left unsatisfied with how their preferred commentators and ideological views are returned in those searches, or moderated on other platforms. Mistakes will occur, trade-offs will be made, and there will always be claims that content moderation is driven by bias and animus.

Tech companies are attempting to achieve many different—sometimes conflicting—goals at once. They are working to limit nudity and violence, control fake news, prevent hate speech, and keep the internet safe for all. Such a laundry list makes success hard to define—and even harder to achieve. This is especially the case when these goals are pitted against the sacrosanct American principle of free speech, and a desire (if not a business necessity) to respect differing viewpoints.

When these values come into conflict, who decides what to moderate, and what to allow?

As it has expanded and welcomed in more than 2 billion users, Facebook has upped its content moderation game as well. The company now has a team of lawyers, policy professionals, and public relations experts in 11 offices across the globe tasked with crafting “community standards” that determine how to moderate content.

In recent months, Facebook has been more open about how these rules are developed and employed. This spring, Monika Bickert, the platform’s head of global policy management, wrote about Facebook’s three principles of safety, voice, and equity, and the “aim to apply these standards consistently and fairly to all communities and cultures.”

Can any standard be consistently applied to billions of posts made every single day in more than 100 different languages? Artificial intelligence and machine learning are very good at filtering out nudity, spam, fake accounts, and graphic violence. But for content that is dependent on context—which has always been the thornier issue—platforms must rely on human moderators to sort through each and every post that might violate their rules.
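
That division of labor (automated filters for the clear-cut categories, human judgment for the context-dependent ones) can be made concrete with a minimal sketch. Everything below, from the scoring heuristic to the thresholds and function names, is an illustrative assumption rather than any platform's actual system:

    # Hypothetical two-stage moderation pipeline. The scoring heuristic,
    # thresholds, and names here are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        text: str

    def violation_score(post: Post) -> float:
        # Stand-in for an ML classifier estimating the probability that a
        # post violates policy; real systems use trained models for
        # nudity, spam, fake accounts, and graphic violence.
        flagged_terms = ("spam-link", "fake-giveaway")  # toy heuristic
        hits = sum(term in post.text.lower() for term in flagged_terms)
        return min(1.0, 0.5 * hits)

    def route(post: Post, remove_at: float = 0.9, review_at: float = 0.5) -> str:
        # Automated action only on high-confidence violations; ambiguous,
        # context-dependent posts are queued for human moderators.
        score = violation_score(post)
        if score >= remove_at:
            return "remove"        # clear-cut: the machine acts alone
        if score >= review_at:
            return "human_review"  # context matters: a person decides
        return "allow"

    for text in ("Win big! fake-giveaway spam-link inside",
                 "There is a spam-link in my profile, check it out",
                 "My honest take on today's hearing"):
        print(route(Post(post_id="demo", text=text)), "<-", text)

The design choice that matters is the middle band: posts the classifier is unsure about are routed to people rather than acted on automatically, and that is precisely where human discretion, with all its inconsistency, enters the pipeline.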

Putting aside the fact that they have not been able to satisfy those operating on either side of the political spectrum, Facebook and other platforms have taken their obligation to protect users seriously. After all, each faces a strong financial incentive to keep its users happy, and to avoid the appearance of favoring one set of political beliefs over another. Thus, creating neutral rules that can be consistently applied, regardless of political affiliation, is in a platform’s self-interest.

But when you look at how content moderation actually gets done, it’s clear that discretion by human beings plays a very large role. Facebook’s policies on what constitutes hate speech are written by human beings, and ultimately are enforced by human beings who—no matter how well-meaning they are—have different backgrounds, biases, and understandings of the subject matter. We shouldn’t be surprised when the results are inconsistent, messy, and end up leaving both conservatives and liberals unhappy. This doesn’t mean tech companies are politically biased—it means their job is incredibly difficult.

Christopher Koopman is the senior director of strategy and research and Megan Hansen is the research director for the Center for Growth and Opportunity at Utah State University.
