

Companies are thinking twice about artificial intelligence

Jonathan Vanian
2021-02-07

The same machine-learning tech that helps companies target people with online ads on Facebook and Twitter also helps bad actors distribute propaganda and misinformation.




Alex Spinelli, chief technologist for business software maker LivePerson, says the recent U.S. Capitol riot shows the potential dangers of a technology not usually associated with pro-Trump mobs: artificial intelligence.

The same machine-learning tech that helps companies target people with online ads on Facebook and Twitter also helps bad actors distribute propaganda and misinformation.

In 2016, for instance, people shared fake news articles on Facebook, whose A.I. systems then funneled them to users. More recently, Facebook's A.I. technology recommended that users join groups focused on the QAnon conspiracy, a topic that Facebook eventually banned.
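The dynamic described above can be sketched in a few lines. This is a deliberately simplified, hypothetical scoring rule, not Facebook's actual system: a feed ranked purely on predicted engagement surfaces whatever is most shared, with no regard for accuracy.

```python
# Hypothetical sketch of an engagement-ranked feed; the scoring weights
# and items are invented for illustration. Ranking purely by engagement
# signals can promote sensational or false content, because those signals
# carry no information about accuracy.

def rank_feed(items):
    """Order items by a naive engagement score (shares weighted over likes)."""
    return sorted(items, key=lambda it: it["shares"] * 3 + it["likes"], reverse=True)

feed = [
    {"id": "fact-check", "likes": 120, "shares": 10},
    {"id": "viral-hoax", "likes": 80, "shares": 500},  # false, but widely shared
]

top = rank_feed(feed)[0]["id"]  # the hoax wins on engagement alone
```

Under this toy rule the hoax outranks the fact-check (1,580 points vs. 150), which is the amplification pattern the article describes.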

“The world they live in day in and day out is filled with disinformation and lies,” says Spinelli about the pro-Trump rioters.

A.I.'s role in disinformation, and problems in other areas including privacy and facial recognition, are causing companies to think twice about using the technology. In some cases, businesses are so concerned about ethics related to A.I. that they are killing projects involving A.I. or never starting them to begin with.

Spinelli says that he has canceled some A.I. projects at LivePerson and at previous employers that he declined to name because of concerns about A.I. He previously worked at Amazon, advertising giant McCann Worldgroup, and Thomson Reuters.

The projects, Spinelli says, involved machine learning analyzing customer data in order to predict user behavior. Privacy advocates often raise concerns about such projects, which rely on huge amounts of personal information.
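A minimal sketch of the kind of project described, with entirely invented features and data: a tiny logistic regression trained on customer attributes to predict a binary user action. Real systems of this sort ingest far richer personal data, which is exactly what privacy advocates object to.

```python
# Illustrative sketch only: predicting a user action (e.g. "will purchase")
# from customer attributes. Feature names, data, and hyperparameters are
# all hypothetical.
import math

def train_logistic(rows, labels, lr=0.5, epochs=200):
    """Fit a tiny logistic-regression model with plain gradient descent."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y  # prediction error drives the weight update
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return True if the model scores the user above the 0.5 threshold."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b))) >= 0.5

# Invented features per customer: [visits_per_week, past_purchases]
X = [[1, 0], [2, 0], [8, 3], [9, 4]]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
```

Even this toy version only works because it consumes behavioral data per person, which is the consent question Spinelli raises below.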

“Philosophically, I’m a big believer in the use of your data being approved by you,” Spinelli says.

Ethical problems in corporate A.I.

Over the past few years, artificial intelligence has been championed by companies for its ability to predict sales, interpret legal documents, and power more realistic customer chatbots. But it's also provided a steady drip of unflattering headlines.

Last year, IBM, Microsoft, and Amazon barred police use of their facial recognition software because it more frequently misidentifies women and people of color. Microsoft and Amazon both want to continue selling the software to police, but they called for federal rules about how law enforcement can use the technology.

IBM CEO Arvind Krishna went a step further by saying his company would permanently suspend its facial recognition software business, saying that the company opposes any technology used "for mass surveillance, racial profiling, violations of basic human rights and freedoms."

In 2018, high-profile A.I. researchers Timnit Gebru and Joy Buolamwini published a research paper highlighting bias problems in facial recognition software. In reaction, some cosmetics companies paused A.I. projects that would determine how makeup products would look on certain people's skin, for fear the technology could discriminate against Black women, says Rumman Chowdhury, the former head of Accenture’s responsible A.I. team and now CEO of startup Parity AI.

“That was when a lot of companies cooled down with how much they wanted to use facial recognition,” Chowdhury says. “I had meetings with clients in makeup, and all of it stopped.”

Recent problems at Google have also caused companies to rethink A.I. More recently, Gebru, the A.I. researcher, left Google and then claimed that the company had censored some of her research. That research focused on bias problems with the company's A.I. software that understands human language and the fact that the software used huge amounts of electricity in its training, which could harm the environment.

This reflected poorly on Google because the search giant has experienced bias problems in the past, when its Google Photos product misidentified Black people as gorillas, and the search giant champions itself as an environmental steward.

Shortly after Gebru's departure, Google suspended computer access to another of its A.I. ethics researchers who has been critical of the search giant. A Google spokesperson declined to comment about the researchers or the company's ethical blunders. Instead, he pointed to previous statements by Google CEO Sundar Pichai and Google executive Jeff Dean saying that the company is conducting a review of the circumstances of Gebru's departure and is committed to continuing its A.I. ethics research.

Miriam Vogel, a former Justice Department lawyer who now heads the EqualAI nonprofit, which helps companies address A.I. bias, says many companies and A.I. researchers are paying close attention to Google’s A.I. problems. Some fear that the problems may have a chilling impact on future research about topics that don't align with their employers' business interests.

“This issue has captured everyone’s attention,” Vogel says about Gebru leaving Google. “It took their breath away that someone who was so widely admired and respected as a leader in this field could have their job at risk.”

Although Google has positioned itself as a leader in A.I. ethics, the company's missteps point to a contradiction with that high-profile crown. Vogel hopes that companies don’t overreact by firing or silencing their own employees who question the ethics of certain A.I. projects.

“I would hope companies do not take fear that by having an ethical arm of their organization that they would create tensions that would lead to an escalation at this level,” Vogel says.

A.I. ethics going forward

Still, the fact that companies are thinking about A.I. ethics is an improvement from a few years ago, when they gave the issue relatively little thought, says Abhishek Gupta, who focuses on machine learning at Microsoft and is founder and principal researcher of the Montreal AI Ethics Institute.

And no one thinks companies will completely stop using A.I. Brian Green, the director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University, near San Francisco, says it's become too important of a tool to drop.

“The fear of going out of business trumps the fear of discrimination,” Green says.

And while LivePerson's Spinelli worries about some uses of A.I., his company is still heavily investing in subsets of the field like natural language processing, in which computers learn to understand language. He’s hoping that by being public about the company’s stance on A.I. and ethics, customers will trust that LivePerson is trying to minimize any harms.
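As a toy illustration of that idea (not LivePerson's actual technology), a bare-bones system can "understand" a customer message by matching it to an intent via keyword overlap; the intents and keywords here are invented.

```python
# Hypothetical intent classifier: the crudest possible step toward
# "computers learning to understand language". Intents and keywords
# are invented for illustration.

INTENTS = {
    "billing": {"bill", "charge", "invoice", "refund"},
    "shipping": {"ship", "deliver", "package", "tracking"},
}

def classify_intent(message):
    """Pick the intent whose keyword set overlaps the message most, or None."""
    words = set(message.lower().split())
    best = max(INTENTS, key=lambda name: len(INTENTS[name] & words))
    return best if INTENTS[best] & words else None

classify_intent("where is my package tracking number")  # -> "shipping"
```

Production NLP replaces the keyword sets with learned statistical models, which is where the training data, and the ethical questions around it, come in.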

LivePerson, professional services giant Cognizant, and insurance firm Humana are members of the EqualAI organization and have made public pledges to test and monitor their A.I. systems for problems involving bias.
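A minimal sketch of what such bias testing can look like, assuming nothing about the signatories' actual tooling: compare a model's positive-outcome rate across demographic groups and flag disparities with the "four-fifths rule" heuristic used in employment-discrimination auditing.

```python
# Hypothetical bias audit: group labels and outcomes are invented.
# A real pledge would involve far more than this single metric.

def selection_rates(outcomes):
    """outcomes: list of (group, predicted_positive) pairs -> rate per group."""
    totals, positives = {}, {}
    for group, positive in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if positive else 0)
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths(rates):
    """Flag disparity if any group's rate is under 80% of the highest rate."""
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

audit = [("A", True), ("A", True), ("A", False), ("A", True),
         ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(audit)  # A: 0.75, B: 0.25 -> fails the heuristic
```

Monitoring means running checks like this continuously on live predictions, not once at launch, since bias can drift as the input data changes.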

Says Spinelli, “Call us out if we fail.”

All content published by Fortune China is the exclusive intellectual property of Fortune Media IP Limited and/or the relevant rights holders. Reproduction, excerpting, copying, mirroring, or any other use without permission is prohibited.