One Friday evening a few weeks ago, I was in my home country of Romania, visiting family for a funeral, when I found myself thinking: Was it time for me to start teaching my kids how to speak Romanian? For the past 15 years, I have built a life in the U.K., where my kids were born and raised. They love their Romanian grandparents but struggle to communicate with them, and I wanted to do something about it.
So I started looking for solutions. I searched the internet for about an hour but couldn’t find anything useful, so I went back to my evening.
A few days later, I was scrolling through my Instagram feed when an ad appeared for a language learning app. Having worked for a social media company, I knew what had happened: The company had tracked my activity online, saw I was interested in language learning apps, and decided to target me with an ad. And that’s okay: I’ve had similar experiences in the past and even decided to buy products based on this type of targeted advertising.
Over the next few days, I kept getting more and more ads from the same language app. But once I started to pay closer attention, I realized there was something more troubling going on.
While some of the ads had real people excitedly encouraging me to download the app and try it out “risk free,” other ads looked eerily familiar. They featured people speaking directly to me in French or Chinese, claiming to have mastered a foreign language in mere weeks, thanks to the app’s miraculous capabilities. However, what was really going on was not actually miraculous but alarming: The videos were manipulated through deepfake technology, potentially without the consent of the people featured in them.
While AI-generated media can be used for harmless entertainment, education, or creative expression, deepfakes have the potential to be weaponized for malicious purposes, such as spreading misinformation, fabricating evidence, or, in this case, perpetrating scams.
Because I’ve been working in AI for almost a decade, I could easily spot that the people in these ads weren’t actually real, nor were their language skills. Instead, I came to learn, thanks to an investigation by Sophia Smith Galer, that an app had been used to clone real people without their knowledge or permission, eroding their autonomy and potentially damaging their reputations.
A troubling aspect of these deepfake ads was the lack of consent inherent in their creation. The language app likely used the services of a video cloning platform developed by a generative AI company that has changed its name four times in the last three years and does not have any measures in place to prevent the unauthorized cloning of people or any obvious mechanisms to remove someone’s likeness from their databases.
This exploitation is not only unethical but also undermines trust in the digital landscape, where authenticity and transparency are already in short supply. Take the example of Olga Loiek, a Ukrainian student who owns a YouTube channel about wellness. She was recently alerted by her followers that videos of her had been appearing in China. On the Chinese internet, Loiek’s likeness had been transformed into an avatar of a Russian woman looking to marry a Chinese man. She found that her YouTube content had been fed into the same platform that was used to generate the scam ads I’d been seeing on Instagram, and an avatar bearing her likeness was now proclaiming love for Chinese men on Chinese social media apps. Not only was this offensive to Loiek on a personal level because of the war in Ukraine, but it was the type of content she would have never agreed to participate in if she had had the option of withholding her consent.
I reached out to Loiek to get her thoughts on what happened to her. Here’s what she had to say: “Manipulating my image to say statements I would never condone violates my personal autonomy and means we need stringent regulations to protect individuals like me from such invasions of identity.”
Consent is a fundamental principle that underpins our interactions in both the physical and digital realms. It is the cornerstone of ethical conduct, affirming individuals’ rights to control their own image, voice, and personal data. Without consent, we risk violating people’s privacy, dignity, and agency, opening the door to manipulation, exploitation, and harm.
In my job as the head of corporate affairs for an AI company, I’ve worked with a campaign called #MyImageMyChoice, trying to raise awareness of how nonconsensual images generated with deepfake apps have ruined the lives of thousands of girls and women. In the U.S., one in 12 adults reports having been a victim of image-based abuse. I’ve read harrowing stories from some of these victims who have shared how their lives were destroyed by images or videos generated by AI apps. When they tried to issue DMCA takedowns to these apps, they received no reply or were told that the companies behind the apps were not subject to any such legislation.
We’re entering an era of the internet where more and more of the content we see will be generated with AI. In this new world, consent takes on heightened importance. As the capabilities of AI continue to advance, so too must our ethical frameworks and regulatory safeguards. We need robust mechanisms to ensure that individuals’ consent is obtained and respected in the creation and dissemination of AI-generated content. This includes clear guidelines for the use of facial and voice recognition technology, as well as mechanisms for verifying the authenticity of digital media.
Moreover, we must hold accountable those who seek to exploit deepfake technology for fraudulent or deceptive purposes and those who release deepfake apps that have no guardrails in place to prevent misuse. This requires collaboration between technology companies, policymakers, and civil society to develop and enforce regulations that deter malicious actors and protect users from real-world harm, instead of focusing only on imaginary doomsday scenarios from sci-fi movies. For example, we should not allow video or voice cloning companies to release products that create deepfakes of individuals without their consent. And during the process of obtaining consent, perhaps we should also mandate that these companies introduce informational labels that tell users how their likeness will be used, where it will be stored, and for how long. Many consumers might glance over these labels, but there can be real consequences to having a deepfake of someone stored on servers in countries such as Russia or Belarus, where there is no real recourse for victims of deepfake abuse. Finally, we need to give people mechanisms for opting out of their likeness being used online, especially if they have no control over how it is used. In the case of Loiek, the company that developed the platform used to clone her without her consent did not provide any response or take any action when it was approached by reporters for comment.
Until better regulation is in place, we need to build greater public awareness and digital literacy efforts to empower individuals to recognize manipulation and safeguard their biometric data online. We must empower consumers to make more informed decisions about the apps and platforms they use and to recognize the potential consequences of sharing personal information, especially biometric data, in digital spaces and with companies that are prone to government surveillance or data breaches.
Generative AI apps have an undeniable allure, especially for younger people. But when people upload images or videos containing their likeness to these platforms, they unknowingly expose themselves to a myriad of risks, including privacy violations, identity theft, and potential exploitation.
While I am hopeful that one day my children can communicate with their grandparents with the help of real-time machine translation, I am deeply concerned about the impact of deepfake technology on the next generation, especially when I look at what happened to Taylor Swift, or the victims who have shared their stories with #MyImageMyChoice, or countless other women suffering from sexual harassment and abuse who have been forced into silence.
My children are growing up in a world where digital deception is increasingly sophisticated. Teaching them about consent, critical thinking, and media literacy is essential to helping them navigate this complex landscape and safeguard their autonomy and integrity. But that’s not enough: We need to hold the companies developing this technology accountable. We also must push governments to take action faster. For example, the U.K. will soon start to enforce the Online Safety Bill, which criminalizes deepfakes and should force tech platforms to take action and remove them. More countries should follow its lead.
And above all, we in the AI industry must be unafraid to speak out and remind our peers that this freewheeling approach to building generative AI technology is not acceptable.
Alexandru Voica is the head of corporate affairs and policy at Synthesia, and a consultant for Mohamed bin Zayed University of Artificial Intelligence.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.