

GPT-4 makes its debut, as Google beats Microsoft out of the gate in the race to bring A.I. to consumer office tools

JEREMY KAHN
2023-03-16

Google is eager to prove it is not about to be sidelined in the A.I. race.


Google Cloud CEO Thomas Kurian announced a slew of new generative A.I. features for Google Workspace and Google Cloud users. But in its rush to get ahead of Microsoft's rival announcements, Google opened access to its A.I. models before even settling on pricing. Image credit: MICHAEL SHORT / BLOOMBERG VIA GETTY IMAGES

Chinese translation by Xia Lin (Fortune China).


Greetings. It promises to be (another) massive week in A.I. news. And that’s leaving aside the lingering effects that the collapse of Silicon Valley Bank may have on some A.I. startups and the venture funds backing them.

Right as this newsletter was going to press, OpenAI released its long-anticipated GPT-4 model. The new model is multimodal, accepting both images and text as inputs, although it only generates text as its output. According to data released by OpenAI, GPT-4 performs much better than GPT-3.5, its previous flagship model and the one that powers ChatGPT, on a whole range of benchmark tests, including a battery of different tests designed for humans. For instance, GPT-4 scores well enough to be within the top 10% of test takers on a simulated bar exam. OpenAI also says that GPT-4 is safer than GPT-3.5, returning more factual answers, and that it's much more difficult to get GPT-4 to jump its guardrails than was the case with GPT-3.5.
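Developers reach GPT-4 through OpenAI's chat-completions HTTP API. As a rough sketch of what a request looks like, the helper below assembles a chat-format body. The endpoint and the `model`/`messages` fields follow OpenAI's published chat format, but the prompt text and parameter values are illustrative, and since image input, which GPT-4 supports, was not yet broadly available at launch, the sketch sticks to text.

```python
import json

def build_chat_request(prompt, model="gpt-4", temperature=0.2):
    """Assemble a chat-completions request body.

    The target endpoint would be https://api.openai.com/v1/chat/completions,
    POSTed with an "Authorization: Bearer <API key>" header. The "messages"
    shape follows OpenAI's chat format; the system prompt here is just an
    illustrative placeholder.
    """
    body = {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(body)

print(build_chat_request("Summarize GPT-4's new capabilities in one sentence."))
```

In practice the same request shape works for GPT-3.5 and GPT-4 alike; only the `model` string changes, which is part of why the upgrade rolled out so quickly to existing products.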

But, the company is also saying that the model is still flawed. It will still hallucinate—making up information. And OpenAI notes that in some ways hallucination might be more of an issue because GPT-4 does this less often, so people might get very complacent about the answers it produces. It is also still possible to get the model to churn out biased and toxic language. OpenAI is saying very little about how big a model GPT-4 actually is, how many specialized graphics processing units it took to train it, or exactly what data it was trained on. It says it wants to keep these details secret for both competitive and safety reasons. I’ll no doubt be writing much more about GPT-4 in next week’s newsletter. But my initial take is that GPT-4 looks like a big step forward, but not a revolutionary advance over what OpenAI and others have been racing to put into production over the past two months. And it will only heighten the debate about whether tech companies, including OpenAI, are being irresponsible by putting this powerful technology in the hands of consumers and customers despite its persistent flaws and drawbacks.

Meanwhile, Microsoft is expected to unveil a range of A.I.-powered enhancements to its Office software suite on Thursday. And Baidu, the Chinese search giant, has a big announcement scheduled for later this week. Google, which was caught flat-footed by the viral popularity of ChatGPT and OpenAI’s alliance with Microsoft, is eager to prove that it’s not about to be sidelined in the A.I. race. And the big news today before OpenAI’s GPT-4 announcement was that Google had beaten Microsoft out of the gate with a bunch of big A.I. announcements of its own.

For most people, the main news is that the search giant said it is adding generative-A.I. features to its popular Workspace productivity tools, such as Google Docs, Sheets, and Slides. Among the things people will now be able to do is use a text box to prompt Google’s A.I. to automatically draft almost any kind of document, or to create different kinds of charts for Sheets data. Users can highlight text and ask Google’s A.I. to edit it for them or rewrite it in a different tone and style. You will also be able to automatically draft emails or summarize entire email threads in Gmail. In Google Meet you will be able to generate new virtual backgrounds and automatically create notes of conversations, complete with summaries.

But equally important was the other news Google announced: The company is allowing enterprise customers to tap its most advanced family of large language models, called PaLM, through an application programming interface on Google Cloud.
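Conceptually, "tapping a model through an API" means an authenticated POST that carries a prompt plus decoding parameters and returns generated text. The sketch below shows only that general shape; the endpoint URL and field names are hypothetical placeholders, not Google's documented PaLM schema.

```python
import json

# Hypothetical endpoint: a placeholder for illustration, not Google's real URL.
ENDPOINT = "https://example.googleapis.com/v1/models/palm:generate"

def build_generate_request(prompt, max_tokens=256, temperature=0.7):
    """Return (url, json_body) for a text-generation call.

    Field names ("maxOutputTokens", "temperature") are assumptions made for
    illustration; a real integration would follow the provider's reference
    docs and attach OAuth credentials to the request.
    """
    body = {
        "prompt": prompt,
        "maxOutputTokens": max_tokens,
        "temperature": temperature,
    }
    return ENDPOINT, json.dumps(body)

url, body = build_generate_request("Draft a two-line product update.")
print(url)
```

The design choice Kurian is describing is exactly this: instead of shipping model weights or embedding the model invisibly in products, Google hosts the model and sells metered access to the endpoint.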

Beyond PaLM, it has also launched an updated version of its Vertex AI platform for A.I. developers and data scientists. The platform allows them access to large foundation models, not just from Google, but from its growing ecosystem of allied A.I. labs, such as Anthropic and Cohere, as well as AI21 Labs and Midjourney. And it has launched a set of software, called Generative AI App Builder, that will allow slightly less technical teams to quickly build and roll out custom applications using generative A.I. models.

For both Vertex AI and the Generative AI App Builder, Google says users will have access to two new related capabilities: The first is an enterprise search tool that will allow them to perform Google searches across their own data—including data generated by CRM or ERP software, as well as internal websites and other documents—and return results only from that knowledge base. These results can then be used for natural language tasks, such as summarization, sentiment analysis, or question-answering, with less risk that the language model will simply invent information or draw information from its pretraining data rather than the customer’s own data. The other new capability is a chatbot-like “conversational A.I.” function that customers can deploy to act as the user interface for these search, natural language processing, and generative A.I. capabilities.
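The grounding pattern described here, search the customer's own knowledge base first and then let the model work only on what was retrieved, can be sketched with a toy keyword retriever. Everything below is illustrative: a real deployment would use Google's enterprise search service rather than this word-overlap scoring, but the idea is the same, and it is why grounding lowers the risk of invented answers.

```python
def retrieve(query, documents, k=2):
    """Rank documents by crude word overlap with the query (toy retriever)."""
    words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query, documents):
    """Build a prompt that restricts the model to the retrieved passages."""
    passages = retrieve(query, documents)
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the passages below. "
        "If they don't contain the answer, say so.\n"
        f"Passages:\n{context}\n"
        f"Question: {query}"
    )

# A stand-in for a customer's internal documents (CRM notes, intranet pages).
docs = [
    "Refunds are processed within 5 business days.",
    "Our offices are closed on public holidays.",
    "Support is available 24/7 via chat.",
]
print(grounded_prompt("How long do refunds take?", docs))
```

The "conversational A.I." capability Google describes would then sit in front of this loop as a chat interface, feeding each user turn through retrieval before the language model ever sees it.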

Google announced a group of initial "trusted testers" who will have immediate access to these new A.I. services, including Toyota, Deutsche Bank, HCA Healthcare, Equifax, the television network Starz, and the Mayo Clinic, among others. The new products and features will be rolled out more broadly in the coming weeks, the company said. But it was a sign of just how intense this A.I. technology race has become that Thomas Kurian, the CEO of Google's Cloud business, was forced to acknowledge during the press briefing that Google was releasing these new products without having yet worked out exactly how to price them. In the past, Kurian said, Google had always made its A.I. advances available as free, open-source releases, or the technology was simply "embedded in our products." "This is the first time we are taking our new, general A.I. models and making them accessible to the developer community with an API," he said.

Google’s press release on its new products touted the company’s commitment to “Responsible AI” and it tried to position its release under this rubric, noting that Vertex AI and Generative AI App Builder include tools to “inspect, understand, and modify model behavior” and that the information retrieval aspects of the new systems used traditional search algorithms, lessening the risk of inaccurate answers. But Kurian did not say exactly what sort of guarantees Google could offer customers that its large language models could not be prompted in ways that would elicit inaccurate responses—or worse, might morph their chatbot from a friendly assistant into a petulant, abusive, and threatening “devil-on-your-shoulder,” as testers discovered with Microsoft’s Bing. It also did not address whether Google was planning to take any steps to prevent users of its very popular Workspace tools from using the new generative A.I. features to deliberately churn out misinformation or to cheat on school essays.

Concern about this is growing. One reason may be that most A.I. ethics researchers are now embedded inside big tech companies, and if they step out of line, they get fired. Tech news site The Verge and Casey Newton's The Platformer just revealed that Microsoft recently disbanded its A.I. ethics and society team, a central group that had been trying to raise concerns about many of the advanced A.I. systems Microsoft was building and had been urging the company to slow down the speed of its generative A.I. rollout. Some of the ethics experts were assigned to other teams. Some were fired. An audio recording of a Microsoft manager addressing the team about its restructuring that leaked to Newton made it clear that there was pressure from CEO Satya Nadella and CTO Kevin Scott to roll out OpenAI's advanced A.I. technology throughout the company as quickly as possible, and that questioning that decision or its pace was not appreciated.

Now Microsoft still has another corporate Office of Responsible AI, but its role is more to set high-level principles, frameworks, and processes, not to conduct the actual safety and ethical checks. The disbanding of the A.I. ethics group is further evidence of why the tech industry should not be trusted to self-regulate when it comes to A.I. ethics or safety, and why government regulation is urgently needed.

All content published by Fortune China is the exclusive intellectual property of, or is held by, Fortune Media IP Limited and/or the relevant rights holders. Reproduction, excerpting, copying, mirroring, or any other use without permission is prohibited.