双语阅读:生成式人工智能的 6 大问题

1. Can we mitigate the bias problem?

我们能减轻偏见问题吗?

Bias has become a byword for AI-related harms, for good reason. Real-world data, especially text and images scraped from the internet, is riddled with it, from gender stereotypes to racial discrimination. Models trained on that data encode those biases and then reinforce them wherever they are used.

偏见已成为人工智能相关危害的代名词,这是有道理的。
现实世界的数据,尤其是从互联网上抓取的文本和图片,充斥着偏见,从性别刻板印象到种族歧视,不一而足。
在这些数据上训练出来的模型会对这些偏见进行编码,然后在任何使用它们的地方强化这些偏见。
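The mechanism in the paragraph above, biased data in and biased predictions out, can be sketched with a toy example. The corpus, the occupations, and the 3-to-1 pronoun skew below are all invented purely for illustration; real models learn far subtler statistics, but the effect is the same in kind: a frequency "model" turns a skewed majority in its training data into a categorical answer.

```python
from collections import Counter

# Hypothetical toy corpus with a skewed gender association
# (invented for illustration; not real data).
corpus = [
    ("engineer", "he"), ("engineer", "he"), ("engineer", "he"),
    ("engineer", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"),
    ("nurse", "he"),
]

# "Training": count pronoun co-occurrences per occupation.
counts = {}
for occupation, pronoun in corpus:
    counts.setdefault(occupation, Counter())[pronoun] += 1

def predict_pronoun(occupation):
    """Return the pronoun seen most often with this occupation."""
    return counts[occupation].most_common(1)[0][0]

# A 3:1 skew in the data becomes a hard, 100% prediction at use time.
print(predict_pronoun("engineer"))  # prints: he
print(predict_pronoun("nurse"))     # prints: she
```

The point of the sketch is the amplification step: the data is merely skewed, but the model's output is absolute, which is one way deployment reinforces the bias it was trained on.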

Chatbots and image generators tend to portray engineers as white and male and nurses as white and female. Black people risk being misidentified by police departments’ facial recognition programs, leading to wrongful arrest. Hiring algorithms favor men over women, entrenching a bias they were sometimes brought in to address.

聊天机器人和图像生成器倾向于将工程师描绘成白人男性,将护士描绘成白人女性。
黑人有可能被警察部门的面部识别程序错误识别,导致错误逮捕。
招聘算法偏向男性而非女性,反而固化了它们有时本是为了解决的偏见。

2. How will AI change the way we apply copyright?

人工智能将如何改变我们运用版权的方式?

Outraged that tech companies should profit from their work without consent, artists and writers (and coders) have launched class action lawsuits against OpenAI, Microsoft, and others, claiming copyright infringement.

艺术家和作家(以及程序员)对科技公司未经同意就从他们的作品中获利感到愤怒,他们对 OpenAI、微软和其他公司发起了集体诉讼,声称其侵犯了版权。

Now artists are fighting back with technology of their own. One tool, called Nightshade, lets users alter images in ways that are imperceptible to humans but devastating to machine-learning models, making them miscategorize images during training.

现在,艺术家们正在用自己的技术进行反击。
一款名为 Nightshade 的工具可以让用户以人类无法察觉的方式修改图片,但这些修改对机器学习模型来说却是毁灭性的,会让模型在训练过程中对图片进行错误分类。
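As a rough illustration of the "imperceptible to humans" property, the sketch below adds a tiny bounded perturbation to a random stand-in image. To be clear, this is not Nightshade's actual algorithm: Nightshade optimizes its perturbations against image-generation models, while this code only demonstrates how small such pixel changes can be. The `epsilon` bound and the random image are assumptions made for the demo.

```python
import numpy as np

# A stand-in "image": a 64x64 RGB array with values in [0, 255].
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)

# Add a small, bounded perturbation. Real poisoning tools such as
# Nightshade optimize the perturbation against a target model; here
# we only illustrate the "small change" property, not the attack.
epsilon = 2.0  # max per-pixel change, far below what the eye notices
perturbation = epsilon * np.sign(rng.standard_normal(image.shape))
poisoned = np.clip(image + perturbation, 0, 255)

# Pixel-level difference stays within the tiny bound...
print(np.abs(poisoned - image).max() <= epsilon)  # prints: True
# ...yet it is exactly such low-amplitude, model-targeted patterns
# that can shift what a network learns from the image during training.
```

In a real attack the perturbation is not random: it is optimized against a surrogate model so that, to the model, the picture resembles a different concept entirely, which is what causes the miscategorization during training.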

3. How will it change our jobs?

它将如何改变我们的工作?

We’ve long heard that AI is coming for our jobs. One difference this time is that white-collar workers—data analysts, doctors, lawyers, and (gulp) journalists—look to be at risk too. Chatbots can ace high school tests, professional medical licensing examinations, and the bar exam. They can summarize meetings and even write basic news articles. What’s left for the rest of us? The truth is far from straightforward.

我们早就听说人工智能会抢走我们的饭碗。
这次不同的是,数据分析师、医生、律师,以及(让人捏把汗的)记者等白领看起来也面临风险。
聊天机器人可以在高中考试、专业医学执照考试和律师资格考试中取得优异成绩。
它们可以总结会议内容,甚至可以撰写基本的新闻报道。
那我们还剩下什么?事实远非那么简单。

Even so, many researchers see this technology as empowering, not replacing, workers overall. Technology has been coming for jobs since the industrial revolution, after all. New jobs get created as old ones die out.

即便如此,许多研究人员认为,这项技术总体上是在增强工人的能力,而不是取代工人。
毕竟,自工业革命以来,技术就一直在冲击就业。
旧的工作岗位被淘汰,新的工作岗位随之产生。

4. What misinformation will it make possible?

它将使哪些错误信息成为可能?

Using generative models to create fake text or images is easier than ever. Many warn of a misinformation overload. OpenAI has collaborated on research that highlights many potential misuses of its tech for fake-news campaigns. In a 2023 report it warned that large language models could be used to produce more persuasive propaganda—harder to detect as such— at massive scales. Experts in the US and the EU are already saying that elections are at risk.

利用生成模型创建虚假文本或图像比以往任何时候都容易。
许多人警告说,错误信息将会泛滥。
OpenAI 合作开展的研究突出了其技术在假新闻活动中可能被滥用的多种方式。
它在 2023 年的一份报告中警告,大型语言模型可用于大规模制作更具说服力、也更难被识破的宣传内容。
美国和欧盟的专家已经表示,选举正面临风险。

Here’s the catch: it’s impossible to know all the ways a technology will be misused until it is used.

问题是:在一项技术被使用之前,我们不可能知道它被滥用的所有方式。

5. Will we come to grips with its costs?

我们能正视它的代价吗?

The development costs of generative AI, both human and environmental, are also to be reckoned with.

生成式人工智能的开发成本,包括人力成本和环境成本,也需要加以考虑。

With generative AI now a mainstream concern, the human costs will come into sharper focus, putting pressure on companies building these models to address the labor conditions of workers around the world who are contracted to help improve their tech.

随着生成式人工智能成为主流关注点,人力成本将受到更密切的审视,这会给构建这些模型的公司带来压力,要求它们改善世界各地受雇帮助改进其技术的工人的劳动条件。

6. Will doomerism continue to dominate policymaking?

末日论会继续主导政策制定吗?

Doomerism—the fear that the creation of smart machines could have disastrous, even apocalyptic consequences—has long been an undercurrent in AI. But peak hype, plus a high-profile announcement from AI pioneer Geoffrey Hinton in May that he was now scared of the tech he helped build, brought it to the surface.

长期以来,人工智能领域一直暗流涌动:人们担心智能机器的诞生会带来灾难性的后果,甚至是世界末日。
但随着炒作达到顶峰,加上人工智能先驱杰弗里·辛顿(Geoffrey Hinton)在五月高调宣布他如今对自己帮助构建的技术感到恐惧,这一担忧浮出了水面。

Benaich points out that some of the people ringing the alarm with one hand are raising $100 million for their companies with the other. “You could say that doomerism is a fundraising strategy,” he says.

Benaich 指出,有些人一边敲响警钟,一边在为自己的公司筹集 1 亿美元资金。
他说:“你可以说,末日论是一种筹资策略。”

(英语原文摘自 MIT Technology Review)