What can artificial intelligence do?

Why should you read this article?

What are the most pressing issues when it comes to ethics in AI and robotics? How will they affect the way we live (and work)? Sooner or later these issues will concern you, whether you work in the field or not. Here we will go through the main ideas contained in the paper Robot ethics: Mapping the issues for a mechanized world, while I add some of my own input. You will not have many answers, but will probably start asking the right questions.

What is a robot, really?

Although this question might seem a bit too basic, it is important to outline a precise definition of what actually is a robot (and thus what is not one).

There are some obvious cases, such as a high-level AI-enhanced autonomous military drone (which is probably considered a robot by any reasonable definition) and a regular remote-controlled old-school car (which is usually not considered to be a robot). But what about the grey area? A human-controlled drone that is capable of eventually finding its way back to its owner, is it a robot?

There is no straightforward answer to the question of what a robot is, so there is no consensus around it. The paper then comes up with a working definition to facilitate the discussion:

“a robot is an engineered machine that senses, thinks, and acts.”

This definition implies robots must be equipped with sensors and with some sort of intelligence to guide their actions, and it also includes biological and virtual robots.

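
The working definition can be made concrete as a tiny sense-think-act loop. The sketch below is my own illustration, not code from the paper: the `Thermostat` class and all of its names are hypothetical, and whether such a simple device should even count as a robot is exactly the kind of grey-area question the definition raises.

```python
class Thermostat:
    """A minimal 'engineered machine that senses, thinks, and acts'.

    Purely illustrative: it senses a temperature, thinks (compares the
    reading with a target), and acts (toggles a heater flag).
    """

    def __init__(self, target_celsius: float):
        self.target = target_celsius
        self.heater_on = False

    def sense(self, reading_celsius: float) -> float:
        # A real robot would poll a hardware sensor here.
        return reading_celsius

    def think(self, temperature: float) -> bool:
        # The 'intelligence' guiding the action: is heating needed?
        return temperature < self.target

    def act(self, should_heat: bool) -> None:
        self.heater_on = should_heat

    def step(self, reading_celsius: float) -> bool:
        # One pass through the sense-think-act loop.
        self.act(self.think(self.sense(reading_celsius)))
        return self.heater_on
```

Under the definition, the interesting question is not the loop itself but how much "thinking" happens inside `think`: a fixed threshold, as here, or a learned policy.
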
When it comes to robots, where are we now?

Now that we are on the same page, what are robots doing today, and what will they be able to do in the future? The answers to these questions change quite fast these days but, generally speaking, robots are mostly used for repetitive tasks that do not require a lot of judgement (such as vacuuming). They are also very useful for dangerous tasks, such as mine detection and bomb defusing.

A successful example is the ubiquitous Roomba vacuum cleaner, which accounts for almost half of the world’s service robots. The robots that pose the most pressing ethical issues, however, are found in other areas:

Photo by Lenny Kuhne on Unsplash

Labour automation

Probably one of the most discussed consequences of the recent advances in AI is that human labour is quickly being replaced by robots, which can be less prone to error, do not suffer from fatigue or emotional issues, and are usually cheaper to maintain in the long run. Will this really happen, though? Or is it just a way to make good headlines? Well, the answer probably lies in the middle. Some jobs will definitely disappear due to automation in the next few years, but this has been happening since long before AI came to exist. Examples of jobs replaced by machines a long time ago include the “bowling alley pinsetters”, young boys who set up the bowling pins for clients, and the “human alarm clocks”, responsible for waking people up by knocking on their windows. Even though this fear of being replaced looks recent, it has actually been around for ages and, so far, the worst predictions have not come to pass.

One reason is that, although there is a lot of hype around the capabilities of AI, current robots are still far from being able to do some of the most mundane activities we perform these days. Another important reason is that most robots have a very narrow scope: they are usually really good at performing a hyper-specific task, whereas most jobs require a more generalist approach. Finally, new jobs are being created all the time, many of which are related to building and operating robots.

So, to answer our question: yes, many jobs will no longer exist and will be replaced by robots. This is actually a constant process that has been happening for centuries, and it will continue to happen. Human labour, however, will remain relevant for a long time, just in different ways. It is important for us, then, to prepare for this new world and to understand which skills will be most needed in it. I believe those are either highly technical skills, related to coding and AI, or activities that require a human approach and a very diverse skillset, which are quite hard to automate.

Photo by Michael Marais on Unsplash

Military

Military robots come in many shapes, from bomb-defusing cars to weapon-equipped drones. On one hand, they can save lives, by replacing soldiers when it’s time for dangerous work. On the other hand, there’s the obvious question: is it ethical to use so much power and technology to kill people? The second, not-so-obvious question is: should we let AI decide who to kill?

Even though it might sound absurd at first, robots can actually think before they shoot: they don’t get scared, they don’t panic, they don’t have prejudices (or at least they shouldn’t). They can also be less prone to error than humans. They can, however, tilt the odds even further in favour of military powerhouses: imagine a war between the U.S. and Venezuela in which the Americans control robots from far away, risking nothing, while the Venezuelans are being exterminated without a chance of fighting back. This scenario is a real possibility for the future.

We can’t really state what is or is not ethical here, since it depends too much on cultural factors, but it is definitely worthwhile to take future regulatory risks into consideration when developing new AI. The field is evolving fast, and regulation struggles to keep up, but keep in mind that just because something is not regulated now, that doesn’t mean this will still be the case in five years.

Companionship

This might be one of the most controversial applications of AI: sex robots are getting more and more realistic. You can choose not only how your robot looks, but also how it will react to your advances. Many questions have come up because of these features, such as: is it okay to make a robot that looks a lot like a celebrity (or someone you know)? What about one that opposes any sexual interaction, so that a rape can be simulated?

On one hand, there might be serious psychological consequences, not yet understood, for people who use these toys. They could also reinforce unhealthy sexual behaviour and expectations. On the other hand, they might be exactly what some people need to let off some steam, meaning they will be less prone to act out their unwanted fantasies with another human being.

All of these are new issues, not yet addressed, but there might be many interesting questions to be answered in fields such as Philosophy or Psychology.

What factors should we look at?

Safety and errors

AI is prone to error, mainly due to two factors: it is made by humans, so the actual code might contain bugs or logic flaws; and it is often based on probability, meaning that even when the code is perfectly done, its actions are based on imperfect information and will always entail some degree of risk.

These two sources of failure are, for the moment, inevitable. We should, however, always compare the expected levels of error of machines and humans. For instance, it is not unusual for cops to mistake ordinary objects such as umbrellas or drills for guns, and end up shooting innocent people. Should we stop using human cops? Every year, 1.35 million people are killed in road accidents around the world. Should we stop people from driving?

These are just a few examples illustrating that, from a utilitarian perspective, it doesn’t really matter if machines make mistakes, as long as they don’t cause more damage than humans in the same activity. In order to ensure this, of course, new technology should first be thoroughly tested in a safe environment. Measures to reduce possible damage should be taken (for example, equipping robot cops with non-lethal weapons first), and the innovation should only be implemented when machine error levels are lower than human levels by a considerable amount, in order to ensure a safety margin.

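
The utilitarian rule above (replace humans only when the machine’s error rate is below the human rate by a considerable amount, leaving a safety margin) can be sketched as a one-line decision function. This is my own illustration of the reasoning, not a standard from the paper; the function name and the `safety_margin` parameter are hypothetical.

```python
def should_deploy(machine_error_rate: float,
                  human_error_rate: float,
                  safety_margin: float = 0.5) -> bool:
    """Deploy the machine only when its error rate is at most a fraction
    (safety_margin) of the human error rate, i.e. lower 'by a
    considerable amount'. Illustrative only: a real deployment decision
    would also weigh error severity, not just error frequency."""
    return machine_error_rate <= human_error_rate * safety_margin
```

For example, if human cops misidentified objects at a hypothetical 10% rate, a robot at 2% would pass the test, while one at 8% would not, even though 8% is already better than the human baseline.
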
However, we, as a society, are still quite uncomfortable with the idea of completely replacing humans with robots in certain activities. It just “feels too risky”. That is completely fine, since we should, indeed, approach the unknown carefully. My hope, however, is that in the future, once people are more used to this idea, the decision will be based more on the comparison between the two levels of error than on the moral implications. This type of reasoning can literally save lives down the road.

Photo by Giammarco Boscaro on Unsplash

Law and ethics

The recent technological advances require new legislation to answer moral questions that have never been asked before. When a robot makes a mistake, who should be held accountable? Its owner? The team of developers who made it? When we really start mixing biological tissue and robotics, will these robots be considered people, or something in between? Will they have rights? When we start adding microchips to people’s brains, where is the line between an actual person and a cyborg?

And what about international law? When a robot is designed in California, assembled in China and exported all over the world, whose ethics will it follow? Will a developer in Silicon Valley program a robot to kill protesters in Hong Kong? Will the developer be held accountable in the US if this robot actually kills someone in China?

To be honest, we are still far from that level of sophistication, but we might still see it in our lifetime and be impacted by new legislation. Who would have thought, for instance, 40 years ago, that data and privacy would be such important topics, protected (or not) by so many different laws?

Social impact

All the issues we have addressed so far will have a significant impact on our society, but I would say the two most relevant ones are the loss of jobs and the changes in human relationships.

As I said, technological advances have been happening for ages, replacing humans in many activities and killing old jobs while new career opportunities arise. That is fine: it allows us to work on more significant endeavours while robots get the boring ones. That does not mean, however, that new jobs will keep replacing old ones forever. It might be the case that, at a certain point in the future, less overall human work is needed, and people become less relevant. Guess who those people are? Probably poor, uneducated people from peripheral countries, since they hold the jobs that are the most easily automated. What happens then?

Many people bet on Universal Basic Income: the idea that the government would give a minimum level of income to everyone. This would mean that, even if you lost your job, you would still have somewhere to live and something to eat. This money would possibly come from taxing the big companies that have saved a lot by automating their production. Would it work? We can’t know for sure, but it definitely looks promising. But what then? Would people just stop working and move on with their lives? Although this might seem like a dream to many people, our society was not built around this sort of life, and many people need to work in order to feel they have a life purpose. Not an easy equation to solve.

The other major impact of robots and AI will be on human relationships: once we reach the point where it gets hard to distinguish between an AI and an actual human, it will become quite easy to develop feelings towards robots. If you have ever seen the film Her, you know what I mean. Could this mean reducing the time spent with other people? Is this a bad thing? Will we start creating unrealistic expectations of other people, based on our experiences with robots? Will people be allowed to marry robots?

Conclusion

As I said, this article contains more questions than answers, but it would be irresponsible (and quite presumptuous too) to try and give too many answers: they would all probably be wrong.

I hope, however, that this has given you a bit of food for thought, either just for its own sake or, if you work in the field, so that you can start incorporating these questions into your next project meetings at work.

If you would like to go further, I recommend two books by Yuval Noah Harari, which you have probably heard of before: Homo Deus and 21 Lessons for the 21st Century. They discuss some of these same questions and many more, about the future of humankind. If you want to learn more about how robots engage in creative activities such as songwriting, check out this article on Creativity and AI.

“It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be … This, in turn, means that our statesmen, our businessmen, our everyman must take on a science fictional way of thinking” — Isaac Asimov

This article is loosely based on a paper published in Artificial Intelligence that addresses some of the most important issues surrounding artificial intelligence and robotics: Robot ethics: Mapping the issues for a mechanized world, by Patrick Lin, Keith Abney and George Bekey, with some of my personal input as well.

Feel free to reach out to me on LinkedIn if you would like to discuss further, it would be a pleasure (honestly).

Translated from: https://towardsdatascience/can-we-make-artificial-intelligence-more-ethical-a0fb7efcb098
