Artificial intelligence as a positive and negative factor in global risk

E Yudkowsky - Global catastrophic risks, 2008 - books.google.com
By far the greatest danger of Artificial Intelligence (AI) is that people conclude too early that
they understand it. Of course, this problem is not limited to the field of AI. Jacques Monod
wrote: 'A curious aspect of the theory of evolution is that everybody thinks he understands it ...

Cognitive biases potentially affecting judgment of global risks

E Yudkowsky - Global catastrophic risks, 2008 - books.google.com
All else being equal, not many people would prefer to destroy the world. Even faceless
corporations, meddling governments, reckless scientists, and other agents of doom, require
a world in which to achieve their goals of profit, order, tenure, or other villainies. If our ...

[PDF][PDF] Coherent extrapolated volition

E Yudkowsky - … for Artificial Intelligence (May 2004), http:// …, 2004 - pdfs.semanticscholar.org
This is an update to that part of Friendly AI theory that describes Friendliness, the objective
or thing-we're-trying-to-do. The information is current as of May 2004, and should not
become dreadfully obsolete until late June, when I plan to have an unexpected insight ...

The ethics of artificial intelligence

N Bostrom, E Yudkowsky - The Cambridge Handbook of Artificial …, 2014 - books.google.com
The possibility of creating thinking machines raises a host of ethical issues, related both to
ensuring that such machines do not harm humans and other morally relevant beings, and to
the moral status of the machines themselves. This chapter surveys some of the ethical ...

Complex value systems in friendly AI

E Yudkowsky - International Conference on Artificial General …, 2011 - Springer
Abstract A common reaction to first encountering the problem statement of Friendly AI ("Ensure
that the creation of a generally intelligent, self-improving, eventually superintelligent
system realizes a positive outcome") is to propose a simple design which allegedly ...

[PDF][PDF] Creating friendly AI 1.0: The analysis and design of benevolent goal architectures

E Yudkowsky - Singularity Institute for Artificial Intelligence, San …, 2001 - intelligence.org
Abstract The goal of the field of Artificial Intelligence is to understand intelligence and create
a human-equivalent or transhuman mind. Beyond this lies another question—whether the
creation of this mind will benefit the world; whether the AI will take actions that are ...

[CITATION][C] An intuitive explanation of Bayesian reasoning

E Yudkowsky - Retrieved on April, 2003

[CITATION][C] Creating friendly AI

E Yudkowsky - 2003 - philpapers.org

An intuitive explanation of Bayes' theorem

ES Yudkowsky - Unpublished manuscript. Last revised June, 2003 - perceval.gannon.edu
100 out of 10,000 women at age forty who participate in routine screening have
breast cancer. 80 of every 100 women with breast cancer will get a positive mammography.
950 out of 9,900 women without breast cancer will also get a positive mammography. If ...
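The snippet above states the famous mammography problem but is cut off before the question; the standard form asks for the probability that a woman with a positive mammography actually has breast cancer. A minimal sketch of the Bayes' theorem calculation, using only the numbers quoted in the snippet (the variable names are illustrative, not from the source):

```python
# Numbers taken directly from the quoted snippet:
p_cancer = 100 / 10_000            # prior: P(cancer) among women screened
p_pos_given_cancer = 80 / 100      # sensitivity: P(positive | cancer)
p_pos_given_healthy = 950 / 9_900  # false-positive rate: P(positive | no cancer)

# Bayes' theorem: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
numerator = p_pos_given_cancer * p_cancer
evidence = numerator + p_pos_given_healthy * (1 - p_cancer)
posterior = numerator / evidence

print(round(posterior, 3))  # prints 0.078
```

Equivalently, of the 10,000 women, 80 + 950 = 1,030 test positive, of whom only 80 have cancer: 80/1,030 ≈ 7.8%, far lower than the 80% figure most people intuitively report, which is the point of the essay.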

Levels of organization in general intelligence

E Yudkowsky - Artificial general intelligence, 2007 - Springer
Summary Section 1 discusses the conceptual foundations of general intelligence as a
discipline, orienting it within the Integrated Causal Model of Tooby and Cosmides; Section 2
constitutes the bulk of the paper and discusses the functional decomposition of general ...
