by Ciro Santilli (@cirosantilli)
AI alignment
As highlighted e.g. in Human Compatible by Stuart J. Russell (2019), AI alignment is intrinsically linked to the idea of utility in economics.
Table of contents
- Reward modeling
- AI safety
Reward modeling
See e.g.:
- Human Compatible
- deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84
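The core idea behind reward modeling, as in the DeepMind post linked above, is to learn a reward function from human preference judgments rather than hand-coding it, and then optimize the agent against the learned reward. A minimal sketch of the preference-learning step, assuming a linear reward model and the standard Bradley-Terry preference likelihood (all names and the synthetic data here are illustrative, not from the article):

```python
# Sketch: fit a reward model from pairwise preferences (Bradley-Terry).
# A simulated "human" prefers behaviors with a large first feature and
# a small second feature; we recover that from comparisons alone.
import math
import random

random.seed(0)

def true_reward(x):
    # Hidden reward the simulated annotator uses to label preferences.
    return 2.0 * x[0] - 1.0 * x[1]

def model_reward(w, x):
    # Linear reward model r_w(x) = w . x (an illustrative assumption).
    return sum(wi * xi for wi, xi in zip(w, x))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic preference dataset: pairs of behaviors (feature vectors),
# labeled 1.0 if the annotator prefers the first one.
pairs = []
for _ in range(500):
    a = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = [random.uniform(-1, 1), random.uniform(-1, 1)]
    label = 1.0 if true_reward(a) > true_reward(b) else 0.0
    pairs.append((a, b, label))

# Gradient descent on the logistic preference loss:
# P(a preferred over b) = sigmoid(r(a) - r(b)).
w = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    grad = [0.0, 0.0]
    for a, b, label in pairs:
        p = sigmoid(model_reward(w, a) - model_reward(w, b))
        for i in range(2):
            grad[i] += (p - label) * (a[i] - b[i])
    for i in range(2):
        w[i] -= lr * grad[i] / len(pairs)

# The learned model should rank new behaviors like the annotator does.
x_good = [0.9, -0.5]
x_bad = [-0.8, 0.7]
print(model_reward(w, x_good) > model_reward(w, x_bad))  # True
```

In the full reward-modeling loop the agent is then trained to maximize the learned reward while fresh comparisons keep the model honest; this sketch covers only the supervised fitting step.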
AI safety
Basically, ensuring that good AI alignment allows us to survive the singularity.
Tagged: Human Compatible