by Ciro Santilli (@cirosantilli)
AI alignment
As highlighted e.g. in Human Compatible by Stuart J. Russell (2019), AI alignment is intrinsically linked to the idea of utility in economics.
Reward modeling
See e.g.:
- Human Compatible
- deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84
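The DeepMind post above describes reward modeling as learning a reward function from human feedback rather than hand-coding it. A minimal sketch of that idea, under assumptions not taken from this article (a toy set of outcomes, pairwise preference labels, and a Bradley-Terry model fit by gradient ascent):

```python
import math

# Hypothetical toy data: (preferred, not_preferred) index pairs over 3 outcomes.
# A human labeler is assumed to have judged outcome 2 best and outcome 0 worst.
NUM_OUTCOMES = 3
preferences = [(2, 0), (2, 1), (1, 0), (2, 0)]

# One learned scalar reward per outcome, initialized to zero.
rewards = [0.0] * NUM_OUTCOMES

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Gradient ascent on the Bradley-Terry log-likelihood:
#   P(a preferred over b) = sigmoid(rewards[a] - rewards[b])
lr = 0.5
for _ in range(200):
    for a, b in preferences:
        p = sigmoid(rewards[a] - rewards[b])
        grad = 1.0 - p  # d log P / d (rewards[a] - rewards[b])
        rewards[a] += lr * grad
        rewards[b] -= lr * grad

# Rank outcomes by learned reward, highest first.
ranking = sorted(range(NUM_OUTCOMES), key=lambda i: rewards[i], reverse=True)
print(ranking)  # prints [2, 1, 0]
```

In a real system the reward model would be a neural network over agent trajectories and the learned reward would then be optimized by a reinforcement-learning agent; this sketch only shows the preference-fitting step.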
AI safety
Basically, ensuring that AI alignment is good enough to allow us to survive the singularity.