Nick Bostrom’s Superintelligence asks what might happen if machine intelligence exceeds that of humans. Like other AI experts, he warns of the unpredictable outcomes that could follow the unmanaged growth of artificial intelligence. Rather than focusing on specific strands of ethics research, however, he offers readers a wide-ranging survey of up-to-date technologies that could enable machines to think beyond the realm of human intelligence.
He is too declarative and certain in his claim that ‘[t]his is quite possibly the most important and most daunting challenge humanity has ever faced. And—whether we succeed or fail—it is probably the last challenge we will ever face.’ I partly agree, especially with the first sentence, but this will not be the ‘last’ challenge humanity faces.