Stephen Hawking on the great risks of the “default” scenarios for the future of AI

Stephen Hawking, the great physicist, sees into the future of humanity like no one else. He identifies our greatest risks related to the future of self-improving AI machines:

(1) Human extinction, if AI machines cannot be controlled at all. He said, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

(2) Huge wealth [and power] gaps, if the owners of AI machines do not allow a fair distribution of wealth once these machines take on all human labor. He said, “If machines produce everything we need, the outcome will depend on how things are distributed.” Hawking continued, “Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.”
