AI will create winners and losers, and it won’t be fair. The middle class will give out first (theirs are the jobs AI will automate away first), deepening wealth inequality and leaving most people in lower-wage work like animal care, barista shifts, and babysitting.
If you believe in some definition of utilitarianism, you believe some outcomes are better than others. My definition of utopia (the optimum) is one where the maximum number of people live as they will, and will as they wish (Harry Frankfurt calls these first- and second-order desires; I may formalize this in future writing).
The private sector’s incentive is to automate, automate, automate, so the only entity that can realistically re-balance the scales is the government, a system composed of individuals whose primary incentive is to get re-elected. Politicians acquire votes through the beliefs of the masses, and the beliefs of the masses are formed on narratives, not facts. I threw myself behind Andrew Yang’s Freedom Dividend campaign, but though his proposal (a solution to automation) isn’t too early for America, it is too early for Americans.
The narrative around AI is what needs to change to solve the problems of automation, and the strongest narratives are built on the core human emotions of fear and hate.
To change it, we need the public to see AI not simply as their benevolent Alexa, but as a demon coming for their jobs and wallets. Only when that narrative takes hold will public belief change, and with it the politicians and, just maybe, the White House.
Only that way will the nation wake up.