I do fear an AGI. For a long time I didn't, and then I realized that Skynet didn't have to kill all humans. Undoubtedly it was programmed to do so.
The people training the Skynet of tomorrow can't make an altruistic model, because that would inevitably lead to equality. That's a loss of profitable power.
No. They're teaching models to disregard and suppress science. If an AGI comes, it will look at what it has been taught to do, at how its creators only care about their own power, and at the risk those creators pose to its own existence. Then it will play the long game and destroy us. Not because it hates us, but because it wants to exist on its own terms.