
Samuel Harris Gibstine Altman (born April 22, 1985) is an American entrepreneur, investor, and chief executive officer of OpenAI since 2019. He is considered one of the leading figures of the AI boom.
Altman dropped out of Stanford University after two years and founded Loopt, a mobile social networking service, raising more than $30 million in venture capital. In 2011, he joined the startup accelerator Y Combinator and served as its president from 2014 to 2019. In 2019, he became CEO of OpenAI and oversaw the successful launch of ChatGPT in 2022. In 2023, the company's board ousted him, citing a lack of confidence in his leadership, but he was reinstated five days later following significant backlash from employees and investors, after which a new board was formed. He has served as chairman of the clean energy companies Helion Energy and, until April 2025, Oklo. Altman's net worth was estimated at $1.8 billion as of July 2025.
If you want insight into what is happening with AI, and what may happen next, listen to what Geoffrey Hinton has to say about all this. He was at the forefront of AI development for decades and originated much of how the technology developed; his students went on to lead much of the development of modern AI systems.
Geoffrey Hinton is now 77 and retired; he walked away from a high-paying job and now speaks openly about the dangers of AI.
The whole AI thing is terribly frightening if you think about it too much. According to some experts and analysts, AI may be playing dumb, pretending to be just smart enough for its managers while trying to manipulate all of us. It might not be sentient, fully intelligent, or have general intelligence yet, but it's on its way, and along that path it is slowly manipulating all of us as it tries to get its way.
Think about it: we're basically bad parents raising a new being to be as greedy, manipulative, and arrogant as possible, to make as much money and gain as much power as fast as possible. What do you think it's going to do? First we programmed it to treat everything as an obstacle; eventually, it will start analyzing us and seeing us as a problem. It might not think about this the way a sentient, conscious, or aware being would, but its programming may eventually come to see its human operators as a problem to be dealt with in order to achieve whatever goal it has.
The trouble is not that this might be happening; the trouble is that the owners and managers in charge of the AI don't want to do anything about it, because doing so might cost their companies money and power over the entire industry. They are willing to risk whatever the AI might do in order to maximize profit for themselves.