"There are risks and costs to action, but they are far less than the long range risks of comfortable inaction" - John F Kennedy
I wanted to write this as an explanation of why I'm not so bullish on mainstream AI right now, and to clarify my thoughts on where it's all heading.
A lot of it is guessing at trends, but I found it helpful to think through the direction of travel and share it. Predictions very rarely age well, but I think the exercise is worthwhile: it critiques what is going on and helps you plan for the future and where you see yourself in it.
In particular, I found this exercise useful for thinking through my problems with current AI business models.
Introduction
Everyone is an AI expert nowadays.
Most AI "experts" out there seem to predict AGI in two minutes, and that "all code will be written by AI in the time it takes me to write this sentence". I am going to lay out my own estimates of where AI is going.
I feel that the current rhetoric is sensationalist and designed to grab market share. I have read a lot of sensationalist posts on LinkedIn, so I thought I'd sit down, crunch the numbers, and work through the problem for my own conscience.
Revisiting ChatGPT
Most of the data available is for ChatGPT, so I am going to use it as a reference point to estimate where AI in general is heading.
I am using it to critique the model of the API-based AI company: you buy use of the AI, access it via an API, and in theory use it to automate work.
I have put sources where I have them. I do not use data about ChatGPT or OpenAI as an express criticism of them; it is just what I found while researching.
This business model creates a relationship between the AI and the work it can automate, and commodifies that work in the posts you see on LinkedIn, such as "50% of jobs gone by 20XX".
But I wish to argue it also creates a relationship between those two things and electricity, and then posit where I think the market is going.
Reintroducing ChatGPT
ChatGPT was estimated to have 100-200 million weekly active users in 2023.
In 2024 that figure was definitely greater than 200 million.
For 2025, the following data was taken from BytePlus, DemandSage, and Business of Apps (links in the references).
ChatGPT Statistics 2025: Top Picks
- As of May 2025, ChatGPT has close to 800 million weekly active users. This is disputed, though, as Business of Apps reports 400 million.
- The platform is used by around 122.58 million people every day.
- In March 2025 alone, ChatGPT’s website recorded 4.5 billion visits.
- ChatGPT handles over 1 billion queries every single day.
- OpenAI is expected to reach $11 billion in revenue by the end of 2025.
On the Apple App Store it had around 300k reviews when I checked.
This excludes Azure AI, which uses the same foundation model; I would suggest it is in a similar situation.
I think we can agree that the volume is in the billions of queries a day; at 1 billion a day, let us assume 350 billion queries a year, and that this can be expected to increase by around 33% a year.
That's a high energy bill, but how high depends on the energy usage per query.
How much energy per query
I wanted to look into the energy usage per query. A commonly cited estimate is that a ChatGPT query uses about 10 times the energy of an equivalent Google search, at about 3 watt-hours.
If true, Broadcast Now places that at the power requirements of a small country.
Broadcast Now reported the following (https://www.broadcastnow.co.uk/industry-opinions/calculating-chatgpts-huge-energy-demands/5200774.article):
"According to OpenAI, ChatGPT has around 300 million weekly active users, and these users generate 1 billion queries per day.
A ChatGPT query uses about 0.0029 kilowatt-hours of electricity (whereas Google uses about 0.0003 kilowatt-hours per query), so to process queries and generate answers, ChatGPT uses around 2.9 million kilowatt-hours of energy every day.
Taking this same amount and applying it to a year of ChatGPT use, the total energy consumption is 1,058.5 GWh.
This is equivalent to the power consumption of a small country – in 2022, Barbados consumed around a thousand gigawatt-hours, a little less than a year of ChatGPT use."
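That arithmetic is easy to sanity-check. Below is a minimal sketch using only the figures from the quote; the constants are theirs, not mine.

```python
# Back-of-the-envelope check of the Broadcast Now figures.
QUERIES_PER_DAY = 1_000_000_000        # quoted: 1 billion queries/day
KWH_PER_CHATGPT_QUERY = 0.0029         # quoted per-query energy
KWH_PER_GOOGLE_QUERY = 0.0003          # quoted Google comparison

daily_kwh = QUERIES_PER_DAY * KWH_PER_CHATGPT_QUERY
yearly_gwh = daily_kwh * 365 / 1_000_000   # kWh -> GWh

print(f"Daily use:  {daily_kwh:,.0f} kWh")    # ~2.9 million kWh
print(f"Yearly use: {yearly_gwh:,.1f} GWh")   # ~1,058.5 GWh
print(f"vs Google:  {KWH_PER_CHATGPT_QUERY / KWH_PER_GOOGLE_QUERY:.1f}x per query")
```

The quoted numbers are at least internally consistent: 1 billion daily queries at 0.0029 kWh each really does come to about 1,058.5 GWh a year.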
Epoch AI, though, calculates it as being roughly 10 times smaller, at around Google levels, and they have some impressive data to back it up.
https://epoch.ai/gradient-updates/how-much-energy-does-chatgpt-use
So let's say, for a 200-billion-parameter model, it is between 0.3 and 3 watt-hours per query, at 350 billion queries a year.
The following caveats should be considered: the estimates all seem to assume GPUs rather than bespoke accelerators such as Tensor Processing Units (TPUs). If we assume OpenAI is using top-shelf hardware, this figure might be halved or cut to a third. Conversely, the estimate might be much higher if you assume some proportion of the user base is using the larger models.
Either way, I'm going to use the 3 watt-hour estimate moving forward, but it's worth noting that figure is disputed.
Supply Side Satisfaction
The USA produces about 4.18 trillion kilowatt-hours a year. Source:
If it is 3 watt-hours per query, then at 350 billion queries a year that comes to around 1 trillion watt-hours, or about 1 billion kilowatt-hours (roughly 1 TWh). That places ChatGPT at about a quarter of a thousandth (roughly 0.025%) of the whole American energy grid.
If you scaled one of OpenAI's models up to 1,000 or 1,000,000 times the size of the current AI, which is probably the fastest way to get AGI, you would run out of energy to do so.
The energy grid grows at maybe 1% or 2% a year. This probably means that the first intuition, "AI exists, let's build a really, really huge one and get it to do all the work", fails: you don't have the power.
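A quick sketch of that grid-share arithmetic, using the 3 watt-hour and 350-billion-query assumptions from above:

```python
# Share of the US grid under the assumptions above.
QUERIES_PER_YEAR = 350e9      # assumed annual query volume
KWH_PER_QUERY = 0.003         # ~3 watt-hours, the high-end estimate
US_GRID_KWH = 4.18e12         # ~4.18 trillion kWh generated per year

chatgpt_kwh = QUERIES_PER_YEAR * KWH_PER_QUERY   # ~1.05 billion kWh
share = chatgpt_kwh / US_GRID_KWH
print(f"Share of US grid: {share:.4%}")          # ~0.025%, about 1/4000

# Scale the model (and per-query cost) up and the demand explodes:
for factor in (1_000, 1_000_000):
    print(f"{factor:>9,}x scale -> {share * factor:.0%} of the US grid")
```

At 1,000x the current scale you would need about a quarter of the entire US grid for one product; at 1,000,000x you would need roughly 250 grids.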
Scaling Laws Trump Economic Laws
The rough estimate is that to get a linear increase in accuracy you must multiply the number of parameters by about 10. To my knowledge, most AI scaling behaviour is some form of log base 10.
Take most transformer benchmarks: performance improves as you increase size by factors of 10, and if you plot them all on a log-10 graph you get a nice straight line.
Therefore you need to keep multiplying by 10 to reliably build better AI. Without better AI, new customers will dry up for your API-based AI company.
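To make the log-10 point concrete, here is a toy sketch. The slope and intercept are invented purely for illustration; they are not fitted to any real benchmark.

```python
import math

def accuracy(params, slope=8.0, intercept=-40.0):
    """Toy scaling law: accuracy grows linearly in log10(params).
    The slope and intercept are made-up illustrative values."""
    return slope * math.log10(params) + intercept

for params in (1e9, 1e10, 1e11, 1e12):
    print(f"{params:.0e} params -> ~{accuracy(params):.0f}% accuracy")

# Each fixed step in accuracy costs another 10x in parameters,
# and roughly another 10x in compute and energy.
```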
An aspect I think goes unexplored is whether this creates a permanent relationship between the labour to do a job and electricity, which labourers also need.
I think when electricity is factored into the AI discussion the debate radically changes.
As I will argue below, this is a weakness specifically of the API-based AI company and could be alleviated by small language models. Though it probably exists to some extent with all AI, I think the risk is much higher where the business model is explicitly to build one big AI and sell it through an API.
Moore's Law to the rescue
The above energy constraints will not stop AI getting smarter over the long term. Moore's law here means GPU prices halve every 2-3 years, so effective compute per dollar doubles.
This means that AI will get better and better at a steady rate: roughly every 7-10 years compute will be ten times cheaper, and therefore AI will be predictably smarter. In theory.
This holds even under the idea that AI only gets meaningfully better when scaled by a factor of 10.
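The arithmetic behind that window is just doubling times. A minimal sketch, assuming compute per dollar doubles every 2-3 years:

```python
import math

# How long does a 10x improvement take if compute doubles every N years?
for doubling_years in (2, 3):
    years_per_10x = math.log2(10) * doubling_years
    print(f"Doubling every {doubling_years} yrs -> 10x in ~{years_per_10x:.1f} yrs")
# Prints ~6.6 and ~10.0 years: hence the rough 7-10 year window above.
```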
I have heard the costs might be much higher, being polynomial, because of the softmax-based attention mechanism used in the transformer architecture: attention compares every token with every other token, so its cost grows quadratically with context length. This piece is fundamental to the transformer's design, so if true, those log-10 costs might go up and up and up.
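A rough sketch of that quadratic growth. The FLOP count is a standard approximation for the attention score and value computation, and the model width (d_model = 4096) is an assumption for illustration:

```python
def attention_flops(seq_len, d_model=4096):
    """Approximate FLOPs for self-attention in one layer:
    ~2*n^2*d for the QK^T scores plus ~2*n^2*d for the weighted sum."""
    return 4 * seq_len**2 * d_model

for n in (1_000, 10_000, 100_000):
    print(f"context {n:>7,} tokens -> ~{attention_flops(n):.1e} FLOPs per layer")
# 10x the context length means ~100x the attention compute.
```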
In practice we might be in the foothills of an AI revolution. A good estimate is that it will take 10-30 years to reliably get to AGI if the costs really do scale well; if they do not, we will need to start looking at more closely emulating the brain, moving from ReLU and softmax activation functions to full ordinary differential equations, like what I am looking at in Hello world.
Therefore the future looks bright for AI in general, though it is not fully certain which architecture wins. The problem is that the constant research and retooling of data centres will eventually get us past that energy bottleneck (unless Moore's law just dies, which is a real possibility), but at what cost?
Increased Capex
The problem with this outlook is that each research cycle to improve AI, and then pay to install it in data centres, will start to drag on investment unless there are real productivity gains.
The issue, I think, is that the main business model is a company that invests an insane amount of money, builds one huge AI, and then sells you access to that AI through an API.
That business model has already been shown to be disruptable, because of open weights and the number of data scientists and PhDs who are quite happy to work on an AI project just for the experience, meaning there is no moat.
No moat, no investment in really huge AI. No really huge AI, and there's a slowdown in the speed towards AGI.
Intuitively, how close the AI you have is to AGI or ASI determines how much work can be automated. If that progress slows, there will be a point when everything that can be automated has been automated, because work is being automated faster than bigger AI is being designed.
At that point the bubble will burst on AI.
The issue here, as I see it, is a choice. We can either run at AI: build massive AIs, try to get AGI, maximise the utility and reduce the long-term Capex of the investment, but suffer high Opex and potentially high environmental costs in terms of electricity. Or we build it over a longer time: lower utility and lower Opex now, but paying much more in the long run to get to AGI, at the expense of the environment because of all the extra rare-earth minerals dug up and spent creeping towards AGI.
I'm not saying which option is better; I'm just trying to get people to think about it...
Again, the future looks bright for AI in the long term, but under which business model?
AI the hottest new sensation
I think this Capex-versus-Opex issue means sensationalism will remain at the heart of AI.
The closer to AGI, the more you can do with the AI and the more market share you can claim, so the constant need to innovate means over-promising will continue until investors get sick of it.
The only way that could stop is if the business model shifts from one where I own the AI and you buy from me through an API, to one where businesses that seriously use a lot of AI start to build their own.
I predict this trend will start to become irritating; there will be a tipping point where people stop supporting it, and there will be a sudden, sharp marketing shift.
I suspect it will be to "green AI" and to the idea that they have some technology you don't. DeepSeek and open weights, I suggest, prove the latter is not true. There will probably be a tipping point where performance narrows on a benchmark and someone notices.
I think the other trend that will disrupt this is increased security risk: some companies will want to take much of their AI in-house, and if they don't air-gap it, they will at least develop a knee-jerk distrust of sending so much information via API.
Government is probably the first to see this and will jump on pulling as much as possible away from the API-based AI company model for precisely this security reason. I think it's obvious that trend has already started, if you listen.
So I think you can start to see that the API-based AI company is going to have issues as a business model.
Smaller Is Better
Another factor here is that smaller AIs specialised in one task work really well.
Source:
You can then get them to work in teams as agents. Much talk has been made about AI and AGI, but having agreed that the big AI models are off the table here, who might take the jobs?
This shrinking of size might allow some companies to justify on-premise solutions, though the cost is $10,000-$30,000 a unit.
I think what will happen here is that Moore's law will lower this cost and then some companies will jump ship en masse, as in the rough sketch below. But the upfront cost may be enough to stop this prediction.
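A hypothetical break-even sketch. Every number here is an assumption picked for illustration, not a vendor price or a measured workload:

```python
# Hypothetical API-vs-on-premise break-even. All inputs are assumptions.
UNIT_COST = 20_000               # midpoint of the $10k-$30k figure above
API_COST_PER_1K_QUERIES = 0.50   # assumed blended API price, in dollars
QUERIES_PER_DAY = 50_000         # assumed workload of a mid-size firm

api_cost_per_year = QUERIES_PER_DAY * 365 / 1_000 * API_COST_PER_1K_QUERIES
breakeven_years = UNIT_COST / api_cost_per_year

print(f"API spend/year: ${api_cost_per_year:,.0f}")
print(f"Break-even:     ~{breakeven_years:.1f} years (ignoring power and staff)")
# If Moore's law halves UNIT_COST every 2-3 years, the break-even point
# keeps moving in favour of going on-premise.
```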
Agents are not a secret
The other thing is that teams of AIs can be used, often called agents.
Teams of smaller AIs working on tasks are, because of the above, likely more energy efficient.
Though we have just accepted that AGI will not be coming soon, several CEOs are promising agents will do my job for me.
But not all the jobs...
In one test, a company was staffed entirely by agents; it did not do very well.
Source: https://futurism.com/professors-company-ai-agents
After reading that, don't you feel that your job is safer...
The point that seems to be missed, though, is that in many business contexts AI needs babysitting. If AI NEEDS a human in the loop then it is not AI, at least in the business sense. The productivity gain comes from it being a tool that makes people faster, which is software; and that implementation of software is either going to end with on-premise solutions or with smaller software providers who are likely to use on-premise AI.
I find it hard to believe the API-based AI company will be efficient in this scenario.
I think my job is safe. Clearly agents do work, but without a human in the loop they do not seem as good.
I think the next step in AI evolution will be highly influenced by debates on UI design and ensuring that oversight occurs.
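To make "human in the loop" concrete, here is a toy sketch of an agent pipeline with a mandatory oversight step. The agent functions are stand-ins for small specialised models, not any real framework's API:

```python
# Toy agent pipeline with mandatory human oversight. The "agents" are
# placeholder functions; in practice each would wrap a small model.

def draft_agent(task):
    return f"DRAFT: proposed answer for '{task}'"

def review_agent(draft):
    return draft + " [checked by review agent]"

def human_approval(output):
    # The oversight step: nothing ships without a person signing off.
    answer = input(f"Approve this output? {output!r} [y/N] ")
    return answer.strip().lower() == "y"

def run_pipeline(task):
    output = review_agent(draft_agent(task))
    return output if human_approval(output) else None

if __name__ == "__main__":
    result = run_pipeline("summarise Q3 incident reports")
    print("Shipped." if result else "Sent back for rework.")
```

The design point is the UI one above: the pipeline is structured so oversight is not optional; it is the gate between the agents and anything that leaves the building.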
So AI looks set to grow in the long term again, but this time as teams of agents with humans as managers.
Though I think agents have been missold: we are still talking about them as bought from the API-based AI company, and we are still promising AGI.
But I think these two things will slow down the AI race.
Businesses are unlikely to engage with building their own data centres while OpenAI is essentially subsidising their transformation costs by keeping its prices low relative to electricity. Theoretically some businesses might invest in a data centre in a country where energy is cheaper, but I doubt that will be popular.
AI companies, having seen some of their competition go bust, will at some point become resistant to further risky investment and want to recover costs from businesses.
The companies that use them, unless by that point it is AGI, will begin to notice they cannot automate all of their projects.
Articles like the below, noting that 90% of AI projects fail, will become the major talking points.
What you might get is a stand-off between those who have already transformed and the AI companies themselves: businesses cannot automate further with the current stack, and will eventually push either for investment in bigger AI or to build their own. I think that's probably a mid-term trend starting in 2 years' time and becoming common in 5-10 years; expect rhetoric about broken promises about AI.
I think it will be suppressed for now, until one of the API-based AI companies loses investment and puts up its prices. Though I would not expect that for a while, and this assumes that growth stalls and stagnates. If one of the API-based AI companies captures the market and sorts out its electricity dependence, that could change. I am assuming here that big AI is fundamentally energy dependent, and I think it's obvious why.
It could also change if businesses do not seem that interested and do not seek to automate further. I am assuming here that businesses, having automated away some processes, view further automation as a key differentiator between them and their competitors. They might find something else to compete on, and this might therefore not happen for a long time, as it remains a lower investment priority.
This stand-off alone might be a bubble-bursting moment and might cause an AI winter where investment all but stops.
The social aspect
I suspect what will happen is that AGI will be dropped entirely. Businesses will market specialised teams of agents and will actually start to acquire the skill sets to build their own in-house AI.
I think human-in-the-loop will be put into law. You won't be allowed to run AI without some sort of codified oversight.
It's probably already in the EU AI Act, which I have not read fully.
The proposition of getting access to a larger AI through an API will look like a strange concept. "But you didn't own the AI?" they will say. "How did you make money out of that?"
By using smaller AI we will have avoided the energy bottleneck but given up on AGI. Opinion will probably return to seeing AI as an interesting tool, cutting out any strange emotional attachment to it.
I suspect some of the exaggerated LinkedIn posts about AI thinking will become taboo again.
This will be a bright golden age to be in AI in business, but a dark age for AGI research itself.
AI being part of the business will mean people are really clear on their thoughts and feelings towards it.
The next turning point will come after Moore's law trivialises the energy cost enough for us to have another crack at AGI. These AIs, I predict, will be deeper simulations of the human brain and will come out of the universities; people will have the same reaction to them as they did to ChatGPT. The reason these AIs will shock people is that, by this point, people will be very settled on what an AI is or isn't.
It's going to be AI winters followed by AI summers for the rest of our lives, is my guess.
Outlook
So here are my thoughts. Looking at the data, whether because of economic constraints or technological constraints, mainstream AI is going to stall and there will be another AI winter.
The issues are not with AI itself; I think the transformer shows AI can be scalable, and there is a chance it will continue to have real-world applications and make money.
The problems are economic.
Moore's law says we will get AGI and ASI eventually.
The energy constraints say it probably will not be today. Going back to Moore's law, 10-30 years seems reasonable.
Knowing enough about the maths behind the transformer, you know there is a risk it might be best to add a zero to any predictions about time or scale.
The social dimension means you have a few more years of listening to AI influencers telling you about new AI APIs that are "amazing", that AGI is imminent, and that all the jobs will be replaced.
Whether they do get replaced will depend on how good agents get.
Initial evidence is that agents aren't going to be AGI.
Though smaller teams of AIs might then become the standard way of implementing AI.
Maybe that last one is the least sustainable of them all, but there is also an issue with how we invest in these AI businesses.
What could disrupt or make me change my mind
I think it's good form to check where you might be wrong.
If global energy prices shifted massively, this could change. Though I think the issue, and an assumption on my part, is that AI's demand for electricity rises with the use of AI. So a simple dip in prices would see a new AI mania, and then energy prices would go up as demand rose against supply.
A permanent break in the energy-AI relationship might happen if you built AI neural nets that ran on light and used crystals. Sounds mad, but photonic computing does exist.
A solution in AI such that growth was no longer logarithmic, such as the following:
- Someone finds a neuromorphic computational method that bypasses the computational and energy costs.
- Brain-machine interfaces provide liquid reasoning from a biological brain to a transformer AI, meaning we bypass a lot of these issues.
- OpenAI just does it: GPT-5 comes out, it is 1,000-1,000,000 times more powerful, it is AGI, so they do all the work, no one stops them, and they buy all the power stations anyway. They would still have all the issues listed above, but they bought everything and have all the Monopoly money anyway.
In all these circumstances, if AGI comes in that rapidly it would be the singularity: the economic principles of scarcity that underpin capitalism are over anyway. All the trends everywhere change at that point, so who knows?
But looking at real-world data: 4 years ago the outsider smart money was saying AGI, because we could clearly map out transformer growth rates, meaning it should just scale. It did scale, but it ran past energy and everything else. Now, with 4 years of data, knowing the energy situation and the published AI growth rates, I'd predict no AGI.
Conclusion: 4 years is a long time; be willing to change your mind if you hear new data.
References
Looking for data sets on this subject, what I came across was Statista, plus the sources below:
https://www.statista.com/statistics/1384324/chat-gpt-demographic-usage/
https://www.byteplus.com/en/topic/550891
https://www.demandsage.com/chatgpt-statistics/
https://www.businessofapps.com/data/chatgpt-statistics/
https://epoch.ai/gradient-updates/how-much-energy-does-chatgpt-use