Yes, the "good" old capitalist system of privatizing profits and socializing losses...we're all capable of being capitalists like that...
It’s even stranger, as the financial markets know that real valuation is based on the stability and predictability of revenue.
They call it Net Present Value (NPV), based on standard cashflow discounting.
That’s why contracts and memberships make a business more valuable.
We buy future stable revenue streams when we buy a company.
Anything else is gambling or short-term trading.
The finance sector knows this; it is set up for it.
When you ignore the impacts on the future revenue stream, you are exhibiting a bias called hyperbolic discounting.
To me, sustainability means protecting the future so the present has more value/importance. Apply financial principles to our society.
If we remove that short-term bias, the NPV of oil companies will be zero, as they will run out of revenue.
They get around it by using short-term timeframes and denial.
It’s time to use their own principles to call them out.
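For anyone unfamiliar, the standard cashflow discounting mentioned above can be sketched in a few lines of Python (the cashflows and discount rates here are invented purely for illustration):

```python
# Net Present Value: discount each future cashflow back to today.
# NPV = sum of CF_t / (1 + r)**t for years t = 1..n

def npv(rate: float, cashflows: list[float]) -> float:
    """Discount a series of annual cashflows (year 1 onwards) at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

# The same stable stream is worth less when discounted at a higher rate -
# riskier, less predictable revenue is typically discounted harder, which
# is why contracted/membership revenue makes a business more valuable.
stable = [100, 100, 100, 100, 100]   # e.g. five years of subscription revenue
print(round(npv(0.05, stable), 2))   # ~432.95 at a low (safe) discount rate
print(round(npv(0.15, stable), 2))   # ~335.22 at a high (risky) discount rate
```

Hyperbolic discounting, by contrast, is effectively using a discount curve that collapses the value of anything beyond the next few years, which is the bias the comment is pointing at.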
I wonder if AI value not appearing in the productivity statistics is down to the type of work it can replicate: spewing out corporate jargon. That is, the very thing it can do is the sort of work that Graeber would call a bullshit job.
There's something in the idea of AI destroying the basis of post-1980s employment without anyone noticing, precisely because such jobs had nothing to do with the underlying productivity of the economy.
Haha that's so true! There's a great article to be written about AI and bullshit jobs I think...
On the subject of AI and bullshit jobs, my workplace is going all out to encourage us to use AI. So on Monday I asked it to summarise the meetings in my calendar for the week ahead. It did it fine, in a nice list. I thought, wouldn’t it be great if it presented this on screen, showing Monday to Friday along the top and the hours down the left, with each meeting at the point in the day it was due to happen. I have been told that if I had asked it to do that, it would have. That’s really neat.
These are times that would’ve produced some quality Graeber works
One analysis I read indicates that the fundamental problem here is the amount of debt involved in these transactions, debt that could disrupt banks when the massive and hidden loans funding all of this investment cannot be paid back. It would be one thing if all this were being financed by a sovereign state, but these are private investments, and the amount of debt is staggering.
Yep - check out this piece I wrote last week on the debt problem! https://substack.com/@graceblakeley/p-176414880
The funny thing is that these billionaires think they are insulated from the same violence they inflict on the world every day. The only thing protecting them is the myth of their impunity. And yes, AI is a tool of systemic economic and social violence. They fail to realize that they too are just sexual organs of Capital, to borrow Han’s metaphor, without the agency they imagine they exercise. This is proven by their need to hide behind security services and walled estates. They fear the pitchforks and torches, which is why they try to bend governments to their will. They are successful, for now, but it won’t last as people become increasingly impoverished and they, more bloated.
Hi Grace! Thanks for the explainer. Even my veterinarian brain can handle this breakdown. Speaking of breakdown... Could you give me some sort of timeline as I should be retiring next year and I've got to pull my money out of the market before I have no retirement money! That's the problem right?
A huge chunk of it is wrapped up in mutual funds etc. It is managed by a financial planner who says I'm diversified enough to do okay in a downturn.
I'm half joking asking for your timeline estimate. So if you could address the other half, that would be awesome hahaha. I really appreciate your presence here!
Mike
Haha I'm afraid I couldn't tell you when's best to take your money out Michael - if I had that kind of info I'd be a lot richer than I am! There's every chance the boom will keep going for a few years, and you could miss out on a lot of gains if so. Equally, it could burst in a few months.
Either way, it's worth making sure you're hedging against risk - I'd put some money in high-rated corporate and government bonds, and defensive stocks like consumer staples (especially discount retailers), health, and utilities - the types of assets that do well during a recession. If your FP says you're diversified enough already, I'm sure that's sound advice and I'd stick with where you are!
Mike,
Retirement isn't the time horizon to focus on when considering stock vs fixed income allocation, it's death. If you're not yet retired, you presumably still have at least a good 20-25 years left, which is plenty of time to recover from a couple of major market corrections. So why give up good returns from equities for lousy returns from fixed income when you're not really at risk of losing your shirt?
I've been 100% equities (through mutual funds) up until a few years ago, when I moved to investing directly myself into high-yield but still blue-chip stocks (about 20 of them, along with a high-yield ETF, so it's plenty diversified). I trade very infrequently, and only to rebalance from low yields (i.e. stocks that have risen sufficiently in price) to higher-yielding (lower-price) stocks. Since they're all blue chip (large cap, steadily rising dividends, dividend payouts less than earnings), I'm not concerned with stock market ups and downs; I'm just collecting dividends to fund the retirement payout.
Taking control of your own portfolio isn't difficult and the 1-2% you're probably paying in mutual fund and/or advisor fees is a big chunk of your potential returns that you're giving away.
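To put some rough numbers on that fee drag (the 7% gross return and 1.5% annual fee here are illustrative assumptions, not a prediction):

```python
# Compound a starting balance with and without an annual fee drag.
def grow(balance: float, years: int, annual_return: float, fee: float = 0.0) -> float:
    """Apply (return - fee) each year for `years` years."""
    for _ in range(years):
        balance *= 1 + annual_return - fee
    return balance

start, years = 100_000, 25
gross = grow(start, years, 0.07)           # no fees: ~5.43x the start
net = grow(start, years, 0.07, fee=0.015)  # 1.5% annual fee: ~3.81x the start

print(f"no fees: {gross:,.0f}  with 1.5% fee: {net:,.0f}")
```

With these assumptions, the fee eats roughly 30% of the final pot over 25 years, which is the "big chunk of your potential returns" the comment is getting at.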
If you are risk averse, you should start pulling your money out now; but don’t do so all at once - move to cash slowly to take advantage of rising valuations. Otherwise, make sure your stock allocation is only 40 percent, with the rest in income-generating bonds. Then you should be able to weather a downturn. People who left their money in the markets after 2000 and 2008 did better than those who pulled their money.
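The 40/60 arithmetic is simple enough to sketch (the portfolio figures below are invented purely for illustration):

```python
# How much to shift to hit a target allocation (e.g. 40% stocks / 60% bonds).
def rebalance(stocks: float, bonds: float, target_stock_pct: float = 0.40) -> float:
    """Return the dollar amount to move INTO stocks (negative = sell stocks)."""
    total = stocks + bonds
    target_stocks = total * target_stock_pct
    return target_stocks - stocks

# A hypothetical $500k pot currently sitting at 70/30:
move = rebalance(stocks=350_000, bonds=150_000)
print(move)  # -150000.0 => shift $150k from stocks into bonds
```

Doing that shift in several tranches rather than in one trade is the "move slowly" part of the advice above.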
I’m not a digital native & I don’t know what to make of AI. Sometimes I hear the whole thing is hype, & they’re just predictive text on steroids; then I hear they’re folding protein molecules!?
There's a big difference between LLMs like ChatGPT, which are one application of AI, and innovations like AlphaFold, which uses similar technology for genuinely useful purposes. LLMs can, of course, be useful, but they're probably not going to solve the toughest problems in science and technology. Check out the book Empire of AI for more on this.
Thank you, I’ll check that book out.
🤯
Indeed!
Another "Minsky Moment" in the pipeline?
https://www.newyorker.com/magazine/2008/02/04/the-minsky-moment
Great post Grace, interesting point about the circular economy, although maybe less risky with these big, cash-rich players that are essentially making a bet on the long-term success of OpenAI becoming profitable. Probably a solid bet in whatever form that comes. The circular economy in the dot com era sounded more systemic from your comparison, perhaps even amongst smaller players. Can you see that with AI too? I suspect the fallout from the bubble will be more among smaller players with decent tech that fail to establish themselves in a competitive market.
Yes, I'd agree with you there Dom. The economy as a whole has become so much more concentrated that there's less risk of any of the big players going under these days (it's hard to imagine what a modern Enron or WorldCom would look like). My hunch is that a lot of the risk is building up in private financial markets, and while the big guys like KKR and Blackstone will probably be fine, there are a lot of institutions that will be really vulnerable in the event of a crash - and many have connections to the big banks, so it's hard to tell how risk could spread.
By institutions you mean PE/VC firms?
non-bank financial institutions - check out this piece https://substack.com/@graceblakeley/p-176414880
Thank you for sharing. Having looked at this a bit more, perhaps the rise of non bank lending is the real risk here, not an AI bubble?
I feel like the central lie underpinning the hype is the claim that "AGI is just around the corner if we scale more". If AGI is going to happen in the next few years, then any amount of investment is worthwhile and will pay off because of the theoretical potential of AGI (if you can get anyone to define it consistently). It would theoretically enable mass automation of knowledge work, significantly increasing the power of capital relative to workers in industries that have previously been forced to pay well to attract staff. But AGI is not just around the corner; LLMs will not stochastically parrot our way to it just by scaling another few / 10 / 100 billion parameters. Performance has already plateaued, and the cost to run these models is enormous compared to traditional enterprise software - often more than even what premium paid users are forking out for it.
As someone who works with companies to help them understand where to use AI (often the answer is just: don't) and to evaluate its performance, the on-the-ground reality is so far removed from the hype. LLMs are great at synthetic benchmarks and are very superficially convincing when used outside your domain of expertise, but as soon as you set strict evaluation criteria they fall apart. For certain types of NLP task they are certainly streets ahead of the previous generation of technology, and there are use cases where they can be beneficial, but we're talking about incremental increases in productivity rather than a revolution. Think fancy autocomplete in your code editor rather than generating an entirely novel app from scratch (which, if it even works, will be a bug-filled mess full of critical security flaws).
The hype wave has been going for almost three years now, but reality is creeping in and the trickle of negative articles and opinions is turning into a river. The technology as it actually works simply cannot deliver on the hype, the revenue will not materialise, and the crash is inevitable. Just a case of when, and how big the bailout will be.
Phil, I'm so glad to hear you say this! I'm no expert but from my limited research it really doesn't seem as though LLMs are the best route to AGI. In fact, with my political economy hat on, it doesn't even seem like AGI is the right thing to be aiming for. The Chinese are taking a much more pragmatic approach to AI, deploying it strategically in production to boost efficiency and cut costs - it seems like this approach is going to prove much more valuable in the long run.
A really pertinent worry at the end! We need to use AI to empower workers instead of disempowering them; we should safeguard workers against the abuses of AI. We also need to be wary of an AI tyranny that justifies surveillance capitalism. An AI-integrated future is inevitable; however, without wide participation by workers in its design, AI will usher us into a desperate and chilling space where all of our actions are closely monitored, revolution is banned, and democratic rights are cancelled.
It's becoming exhausting having to get through an economic crisis every decade. People in the US are finally waking up to the destruction wrought by the neo-robber barons. The only bright side is that real leftist politics is on the rise. We desperately need it in America.