wuphysics87 in unpopularopinion @lemmy.world
There is no AI bubble
Not something I believe full stop, but IMO there are signs that, should there be a bubble, it will pop later than we may think. A few things for consideration.
Big tech continues to invest. They are greedy. They aren't stupid. They have access to better economic forecasting than we do. I believe they are aware of markets for the /application/ of AI which will continue to be profitable in the future. Think of how many things are pOwErEd By ArTiFiCiAl InTeLlIgEnCe. That's really just marketing-speak for "we have API tokens we pay for."
Along these lines comes the stupid. Many of us have bosses who insist, if not demand, that we use AI. The US Secretary of Defense had his own obnoxious version of this earlier this week. If the stupid want it, the demand will remain, if not increase.
Artificial intelligence is self-replicating, meaning if we feed it whatever stupid queries we make, it will "get better" at the specifics and "create more versions". This creates further reliance on and demand for those products that "do exactly what we want". It's an opiate. Like that one TNG episode with the headsets (weak allusion and shameless pandering, I know).
IMO generative AI is a dead end which will only exacerbate existing inequity. That doesn't mean there won't continue to be tremendous buy-in, which will warp our collective culture to maintain its profitability. If the bubble bursts, I don't think it will be for a while.
6nk06 @sh.itjust.works - 5hr
They aren't stupid.
That's where we disagree. Even more so when you see all those idiots who were parading in front of the cameras on the pedophile's island and then trying to deny it publicly.
14
nymnympseudonym - 4hr
they aren't stupid
see all those idiots
What is this groupthink tribalism, this politics of othering? Who exactly are "they"? I know people think of Peter Thiel and Alex Karp and Sam Altman, but what about Dario Amodei or Ilya Sutskever? Are Yann LeCun or Geoffrey Hinton "Them"? Do you even know these names, or just the ones in the news that make you indignant?
Would you believe that in an industry with millions of workers and thousands of executive bosses, there is a broad range of individuals and perspectives?
2
nymnympseudonym - 4hr
You'll get downvoted to hell and so will I, but I'll share my personal observation working at a large IT company in the AI space.
Everybody at my company and our competitors is automating the shit out of everything we can. In some cases it's stuff we could have automated with regular cloud automation tooling; there just wasn't organizational focus. But in ~75% of cases it's automating things that used to require an engineer doing some Brain Work.
Simple build breaks or bug fixes now get auto-fixed and reviewed later. Not at a 100% success rate, but it started at like 15%, then 25%, and ...
Whoops, some problem in the automation scripts, and the only engineer on call right now is a junior who doesn't know Groovy syntax? No problem: not knowing the language isn't a blocker anymore. The engineer just needs to tweak the AI suggestions.
Code reviews? Well, the AI already caught a lot of the common stuff from our org standards before the PR was submitted, so engineers are focusing on the tricky issues, not the common, easy-to-find ones.
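To give a flavor of that review step, it's conceptually something like the sketch below (heavily simplified, not our actual tooling; `ask_llm` and the standards file path are made up placeholders):

```python
#!/usr/bin/env python3
"""Toy pre-submit hook: ask a model to flag org-standard violations in a diff.

Illustrative sketch only -- `ask_llm` is a stand-in for whatever model API
your org uses, and a real pipeline has auth, retries, chunking, etc.
"""
import subprocess
import sys

def ask_llm(prompt: str) -> str:
    """Placeholder: send the prompt to your model endpoint, return its reply."""
    raise NotImplementedError("wire this up to your own LLM API")

def main() -> int:
    # Diff of the branch about to be submitted, relative to main.
    diff = subprocess.run(
        ["git", "diff", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

    standards = open("docs/coding_standards.md").read()  # hypothetical path
    prompt = (
        "You are a code reviewer. Flag only violations of these standards:\n"
        f"{standards}\n\n--- DIFF ---\n{diff}\n\n"
        "Reply with exactly 'OK' if nothing is wrong, otherwise list the issues."
    )
    review = ask_llm(prompt)
    print(review)
    # Non-zero exit blocks the submit, so humans fix the easy stuff up front.
    return 0 if review.strip() == "OK" else 1

if __name__ == "__main__":
    sys.exit(main())
```

The point isn't the script itself, it's that the cheap, mechanical part of review happens before a human ever looks at the PR.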
Management wants quantifiable numbers. Sometimes that's easy ("X% of bugs fixed automatically, saving ~Y person-hours"); sometimes, as with code reviews, it's a quality thing that will only show up over time.
But we're all scrambling like fuck, knowing full well that
a) everything is up for change right now and nobody knows where this is going
b) we coders are like the horseshoe makers; we better figure out how the fuck to get in front of this
c) just like the Internet -- the companies that Figure It Out will be so much more efficient, that their competitors will Just Die
I can only speak for large corporate IT. But AFAICT, it's exactly like the Internet -- just even more disruptive.
To quote Jordan Peele: Stay woke, bitches!
There is only one thing for certain: the people who hold the purses dictate the policies.
I sympathize with the IT workers who feel like they're engineering their replacements. Eventually, only a fraction of those jobs will survive.
I believe hardware and market limitations will curb AI growth in the near future; hopefully the dust will start to settle and the real people who need to feed their families will find a way through. I think one way or another, there will be a serious need for social safety net programs to offset the IT labor surplus, which, hopefully, could create a (Socialist) Red Wave.
2
nymnympseudonym - 3hr
the people who hold the purses dictate the policies
Partly true. They are engaged in a dance with the technologists and researchers as we collectively figure out how exactly this is all going to work
IT workers who feel like they’re engineering their replacements
I know some who feel that way. But anecdotally, most of the people I know feel like we're racing in front of technology and history more than against plutocracy.
1
jacksilver @lemmy.world - 2hr
I'm just amazed whenever I hear people say things like this, as I can't get any model to spit out working code most of the time. And even when I can, it's inconsistent and/or of questionable quality.
Is it because most of your work is small iterations on an existing code base? Are you only working with the most popular tools that are better supported by models?
1
nymnympseudonym - 2hr
Llama 4 sucked, but with scaffolding it could solve some common problems.
o1/o3 was way better, less gaslighting.
Grok 4 kicked it up a notch, more like a pro coder.
GPT-5 and Claude are able to solve real problems and implement simple features.
A lot depends not just on the codebase but on context, aka prompt engineering. Does the AI have access to relevant design docs? Interface definitions? A clearly written, well-formed bug report?
... but not so much context that it gets overwhelming and the model stops working well again.
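Concretely, that context-assembly step is something like this (toy sketch; the character budget, labels, and file names are invented for illustration):

```python
# Toy sketch of building a bug-fix prompt without drowning the model in context.
# The budget, labels, and file names are illustrative, not from a real system.

MAX_CONTEXT_CHARS = 24_000  # crude stand-in for a token budget

def build_prompt(bug_report: str, sources: list[tuple[str, str]]) -> str:
    """sources = [(label, text), ...] ordered from most to least relevant."""
    parts = [
        "Fix the bug described below. Use only the provided context.",
        f"## Bug report\n{bug_report}",
    ]
    budget = MAX_CONTEXT_CHARS - sum(len(p) for p in parts)
    for label, text in sources:  # design doc, interface defs, failing test...
        if len(text) > budget:
            break  # stop before the context gets big enough to hurt quality
        parts.append(f"## {label}\n{text}")
        budget -= len(text)
    return "\n\n".join(parts)

prompt = build_prompt(
    bug_report=open("bug_4711.md").read(),  # the clearly written, well-formed bug
    sources=[
        ("Interface definitions", open("api/schema.py").read()),
        ("Design doc", open("docs/retry_design.md").read()),
    ],
)
```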
2
jacksilver @lemmy.world - 1hr
Okay, that's more or less what I was expecting. A lot of my work is on smaller problems with more open-ended solutions, and in those scenarios I find the AI only really helps with boilerplate stuff. With most of the packages I work with, it only ever has a fleeting understanding, or it mixes up versioning so badly that it's really hard to trust it.
1
hendrik - 3hr
Mmhh, I don't think AI is self-replicating. We have papers detailing how it gets stupider after being fed its own output. So it needs external, human-written text to learn. And that's in limited supply.
Reinforcement learning from human feedback is certainly a thing, but I don't think that feedback changes it substantially. It's a bit of fine-tuning that happens with user feedback, but not much.
And I mean, Altman, Zuckerberg etc. just say whatever gets them investor money. It's not like they have a crystal ball and can tell whether there is going to be some scientific breakthrough in 2029 which is going to solve the scaling problem. They're just charismatic salesmen, and people like to throw money on top of huge piles of money... And we have some plain crazy people like Musk and Peter Thiel. But I really don't think there's some advanced forecasting involved here. It's good old hype. And maybe the technology really has some potential.
Agree on the whole, will pop later than we think.
4
nymnympseudonym - 2hr
papers detailing how it gets stupider after being fed its own output
"Model collapse"
Turns out to be NBD; you just have to be careful with both the generated outputs and the mathematics. All major models are pretrained on synthetic (AI-generated) data these days.
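"Being careful" roughly means scoring and filtering the generations and capping their share of the training mix, rather than feeding raw model output straight back in. Toy sketch (the scorer and the 30% cap are made-up placeholders, not anyone's published recipe):

```python
import random

def quality_score(text: str) -> float:
    """Placeholder: in practice a reward model, verifier, or heuristic filter."""
    raise NotImplementedError

def build_training_mix(human_docs: list[str], synthetic_docs: list[str],
                       min_score: float = 0.8,
                       synthetic_fraction: float = 0.3) -> list[str]:
    """Keep only high-scoring synthetic samples and cap their share of the mix."""
    kept = [d for d in synthetic_docs if quality_score(d) >= min_score]
    # Cap synthetic data at `synthetic_fraction` of the final mix.
    n_synth = int(len(human_docs) * synthetic_fraction / (1 - synthetic_fraction))
    mix = human_docs + kept[:n_synth]
    random.shuffle(mix)
    return mix
```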
2
hendrik - 2hr
I think it's a bit more complicated than that, not sure if I'd call it no big deal... You're certainly right, it's impressive what they can do with synthetic data these days. But as far as I'm aware that's mostly used to train substantially smaller models from output of bigger models. I think it's called distillation? I did not read any paper revising the older findings with synthetic data.

And to be honest, I think we want the big models to improve. And not just by a few percent each year, like what OpenAI is able to do these days... We'd need to make it like 10x more intelligent and less likely to confabulate answers, so it starts becoming reliable and usable for tasks like proper coding. And with the exponential need for more training data, we'd probably need many times the internet and all human-written books to go in, to make it two times or five times better than it is today. So it needs to work with mostly synthetic data.

And then I'm not sure if that even works. Can we even make more intelligent newer models learn from the output of their stupider predecessors? With humans we mostly learn from people who are more intelligent than us, it's rarely the other way round. And I don't see how language is like chess, where AI can just play a billion games and just learn from that, that's not really how LLMs work.
2
nymnympseudonym - 2hr
You deserve a proper answer; I'll post some papers later.
Short answer: distillation is a separate thing, really.
Data quality is at least as important as quantity. We don't need 10 internets or whatever. This is mostly what DeepSeek showed: they did intelligent internet scraping for the initial bootstrap pretraining model; big corporate models use synthetic data because it gives more control and fewer cost concerns.
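And on why I keep calling distillation separate: it trains a small student to match a big teacher's output distribution on the same inputs (soft targets), rather than pretraining on a freshly generated corpus. The classic Hinton-style loss looks roughly like this (PyTorch sketch; the temperature and weighting here are arbitrary):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Soft-target KL term (match the teacher) plus ordinary cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # standard temperature-squared scaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

Synthetic pretraining data, by contrast, is just more text in the corpus; the teacher isn't in the loop at training time.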
It's a massive Enron-type scam.