And I don't mean stuff like deepfakes/Sora/Palantir/anything like that; I'm talking about why the anti-GenAI crowd isn't providing an alternative where you can get instant feedback when you're journaling
LeninWeave [none/use name, any] - 2mon
> an alternative where you can get instant feedback when you're journaling
GenAI isn't giving you feedback. It's not a person. The entire thing is a social black hole for a society where everyone is already deeply alienated from each other.
39
The Free Penguin - 2mon
I'm not looking for opinions lol
-2
queermunist she/her - 2mon
It's a toy. I'm not against toys, but the amount of energy and resources we are pouring into this toy is alarming.
My impression is that a lot of people realize this tech will be used against them under capitalism, and they feel threatened by it. The real problem isn't with the tech itself, but with capitalist relations, and that's where people should direct their energy.
32
knfrmity - 2mon
- It's a complete waste of resources.
- The economic fallout of the bubble bursting could be unprecedented. (Yes, shareholder value ≠ quality of life, but we've seen how working people get fucked over when the stock market crashes.)
- The environmental fallout is rarely considered.
- The cost to human knowledge and even thinking ability is huge.
- The emotional relationships people form with these models are concerning. What's the societal cost of further isolating people?
- What opportunity cost is there? How many actually useful things aren't being discovered because the big seven are too focused on LLMs?
- Nobody even wants LLMs. There's no path to profitability. GenAI is a trillion dollar meme.
- Even when they sometimes generate useful output, LLMs are probabilistic, so their outputs are not reproducible (see the sketch below this list).
- Why do you need instant feedback when you're doing absolutely anything? (Sometimes it's warranted, but then talk with a person.)
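To make the reproducibility point concrete, here's a toy sketch (made-up vocabulary and logits, not any real model): sample twice from the same next-token distribution and you can get two different outputs.

```python
import numpy as np

# Toy next-token sampler. The vocabulary and logits are invented purely
# to illustrate the point: identical input, different sampled outputs.
vocab = ["useful", "wrong", "plausible"]
logits = np.array([2.0, 1.5, 0.5])

def sample_token(logits, temperature=1.0):
    probs = np.exp(logits / temperature)  # softmax numerator
    probs /= probs.sum()
    return np.random.choice(len(logits), p=probs)

# Same logits both times, yet the printed tokens can differ run to run.
print(vocab[sample_token(logits)])
print(vocab[sample_token(logits)])
```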
32
LeninWeave [none/use name, any] - 2mon
> The cost to human knowledge and even thinking ability is huge
100%.
We are communists. We should understand the labor theory of value. Therefore, we should understand why GenAI does not create any new value: it's not a person and it does no labor. It recycles existing knowledge into a lower-average-quality slurry, which is dispersed into the body of human knowledge used to train the next model which is used to produce slop that is dispersed into the... and so on and so forth.
24
Cowbee [he/they] - 2mon
I don't think that's the point Marxists that are less anti-AI are making. Liberals might, but they reject the LTV. If we apply the law of value to generative AI, then we know that it's the same as all machinery: it's simply crystallized former labor that can lower the socially necessary labor time of certain commodities in certain conditions.
Take, say, a stock image for a PowerPoint slide that illustrates a concept. We can either have people dedicated to making stock images in broad and unique enough situations, and have people search for and select the right image, or we can generate an image or two and be done with it. Side by side, the end products are near-identical, but the labor-time involved in the chain for each is different. The value isn't higher for the generated image; it lowers the socially necessary labor time for stock images.
We are communists here, and while I do think there's some merit to the argument that misunderstanding the boundaries and limitations of LLMs leads to some workers and capitalists relying on them in situations where they cannot be relied on, I also think the visceral hatred I see for AI is sometimes clouding people's judgements.
TL;DR: AI does have use cases. It isn't creating new value, but it can lower SNLT in certain situations, and we as communists need to properly analyze those rather than dogmatically dismiss it whole-cloth. It's over-applied in capitalism due to the AI bubble; that doesn't mean it's never usable.
10
LeninWeave [none/use name, any] - 2mon
I generally agree with you here, my problem is that despite this people do treat AI as though it's capable of thought and of labor. In this very thread there are some (luckily not many) people doing it. As you say, it's crystallized labor, just like a drill press.
6
Cowbee [he/they] - 2mon
Some people treat it that way, and I agree that it's a problem. There are also people that take a dogmatically anti-AI stance that teeters into idealism as well. The real struggle around AI is in identifying how we as the proletariat can make use of it, identifying what its limits are, while using it to the best of our abilities for any of its actually useful use-cases. As communists, we sit at an advantage already by understanding that it cannot create new value, which is why we must do our best to take a class-focused and materialist analysis of how it changes class dynamics (and how it doesn't).
9
LeninWeave [none/use name, any] - 2mon
I agree with you here, although I want to make a distinction between "AI" in general (many useful use cases) and LLMs (personally, I have never seen a truly convincing use case, or at least not one that justifies the amount of development going into them). Not even LLM companies seem to be able to significantly reduce SNLT with LLMs without causing major problems for themselves.
Fundamentally, in my opinion, the mistaken way people treat it is a core part of the issue. No capitalist ever thought a drill press was a human being capable of coming up with its own ideas. The fact that this is a widespread belief about LLMs leads to widespread decision making that produces extremely harmful outcomes for all of society, including the creation of a generation of workers who are much less able to think for themselves because they're used to relying on the recycled ideas of an LLM, and a body of knowledge contaminated with garbage that's difficult to separate from genuine information.
I think any materialist analysis would have to conclude that these things have very dubious use cases (maybe things like customer service chat bots) and therefore that most of the labor and resources put into their development are wasted and would have been better allocated to anything else, including the development of types of "AI" that are more useful, like medical imaging analysis applications.
5
CriticalResist8 - 2mon
> would have been better allocated to anything else, including the development of types of "AI" that are more useful, like medical imaging analysis applications.
This is what China is developing currently, along with many other cool things with AI. Medical imaging AI was also found to have its limitations, though; maybe they need to use a different neural-network approach.
Just because capitalist companies say that you can or should use their bot as a companion doesn't mean you have to. We don't have to listen to them. I've used AI to code stuff a lot, and it got results -- all for volunteer and free work, where hiring someone would have been prohibitive, and AI (an LLM specifically) was the difference between offering a feature and canceling the idea completely.
There's a guy on YouTube who bought Unitree's top-of-the-line humanoid robot (yes, they ship to your doorstep from China lol) and codes for it with LLM help, because the documentation is not super great yet. Then with other models he can have real-time image detection, or use the LIDAR more meaningfully than without AI. I'm not sure where he's at today with his robot; he was working on getting it to fetch a beer from the fridge. Baby steps, because at this stage these bots come with nothing in them except the SDK, and you have to code literally everything you want them to do, including standing idle. The image recognition has an LLM in it so that it can detect any object; he showed an interesting demo: in just one second, it can detect the glass bottles in the camera frame and even their color, and add a frame around them. This is a new-ish model and I'm not entirely sure how it works, but I assume it has to have an LLM in it to describe the image.
I'm mostly on Deepseek these days; I've completely stopped using ChatGPT because it just sucks at everything. Deepseek hallucinates so much less and becomes more and more reliable, although it still outputs nonsensical comparisons. But it's like with everything you don't know: double-check and exercise critical thinking. Before LLMs we had Wikipedia to ask our questions, and it wasn't any better (and still isn't).
edit - like when Deepseek came out with reasoning, which they pioneered, it completely redefined LLM development, and more work has been done from this new state of things, improving it all the time. They keep finding new methods to improve AI. If there was a fundamental criticism I would make, it's that perhaps it was launched too soon (though neural networks have existed for over a decade), and of course it was overpromised by tech companies who rely on their AI product to survive.
OpenAI is dying because they don't have anything else to offer than GPT; they don't make money on cloud solutions or hardware or anything like that. If their model dies, they die along with it. So they're in startup philosophy mode, where they try to iterate as fast as possible and consider any update a good update (even when it's not), just to try and retain users. They bleed $1 billion a month and live entirely on investor money; startup mode just doesn't scale that high up. It's not their $20 subscriptions that are ever going to keep them afloat lol.
7
Cowbee [he/they] - 2mon
I think that's a problem general to capitalism, and the orientation of production for profit rather than utility. What we need to do as communists is take an active role in clarifying the limitations and use-cases of AI, be they generative images, LLMs, or things like imaging analysis. I often see opposition to AI become more about the tool than the use of it under capitalism, and the distortions beyond utility that that brings.
7
LeninWeave [none/use name, any] - 2mon
> I think that's a problem general to capitalism, and the orientation of production for profit rather than utility.
True, but like I said, companies don't seem to be able to successfully reduce labor requirements using LLMs, which makes it seem likely that they're not useful in general. This isn't an issue of capitalism; the issue of capitalism is that despite this they still get a hugely disproportionate amount of resources for development and maintenance.
> I often see opposition to AI become more about the tool than the use of it under capitalism, and the distortions beyond utility that that brings.
I do oppose the tool (LLMs, not AI) because I have yet to see any use case that justifies the development and maintenance costs. I'll believe that this technology has useful applications once I actually see those useful applications in practice; I'm no longer giving the benefit of the doubt to technology we've seen fail repeatedly to be implemented in a useful manner. Even the few useful applications I can think of, I don't see how they could be considered proportional to the costs of producing and maintaining the models.
2
CriticalResist8 - 2mon
I don't follow. LLMs are a machine of course, what does that imply? That something needs to be productive to exist? By the same LTV, LLMs reduce socially necessary labor time, like all machines.
6
LeninWeave [none/use name, any] - 2mon
> LLMs are a machine of course, what does that imply?
That they create nothing on their own, and the way they are used currently leads to a degradation of the body of knowledge used to train the next generation of LLMs because people treat them like they're human beings capable of thought and not language recyclers, spewing their output directly into written works.
People blow themselves up enough learning to cook meth from books and from other people, I don't think they should be taking instructions from the "three Bs in blueberry" machine.
6
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
Blowing oneself up is just part of the thrill!
3
chgxvjh [he/him, comrade/them] - 2mon
Sure that tells us that some of the massive investments are stupid because their end-product won't have much or any value.
You still have a bunch of workers that used to produce something of value that required a certain amount of labor that is now replaced by slop.
So the conclusion of the analysis ends up fairly similar, you just sound more like a dork in the process.
0
LeninWeave [none/use name, any] - 2mon
> You still have a bunch of workers that used to produce something of value that required a certain amount of labor that is now replaced by slop.
A lot of the applications of AI specifically minimize worker involvement, meaning the output is 100% slop. That slop is included in the training data for the next model, leading to a cycle of degradation. In the end, the pool of human knowledge is contaminated with plausible-sounding written works that are wrong in various ways, the amount of labor required to learn anything is increased by having to filter through it, and the amount of waste due to people learning incorrect things and acting on them is also increased.
5
CriticalResist8 - 2mon
These are all historical problems of capitalism; we need to be able to cut through the veil instead of going around it, and attack the root cause, otherwise we are just reacting to new developments.
18
knfrmity - 2mon
This is the case for some of the critiques but I wouldn't say all.
1
CriticalResist8 - 2mon
I didn't want to dump a point-by-point on you unprompted but if you let me know I can write one up happily. A lot of what is said about AI is just capitalism developing as it does, the technology might be novel and unprecedented (it's not entirely, a lot of what AI and AI companies do was already commonplace), but the trend is perfectly in line with historical examples and the theory.
Some less political people might say we just need better laws to steer companies correctly but of course we know where that goes, so the solution is to transform the class character of the state to transform the relations of production, and we recognized this long before AI existed. So my bigger point is that we need to keep sight on what's important, socialism; not simply reacting to new developments any time they happen as this would only keep us running circles within the existing state of things.
A lot of what happens in the western tech sphere is happening in other industries under late-stage capitalism, chasing shorter and shorter term profits and therefore shorter-term commodities as well. But there is also a big ecosystem of open-source AI that exists inside capitalism, though it's again not unique to AI and open-source under capitalism has its own contradictions.
It's like... at this point I think a DotP is more likely than outlawing AI lol. And I think it's healthy to see it like this.
13
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
Most of the harm comes from the hype and social panic around it. We could have treated it as the interesting gadget it is, but the crapitalists thought they finally had a way to get rid of human labour and crashed the job market... again
11
HakFoo @lemmy.sdf.org - 2mon
What I don't like is that they're selling a toy as a tool, and arguably as the One And Only Tool.
You're given a black box and told to just keep prompting it to get lucky. That's fine for toys like "give me a fresh low-quality wallpaper every morning" or "pretend you're Monkey D. Luffy and write a song from his perspective".
But it's not appropriate for high-stakes work. Professional tools have documented rules, behaviours, and limits. They can be learned and steered reliably because they're deterministic to a fault. They treat the user with respect and prioritize correctness. Emacs didn't wrap it in breathless sycophantic language when the code didn't compile. Lotus 1-2-3 didn't decide to replace half the 7s in your spreadsheet with some random katakana because it was close enough. AutoCAD didn't add a spar in the middle of your apartment building because it was statistically probable after looking at airplane wings all day.
26
CriticalResist8 - 2mon
I mean, software glitches all the time; some widespread software has long-standing bugs in it that its developers or even auditors can't figure out, and people just learn to work around the bug. Photoshop is made on 20-year-old legacy code and also uses non-deterministic algorithms that predate AI (the spot healing brush, for example, which you often have to redo several times to get a different result). I agree that there's a big black box aspect to LLMs and GenAI (can't say for all AI), but I don't think it's necessarily inherent to the tech or means it shouldn't be developed more.
Actually, image AI is remarkably simple in its methods. Provide it with the exact same inputs (including the seed number) and it will output the exact same image every time, with only very minor variations. Should it have no variations? Depends; image gen AI isn't an engineering tool and doesn't profess to have a 0.1mm margin of error like other machines might need.
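As a sketch of what seed-pinning looks like in practice (assuming the open-source diffusers library; the model name and prompt are just examples, not a recommendation):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load an example text-to-image model.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Fixing the seed fixes the initial noise, so the same prompt + seed
# reproduces the same image (up to minor hardware-level variation).
generator = torch.Generator("cuda").manual_seed(42)
image = pipe("a cargo ship at dawn", generator=generator).images[0]
image.save("ship.png")
```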
Back in 2023, China already used an AI (they didn't say what type exactly) to blueprint the electrical cabling on a new ship model, and it did it with 100% accuracy. It used to take a team of engineers one year to do this, and an AI did it in 24 hours. There's a lot of toy aspects to LLMs, but this is also a trap of capitalism, as the toy aspect is what tech companies in startup mode are banking on. It's not all neural models are capable of doing.
You might be interested to know that the Iranian government has recently published guidelines on AI in academia. Unfortunately I don't have a source, as this comes from an Iranian compsci student I know. They say that you can use LLMs in university: if you note the specific model used and the time of usage, and can prove you understand the topic, then that's 100% clean by Iranian academic standards.
Iran is investing a lot in tech under heavy sanctions, and making everything locally (it is estimated 40-50% of all uni degrees in Iran are science degrees). To them AI is a potential way to improve their conditions under this context, and that's what they're exploring.
12
Sleepless One - 2mon
> Back in 2023, China already used an AI (they didn't say what type exactly) to blueprint the electrical cabling on a new ship model, and it did it with 100% accuracy.
Do you have a link to the story? I ask because AI is a broad umbrella that many different technologies fall under, so it isn't necessarily synonymous with generative AI/machine learning (even if that's how the term has been used the past few years). Hell, machine learning isn't even synonymous with neural networks.
Circling back to the Chinese ship, one type of AI I could plausibly see being used is a solver for a constraint satisfaction problem. The techniques I had to learn for these in college don't even involve machine learning, let alone generative AI.
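To illustrate, here's a toy constraint-satisfaction sketch (entirely invented, just to show the kind of non-ML solver meant here): assign cables to slots so that no two share one, by plain backtracking.

```python
# Plain backtracking search over a tiny constraint problem.
# No machine learning involved, let alone generative AI.
def solve(assigned, variables, domains, conflicts):
    if len(assigned) == len(variables):
        return assigned
    var = variables[len(assigned)]
    for value in domains[var]:
        if all(not conflicts(var, value, v, val) for v, val in assigned.items()):
            result = solve({**assigned, var: value}, variables, domains, conflicts)
            if result is not None:
                return result
    return None

cables = ["power", "signal"]
slots = {c: [1, 2, 3] for c in cables}
same_slot = lambda a, va, b, vb: va == vb  # conflict: two cables in one slot
print(solve({}, cables, slots, same_slot))  # e.g. {'power': 1, 'signal': 2}
```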
6
CriticalResist8 - 2mon
I sent the story to Perplexity and looked at its sources :P (people often ask me how I find sources; I just ask Perplexity, then look at its links and find one that fits)
This is sort of the issue with "AI" often just meaning "good software" rather than any specific technique.
From a quick read the first one seems to refer to a knowledge-base or auto-CAD solution which is fundamentally different from any methods related to LLMs.
The second one is some really impressive feature engineering used to solve an optimization problem with Machine Learning tools, which is actually much closer to a statistician using linear regressions and data mining than to somebody using an LLM or a GAN.
Importantly, neither method is as computationally intensive as LLMs, and the second one at least is a very involved process requiring a lot of domain knowledge, which is exactly the opposite of how GenAI markets itself.
2
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
> I mean, software glitches all the time; some widespread software has long-standing bugs in it that its developers or even auditors can't figure out, and people just learn to work around the bug
yeah my dad can kill a dozen people if something goes wrong at work. Yet they use Windows and proprietary shit.
If software isn't secured it shouldn't be used.
2
CriticalResist8 - 2mon
We can make software less prone to errors with proper guidelines and procedures to follow, as with anything. Just to add that it's not solely on software devs to make it failproof.
I would make the full switch to Linux but I need Windows for Photoshop and Premiere lol. And I never got Wine to work on Mint, but if I could I would ditch Windows today. I think getting people acquainted with Linux is something AI can really help with, and it may help more people make the switch.
6
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
yes. It's a tool that can (and must) be seized and re-appropriated imo. But it's not magic. The main issue is that capitalists are selling it as some kind of genius in a bottle.
6
Horse {they/them} - 2mon
> I never got Wine to work on Mint, but if I could I would ditch Windows today.
apologies if this is annoying, but have you tried Lutris?
it's designed for games, but I use it for everything that needs Wine because it makes it easy to manage prefixes etc. with a nice GUI
5
CriticalResist8 - 2mon
No worries, I haven't tried it but I also don't have my Mint install anymore lol (Windows likes to delete the dual boot file when it updates and I never bothered to get it working again). I might give it another try down the line but I'm not ready to ditch Adobe yet. I'll keep it in mind for if I make the switch in the future.
7
fox [comrade/them] - 2mon
> isn't providing an alternative where you can get instant feedback when you're journaling
ELIZA was written in the 60s. It's a natural language processor that's able to have reflective conversations with you. It's not incredible but there's been sixty years of improvements on that front and modern ones are pretty nice.
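For a flavor of how little machinery that takes, here's a bare-bones ELIZA-style sketch (patterns invented for illustration): deterministic rules, no statistics, no training data. Real ELIZA also reflects pronouns ("my" becomes "your"), omitted here for brevity.

```python
import re

# Ordered pattern -> response rules; the last one is a catch-all.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i want (.*)", "What would it mean to you to get {0}?"),
    (r"(.*)", "Tell me more about that."),
]

def reply(text):
    for pattern, template in RULES:
        match = re.match(pattern, text.lower())
        if match:
            return template.format(*match.groups())

print(reply("I feel stuck with my journaling"))
# -> Why do you feel stuck with my journaling?
```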
Otherwise, LLMs are a probabilistic tool: the input doesn't determine the output. This makes them useless at things tools are good at, which is repeatable results based on consistent inputs. They generate text with an authoritative voice, but all domain experts find that they're wrong more often than they're right, which makes them unsuitable as automation for white-collar jobs that require any degree of precision.
Further, LLMs have been demonstrated to degrade thinking skills, memory, and self-confidence. There are published stories about LLMs causing latent psychosis to manifest in vulnerable people, and LLMs have encouraged suicide. They present a social harm which cannot be justified by their limited use cases.
Sociopolitically, LLMs are being pushed by some of the most evil people alive, and their motives must be questioned. You'll find oceans of press about all the things LLMs can do that are fascinating or scary, such as the TaskRabbit story (which was fabricated entirely). The media is culpable in creating the image that LLMs are more capable than they are, or that they may become more capable in the future and thus must be invested in now.
18
Darkcommie - 2mon
Because we can see what it does without proper regulation, and also it's very overhyped by tech companies in terms of how much utility it actually has
18
The Free Penguin - 2mon
Ye imo they're not regulating it in the right places
They're so uber-focused on making it reject how-to guides for things they don't like that they don't see the real problem: technofascist cults like Palantir being able to kill random people at the press of a button
2
KalergiPlanner - 2mon
"And i don’t mean stuff like deepfakes/sora/palantir/anything like that"
bro, we don't live in a world where LLMs are excluded from those uses
the technology itself isn't bad, but we live in a shitty capitalist world where every instance of automation, rather than liberating mankind, fucks it over.
a thing that can allow one person to do the labor of many is a beautiful thing, but under capitalism increases in productivity only lead to unemployment; though, on the bright side, they consequently also cause a decrease in the rate of profit.
18
CoreComrade - 2mon
For myself, it is the projected environmental impact. The power demand from data centers has already been on the rise due to the growth of the internet. With the addition of AI and the training thereof, the amount of power is rising/will rise at an unsustainable rate. The amount of electricity used strains existing power grids, the amount of water that goes into cooling the hardware for the data centers strains the water supply, and this all feeds into a larger amount of carbon emissions.
Beyond the above, the threat of people losing jobs within an already brutal system is a bit terrifying to me, though others have already written at more length here regarding this.
18
CriticalResist8 - 2mon
We have to be careful how we wield the environmental arguments. In the first phase, they're often used to demonize Global South countries that are developing. Many of these countries completely skipped the personal computer step and are heavy consumers of smartphones and 4G data, because those came around the time they could begin to afford the infrastructure (it's why China is developing 6G already). There are a lot of arguments people make against smartphones (how the materials for them are produced, how you have to recharge a battery, how they get disposed of, how much electricity 5G consumes, etc.), but if they didn't have smartphones then these countries would just not have the internet.
edit: putting it all under the spoiler dropdown because I ended up writing an essay anyway lol.
::: spoiler environmental arguments
In the second phase, regarding LLM environmental impact specifically, it really depends, and it can already be mitigated. I'll try not to make a huge comment because I don't want to write an essay, but these sources' claims need scrutiny. Everything consumes energy; even our human bodies release GHG. Going to work requires energy, and using a computer for work requires energy too. If AI can do in 10 seconds what takes a human 2 hours, then you are certainly saving energy, if that's the only metric we're worried about.
So it has to be relativized, which most AI environmental articles don't do. A ChatGPT prompt consumes five times more electricity than a Google search, sure, but either amount is close to zero watt-hours. Watching YouTube also consumes energy; a minute of YouTube consumes much more energy than an LLM query does.
Some people will say that we need to stop watching Youtube, no more treats or fun for workers, which is obviously not something we take seriously (deleting your emails to make room in data centers was a huge thing on linkedin a few years ago too).
And all of this pales in comparison to the fossil fuel industry that we keep pumping money into in the west or obsolete tech that does have greener alternatives but we keep forcing on people because there's money to be made.
edit - and the meat and animal industry... Beef is very water-intensive and polluting; AI isn't even close. If that's the metric, then those who can should become vegan.
Likewise for the water usage: there was that article about Texas telling people to take fewer showers because it needs the water for data centers... I don't know if you saw it at the time; it went viral on social media. It was a satirical article against AI that people then used as a serious argument. Texas never said to take fewer showers, and these data centers don't use a lot of water at all as a share of total consumption in their respective geographical areas. In the US a bigger problem imo is the damming of the Colorado River so that almost no water reaches Mexico downstream, while the water is given out to farmers for free in arid regions so they can grow water-intensive crops like rice or dates (and US dates don't even taste good)
It also has sort of an anti-civ conclusion... Everything consumes energy and emits pollution, so the most logical conclusion is to destroy all technology and go back to living like the 13th century. And if we can keep some technology how do we choose between AI and Youtube?
Rather, I believe investments in research make things better over time, and this is the case for AI too (and we would have much better, safer nuclear power plants too if we had kept investing in research instead of giving in to fearmongering and halting progress, but I digress). I changed a lot of my point of view on environmentalism back in 2020, when people were protesting against 5G because "microwaves" and "we don't need it". I was on board (4G was plenty fast enough) until I saw how in some places they use 5G for remote surgery, which is a great thing they couldn't do with 4G because there was too much latency. A doctor in China with 6G could perform remote surgery on a child in the Congo.
In China electricity is considered a solved problem; at any time the grid has 2-3x more energy than it needs. The west has decided to stop investing in public projects and instead concentrate all surplus value in the hands of a select few. We have stopped building housing, we stopped building roads and rail, but we find the money to build datacenters that could be much greener, but why would they be when that costs money and there's no laws that mandate it?
Speaking of China, they still use a lot of coal (comparatively speaking), but they also see it as just an outdated means of energy production that can be replaced by newer, better alternatives. It's very different: they're doing a lot of solar and wind - in the west, btw, Chinese solar panels are tariffed to hell and back; if they weren't, every single building in Europe would be equipped with solar panels - and even pioneering new methods of energy production and storage, like the sodium battery or gravity storage. Gravity battery storage (raising and lowering heavy blocks of concrete over the day) is not necessarily Chinese, but in Europe this is still just a prototype, while in China they're already building them as part of their energy strategy. They don't demonize coal as uniquely evil like liberals might; rather, once they're able to, they'll ditch coal because there are better alternatives now.
In regards to AI in China, there have been a few articles posted on the grad and it's promising. They are careful about efficiency because they have to be. I don't know if you saw the article from a few days ago about Alibaba Cloud cutting the number of GPUs needed to host their model farm by 82%. The test was done on NVIDIA H20 cards, which is not a coincidence: it's the best China can get by US decree. The top-of-the-line model is the H100 (the H20 having only 20% of its capabilities), but the US has an order not to export anything above the H20 to China, so they find creative ways to stretch it. And now they're developing their own GPU industry, and the US shot itself in the foot again.
Speaking of model farms... it's totally possible to run models locally. I have a 16GB GPU and I can generate realistic pictures (if that's the benchmark) in 30 seconds; the model only needs 5GB of VRAM, but the architecture inside the card is also important for speed. For LLM generation I can run 12B models, rarely higher, and with new efficiency algorithms I think that will stretch to bigger and bigger models over time, all on the same card. They run model farms for the cloud service because so many people connect to it at the same time, but it's not a hard requirement for running LLMs. In another comment I mentioned how Iran is interested in LLMs because, like 4G and other modern tech that lags a bit in the west, they see it as a way to stretch their material conditions more (being heavily sanctioned economically).
There's also stuff being done in the open source community. For example, LoRAs are used in image generation and help skew the generation towards a certain result. This means you don't need to train a whole model; LoRAs are usually trained by people on their own machines with something like 100 images, and training one can be done in about 30 minutes. So what we see is comparatively few companies/groups making full models (either LLM or image gen, called checkpoints) and most people making finetunes for these models.
Meanwhile in the West there's a $500 billion "plan" to invest in the big tech companies that already have a ton of money; that's the best they can muster. Give them unlimited money and expect that they won't act like everything is unlimited. Deepseek actually came out shortly after that plan (called Stargate) and I think pretty much killed it before it even took off lol. It's the destiny of capitalism to con the government into giving them money; of course they were not going to say "no, actually, if we put in some personal investment we could make a model that uses 5x less energy", because they would not get $500 billion if they did. They also don't care about the energy grid; that's an externality for them - the government will take care of it, from their pov.
Anyway it's not entirely a direct response to your comment because I'm sure you don't believe in all the fearmongering, but it's stuff I think is important to keep in mind and I wanted to add here. And I ended up writing an essay anyway lol.
:::
13
infuziSporg [e/em/eir] - 2mon
Why would you want instant feedback when you're journaling? The whole point of journaling is to have something that's entirely your own thoughts.
17
The Free Penguin - 2mon
I don't like writing my own thoughts down and just having them go into the void lol, and I want a real hoomin to talk to about these things but I don't have one TwT
0
infuziSporg [e/em/eir] - 2mon
What does "go into the void" mean? The LLM may use them as context for a while or it may not use them as context at all, it may even periodically erase its memory of you.
I find talking about heavy or personal things way easier with strangers than with people you know. There's no stakes with a stranger you can literally walk up to someone on the street or in a park who doesn't look busy and ask them if they want to talk.
6
Fruitbat [she/her] - 2mon
Is it okay if I push back a bit? Your last comment just feels a little dismissive. I don't know The Free Penguin, but I want to point out other reasons why someone might not be able to easily talk to someone. For example, if someone can't walk or get around, they won't be able to just talk to someone like that. I'm mainly speaking about my mom before she died, since she had COPD and her health declined after something happened to her at her former workplace. She really hurt her spine and couldn't really get around. I remember her being very upset with how alone she felt.
Then also speaking for myself, I have a speech impediment, plus anxiety, so it is really difficult for me to just approach someone and talk to them, depending on various factors. Along with that, some strangers can be outright hostile and make things worse, and someone else might just have had a lot of bad interactions with strangers. To go back to myself, people do judge how someone speaks and tend to think less of you, like if you have an accent or have trouble speaking.
3
infuziSporg [e/em/eir] - 2mon
Chronic loneliness and anxiety are a function of societal arrangements that are exacerbated by capitalist solutions, not inherent and unavoidable parts of the human condition until they are cured by a panacea ex machina.
Believe it or not, before 2022 we did have lots of different approaches around the world to these things. And we are poorer for turning away from all those approaches.
I am a rather awkward person in many ways, I am instantly recognizable by many people as "weird", I have my own share of anxiety that I've gotten better at masking over the years. If I spent ages 19-25 interacting with a digital yes-man instead of with humans, I would have no social skills.
Your response sounds closely analogous to when car proponents use the disabled as a shield. We don't need everyone to drive, we need to minimize the distance between each other, and making driving (or LLM usage) a necessity for getting by in society only creates bigger problems, because the root problem is not being adequately addressed.
-1
Fruitbat [she/her] - 2mon
I feel like you might be taking me in bad faith here or misinterpreting me.
> Chronic loneliness and anxiety are a function of societal arrangements that are exacerbated by capitalist solutions, not inherent and unavoidable parts of the human condition until they are cured by a panacea ex machina.
I agree? I'm very aware.
> Believe it or not, before 2022 we did have lots of different approaches around the world to these things. And we are poorer for turning away from all those approaches.
I would argue that depends. Not everywhere had a lot of different approaches to these things. If anything, if we go to LLMs, all they did was take inherent contradictions and bring them to new heights; these things were already there to begin with, maybe smaller in form.
> Your response sounds closely analogous to when car proponents use the disabled as a shield. We don't need everyone to drive, we need to minimize the distance between each other, and making driving (or LLM usage) a necessity for getting by in society only creates bigger problems, because the root problem is not being adequately addressed.
Again, where do I say that, besides being taken in bad faith or misread? All I'm trying to point out is that there are usually reasons why someone would turn to something like an LLM, or might not easily talk to someone else. As you said, the root problem is not being addressed. To add, it also just leaves a bad taste in my mouth and kind of hurts to be told that what I said sounds closely analogous to using the disabled as a shield, especially when I was talking about myself or my mom.
For example, when my mom was in the hospital during the last few weeks before she died, she had to communicate on a whiteboard because staff couldn't understand her. I also had to use the same whiteboard, because staff couldn't understand what I was saying either. Just to give you an idea of how much trouble I have speaking to others. I'm not saying someone shouldn't try to interact with others and should just go talk to a chatbot. People should have another person to talk to.
4
The Free Penguin - 2mon
Yeah but what if i make em uncomfyyyyy
-1
infuziSporg [e/em/eir] - 2mon
The ability to self-actualize and shape the world belongs to those who are willing to potentially cause momentary discomfort.
Also the default status of many people is lonely and/or anxious; receiving social energy from someone often at least takes their mind off that.
Advancements in material technology in the past half century have often ended up stunting our social development and well-being.
2
The Free Penguin - 2mon
Yeah but the things I ask the robot about, a real hoomin would find really creepy to talk to a stranger about
0
ZWQbpkzl [none/use name] - 2mon
I would be extremely cautious about that sort of usage of AI. Commercial AIs are psychopathic sycophants and have been known to drive people insane by constantly gassing them up.
Like you clearly want someone to talk to about your life and such (who doesn't?) and I understand not having someone to talk to (fewer and fewer do these days). But you're opting for a corporate machine which certainly has instructions to encourage your dependence on it.
5
The Free Penguin - 2mon
Also I delete my convos about these things after 1 prompt, so I don't have a lasting convo on that.
But tbh exposure to the raw terms of the topic has let me go from tech allegories, to T9 cipher, to where I am now, where I can at least prompt a robot using A1Z26 or hex to obscure the raw terms a bit.
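(For reference, A1Z26 just maps each letter to its position in the alphabet. A toy encoder, purely illustrative; this is obscurity, not security:)

```python
# Letters -> alphabet positions; everything else is dropped.
def a1z26(text):
    return "-".join(str(ord(c) - 96) for c in text.lower() if c.isalpha())

print(a1z26("journal"))  # -> 10-15-21-18-14-1-12
```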
1
The Free Penguin - 2mon
Have there been cases of Deepseek causing AI psychosis, or is it just ChatGPT?
1
ZWQbpkzl [none/use name] - 2mon
No idea. But I'd say it's less likely. Especially if you're running a local model with Ollama.
I think the key here is to prevent the AI from developing a "profile" on you, and self-controlled Ollama sessions are the surest bet for that.
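As a sketch of what that looks like (assuming the ollama Python client and a locally running Ollama server; the model name is only an example):

```python
import ollama

# One self-contained request to a local model; nothing leaves your machine,
# and no chat history persists unless you choose to keep it.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Rephrase this journal entry: ..."}],
)
print(response["message"]["content"])
```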
1
Aqloy - 2mon
Do you worry about AI psychosis?
1
big_spoon - 2mon
there's the people who hate it bc they have petit-bourgeois leanings and think of the stuff as "stealing content" and "copyrighted material", like artist people, "code monkeys" or writers
and there's the people that hate it because it's an obvious grift made to siphon resources and "try" to be a big replacement for proles, and a huge wasteful technology that drains water sources and raises electricity bills with their data centers
yeah, it's kinda useful for making a drawing, filling blank space in a document, or being a dumb assistant who hallucinates anything to pretend that it knows stuff
16
LeninWeave [none/use name, any] - 2mon
> there's the people who hate it bc they have petit-bourgeois leanings and think of the stuff as "stealing content" and "copyrighted material", like artist people
It's actually not petty bourgeois for proletarians in already precarious positions to object to the blatant theft of their labor product by massive corporations to feed into a computer program intended to replace them (by producing substandard recycled slop). Even in the cases where these people are so-called "self-employed" (usually not actually petty bourgeois, but rather precarious contract labor), they're still correct to complain about this - though the framing of "copyrighted material" is flawed (you can't use the master's tools to dismantle his house). No offense, but dismissing them like this is a bad take. I agree with the rest of your comment.
11
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
I agree with you; a friend of mine was an illustrator, and she lost all her jobs. Now she does scenery.
But they lost their jobs because there's downward pressure on costs everywhere in society. That money is, once again, stolen.
Also, you forgot to mention the $2-a-day Kenyan labourers that trained ChatGPT in the first place. Or the awful water consumption.
I don't think the tech itself is evil, or even special; it's just a big array of numbers at the end. The issues are the techbros and their new hype.
3
CriticalResist8 - 2mon
I am/was in visual arts (or rather graphic design) and it's been a long time coming. Spec work was all the rage to rally against back in the day, and despite the protests it hasn't gone anywhere. I'm sure that back in the 90s some old-school designers were against Photoshop too. And of course we are the first to lose our jobs when a crisis hits, because marketing takes a backseat.
For years Photoshop was the standard in website mockups which is just wild to me as it's not what it's meant for. Today we have tools like Figma, which unfortunately exist as SaaS (Software-as-a-service, with a monthly subscription and on the 'cloud'). The practice still endures despite the fact that you have to code for mobile now and monitors don't come in the standard 4:3 aspect ratio anymore but in many variations.
Oh, I could add SaaS too to the difficulties designers face. For example Wordpress has a lot of prefab themes, so you don't even need to mockup anything in Photoshop or figma anymore. You just pick one and start building your website - it's how I made all my websites, I don't need to duplicate elements in image files and I honestly have no idea how I would even start making a modern website on Photoshop. The footer for example which is the same on all pages is easily editable from Wordpress. I feel like I would be wasting time making a footer on Photoshop when I can just edit it on the website directly in a visual editor and it will update itself on every page.
I don't see any of the above as a bad thing overall. What we see is that society adapts to changing conditions. The fight is still for socialism.
5
PolandIsAStateOfMind - 2mon
> I'm sure that back in the 90s some old-school designers were against Photoshop too.
Yes, CGI in general was demonised and at some point even came close to the current shitstorm, with the same arguments about the death of art and human creativity etc. In reality it just vastly increased the output of art, especially commercial art.
1
The Free Penguin - 2mon
The commodification of art has been a disaster for the human race
4
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
yea coz artists are useless to the machine-minded.
1
ComradeSalad - 2mon
Those aren’t petit bourgeois tendencies, those are pre-capitalist artisanal tendencies.
Except for the copyright aspect, but independent artists rarely clamour about their “copyrights”, as their issue is more about how their work, whether copyrighted or not, is getting fed into a capitalist black hole machine designed to replace workers to benefit no one but a few capitalists.
Most within the art world couldn't care less if someone took the time to learn their style one-to-one by studying their work. That's the entire point of art to a degree, as the end product is still an expression of labour value, something that can't be said about GenAI.
4
ZWQbpkzl [none/use name] - 2mon
GenAI really is taking people's jobs. It might not do it better. It might be less safe. It might even be less cost efficient. It's still happening.
It's not even a case of "do you think you can be replaced by AI?" Instead it's "does your employer think you can be replaced with AI?" Any white collar worker would be foolish to think that's something their employer has not considered. Corporate advertising is pleading with them to reconsider multiple times a day.
15
CriticalResist8 - 2mon
Exactly, and this process keeps happening in capitalism, making AI neither unique nor truly new in its social repercussions. Therefore the answer is socialism, so that tech frees us instead of creating crises.
Although the companies that bought into the promise of replacing labor are now walking it back as they realize it doesn't replace labor but enhances it. It's like Zuckerberg not allowing his kids on Facebook: AI companies are not replacing their employees with AI either, but they sell the package because capitalism needs to make money, not social good.
9
ZWQbpkzl [none/use name] - 2mon
> AI companies are not replacing their employees with AI either
You sure about that? I mean, they're obviously hiring more because they have the investors at the moment, but that doesn't mean they aren't using AI internally.
1
CriticalResist8 - 2mon
They're rehiring now. For example, Klarna laid off their customer service reps to replace them with AI, but they're walking it back and rehiring human reps (tbh Klarna has other problems right now lol).
5
ZWQbpkzl [none/use name] - 2mon
Klarna is not an AI company. I was asking if AI companies really weren't replacing their own employees with AI.
1
CriticalResist8 - 2mon
I read too fast lol. I was talking about the engineers that work on the models in this case; tech companies would never replace them with AI because they know it wouldn't work out.
But I looked more broadly into it and couldn't find any source that says the mass layoffs we are seeing in tech currently are replacing jobs with AI; rather, it seems they're getting rid of the jobs entirely, as happens routinely in the industry (as they shift to another focus, which currently is AI). There's Amazon, which is building automated warehouses, but YMMV; they also started on these before AI and have been at it for a while.
For new AI companies like openAI, the jobs they are giving to AI (such as customer service) were never created in the first place, so it's not replacing a worker, since the job never existed.
3
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
I mean my employer can replace me with AI. It WILL break.
4
ZWQbpkzl [none/use name] - 2mon
Allegedly the AWS outage was after they replaced a chunk of their team with AI.
4
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
Good.
2
GreatSquare - 2mon
It's not feedback. That's not what the tool is for. It doesn't have an opinion. There's no one on the other side of the screen. The "A" stands for Artificial.
15
robot_dog_with_gun [they/them] - 2mon
no the "a" stands for "a piece of shit marketing guy scammed a bunch of people"
6
chgxvjh [he/him, comrade/them] - 2mon
That's still feedback.
Feedback can be based on very simple rules and still be feedback. Or in case of LLMs piles of statistics.
2
GreatSquare - 2mon
I don't see that as feedback but it depends on your definition of feedback. Just having something come out of the AI is not feedback to me.
A writer writes something. An AUDIENCE provides feedback on their writing. An AI can be processing the writing but can't be an audience because it is just a tool. Just because the AI returned some text back won't change that fact regardless of the content of the text.
2
CriticalResist8 - 2mon
Data is feedback for example. If you change something on a web page and notice a huge drop in visits then that provides actionable information, i.e. feedback. The visitors didn't vocalize it, you only see it as numbers on a spreadsheet.
3
GreatSquare - 2mon
True but OP isn't using AI to collate or analyse data of the visitors to his website.
As I said, it's how you use the tool. Not every use case is valid. In a LOT of cases AI is not useful or efficient, and it's sometimes doing more harm than good.
2
chgxvjh [he/him, comrade/them] - 2mon
A recipient responds. If you consider the response and use it to adapt your work (current or future), that's feedback.
2
GreatSquare - 2mon
Not to me. As I said, my definition of feedback is a lot tighter.
1
The Free Penguin - 2mon
And tbh I'm not looking for opinions or human interaction on there. I'm just looking for something that says my posts in another way, for idek what reason, but a human would get uncomfy reading them, so yeah
-2
ZWQbpkzl [none/use name] - 2mon
> crowd isn't providing an alternative where you can get instant feedback when you're journaling
Side bar: This is a very specific usage of GenAI. Are you like writing your diary into ChatGPT?
15
Catalyst_A - 2mon
There doesn't need to be an alternative option to offer. I don't support GenAI because it's flooded the internet with fake content that has no label to differentiate it. It's irreversible.
15
loathsome dongeater - 2mon
There is stuff like spellcheck and LanguageTool, which can give you a specific variety of feedback.
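For example, a minimal sketch using the language_tool_python wrapper (assuming that package is installed; the sample sentence is invented):

```python
import language_tool_python

# Rule-based grammar checking: deterministic, inspectable feedback.
tool = language_tool_python.LanguageTool("en-US")
for match in tool.check("I has wrote in my journal yesterday."):
    print(match.ruleId, "->", match.message)
```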
12
Shinji_Ikari [he/him] - 2mon
It has a flattening effect. The things that come out the other end don't sound human. They sound like the collective mouth of reddit and blog spam.
I don't know why you'd use it for journaling. What feedback do you even need for journaling? Shouldn't that be your thoughts, and not your thoughts filtered through the disembodied machine of averages?
11
chgxvjh [he/him, comrade/them] - 2mon
That's kind of like how people were talking about how GenAI always fucks up the hands in pictures. I don't think that's permanent.
4
The Free Penguin - 2mon
Yea so I type my natural thoughts in, and tbh I have some of em that I currently don't share w humans cuz they're kinda sensitive. I don't use GenAI for emotional attachment, just to see them written in a different way I guess
2
Twongo [she/her] - 2mon
genai turned the internet into a hellhole. nothing is genuine. information became worthless. facts don't matter anymore.
it carries itself into the world outside the internet. slopaganda, decision making and policymaking are affected by genai and will make your life actively worse.
welcome to the post-fact world where you can't even trust yourself.
11
darkernations - 2mon
This is the correct take from an ML perspective (essentially an extension of the fact that we should not lament the weaver for the loom):
The problem is not the technology per se (criticisms such as energy consumption or limitations of the tools just mean there's room for improvement in the tech or how we use it) but capitalism. If you want a flavour of opinions on this, click on my username and order comments by most controversial for the relevant threads.
Artisans that claim they are for marxist proletariat emancipation but fear the socialisation of their own labour will need to explain why their take is not Proudhonist.
That post really is an excellent article in truly understanding the Marxist critique of reaction and bourgeoisie mindsets. Another one that people here should read along with it is Stalin’s Shoemaker; it highlights the dialectical materialist journey of a worker developing revolutionary potential:
Class consciousness means understanding where one is in the cog of the machine and not being upset because one wasn’t proletariat enough. This is meant to be Marxism not vibes-based virtue signaling.
Marxism is a science. People should treat it as such and take the opportunity to study and learn, to develop their human potential beyond what our societies consider acceptable.
do not use an LLM for whatever the heck you think you're doing with it.
9
CriticalResist8 - 2mon
Why not?
11
robot_dog_with_gun [they/them] - 2mon
you do not seem to understand what the technology is or its limitations. Other commenters have corrected your farcical assertion about
> instant feedback
-1
CriticalResist8 - 2mon
Please come back when you can comment without insulting people's intelligence, this doesn't add anything to the conversation. This isn't reddit.
4
robot_dog_with_gun [they/them] - 2mon
I didn't insult their intelligence at all. Plenty of people are fooled by the talking Scrabble bag; this is a matter of education and credulity.
1
rainpizza - 2mon
> you do not seem to understand what the technology is or its limitations.
Stop projecting yourself onto others. The other commenters that you are referencing were corrected as well for their own farcical and fallacious assessments.
3
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
I have one that knows a lot of things about how to spy on the cops during an uprising
2
rainpizza - 2mon
Those people are usually Westerners who take the easy route, which is to blame a tool for the issues caused by capitalism.
However, if you look beyond the small western world into countries like China, Cuba, Vietnam and others in the global South, AI, including genai, is celebrated. You can find plenty of content in Xiaohongshu with comments fascinated with the inventions of people.
One example of this is this song created by a person that used AI for the production of it:
There is even a YouTube channel called Dialectical Fire that posts incredible content using AI.
All I know is that this new form of luddism will dissipate into history just like the luddism of the past century.
9
LeninWeave [none/use name, any] - 2mon
> All I know is that this new form of luddism will dissipate into history just like the luddism of the past century.
You're aware that the luddites were correct, right? They weren't vulgar technology haters, they had valid concerns about their pay and the quality of the products produced (actually an excellent comparison to many people who oppose LLMs), which turned out to be accurate. The idea of luddites as you use it here is explicitly liberal propaganda used to smear labor movements for expressing valid concerns, and they didn't dissipate into history, there were and are subsequent similar labor movements.
19
☆ Yσɠƚԋσʂ ☆ - 2mon
The point is that even though the concerns the luddites had were correct, their methods were not. Hence why they failed. Now, people are trying to do the same things that we know don't work.
12
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
Can you elaborate, dear? Why do you say their methods were not?
1
Cowbee [he/they] - 2mon
Luddites attacked machinery, blaming it on their declining quality of life. The correct approach is to attack the capital relations directly, ie to attack the capitalists themselves, and take hold of the productive forces built up by capitalism already, directing it for the good of all rather than the profits of the few.
11
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
I mean, the combative communist cells here have blown up weapons industry plants. It's true they "failed", but it gave people hope.
-1
Cowbee [he/they] - 2mon
That's different entirely though, sabotaging the war machine is entirely different from attacking factories for consumer commodities.
6
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
Of course. I'm just saying sabotaging the tools of production can't be *that* useless, can it?
-1
☆ Yσɠƚԋσʂ ☆ - 2mon
Their methods failed to effect structural change, and their whole movement was ultimately swept away.
7
Fruitbat [she/her] - 2mon
I think Yogthos and Cowbee said it well, but I just wanted to add some of Marx's thoughts on the Luddites from Capital vol. 1, chapter 15, in case you haven't read it; forgive me if you already have.
::: spoiler Marx, Capital Vol. 1, Ch. 15
About 1630, a wind-sawmill, erected near London by a Dutchman, succumbed to the excesses of the populace. Even as late as the beginning of the 18th century, sawmills driven by water overcame the opposition of the people, supported as it was by Parliament, only with great difficulty. No sooner had Everet in 1758 erected the first wool-shearing machine that was driven by water-power, than it was set on fire by 100,000 people who had been thrown out of work. Fifty thousand workpeople, who had previously lived by carding wool, petitioned Parliament against Arkwright’s scribbling mills and carding engines. The enormous destruction of machinery that occurred in the English manufacturing districts during the first 15 years of this century, chiefly caused by the employment of the power-loom, and known as the Luddite movement, gave the anti-Jacobin governments of a Sidmouth, a Castlereagh, and the like, a pretext for the most reactionary and forcible measures. It took both time and experience before the workpeople learnt to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used.
:::
7
PolandIsAStateOfMind - 2mon
It took both time and experience before the workpeople learnt to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used.
Sadly, a lot of them still can't, as evidenced even in this thread.
3
Fruitbat [she/her] - 2mon
I hope AI discussions in the future start to blossom into something better, because as it stands, it just feels rather disheartening when things get bad. I'm not sure what it is about AI that leads comrades to call other comrades "dumb" or make other dejecting statements. It just gets very demoralizing.
2
PolandIsAStateOfMind - 2mon
I think it will pass in a few years, max. The AI bubble will either burst or be deflated, the Chinese will improve their tech even more, online shitstorm artisans will either find a new niche or get a job, AI creations will get less recognizable, and so on.
2
rainpizza - 2mon
I hope AI discussions in the future start to blossom into something better
It will improve, but I feel that some comrades are disconnected from stories where AI is having a positive impact on people's lives. Stories that will make them ask: "Why are there good stories in China but not in the West? What is missing for the West to have similar stories?"
I try to post a lot of AI news in c/Technology but it clearly is not working. Sorry for the ping but @CriticalResist8@lemmygrad.ml by pure coincidence do you have any ideas on how to improve visibility regarding AI?
1
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
I appreciate the view and the tactical point, there is of course truth in there. Now there's the matter of ability as well. Imagine being a resistance member during a fascist takeover. Every bit would count, wouldn't it? I can't blame the ghetto kids for burning cars, if you understand me.
2
Fruitbat [she/her] - 2mon
I'm not sure I entirely understand? What you're describing sounds like a different situation from talking about AI and the means of production in general. It sounds like you're describing war or guerrilla warfare, in which case, yes, every bit does count like you said. As for AI, fascist states are using it for fascistic purposes, like what the ruling class in the United States is doing with it. But I feel like that has less to do with the tool, considering AI has a wide variety of uses (like what China is showing can be done with it), and more to do with the people employing it for fascistic purposes? And in turn that has more to do with resisting oppression and fascism in general and less to do with AI. I'm not sure if I properly articulated my thought.
3
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
I'll answer more thoroughly later, but don't worry about it, I was just thinking aloud
2
Sleepless One - 2mon
The luddites were dead fucking wrong. Instead of seizing the means of production, they thought smashing them would solve their woes. It doesn't matter that the luddites were skilled machine operators with a rudimentary form of class consciousness; their understanding of the issue was idealist and therefore opposed to Marxism. Luddism is liberalism.
9
rainpizza - 2mon
Adding to what other comrades eloquently explained: we should not repeat the mistakes of the past but actually learn from them.
Instead of focusing on the tool and projecting the evil of capitalists onto it, we have to build a labor movement with the intention of seizing the means of production and fighting the capitalists and the imperialists. Socialism is the only way out, and the proof of its success is already showing in places like China. Examples:
Working toward a world where the workers own the means of production is a better endeavor than destroying the means of production and sabotaging the tools as the Luddites did.
4
BarrelsBallot - 2mon
Why would you want to outsource one of the last vestiges of being a human we have left (thinking) to a 3rd party of any kind?
I don't care if it's an AI or an underprivileged person in another region of the world, get that shit out of here. The internet and similar tools of isolation are bad enough, now we're being handed keys to an artificial friend keen on severing our social connections and ability to think on our own.
9
The Free Penguin - 2mon
I do think about it, I only ask the robot after I've already thought about it
1
小莱卡 - 2mon
People fear that they're gonna lose a job that consists 99% of sending and receiving emails and doing Zoom meetings. They know their job is bullshit and replaceable.
9
chgxvjh [he/him, comrade/them] - 2mon
Bureaucratic nightmare
GenAI can be kind of useful when you use it on purpose. But it will be used to make anything bureaucratic an even bigger nightmare. Take after-sales customer support: talking to humans in a call center at least incurs costs to the corporation; with genAI they can keep you in the loop forever for cents.
Unemployment claims, immigration, disability pay, hiring are also all made worse by AI.
Devaluing human labor
They are coming for our jobs. Or at least they're making our jobs worse.
Waste of resources
Energy, water, computation ...
I think this is one of the weaker arguments tbh.
Corporations get away with blatant mass theft of intellectual property
Destruction of social reasoning
Science and academia were already in a bad spot with the reproducibility crisis and fake or bad studies. Now this is automated.
Instead of letting humans do creative work, too much attention will be taken up reviewing slop.
This problem also exists in social and traditional media.
People also put a lot of implicit trust in AI answers when the answer might just be based on a whitewashed shitpost or be wrong for other reasons. With web search, it's easier to judge for yourself whether the source is to be trusted.
People letting AI control their lives
This will get worse in the future, either when companies learn how to manipulate datasets to get ahead (similar to search engine optimization), or when AI companies just straight up place advertisements in AI answers.
Destruction of human connection
People replacing their human friends with AI friends and partners isn't healthy.
8
The Free Penguin - 2mon
Also IP as a commodity needs to be abolished
6
chgxvjh [he/him, comrade/them] - 2mon
Private property should be abolished.
6
CurseAvoider - 2mon
Unemployment claims, immigration, disability pay, hiring are also all made worse by AI.
Eeeeeh, disability claims are already a nightmare without AI; they never have enough people for the number of cases, with new ones coming in every day. As someone waiting on my disability claim, putting AI in the process can't possibly be worse than what I'm going through already lol. It took them 3 months to send me a first update, and it was to ask me some details about my past employment and nothing else. At worst the AI will deny me, which a human would also do because they hope you don't appeal, but I assure you I will appeal any denial lol
5
The Free Penguin - 2mon
And tbh I hate this AI-fueled loneliness epidemic. I want to go outside and meet people rather than being stuck behind a screen talking to a nobody, because everyone is abandoning human connection for a sycophantic robot
3
TankieTanuki [he/him] - 2mon
I think you're thinking of LLMs. GenAI is the stuff that makes deepfakes and "art"; LLMs make text.
8
Cricket@lemmy.zip - 2mon
I thought LLMs were just one type of GenAI. They all generate new media content based on existing media content; LLMs just specialize in text, while the other types of GenAI specialize in other types of media. At least that's how I understand the terms.
2
TankieTanuki [he/him] - 2mon
You might be right. Looks like LLMs are a subset of GenAI.
2
Cricket@lemmy.zip - 2mon
Thanks for confirming. I wasn't completely sure.
2
Pieplup (They/Them) - 2mon
The Kavernacle has videos on this. He talks about how it's eroding emotional connection in society and having people offload their thinking onto ChatGPT. I think this is a problem.
But my main issue, the one I'm most passionate about, is misinformation. In the process of writing this post I did an experiment and asked it some questions about autism. I asked what autistic burnout is. It gave an explanation that's incorrect and furthers the mistaken assumption a lot of people make that it's something specific to autistic people, when it's actually a wider phenomenon of physiological neurocognitive burnout. I confronted it on this, it refined its position, and then I asked it why it said that.
It constantly contradicts itself and will just say "yeah, you're correct, I was wrong" while continuing to repeat the same incorrect claim.
https://i.imgur.com/KINH7lV.png
https://i.imgur.com/EHtDwNj.png
According to ChatGPT, its own sentence contradicts itself.
It also proceeded to invent a new usage of a very obscure medical term that is not widely used, then tried to gaslight me into believing it's a commonly used term among autistic people when it isn't.
https://i.imgur.com/LStZdNg.png
And what frustrates me even more is that a couple of months ago I had someone swear up and down that the hallucinations in ChatGPT were fixed and aren't that bad anymore. Granted, they were far worse in the past. It literally told me the autism level system was something that no longer exists, despite it being widely used today.
But here's the problem: I am an expert on this topic. Most people aren't asking ChatGPT questions about things they're experts in, and they're also using it as a therapist.
All in all, I wasn't expecting it to have no hallucinations, but I was at least expecting them not to still be a massive issue in basic information retrieval on topics that aren't even obscure and about which information is widely available.
Ultimately, here's the issue: the vast majority of pro-genAI people don't know what genAI actually is, and as a result don't see why it's bad to use it the way they do. GenAI is a very advanced form of predictive text. It just predicts the words it thinks follow the query, based on the terabytes, maybe even petabytes, of information it has scraped from the internet. Which means it's not really useful for anything beyond very basic things like generating simple ideas, summarizing an article or video, and very basic coding. I only dabble very lightly in programming, but from what I've heard actual experienced programmers say, trying to use ChatGPT for major coding just means having to rewrite most of the code.
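To make the "advanced predictive text" point concrete, here's a toy sketch: a bigram model over a made-up corpus. Real LLMs do the same kind of next-word prediction, just with billions of parameters and far more context; everything below is invented purely for illustration.

```python
import random
from collections import defaultdict

# Toy "predictive text": record which word follows which in a tiny corpus,
# then generate by repeatedly sampling an observed continuation.
corpus = "the model predicts the next word the model predicts text".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

word, output = "the", ["the"]
for _ in range(6):
    options = follows.get(word)
    if not options:  # dead end: this word was never followed by anything
        break
    word = random.choice(options)  # sample the next word by observed frequency
    output.append(word)

print(" ".join(output))  # fluent-looking output, zero understanding
```

The output can read as fluent while encoding no understanding at all, which is the point above, just at a vastly smaller scale.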
8
TankieReplyBot - 2mon
Imgur links were detected in your comment. Here are links to the same locations on alternative frontends that protect your privacy.
And honestly I think the main reason is that I feel like I worry too much about my own self-image, and I've had bad experiences with other hoomins on the interwebs being absolute assholes to me, cuz I told them I'm a commie in confidence only for them to share it with their friends, who then went on to harass me. And idk, I just feel like because the robot won't talk about me behind my back, I don't feel as much weighing me down talking about more sensitive stuff to it
0
Pieplup (They/Them) - 2mon
I thought you were a penguin? You're a fake penguin, a human pretending to be one.
Mr. The Fake Penguin
the answer is to find better friends. maybe try joining an online communist group or something.
Also I'm kinda confused, cause you talk like you're referencing something, but idk what you're referencing.
3
The Free Penguin - 2mon
I had some drama in the past with someone from the Roblox Elevator Community (i alr made posts about what a shithole that place is) and i thought someone was cool but they turned on me
1
The Free Penguin - 2mon
Wait a sec, what you said up there sounds exactly like what that guy said to me that I'm "pretending to be Russian"
1
Conselheiro - 2mon
It was a joke, à la "on the internet, nobody knows you're a dog". But adding to his point: yes, you need better friends.
Maybe join the Genzedong matrix server? It was a pretty chill place back when I used it.
Also if you have access, consider therapy. One of the greatest advantages psychologists have over LLMs is that they are able to disagree with you. That could help you with whatever thoughts you're struggling with, without having to care about being judged.
2
The Free Penguin - 2mon
yeah i was in communist groups and still am, it was just that the roblox elevator community is a shithole filled with people who make being anticommunist their whole personality and harass anyone who dares to say anything positive about the CPC
2
Pieplup (They/Them) - 2mon
You are, you admitted it: you're a human pretending to be a penguin.
1
The Free Penguin - 2mon
ok why do you sound so eerily similar to that guy saying I'm pretending to be Russian
1
Pieplup (They/Them) - 2mon
not sure who you are talking about, i just don't like pretenders.
1
Marat - 2mon
"Providing and alternative where you can get instant feedback while you're journaling" Forgive me, could you elaborate? I'm a little confused
8
The Free Penguin - 2mon
I sometimes put some of my more sensitive thoughts into an LLM mainly cuz i dont want em going into the void and i dont want a hoomin getting uncomfy reading them either
-2
Conselheiro - 2mon
GenAI is the highest form of commodification of culture so far. It treats all text, images, videos, songs, speech and all other forms of organic cultural expression as slop to be generated over and over without its original context. It provides little to no serious improvement in industry, and is only propped up despite no profits due to either artificial growth in internet platforms or unrealistic expectations from the AGI folks.
And it's inefficient. We could easily have more therapists rather than wasteful chatbots that cost billions. Such technology can only exist as a bandage over the ailments of neoliberalism, and is not a solution to anything. And that's not even going into the worsening impact of cultural imperialism due to these models' tendency to reproduce Northwestern cultural hegemony.
The alternative is actually pretty simple: measures to lower unemployment. Most capitalist countries have issues with unemployment or underemployment. And most tasks of Gen"AI" can be done by paid humans quite well, possibly even at lower cost than the losses the informatics cartel is tanking in order to ride the bubble.
Human labour is what produces value. All else is secondary.
7
Munrock ☭ - 2mon
They are against it because anything bad that could come of further adoption of AI will happen if it's profitable for the capitalists. Even now NVIDIA is continuing to pump the bubble when everyone knows it's a bubble, because of the profits.
The prospects are entirely different in China, because capitalists are regulated. But nobody who is vehemently against AI is aware of the difference and likely will tell you China is capitalist. So all the problems that arise from capitalist control of AI are ascribed to AI alone as well.
3
LeninWeave [none/use name, any] - 2mon
The prospects are entirely different in China, because capitalists are regulated.
The potential consequences are severely reduced because of this, but up to a point the government will allow capitalists to do the same harmful things they do in the west. They just won't allow them to collapse the whole economy and destroy the lives of half the country. That doesn't necessarily mean that the prospects of LLMs are fundamentally different, just that their potential fallout is much more limited because China is not capitalist.
2
CriticalResist8 - 2mon
China objectively treats AI differently than the West. Whether that constitutes a 'fundamental' difference I don't know, I'm not big on that word philosophically, but the fact that Chinese private companies provide their models open source, and you can use them free of charge (the most you will pay is a very low fee for API usage), shows a huge difference: they don't treat it as a product to make a profit on, i.e. a commodity.
AI is also not held solely by the capitalists in China, the government develops on it. Just recently Beijing schools started a pilot program to teach middle and high school students AI through science projects.
What we need to ask is what they see in it, what it fills for them, and what it helps them with.
but up to a point the government will allow capitalists to do the same harmful things they do in the west.
Can you please give any examples of the Chinese government allowing capitalists to do harmful things to the Chinese people?
2
LeninWeave [none/use name, any] - 2mon
Commodification of housing is a recent example. They were allowed to do harm, up to a point. When the bubble popped, the state caught it to prevent it from doing massive harm.
This is a strange question to even ask. China has labor exploitation, China has capitalists, China has rent-seeking behavior; these are all things that do harm to people, and the Chinese government acknowledges them. That's what the socialist market economy means: you take the bad with the good.
1
Munrock ☭ - 2mon
It might seem like a strange question, but people tend to have wildly different perspectives and understandings about China depending on what part of the world they're in, what the media they consume says about it, and whether they've lived there or not. So even if we agreed with each other, it'd still be worth going through, because we're in a public forum where it's helpful for others to be more illustrative.
"up to a point" is doing massive work here.
China has capitalism: in Taiwan, in Hong Kong, and in Macao. When you cross the boundary between Hong Kong and Shenzhen, it's like night and day. Shenzhen ain't no backwater either, but the cost of living, quality of life, environment, air pollution (the prevalence of electric cars makes so much difference that, despite being less strict about smoking in public, the air is still cleaner in SZ), advertising, cleanliness of public facilities, parks within walking distance of everywhere... When they say 'take the bad with the good', that doesn't mean not doing anything about the bad, and the difference a government can make in dealing with the bad is huge.
2
LeninWeave [none/use name, any] - 2mon
I agree with you, however they still tend to act mostly after some damage has been done. That's the purpose of the socialist market economy, to allow development of productive forces but catch it when it goes wrong. That doesn't mean capitalists are prevented from making harmful decisions to begin with. The difference with Hong Kong is that there's basically no catching, it just gets worse and worse.
As I said, a typical recent example was the housing bubble the government had to step in to catch. It was allowed to go on for a while before it was stopped, and it did harm people during that time.
2
Munrock ☭ - 2mon
It's not getting worse here in Hong Kong, it's slowly getting better. The British brainrot is taking a long time to unfuck; there are full grown adults who were babies at the handover passing down romanticised stories about British rule to their kids that they were told by their boomer parents, and the eldest generation who remember how it was practically apartheid pre-Maclehose reforms are all elderly. But the education system and the reality checks from what's happening in the States, Ukraine and Palestine are all pushing back.
1
blobii - 2mon
genAI tries to computerize the only thing we can truly call human: abstract thought in creativity. So it's bad because it feels cold and inhuman, and it doesn't even do its job that well.
2
Cricket@lemmy.zip - 2mon
I scanned the thread and am not sure if anyone else has mentioned that GenAI is obliterating the value of creators' (i.e., visual artists, musicians, writers, etc.) labor. This is ruining people's livelihoods and will drastically reduce the amount of new human artistic output.
This is on top of the various other economic, environmental, ethical, and inaccuracy issues with it. This is all why I won't touch the stuff.
0
小莱卡 - 2mon
Meh, if your art is matched by genAI, it's time to do something else. Also, at the same time as genAI destroys the livelihoods of some people, it can improve the livelihoods of others. Think of someone who recently started selling a tomato sauce: with genAI they don't have to pay thousands of dollars to a graphic designer for a brand logo or a label.
3
Cricket@lemmy.zip - 2mon
I must be missing something and would like to understand this line of thinking. Do communists really think that the availability of easy knock-off logos and labels outweighs artists getting paid for their labor?
0
darkernations - 2mon
Do communists really think that the availability of easy knock-off logos and labels outweighs artists getting paid for their labor?
One would have to explain why it is not capitalism but the technology itself that is at fault. Marxists should be for the socialization and automation of all labour; under capitalism this may mean unemployment, so for artisans that leaves only one source of income, the defense of proprietorship, which is again reactionary.
For Marxists the above should be relatively straightforward, but for the uninitiated it may take more reading around. In practice, though, our opinions often reflect our relative class positions or aspirations, i.e. under capitalism "emancipation", or just protection of income in the long run, often ends up meaning seeking to become a labour aristocrat (which here could involve gatekeeping skilled labour) or bourgeois aspiration (protection of intellectual property).
One should not lament the weaver for the loom; society should advance to give each person the freedom to enjoy non-paid activities, and should increasingly advance so that we do not have to be paid in order to survive, i.e. working towards the abolition of wage slavery. However, under capital there is no mechanism for this, which results in the increasing contradictions and immiseration of society.
(Marxism is a science, and in the above context it is the understanding of the mechanisms of capital and how we could potentially build socialism from that understanding)
Edited to add, in case you are not aware-
Capitalism is not the same as commerce ie markets, money and trading have existed before capitalism and will exist after it
Communism= moneyless + stateless + classless society
Socialism could be considered the stage in between capitalism and communism
4
Cricket@lemmy.zip - 2mon
Thanks for the explanation.
One problem I see with comparing GenAI with earlier automation like the loom, etc. is that GenAI is in no way paying for itself so far, or any time in the foreseeable future. It currently loses money hand over fist despite massive breaks in the form of "free" source content and subsidized electricity. I imagine that those previous forms of automation were financially self-sustaining fairly soon after their invention.
2
darkernations - 2mon
Thank you!
One problem I see with comparing GenAI with earlier automation like the loom, etc. is that GenAI is in no way paying for itself so far or any time in the forseeable future. It currently loses money hand over fist despite massive breaks in the form of “free” source content and subsidized electricity. I imagine that those previous forms of automation were self-sustaining financially from fairly early after their invention.
There is an economic phenomenon called the tendency of the rate of profit to fall (https://en.prolewiki.org/wiki/Tendency_of_the_rate_of_profit_to_fall) which correlates with what you are saying; under capitalism there is no clean way out of this, which is why you see AI market bubbles in the West that you don't see in a socialist country like China.
And it is also, as you have rightly pointed out, why the return on investment is generally significantly worse than it was before (as a general trend). If one does a course in business administration in the West, they do mental gymnastics over why the rate of operational profit (i.e. not the fictitious capital of share price inflation) in the IT sector as a whole is so abysmal, and have ad-hoc theories with no predictive value to explain the phenomenon (i.e. they don't have a scientific approach).
You'll find, and pretty much every Marxist will testify to this, that politico-economic phenomena and contradictions that seem puzzling under liberal economics not only have robust explanations in Marxism but also very good predictive power, as a science should, continually refining theory to reflect better observations.
Edited to add - some further reading, in case you need it after the article in the above link:
Thank you for the additional analysis and readings! I need to read a lot more about all this and other relevant topics and theory.
3
darkernations - 2mon
3
小莱卡 - 2mon
Yes? I certainly prefer abundance over scarcity. If you had asked me whether the availability of mass-produced furniture outweighs artisan carpenters getting paid for their labor, I also would've said yes.
GenAI can be used to generate the boring art that no one wants to do, so artists can dedicate themselves to making novel art.
4
Cricket@lemmy.zip - 2mon
Fair enough, I get your point.
2
CriticalResist8 - 2mon
That's a loaded question; 'knock-off' is a value judgment.
Artists don't form a class - no profession does. Between artists, you will find proletariat, bourgeois and petit-bourgeois artists. So we must first ask, which artists are we talking about protecting exactly?
Artistry is also not the first job/skill being automated. If the only problem were 'livelihood' (that is, per your usage of the word, not the labor-power one is able to provide but the current work for which one is paid), then we might as well destroy any piece of technology that makes a job more accessible, so that people can remain experts in their field and keep a job in their very specialized skill. That would immediately revert us to the feudal period, where everyone worked the fields and the women also mended clothes on the side for some petty cash. And you died at 30 of dysentery.
But that's not how capitalism works; capitalism tends towards monopolization. None of what AI is doing right now is entirely new or unique. Capitalism itself started by obliterating many jobs (proletarianizing people in the process, pulling them from their subsistence farms and looms and into factories working for a wage), and yet we still consider it progressive because it lays down the conditions for socialism and then communism. It created a proletariat for which the ideology could be laid down: Marxism.
I think the question no one has asked yet is: what would people who don't like AI want to see done about AI? You can certainly try to legislate, but we know how laws go; get another party in at the next elections and the laws change entirely. You can try to ban AI, but other countries will keep using it, and something else will eventually pop up and spark the same debate. The mechanical loom and the steam engine are just two historical examples. The internet was considered a novelty in the early 90s, and people thought it wouldn't last.
But you can't undo the contradictions, and the wheel keeps turning. Whether AI will endure does not depend on our arguments, whether pro or against. As communists we understand automation will make communism possible, and it's only under capitalism that it leads to the destruction of jobs (AI hasn't really led to job destruction, but that's not the main point here). Under communism, automation replacing your job means you still get the result of that automation and also don't have to work anymore, in a nutshell.
These are all contradictions of capitalism Marx highlighted before and this is why the solution is socialism.
There is certainly a lot to say about AI (and technological advancements) in capitalism, but our sights are on socialism.
4
Cricket@lemmy.zip - 2mon
Thanks for the explanation and for the link.
I think the question no one has asked yet is what would people who don’t like AI want to see done about AI?
Regarding this question, one possible answer for me is that I would like AI to pay its own way. Right now, AI is losing money hand over fist despite massive subsidies in the form of "free" source content and shared energy expenses, so in its current state it's completely unsustainable and heading toward a major rug pull from under everyone.
1
CriticalResist8 - 2mon
I can definitely agree to that, and I also think these are important questions to ask but a lot of the discussion I see revolves around artists specifically (and specifically illustrative artists), leaving little space for other stuff. So thanks for answering sincerely.
The function of the state is to reconcile class differences, but class differences are irreconcilable. How can you make a proletarian rationally agree that they should be paid less and the bourgeois more? All history of civilization is the history of class struggle; under capitalism the state works for the bourgeoisie, and under socialism it works for the proletariat. So while I would also like AI companies to fend for themselves on the market (they couldn't compete against China anyway), they wield enormous power over the state, especially the legacy companies, and get whatever they ask for, the same way Musk gets tons of subsidies for SpaceX and Tesla. The point is to funnel money to the bourgeoisie.
Therefore, objectively speaking, I think China is the single biggest factor in deflating the oversized Western AI industry. Project Stargate was announced ($500 billion for AI over the coming years), and literally a week later Deepseek came out and completely demolished that idea before it even took off. It was built for a fraction of the cost of GPT with a fraction of the hardware, and suddenly a $500 billion investment didn't seem like such a good idea anymore. In fact, Chinese models are largely open source and so cheap to run that they barely charge anything. It's hard to compete against free.
What we'll see soon enough in the West, however, is monopolization of AI: fewer companies will remain, and it will be mostly controlled by one or two (probably Microsoft and Google, maybe Meta). I can't really say yet what the consequences of this monopolization will be.
It's not like there are a lot of novel model creators currently either; the costs are too prohibitive, and companies like OpenAI are hemorrhaging money. Their $20/month subscription tier is never going to fill that massive bottomless hole (they'd need 35 million subscribers to break even), so they rely on both private and public funding. Even after releasing the semi-open-source gpt-oss, which boasts 120 billion parameters, the most popular open source models remain z.ai, deepseek and qwen, all Chinese models, and you can run a 20B model at most on consumer hardware (with a $1,500 GPU). So even that bet didn't take off; nobody seems to be using oss beyond the novelty.
4
Cricket@lemmy.zip - 2mon
Thanks again for the additional analysis. There's a lot to think about. I really need to study AI more so I can understand it better and make better critiques as well as know where it's actually most useful.
2
CriticalResist8 - 2mon
no problem, thanks for reading.
3
m532 - 2mon
Humans like to think they're special somehow. So they need a way to define a human that excludes everything else.
And that's sentience!
Diogenes with a chicken "Behold, a human"
Whoops. Don't want them pesky animals in there. Let's try something else.
It's intelligence!
Diogenes with a LLM "Behold, a human"
Whoops. Well then...
It's creativity!
Well, what's that?
It's the ability to imagine whatever you want and then make that.
... Okay that sounds cool. Look what advances in AI enabled me to make.
Wait now billions of people can express their creativity and become competition. Let's dehumanize them: "you didn't make that, the machine made it, and the definition of creativity is now drawing skill."
Diogenes with a diffusion model "Behold, a human"
-5
RedSturgeon [she/her] - 2mon
AI is a tool made for accumulation of capital, it's like a factory. It's not a sentient being.
21
LeninWeave [none/use name, any] - 2mon
"Marxists" in the 21st century looking at a drill press: "behold, a human".
People unironically say shit like LLMs are intelligent and it bums me out so much. It makes me wonder what they think other people are and what they believe they're doing when they have a conversation with someone.
14
LeninWeave [none/use name, any] - 2mon
Unironically I would rather people be conventionally religious in a toxic way than say things like this. At least even the most reactionary catholics don't (or shouldn't) think humans are just sophisticated computer programs. How are tech nerds able to do worse than the fucking catholic church?
12
ComradeSalad - 2mon
AI Marxists are unironically becoming a Cult Mechanicus and all it took was the plagiarism machine telling them that they’re very special, very smart, good boys.
-1
FuckBigTech347 - 2mon
The Silicon is what gives Life; After all it carries out all those thousands of floating point Instructions that are required for the Matrix Brain to work!
3
LeninWeave [none/use name, any] - 2mon
Humans are a specific species, it's not hard to define homo sapiens in a way that excludes everything else.
Humans (among others) are capable of generating new ideas. LLMs recycle. LLMs are like drill presses, they do no labor (and create no value) on their own.
9
ComradeSalad - 2mon
Mindlessly regurgitating Wikipedia and Reddit comments is not “intelligence”.
There’s no thinking going on there. LLMs can’t think, reason, or utilize logic. They are fundamentally incapable of doing so.
Also saying that Diffusion models create art with good drawing skills is hilarious.
7
Are_Euclidding_Me [e/em/eir] - 2mon
are you saying you believe chatgpt is sentient
3
m532 - 2mon
No its not sentient
0
10TH_OF_SEPTEMBER_CALL [any, any] - 2mon
Diogenes would have pissed on the keyboard and said something about the cops being pigs
TL;DR AI does have use cases. It isn't creating new value, but it can lower SNLT in certain situations, and we as communists need to properly analyze those rather than dogmatically dismiss it whole-cloth. It's over-applied in capitalism due to the AI bubble, that doesn't mean it's never usable.
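To put toy numbers on the SNLT point (every figure below is invented purely for illustration, not a real measurement):

```python
# Illustrative LTV arithmetic; all numbers are made up.
hours_manual = 2.0             # assumed: illustrator labor per stock image
hours_prompting = 0.25         # assumed: prompt + curation labor per generated image
model_labor_amortized = 0.05   # assumed: the model's crystallized labor, spread per image

snlt_before = hours_manual
snlt_after = hours_prompting + model_labor_amortized

print(f"SNLT before: {snlt_before} h, after: {snlt_after} h")
# SNLT falls (2.0 h -> 0.3 h); the machine added no new value, it only
# transferred previously crystallized labor, as with any other machinery.
```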
I generally agree with you here, my problem is that despite this people do treat AI as though it's capable of thought and of labor. In this very thread there are some (luckily not many) people doing it. As you say, it's crystallized labor, just like a drill press.
Some people treat it that way, and I agree that it's a problem. There's also the people that take a dogmatically anti-AI stance that teeters into idealist as well. The real struggle around AI is in identifying how we as the proletariat can make use of it, identifying what its limits are, while using it to the best of our abilities for any of its actually useful use-cases. As communists, we sit at an advantage already by understanding that it cannot create new value, and is why we must do our best to take a class-focused and materialist analysis of how it changes class dynamics (and how it doesn't).
I agree with you here, although I want to make a distinction between "AI" in general (many useful use cases) and LLMs (personally, I have never seen a truly convincing use case, or at least not one that justifies the amount of development going into them). Not even LLM companies seem to be able to significantly reduce SNLT with LLMs without causing major problems for themselves.
Fundamentally, in my opinion, the mistaken way people treat it is a core part of the issue. No capitalist ever thought a drill press was a human being capable of coming up with its own ideas. The fact that this is a widespread belief about LLMs leads to widespread decision making that produces extremely harmful outcomes for all of society, including the creation of a generation of workers who are much less able to think for themselves because they're used to relying on the recycled ideas of an LLM, and a body of knowledge contaminated with garbage that's difficult to separate from genuine information.
I think any materialist analysis would have to conclude that these things have very dubious use cases (maybe things like customer service chat bots), and therefore that most of the labor and resources put into their development are wasted and would have been better allocated to anything else, including the development of types of "AI" that are more useful, like medical imaging analysis applications.
This is what China is developing currently, along with many other cool things with AI. Medical imaging AI was also found to have its limitations, though; maybe they need to use a different neural method.
Even if capitalist companies say you can or should use their bot as a companion, it doesn't mean you have to. We don't have to listen to them. I've used AI to code stuff a lot, and it got results -- all for volunteer and free work, where hiring someone would have been prohibitive, and AI (an LLM specifically) was the difference between offering the feature and canceling the idea completely.
There's a guy on YouTube who bought Unitree's top-of-the-line humanoid robot (yes, they ship to your doorstep from China lol) and codes for it with LLM help, because the documentation is not super great yet. Then with other models he gets real-time image detection, or uses the LIDAR more meaningfully than without AI. I'm not sure where he's at today with his robot; he was working on getting it to fetch a beer from the fridge. Baby steps, because at this stage these bots come with nothing in them except the SDK, and you have to code literally everything you want them to do, including standing idle. He showed an interesting demo: in just one second, it can detect the glass bottles in the camera frame, and even their color, and draw a box around each one. This is a new-ish model and I'm not entirely sure how it works, but I assume it has to have an LLM in it to describe the image.
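(As an aside, real-time box-drawing detection like that is usually a dedicated vision model rather than an LLM. A minimal sketch of the idea, assuming the `ultralytics` package and its pretrained `yolov8n.pt` weights; `frame.jpg` is a placeholder file name:)

```python
from ultralytics import YOLO  # assumed setup: pip install ultralytics

model = YOLO("yolov8n.pt")    # small pretrained detector over the 80 COCO classes
results = model("frame.jpg")  # run detection on one camera frame

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]           # class label, e.g. "bottle"
        conf = float(box.conf)                         # detection confidence
        x1, y1, x2, y2 = box.xyxy[0].tolist()          # bounding-box corners
        print(f"{cls_name} ({conf:.2f}) at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")
# "bottle" is one of the COCO classes, so bottles in frame get boxes like this,
# no language model required.
```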
I'm mostly on Deepseek these days; I've completely stopped using ChatGPT because it just sucks at everything. Deepseek hallucinates much less and keeps getting more reliable, although it still outputs the occasional nonsensical comparison. But it's like with everything you don't know: double-check and exercise critical thinking. Before LLMs, we had Wikipedia for our questions, and it wasn't any better (and still isn't).
edit - Like when Deepseek came out with reasoning, which they pioneered: it completely redefined LLM development, and more work has been built on this new state of things, improving it all the time. They keep finding new methods to improve AI. If there is a fundamental criticism I would make, it's perhaps that it was launched too soon (though neural networks have existed for over a decade), and of course overpromised by tech companies that rely on their AI product to survive. OpenAI is dying because they have nothing to offer other than GPT; they don't make money on cloud solutions or hardware or anything like that. If their model dies, they die along with it. So they're in startup-philosophy mode, where they try to iterate as fast as possible and consider any update a good update (even when it's not) just to try to retain users. They bleed $1 billion a month and live entirely on investor money; startup mode just doesn't scale that high up. It's not their $20 subscriptions that are ever going to keep them afloat lol.
I think that's a problem general to capitalism, and the orientation of production for profit rather than utility. What we need to do as communists is take an active role in clarifying the limitations and use-cases of AI, be they generative images, LLMs, or things like imaging analysis. I often see opposition to AI become more about the tool than the use of it under capitalism, and the distortions beyond utility that that brings.
True, but like I said, companies don't seem to be able to successfully reduce labor requirements using LLMs, which makes it seem likely that they're not useful in general. This isn't an issue of capitalism, the issue of capitalism is that despite that they still get a hugely disproportionate amount of resources for development and maintenance.
I do oppose the tool (LLMs, not AI) because I have yet to see any use case that justifies the development and maintenance costs. I'll believe that this technology has useful applications once I actually see those useful applications in practice, I'm no longer giving the benefit of the doubt to technology we've seen fail repeatedly to be implemented in a useful manner. Even the few useful applications I can think of, I don't see how they could be considered proportional to the costs of producing and maintaining the models.
I don't follow. LLMs are a machine, of course; what does that imply? That something needs to be productive to exist? By the same LTV, LLMs reduce socially necessary labor time, like all machines.
That they create nothing on their own, and the way they are used currently leads to a degradation of the body of knowledge used to train the next generation of LLMs because people treat them like they're human beings capable of thought and not language recyclers, spewing their output directly into written works.
venice.ai knows how to cook meth tho
People blow themselves up enough learning to cook meth from books and from other people, I don't think they should be taking instructions from the "three Bs in blueberry" machine.
Blowing oneself up is just part of the thrill!
Sure that tells us that some of the massive investments are stupid because their end-product won't have much or any value.
You still have a bunch of workers who used to produce something of value that required a certain amount of labor, and that is now replaced by slop.
So the conclusion of the analysis ends up fairly similar, you just sound more like a dork in the process.
A lot of the applications of AI specifically minimize worker involvement, meaning the output is 100% slop. That slop is included in the training data for the next model, leading to a cycle of degradation. In the end, the pool of human knowledge is contaminated with plausible-sounding written works that are wrong in various ways, the amount of labor required to learn anything is increased by having to filter through it, and the amount of waste due to people learning incorrect things and acting on them is also increased.
These are all historical problems of capitalism; we need to be able to cut through the veil instead of going around it, and attack the root cause, otherwise we are just reacting to new developments.
This is the case for some of the critiques but I wouldn't say all.
I didn't want to dump a point-by-point on you unprompted but if you let me know I can write one up happily. A lot of what is said about AI is just capitalism developing as it does, the technology might be novel and unprecedented (it's not entirely, a lot of what AI and AI companies do was already commonplace), but the trend is perfectly in line with historical examples and the theory.
Some less political people might say we just need better laws to steer companies correctly but of course we know where that goes, so the solution is to transform the class character of the state to transform the relations of production, and we recognized this long before AI existed. So my bigger point is that we need to keep sight on what's important, socialism; not simply reacting to new developments any time they happen as this would only keep us running circles within the existing state of things.
A lot of what happens in the western tech sphere is happening in other industries under late-stage capitalism, chasing shorter and shorter term profits and therefore shorter-term commodities as well. But there is also a big ecosystem of open-source AI that exists inside capitalism, though it's again not unique to AI and open-source under capitalism has its own contradictions.
It's like... at this point I think a DotP is more likely than outlawing AI is lol. And I think it's healthy to see it like this.
Most of the harm comes from the hype and social panic around it. We could have treated it as the interesting gadget it is, but the crapitalists thought they finally had a way to get rid of human labour, and crashed the work economy... again
What I don't like is that they're selling a toy as a tool, and arguably as the One And Only Tool.
You're given a black box and told to just keep prompting it to get lucky. That's fine for toys like "give me a fresh low-quality wallpaper every morning." or "pretend you're Monkey D. Luffy and write a song from his perspective."
But it's not appropriate for high-stakes work. Professional tools have documented rules, behaviours, and limits. They can be learned and steered reliably because they're deterministic to a fault. They treat the user with respect and prioritize correctness. Emacs didn't wrap the error in breathless sycophantic language when the code didn't compile. Lotus 1-2-3 didn't decide to replace half the "7"s in your spreadsheet with some random katakana because it was close enough. AutoCAD didn't add a spar in the middle of your apartment building because it was statistically probable after looking at airplane wings all day.
I mean, software glitches all the time; some widespread software has long-standing bugs that its developers or even auditors can't figure out, and people just learn to work around them. Photoshop is built on 20-year-old legacy code and also uses non-deterministic algorithms that predate AI (the spot healing brush, for example, which you often have to redo several times to get a different result). I agree that there's a big black-box aspect to LLMs and GenAI (can't say for all AI), but I don't think it's necessarily inherent to the tech or means it shouldn't be developed further.
Actually, image AI is surprisingly simple in its methods. Provide it with the exact same inputs (including the seed number) and it will output the same image every time, with at most very minor variations (e.g., across different hardware). Should it have no variations at all? Depends; image-gen AI isn't an engineering tool and doesn't profess to a 0.1mm margin of error like other machines might need.
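A minimal sketch of that seed-determinism with the `diffusers` library (the model name and CUDA setup are assumptions for illustration; the point is the fixed generator seed):

```python
import torch
from diffusers import StableDiffusionPipeline

# Same prompt + same seed (on the same hardware/software stack) => same image.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model id, purely illustrative
    torch_dtype=torch.float16,
).to("cuda")

generator = torch.Generator(device="cuda").manual_seed(42)  # the seed pins down the noise
image = pipe("a lighthouse at dusk", generator=generator).images[0]
image.save("lighthouse_seed42.png")  # rerun with seed 42 and you get the same picture
```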
Back in 2023, China already used an AI (they didn't say what type exactly) to blueprint the electrical cabling on a new ship model, and it did so with 100% accuracy. It used to take a team of engineers a year to do this, and the AI did it in 24 hours. There's a lot of toy aspects to LLMs, but this is also a trap of capitalism, since this is what tech companies in startup mode are banking on. It's not all that neural models are capable of doing.
You might be interested that the Iranian government has recently published guidelines on AI in academia. Unfortunately I don't have a source, as this comes from an Iranian compsci student I know: they say you can use LLMs in university, and if you note the specific model used and the time of usage, and can prove you understand the topic, then that's 100% clean by Iranian academic standards.
Iran is investing a lot in tech under heavy sanctions and making everything locally (an estimated 40-50% of all university degrees in Iran are science degrees). To them, AI is a potential way to improve their conditions in this context, and that's what they're exploring.
Do you have a link to the story? I ask because AI is a broad umbrella that many different technologies fall under, so it isn't necessarily synonymous with generative AI/machine learning (even if that's how the term has been used the past few years). Hell, machine learning isn't even synonymous with neural networks.
Circling back to the Chinese ship, one type of AI I could plausibly see being used is a solver for a constraint satisfaction problem. The techniques I had to learn for these in college don't even involve machine learning, let alone generative AI.
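For a flavor of what a non-ML solver looks like, here's a toy backtracking CSP in plain Python (the cable/tray setup is entirely made up; real cable routing has far richer constraints):

```python
# A toy constraint-satisfaction solver: plain backtracking, no ML involved.
# Hypothetical example: assign each cable run to a tray such that
# cables that would interfere never share a tray.

def solve(variables, domains, conflicts, assignment=None):
    """Assign a value to every variable so no conflicting pair shares a value."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(other) != value for other in conflicts.get(var, ())):
            assignment[var] = value
            result = solve(variables, domains, conflicts, assignment)
            if result is not None:
                return result
            del assignment[var]  # undo and try the next value
    return None  # no consistent assignment exists

cables = ["power_A", "power_B", "signal_1", "signal_2"]
trays = {c: ["tray1", "tray2"] for c in cables}
# Made-up constraint: power cables interfere with signal cables.
interference = {
    "power_A": ["signal_1", "signal_2"],
    "power_B": ["signal_1", "signal_2"],
    "signal_1": ["power_A", "power_B"],
    "signal_2": ["power_A", "power_B"],
}
print(solve(cables, trays, interference))
# e.g. {'power_A': 'tray1', 'power_B': 'tray1', 'signal_1': 'tray2', 'signal_2': 'tray2'}
```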
I fed the story to Perplexity and looked at its sources :P (people often ask me how I find sources; I just ask Perplexity and then look at its links and find one that fits)
https://asiatimes.com/2023/03/ai-warship-designer-accelerating-chinas-naval-lead/ they report here that a paper was published in a science journal, though Chinese-language.
I did find this paper: https://www.sciencedirect.com/science/article/abs/pii/S004579492400049X but it's not from the same team and seems to be about a different problem, though still in ship design (hull specifically) and mentions neural networks.
This is sort of the issue with "AI" often just meaning "good software" rather than any specific technique.
From a quick read, the first one seems to describe a knowledge-base or auto-CAD solution, which is fundamentally different from any methods related to LLMs.
The second one is some actually really impressive feature engineering used to solve an optimization problem with Machine Learning tools, which is actually much closer to a statistician using linear regressions and data mining than somebody using an LLM or a GAN.
Importantly, neither method is as computationally intensive as LLMs, and the second one at least is a very involved process requiring a lot of domain knowledge, which is exactly the opposite of how GenAI markets itself.
yeah my dad can kill a dozen people if something goes wrong at work. Yet they use windows and proprietary shit.
If software isn't secured it shouldn't be used.
We can make software less prone to errors with proper guidelines and procedures to follow, as with anything. Just to add that it's not solely on software devs to make it failproof.
I would make the full switch to Linux but I need Windows for photoshop and premiere lol. And I never got Wine to work on Mint, but if I could I would ditch windows today. I think helping people get acquainted with linux is something AI can really help with, and may help more people make the switch.
yes. It's a tool that can (and must) be seized and re-appropriated imo. But it's not magic. Main issue is that capitalists are selling it as some kind of genius in a bottle.
apologies if this is annoying, but have you tried Lutris?
it's designed for games, but i use it for everything that needs wine because it makes it easy to manage prefixes etc. with a nice gui
No worries, I haven't tried it but I also don't have my Mint install anymore lol (Windows likes to delete the dual boot file when it updates and I never bothered to get it working again). I might give it another try down the line but I'm not ready to ditch Adobe yet. I'll keep it in mind for if I make the switch in the future.
ELIZA was written in the 60s. It's a natural language processor that's able to have reflective conversations with you. It's not incredible but there's been sixty years of improvements on that front and modern ones are pretty nice.
Otherwise, LLMs are a probabilistic tool: the input doesn't determine the output. This makes them useless at what tools are good at, which is repeatable results from consistent inputs. They generate text in an authoritative voice, but domain experts find they're wrong more often than they're right, which makes them unsuitable as automation for white-collar jobs that require any degree of precision.
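A toy sketch of why the input doesn't determine the output (the vocabulary and scores below are made up; real models sample the next token from a probability distribution the same way):

```python
import math
import random

# Made-up vocabulary and model scores, purely illustrative.
vocab = ["cat", "dog", "fish"]
logits = [2.0, 1.5, 0.5]  # hypothetical model scores for the next token

def sample(logits, temperature=1.0):
    """Temperature-scaled softmax sampling over the toy vocabulary."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(vocab, weights=weights)[0]

print([sample(logits) for _ in range(5)])        # e.g. ['cat', 'dog', 'cat', 'cat', 'fish']
print([sample(logits, 0.01) for _ in range(5)])  # near-greedy: almost always 'cat'
```

Same input, different outputs run to run; only at near-zero temperature does it collapse toward a single repeatable answer.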
Further, LLMs have been demonstrated to degrade thinking skills, memory, and self-confidence. There are published stories about LLMs causing latent psychosis to manifest in vulnerable people, and LLMs have encouraged suicide. They present a social harm which cannot be justified by their limited use cases.
Sociopolitically, LLMs are being pushed by some of the most evil people alive, and their motives must be questioned. You'll find oceans of press about all the things LLMs can do that are fascinating or scary, such as the TaskRabbit story (which was fabricated entirely). The media is complicit in the image that LLMs are more capable than they are, or that they may become more capable in the future and thus must be invested in now.
Because we can see what it does without proper regulation, and also it's very overhyped by tech companies in how much utility it actually has
Yeah, imo they're not regulating it in the right places. They're so uber-focused on making it refuse to produce how-to guides for things they don't like that they don't see the real problem: technofascist cults like Palantir being able to kill random people at the press of a button
"And i don’t mean stuff like deepfakes/sora/palantir/anything like that" bro, we don't live in a world where LLMs are excluded from those uses
the technology itself isn't bad, but we live in a shitty capitalist world where every instance of automation, rather than liberating mankind, fucks people over. a thing that lets one person do the labor of many is a beautiful thing, but under capitalism increases in productivity only lead to unemployment; though, on the bright side, they consequently also cause a decrease in the rate of profit.
For myself, it is the projected environmental impact. The power demand for data centers has already been on the rise due to the growth of the internet. With the addition of AI and the training thereof, the amount of power is rising/will rise at an unsustainable rate. The amount of electricity used creates strain on existing power grids, the amount of water that goes into cooling the hardware for the data centers creates strain on water supply, and this all plays into a larger amount of carbon emissions.
Here is a good link that speaks to the environmental impact: genAI Environmental Impact
Beyond the above, the threat of people losing jobs within an already brutal system is a bit terrifying to me, though others have already written about this at more length here.
We have to be careful how we wield the environmental arguments. In the first phase, they're often used to demonize Global South countries that are developing. Many of these countries completely skipped the personal computer step and are heavy consumers of smartphones and 4G data because those came around the time they could begin to afford the infrastructure (it's why China is developing 6G already). People make a lot of arguments against smartphones (how the materials for them are produced, how you have to recharge a battery, how they get disposed of, how much electricity 5G consumes, etc.), but if these countries didn't have smartphones, they would just not have the internet.
edit: putting it all under the spoiler dropdown because I ended up writing an essay anyway lol.
::: spoiler environmental arguments
The second phase concerns the environmental impact of LLMs themselves: it really depends, and it can already be mitigated. I'll try not to make a huge comment because I don't want to write an essay, but the source's claims need scrutiny. Everything consumes energy - even we as human bodies release GHG. Going to work requires energy, and using a computer for work requires energy too. If AI can do in 10 seconds what takes a human 2 hours, then you are certainly saving energy, if that's the only metric we're worried about.
So it has to be relativized, which most AI environmental articles don't do. A chatGPT prompt consumes five times more electricity than a google search, sure, but both amounts are tiny in absolute terms. Watching Youtube also consumes energy; a minute of Youtube consumes much more energy than an LLM query does.
Some people will say that we need to stop watching Youtube, no more treats or fun for workers, which is obviously not something we take seriously (deleting your emails to make room in data centers was a huge thing on linkedin a few years ago too).
And all of this pales in comparison to the fossil fuel industry that we keep pumping money into in the west or obsolete tech that does have greener alternatives but we keep forcing on people because there's money to be made.
edit - and the meat and animal industry.... Beef is very water-intensive and polluting, it's not even close to AI. If that's the metric then those that can should become vegan.
Likewise for the water usage, there was that article about texas telling people to take fewer showers because it needs the water for data centers... I don't know if you saw it at the time, it went viral on social media. It was a satirical article against AI, that people used as a serious argument. Texas never said to take fewer showers, these datacenters don't use a lot of water at all as a share of total consumption in their respective geographical areas. In the US a bigger problem imo is the damming of the Colorado River so that almost no water reaches Mexico downstream, and the water is given out to farmers for free in arid regions so they can grow water-intensive crops like rice or dates (and US dates don't even taste good)
It also has sort of an anti-civ conclusion... Everything consumes energy and emits pollution, so the most logical conclusion is to destroy all technology and go back to living like the 13th century. And if we can keep some technology how do we choose between AI and Youtube?
Rather I believe investments in research make things better over time, and this is the case for AI too (and we would have much better, safe nuclear power plants too if we kept investing in research instead of giving in to fearmongering and halting progress but I digress). I changed a lot of my point of view on environmentalism when back in 2020 people were protesting against 5G because "microwaves" and "we don't need it" and I was on board (4G was plenty fast enough) until I saw how in some places they use 5G for remote surgery and that's a great thing that they couldn't do with 4G because there was too much latency. A doctor in China with 6G could perform remote surgery on a child in the Congo.
In China electricity is considered a solved problem; at any time the grid has 2-3x more energy than it needs. The west has decided to stop investing in public projects and instead concentrate all surplus value in the hands of a select few. We have stopped building housing, we stopped building roads and rail, but we find the money to build datacenters that could be much greener, but why would they be when that costs money and there's no laws that mandate it?
Speaking of China, they still use a lot of coal (comparatively speaking), but they also see it as just an outdated means of energy production that can be replaced by newer, better alternatives. It's very different: they're doing a lot of solar and wind - in the west, btw, chinese solar panels are tariffed to hell and back; if they weren't, every single building in europe would be equipped with solar panels - and even pioneering new methods of energy production and storage, like the sodium battery or gravity storage. Gravity battery storage (raising and lowering heavy blocks of concrete over the day) is not necessarily Chinese, but in Europe this is still just a prototype, while in China they're already building them as part of their energy strategy. They don't demonize coal as uniquely evil like liberals might; rather, once they're able to, they'll ditch coal because there are better alternatives now.
In regards to AI in China there's been a few articles posted on the grad and it's promising. They are careful about efficiency because they have to be. I don't know if you saw the article from a few days ago about Alibaba Cloud cutting the number of GPUs needed to host their model farm by 82%. The test was done on NVidia H20 cards which is not a coincidence, it's the best China can get by US decree. The top of the line model is the H100 (the H20 having only 20% of the capabilities) but the US has an order not to export anything above the H20 to China, so they find creative ways to stretch it. And now they're developing their own GPU industry and the US shot itself in the foot again.
Speaking of model farms... it's totally possible to run models locally. I have a 16GB GPU and I can generate realistic pictures (if that's the benchmark) in 30 seconds; the model only needs 5GB VRAM, but the architecture inside the card also matters for speed. For LLM generation I can run 12B models, rarely higher, and with new efficiency algorithms I think that will stretch to bigger and bigger models over time, all on the same card. They run model farms for the cloud service because so many people connect to it at the same time, but it's not a hard requirement for running LLMs. In another comment I mentioned how Iran is interested in LLMs because, like 4G and other modern tech that lags a bit in the west, they see it as a way to stretch their material conditions further (being heavily sanctioned economically).
There's also stuff being done in the open source community. For example, LoRAs are used in image generation to skew the output towards a certain result. This means you don't need to train a whole model: LoRAs are usually trained by people on their own machines with something like 100 images, and training one can take as little as 30 minutes. So what we see is comparatively few companies/groups making full models (either LLM or image gen, called checkpoints) and most people making finetunes for these models.
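To illustrate, here's a minimal sketch of applying a community LoRA on top of a base checkpoint with Hugging Face diffusers. To be clear this is a hedged sketch: the checkpoint name and LoRA path are placeholders, and VRAM needs vary by model.

```python
# Minimal sketch: base checkpoint + LoRA adapter with diffusers.
# Assumes diffusers/torch are installed and a CUDA GPU is available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# A LoRA is a small adapter trained separately (often on ~100 images);
# loading it skews generation toward that style without retraining the model.
pipe.load_lora_weights("path/to/lora.safetensors")  # placeholder path

image = pipe("a watercolor landscape at dusk").images[0]
image.save("out.png")
```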
Meanwhile in the West there's a 500 billion $ "plan" to invest in the big tech companies that already have a ton of money, that's the best they can muster. Give them unlimited money and expect that they won't act like everything is unlimited. Deepseek actually came out shortly after that plan (called Stargate) and I think pretty much killed it before it even took off lol. It's the destiny of capitalism to con the government into giving them money, of course they were not going to say "no actually if we put some personal investment we could make a model that uses 5x less energy", because they would not get 500 billion $ if they did. They also don't care about the energy grid, that's an externality for them - the government will take care of it, from their pov.
Anyway it's not entirely a direct response to your comment because I'm sure you don't believe in all the fearmongering, but it's stuff I think is important to keep in mind and I wanted to add here. And I ended up writing an essay anyway lol. :::
Why would you want instant feedback when you're journaling? The whole point of journaling is to have something that's entirely your own thoughts.
I dont like writing my own thoughts down and just having them go into the void lol and i want a real hoomin to talk to about these things but i dont have one TwT
What does "go into the void" mean? The LLM may use them as context for a while or it may not use them as context at all, it may even periodically erase its memory of you.
I find talking about heavy or personal things way easier with strangers than with people you know. There's no stakes with a stranger you can literally walk up to someone on the street or in a park who doesn't look busy and ask them if they want to talk.
Is it okay if I push back a bit? Your last comment just feels a little dismissive. I don't know The Free Penguin, but I will point out other reasons why someone might not be able to easily talk to someone. Like for example, if someone can't walk or get around, they won't be able to just talk to someone like that. I'm mainly speaking about my mom before she died, since she had COPD and her health declined after something happened to her at her former workplace. Anyways, she really hurt her spine and couldn't really get around, and I remember her being very upset with how alone she felt.
Then also, speaking for myself, I have a speech impediment plus anxiety, so it is really difficult for me to just approach someone and talk to them, depending on various factors. Along with that, some strangers can be outright hostile and make things worse, and someone else might just have had a lot of bad interactions with strangers. To go back to myself, people do judge how someone speaks and tend to see little of you, like if you have an accent or have trouble speaking.
Chronic loneliness and anxiety are a function of societal arrangements that are exacerbated by capitalist solutions, not inherent and unavoidable parts of the human condition until they are cured by a panacea ex machina.
Believe it or not, before 2022 we did have lots of different approaches around the world to these things. And we are poorer for turning away from all those approaches.
I am a rather awkward person in many ways, I am instantly recognizable by many people as "weird", I have my own share of anxiety that I've gotten better at masking over the years. If I spent ages 19-25 interacting with a digital yes-man instead of with humans, I would have no social skills.
Your response sounds closely analogous to when car proponents use the disabled as a shield. We don't need everyone to drive, we need to minimize the distance between each other, and making driving (or LLM usage) a necessity for getting by in society only creates bigger problems, because the root problem is not being adequately addressed.
I feel like you might be taking me in bad faith here or misinterpreting me.
I agree? I'm very aware.
I would argue that depends. Not everywhere has a lot of different approaches to these things. If anything, all LLMs did was take inherent contradictions and bring them to new heights; these things were already there to begin with, maybe smaller in form.
Again, where do I say that? It feels like I'm being taken in bad faith or misread. All I'm trying to point out is that there are usually reasons why someone would turn to something like an LLM or might not easily talk to someone else. As you said, the root problem is not being addressed. To add, it also just leaves a bad taste in my mouth and kind of hurts to be told that what I said sounds closely analogous to using the disabled as a shield, especially when I was talking about myself or my mom.
For example, when my mom was in the hospital in the last few weeks before she died, she had to communicate with staff on a whiteboard since they couldn't understand her. I had to use the same whiteboard too, because staff couldn't understand what I was saying either. Just to give you an idea of how much trouble I have speaking to others. I'm not saying someone shouldn't try to interact with the people they know and should just go talk to a chatbot. People should have another person to talk to.
Yeah but what if i make em uncomfyyyyy
The ability to self-actualize and shape the world belongs to those who are willing to potentially cause momentary discomfort.
Also the default status of many people is lonely and/or anxious; receiving social energy from someone often at least takes their mind off that.
Advancements in material technology in the past half century have often ended up stunting our social development and well-being.
Yeah but the things i ask the robot about a real hoomin would find it really creepy to talk to a stranger about
I would be extremely cautious about that sort of usage of AI. Commercial AIs are psychopathic sycophants and have been known to drive people insane by constantly gassing them up.
Like you clearly want someone to talk to about your life and such (who doesn't?) and I understand not having someone to talk to (fewer and fewer do these days). But you're opting for a corporate machine which certainly has instructions to encourage your dependence on it.
Also i delete my convos about these things after 1 prompt so i dont have a lasting convo on that. But tbh exposure to the raw terms of the topic has let me go from tech allegories, to T9 cipher, to where i am now, where i can at least prompt a robot using A1Z26 or hex to obscure the raw terms a bit
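For anyone unfamiliar with those encodings, here's a toy sketch of both (purely illustrative, and note this is obfuscation, not real encryption):

```python
# Toy sketch of A1Z26 (a=1 ... z=26) and hex encoding of a string.
def a1z26(text: str) -> str:
    # Map each letter to its 1-based alphabet position; keep other chars as-is.
    return " ".join(
        str(ord(c) - ord("a") + 1) if c.isalpha() else c
        for c in text.lower()
    )

def to_hex(text: str) -> str:
    # Hex-encode the UTF-8 bytes of the text.
    return text.encode("utf-8").hex()

print(a1z26("hello"))   # 8 5 12 12 15
print(to_hex("hello"))  # 68656c6c6f
```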
Have there been cases of deepseek causing ai psychosis or is it just chatgpt
No idea. But I'd say it's less likely, especially if you're running a local model with Ollama.
I think key here is to prevent the AI from developing a "profile" on you and self controlled ollama sessions are the surest bet for that.
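For example, here's a minimal sketch of a self-contained local session against Ollama's default REST endpoint (assuming `ollama serve` is running and a model has already been pulled; the model name is just a placeholder):

```python
# Minimal sketch: query a locally hosted model through Ollama's REST API.
# Nothing leaves the machine: the request only goes to localhost.
import requests

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default port
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_local_model("Summarize this journal entry: today I ..."))
```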
Do you worry about AI psychosis?
there's the people who hate it bc they have petit-bourgeois leanings and think of the stuff as "stealing content" and "copyrighted material", like artist people, "code monkeys" or writers
and there's the people that hate it because it's an obvious grift made to siphon resources and "try" to be a big replacement for proles, and a hugely wasteful technology that dries up water sources and raises electricity bills with its data centers
yeah, it's kinda useful for making a drawing, filling blank space in a document, or being a dumb assistant that hallucinates anything to pretend it knows stuff
It's actually not petty bourgeois for proletarians in already precarious positions to object to the blatant theft of their labor product by massive corporations to feed into a computer program intended to replace them (by producing substandard recycled slop). Even in the cases where these people are so-called "self-employed" (usually not actually petty bourgeois, but rather precarious contract labor), they're still correct to complain about this - though the framing of "copyrighted material" is flawed (you can't use the master's tools to dismantle his house). No offense, but dismissing them like this is a bad take. I agree with the rest of your comment.
I agree with you, a friend of mine was an illustrator; she lost all her jobs. Now she does scenery.
But they lost their jobs because there's downward pressure on costs everywhere in society. That money is, once again, stolen.
Also you forgot to mention the 2-dollar-a-day kenyan labourers that trained chatGPT in the first place. Or the awful water consumption.
I dont think the tech itself is evil, or even special, it's just a big array of numbers at the end. The issues are the techbros and their new hype.
I am/was in visual arts (or rather graphic design) and it's been a long time coming. Spec work was all the rage to rally against back in the day, and despite the protests it hasn't gone anywhere. I'm sure that back in the 90s some old-school designers were against Photoshop too. And of course we are the first people to lose our jobs when a crisis hits, because marketing takes a backseat.
For years Photoshop was the standard in website mockups which is just wild to me as it's not what it's meant for. Today we have tools like Figma, which unfortunately exist as SaaS (Software-as-a-service, with a monthly subscription and on the 'cloud'). The practice still endures despite the fact that you have to code for mobile now and monitors don't come in the standard 4:3 aspect ratio anymore but in many variations.
Oh, I could add SaaS too to the difficulties designers face. For example Wordpress has a lot of prefab themes, so you don't even need to mockup anything in Photoshop or figma anymore. You just pick one and start building your website - it's how I made all my websites, I don't need to duplicate elements in image files and I honestly have no idea how I would even start making a modern website on Photoshop. The footer for example which is the same on all pages is easily editable from Wordpress. I feel like I would be wasting time making a footer on Photoshop when I can just edit it on the website directly in a visual editor and it will update itself on every page.
I don't see any of the above as a bad thing overall. What we see is that society adapts to changing conditions. The fight is still for socialism.
Yes, CGI in general was demonised and at some point even came close to the current shitstorm, with the same arguments about the death of art and human creativity etc. In reality it just vastly increased the output of art, especially commercial art.
The commodification of art has been a disaster for the human race
yea coz artists are useless to the machine-minded.
Those aren’t petit bourgeois tendencies, those are pre-capitalist artisanal tendencies.
Except for the copyright aspect, but independent artists rarely clamour about their “copyrights”, as their issue is more about how their work, whether copyrighted or not, is getting fed into a capitalist black hole machine designed to replace workers to benefit no one but a few capitalists.
Most within the art world couldn't care less if someone took the time to learn their style one to one by studying their work. That's the entire point of art to a degree, as the end product is still an expression of labour value. Something that can't be said about GenAI.
GenAI really is taking people's jobs. It might not do it better. It might be less safe. It might even be less cost efficient. It's still happening.
It's not even a case of "Do you think you can be replaced by AI?" Instead it's "Does your employer think you can be replaced with AI?" Any white collar worker would be foolish to think that's something their employer has not considered. Corporate advertising is pleading with them to reconsider multiple times a day.
Exactly, and this process keeps happening under capitalism, making AI neither unique nor truly new in its social repercussions. Therefore the answer is socialism, so that tech frees us instead of creating crises.
Although the companies that bought into the promise of replacing labor are now walking it back as they realize it doesn't replace labor but enhances it. It's like Zuckerberg not allowing his kids on Facebook: AI companies are not replacing their own employees with AI either, but they sell the package because capitalism needs to make money, not social good.
You sure about that? I mean, they're obviously hiring more because they have the investors at the moment, but that doesn't mean they aren't using AI internally.
They're rehiring now. For example, Klarna laid off their customer service reps to replace them with AI, but they're walking it back and rehiring human reps (tbh klarna has other problems right now lol).
Klarna is not an AI company. I was asking if AI companies really weren't replacing their own employees with AI.
I read too fast lol. I was talking about the engineers that work on the models in this case; tech companies would never replace them with AI because they know it wouldn't work out.
But I looked more broadly into it and couldn't find any source saying the mass layoffs we're currently seeing in tech are replacing jobs with AI; rather, it seems they're getting rid of the jobs entirely, as happens routinely in the industry when it shifts to another focus (which currently is AI). There's Amazon, who is building automated warehouses, but YMMV; they also started on those before AI and have been at it for a while.
For new AI companies like openAI, the jobs they are giving to AI (such as customer service) were never created in the first place, so it's not replacing a worker, since the job never existed.
I mean my employer can replace me with AI. It WILL break.
Allegedly the AWS outage was after they replaced a chunk of their team with AI.
Good.
It's not feedback. That's not what the tool is for. It doesn't have an opinion. There's no one on the other side of the screen. The "A" stands for Artificial.
no the "a" stands for "a piece of shit marketing guy scammed a bunch of people"
That's still feedback.
Feedback can be based on very simple rules and still be feedback. Or, in the case of LLMs, on piles of statistics.
I don't see that as feedback but it depends on your definition of feedback. Just having something come out of the AI is not feedback to me.
A writer writes something. An AUDIENCE provides feedback on their writing. An AI can process the writing but can't be an audience, because it is just a tool. Just because the AI returned some text won't change that fact, regardless of the content of the text.
Data is feedback for example. If you change something on a web page and notice a huge drop in visits then that provides actionable information, i.e. feedback. The visitors didn't vocalize it, you only see it as numbers on a spreadsheet.
True but OP isn't using AI to collate or analyse data of the visitors to his website.
As I said, it's how you use the tool. Not every use case is valid. In a LOT of cases AI is not useful or efficient, and it's sometimes doing more harm than good.
A recipient responds. If you consider the response to adapt your work (current or future), that's feedback.
Not to me. As I said, my definition of feedback is a lot tighter.
And tbh im not looking for opinions or human interaction on there im just looking for something that says my posts in another way for idek what reason but a human would get uncomfy reading them so yeah
Side bar: This is a very specific usage of GenAI. Are you like writing your diary into ChatGPT?
There doesn't need to be an alternative option on offer. I don't support genAI because it's flooded the internet with fake content that has no label to differentiate it. It's irreversible.
There is stuff like spellcheck and LanguageTool which can give you a specific variety of feedback.
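As a sketch of what that kind of mechanical feedback looks like, here's a hedged example using the language_tool_python wrapper (assumption: the package is installed; it downloads and runs a local LanguageTool server, which needs Java):

```python
# Minimal sketch: grammar/spelling feedback from LanguageTool.
import language_tool_python

tool = language_tool_python.LanguageTool("en-US")
matches = tool.check("This sentense has a speling mistake.")
for m in matches:
    # Each match carries the rule that fired, an explanation, and suggestions.
    print(m.ruleId, "-", m.message, "->", m.replacements[:3])
```

Unlike an LLM, this gives deterministic, rule-based feedback: the same input always produces the same corrections.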
It has a flattening effect. The things that come out the other end don't sound human. They sound like the collective mouth of reddit and blog spam.
I don't know why you'd use it for journaling. What feedback do you even need for journaling? Shouldn't that be your own thoughts, not your thoughts filtered through a disembodied machine of averages?
That's kind of like how people were talking about how genai always fucks up the hands in pictures. I don't think that's permanent.
Yea so i type my natural thoughts in and tbh i have some of em that i currently dont share w humans cuz they're kinda sensitive and i dont use genai for emotional attachment just to see them written in a different way i guess
genai turned the internet into a hellhole. nothing is genuine. information became worthless. facts don't matter anymore.
it carries itself into the world outside the internet. slopaganda, decision making and policymaking are affected by genai and will make your life actively worse.
welcome to the post-fact world where you can't even trust yourself.
This is the correct take from an ML perspective (essentially an extension of the fact that we should not lament the weaver for the loom):
https://redsails.org/artisanal-intelligence/
The problem is not the technology per se (criticisms such as energy consumption or the limitations of the tools just mean there's room for improvement in the tech or in how we use it) but capitalism. If you want a flavour of opinions on this, click on my username and order comments by most controversial for the relevant threads.
https://lemmygrad.ml/post/9364892/7113860
lol
do not use an LLM for whatever the heck you think you're doing with it.
Why not?
you do not seem to understand what the technology is or its limitations. Other commenters have corrected your farcical assertion about
Please come back when you can comment without insulting people's intelligence, this doesn't add anything to the conversation. This isn't reddit.
i didn't insult their intelligence at all. plenty of people are fooled by the talking scrabble bag, this is a matter of education and credulity.
Stop projecting yourself onto others. The other commenters that you are referencing were corrected as well for their own farcical and fallacious assessments.
I have one that knows a lot of things about how to spy on the cops during an uprising
Those people are usually Westerners that take the easy route which is to blame a tool for the issues caused by capitalism.
However, if you look beyond the small western world into countries like China, Cuba, Vietnam and others in the global South, AI, including genai, is celebrated. You can find plenty of content in Xiaohongshu with comments fascinated with the inventions of people.
One example of this is this song created by a person that used AI for the production of it:
This is another where someone produces a video regarding neoliberalism to educate:
There is even a YouTube channel called Dialectical Fire that posts incredible content using AI.
All I know is that this new form of luddism will dissipate into history similarly to the luddism of centuries past.
You're aware that the luddites were correct, right? They weren't vulgar technology haters, they had valid concerns about their pay and the quality of the products produced (actually an excellent comparison to many people who oppose LLMs), which turned out to be accurate. The idea of luddites as you use it here is explicitly liberal propaganda used to smear labor movements for expressing valid concerns, and they didn't dissipate into history, there were and are subsequent similar labor movements.
The point is that even though the concerns the luddites had were correct, their methods were not. Hence why they failed. Now, people are trying to do the same things that we know don't work.
Can you elaborate, dear? Why do you say their methods were not?
Luddites attacked machinery, blaming it on their declining quality of life. The correct approach is to attack the capital relations directly, ie to attack the capitalists themselves, and take hold of the productive forces built up by capitalism already, directing it for the good of all rather than the profits of the few.
I mean, the combative communist cells here have blown up weapons industry plants. It's true they "failed" but it gave people hope.
That's different entirely though, sabotaging the war machine is entirely different from attacking factories for consumer commodities.
Of course. I'm just saying sabotaging the tools of production can't be "that" useless, can it?
Their methods failed to effect structural change, and their whole movement was ultimately swept away.
I think Yogthos and Cowbee said it well, but I just wanted to add some of Marx's thoughts from vol 1, chapter 15 in regards to luddites if you haven't read it, forgive me if you already have.
::: spoiler spoiler About 1630, a wind-sawmill, erected near London by a Dutchman, succumbed to the excesses of the populace. Even as late as the beginning of the 18th century, sawmills driven by water overcame the opposition of the people, supported as it was by Parliament, only with great difficulty. No sooner had Everet in 1758 erected the first wool-shearing machine that was driven by water-power, than it was set on fire by 100,000 people who had been thrown out of work. Fifty thousand workpeople, who had previously lived by carding wool, petitioned Parliament against Arkwright’s scribbling mills and carding engines. The enormous destruction of machinery that occurred in the English manufacturing districts during the first 15 years of this century, chiefly caused by the employment of the power-loom, and known as the Luddite movement, gave the anti-Jacobin governments of a Sidmouth, a Castlereagh, and the like, a pretext for the most reactionary and forcible measures. It took both time and experience before the workpeople learnt to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used. :::
Sadly, a lot of them still can't, as evidenced even in this thread.
I hope AI discussions in the future start to blossom into something better, because as it stands it just feels rather disheartening when things get this bad. I'm not sure what it is about AI that leads comrades to call other comrades "dumb" or make other dejecting statements elsewhere. It just gets very demoralizing.
I think it will pass in a few years max. The AI bubble will either break or be deflated, the Chinese will improve their tech even more, online shitstorm artisans will either find a new niche or get a job, AI creations will get less recognizable, and so on.
It will improve, but I feel that some comrades are disconnected from stories where AI is having a positive impact on people's lives. Stories that will make them ask: "Why are there good stories in China but not in the West? What is missing for the West to have similar stories?"
I try to post a lot of AI news in c/Technology but it clearly is not working. Sorry for the ping but @CriticalResist8@lemmygrad.ml by pure coincidence do you have any ideas on how to improve visibility regarding AI?
I appreciate the view and the tactical point; there is of course truth in there. But there's also the matter of ability. Imagine being a resistance member during a fascist takeover. Every bit would count, wouldn't it? I can't blame the ghetto kids for burning cars, if you understand me.
I'm not sure if I entirely understand? What you're describing sounds like a different situation compared to talking about AI and the means of production in general. It sounds like you're describing war or guerrilla warfare, in which case every bit does count, like you said. And to come back to AI: fascist states are using it for fascistic purposes, like what the ruling class in the United States is doing with it. But I feel like that has less to do with the tool, considering AI has a wide variety of uses (like what China is showing can be done with it), and more to do with the people employing it for fascistic purposes. In turn, that has more to do with resisting oppression and fascism in general and less to do with AI. I'm not sure if I properly articulated my thought.
I'll answer more thoroughly later, but don't worry about it, I was just thinking aloud.
The luddites were dead fucking wrong. Instead of seizing the means of production, they thought smashing them would solve their woes. It doesn't matter that the luddites were skilled machine operators with a rudimentary form of class consciousness; their understanding of the issue was idealist and therefore opposed to Marxism. Luddism is liberalism.
Adding to what other comrades eloquently explained, we should not repeat the mistakes from the past but actually learn from them.
Instead of focusing on the tool and projecting the evil from capitalists into it, we have to build a labor movement with the intention of seizing the means of production and to fight the capitalists and the imperialists. Socialism is the only way out and the proof of its success is already showing in places like China. Examples:
Working toward a world where the workers own the means of production is a better endeavor than destroying the means of production and sabotaging the tools as those luddites did.
Why would you want to outsource one of the last vestiges of being a human we have left (thinking) to a 3rd party of any kind?
I don't care if it's an AI or an underprivileged person in another region of the world, get that shit out of here. The internet and similar tools of isolation are bad enough, now we're being handed keys to an artificial friend keen on severing our social connections and ability to think on our own.
I think about it too i only ask the robot after i alr thought about it
People fear that they're gonna lose their job that consists 99% of sending and receiving emails and doing zoom meetings. They know their job is bullshit and replaceable.
GenAI can be kind of useful when you use it on purpose. But it will be used to make anything bureaucratic an even bigger nightmare. Take after-sales customer support: talking to humans in a call center at least incurs costs for the corporation; with genAI they can keep you in the loop forever for cents.
Unemployment claims, immigration, disability pay, hiring are also all made worse by AI.
They are coming for our jobs. Or at least they're making our jobs worse.
Energy, water, computation ...
I think this is one of the weaker arguments tbh.
Corporations get away with blatant mass theft of intellectual property
Destruction of social reasoning
Science & academia were already in a bad spot with the reproducibility crisis and fake/bad studies. Now this is automated.
Instead of letting humans do creative work, too much attention will be taken up reviewing slop.
This problem also exists in social and traditional media.
People also put a lot of implicit trust in AI answers when the answer might just be based on a whitewashed shitpost or wrong for other reasons. With web search it's easier to judge for yourself whether the source is to be trusted.
This will get worse in the future, either when companies learn how to manipulate datasets to get ahead (similar to search engine optimization) or when AI companies just straight up place advertisements in AI answers.
People replacing their human friends with AI friends and partners isn't healthy.
Also IP as a commodity needs to be abolished
Private property should be abolished.
eeeeeh disability claim is already a nightmare without AI, they never have enough people for the amount of cases with new ones coming in every day. As someone waiting for my disability claim putting AI in the process can't possibly be worse than what I'm going through already lol. it took them 3 months to send me a first update and it was to ask me some details about my past employment and nothing else. at worst the AI will deny me which a human will also do because they hope you don't appeal, but i assure you i will appeal any denial lol
And tbh i hate that. i hate this AI-fueled loneliness epidemic, i want to go outside and meet people rather than be stuck behind a screen talking with a nobody because everyone is abandoning human connection for a sycophantic robot
I think you're thinking of LLMs. GenAI is the stuff that makes deepfakes and "art"; LLMs make text.
I thought LLMs are just one type of GenAI. They all generate new media content based on existing media content. LLMs just specialize on text, while the other types of GenAI specialize in other types of media. At least that's how I understand the terms.
You might be right. Looks like LLMs are a subset of GenAI.
Thanks for confirming. I wasn't completely sure.
The Kavernacle has videos on this. He talks about how it's eroding emotional connection in society and having people offload their thinking onto chatgpt. I think this is a problem, but the issue i'm most passionate about is misinformation. In the process of writing this post i did an experiment and asked it some questions about autism. I asked them what autistic burnout is. They gave an explanation that's incorrect and furthers the incorrect assumption a lot of people make that it's something specific to autistic people, when it's actually a wider phenomenon of physiological neurocognitive burnout. I confronted them on this, they refined their position, then I asked them why they said it. It constantly contradicts itself and will just be like "yeah you are correct, i am wrong" while continuing to repeat the same incorrect claim. https://i.imgur.com/KINH7lV.png https://i.imgur.com/EHtDwNj.png According to chatgpt, their own sentence contradicts itself. They also proceeded to invent a new usage of a very obscure medical term that is not widely used, then tried to gaslight me into believing it's a commonly used term among autistic people when it isn't. https://i.imgur.com/LStZdNg.png
And what frustrates me even more is that a couple months ago i had someone swear to me up and down that the hallucinations in chatgpt were fixed and they ain't that bad anymore. Granted, they were far worse in the past. It literally told me the autism level system was something that no longer exists despite it being currently widely used.
But here's the problem. I am an expert on this topic. Most people aren't asking chatgpt questions about things they are an expert in, and they also are using it as a therapist.
All in all i wasn't expecting it to have no hallucinations, but i was at least expecting them to not still be a massive issue in basic information retrieval on topics that aren't even super obscure and that have widely available information.
Ultimately here's the issue. The vast majority of pro-genai people don't know what genai actually is, and therefore why it is bad to use it the way they are. GenAI is a very advanced form of predictive text. It just predicts the words it thinks follow the query, based on the terabytes, maybe even petabytes, of information it's scraped from the internet. Which means it's not really useful for anything beyond very basic things like generating simple ideas, summarizing an article or video, and very basic coding. I only dabble very lightly in programming, but from what i've heard actual experienced programmers say, trying to use chatgpt for major coding just means having to rewrite most of the code.
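To make the "very advanced predictive text" point concrete, here's a toy word-level sketch (real LLMs predict tokens using billions of learned parameters rather than raw counts, but the generate-the-likely-next-thing principle is the same):

```python
# Toy next-word predictor: sample the next word in proportion to how often
# it followed the current word in the training text.
import random
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ran on the grass"
words = training_text.split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    counts = follows[word]
    if not counts:  # dead end: word never appeared mid-text
        return random.choice(words)
    # Probabilistic sampling: the same input does not always yield the
    # same output, which is why LLM results aren't reproducible.
    return random.choices(list(counts), weights=list(counts.values()))[0]

word = "the"
out = [word]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```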
And honestly i think the main reason is that i feel like i worry too much about my own self-image and have had bad experiences with other hoomins on the interwebs being absolute assholes to me cuz i told them im a commie in confidence only for them to share it with their friends who then went on to harass me and idk i just feel like because the robot wont talk about me behind my back i dont feel as much weighing me down talking about more sensitive stuff to it
I thought you were a penguin? You're a fake penguin, a human pretending to be one.
Mr. The Fake Penguin, the answer is to find better friends. Maybe try joining an online communist group or something. Also i'm kinda confused, cause you talk like you are referencing something but idk what you are referencing.
I had some drama in the past with someone from the Roblox Elevator Community (i alr made posts about what a shithole that place is) and i thought someone was cool but they turned on me
Wait a sec, what you said up there sounds exactly like what that guy said to me that I'm "pretending to be Russian"
It was a joke, a la "on the internet nobody knows you're a dog". But adding to his point, yes, you need better friends.
Maybe join the Genzedong matrix server? It was a pretty chill place back when I used it.
Also if you have access, consider therapy. One of the greatest advantages psychologists have over LLMs is that they are able to disagree with you. That could help you with whatever thoughts you're struggling with, without having to care about being judged.
yeah i was in communist groups and still am, it was just that the roblox elevator community is a shithole filled with people who make being anticommunist their whole personality and harass anyone who dares to say anything positive about the CPC
You are, you admitted it: you are a human pretending to be a penguin.
ok why do you sound so eerily similar to that guy saying i'm pretending to be russian
not sure who you are talking about, i just don't like pretenders.
"Providing and alternative where you can get instant feedback while you're journaling" Forgive me, could you elaborate? I'm a little confused
I sometimes put some of my more sensitive thoughts into an LLM mainly cuz i dont want em going into the void and i dont want a hoomin getting uncomfy reading them either
GenAI is the highest form of commodification of culture so far. It treats all text, images, videos, songs, speech and all other forms of organic cultural expression as slop to be generated over and over without its original context. It provides little to no serious improvement in industry, and is only propped up despite no profits due to either artificial growth in internet platforms or unrealistic expectations from the AGI folks.
And it's inefficient. We could easily have more therapists rather than wasteful chatbots that cost billions. Such technology can only exist as a bandage over the ailments of neoliberalism, and is not a solution to anything. And that's not even going into the worsening impact of cultural imperialism due to the tendency of these models to reproduce Northwestern cultural hegemony.
The alternative is actually pretty simple: measures to lower unemployment. Most capitalist countries have issues with unemployment or underemployment. And most tasks of Gen"AI" can be done by paid humans quite well, possibly even at actually lower costs than what the informatics cartel is tanking in order to ride the bubble.
Human labour is what produces value. All else is secondary.
They are against it because anything bad that could come of further adoption of AI will happen if it's profitable for the capitalists. Even now NVIDIA is continuing to pump the bubble when everyone knows it's a bubble, because of the profits.
The prospects are entirely different in China, because capitalists are regulated. But nobody who is vehemently against AI is aware of the difference and likely will tell you China is capitalist. So all the problems that arise from capitalist control of AI are ascribed to AI alone as well.
The potential consequences are severely reduced because of this, but up to a point the government will allow capitalists to do the same harmful things they do in the west. They just won't allow them to collapse the whole economy and destroy the lives of half the country. That doesn't necessarily mean that the prospects of LLMs are fundamentally different, just that their potential fallout is much more limited because China is not capitalist.
China objectively treats AI differently than the West. Whether that constitutes a 'fundamental' difference I don't know, I'm not big on that word philosophically, but the fact that Chinese private companies are providing their models open source, and you can use them free of charge (the most you will pay is a very low fee for API usage), shows a huge difference: they don't consider it a product to make a profit on, i.e. a commodity.
AI is also not held solely by the capitalists in China, the government develops on it. Just recently Beijing schools started a pilot program to teach middle and high school students AI through science projects.
What we need to ask is what do they see in it, what this fills for them and helps them with.
the situation is fundamentally different in China https://dialecticaldispatches.substack.com/p/the-ghost-in-the-machine
Can you please give any examples of the Chinese government allowing capitalists to do harmful things to the Chinese people?
Commodification of housing is a recent example. They were allowed to do harm, up to a point. When the bubble popped, the state caught it to prevent it from doing massive harm.
This is a strange question to even ask. China has labor exploitation, China has capitalists, China has rent seeking behavior, these are all things that do harm people and the Chinese government acknowledges them. That's what the socialist market economy means, you take the bad with the good.
It might seem like a strange question, but people tend to have wildly different perspectives and understandings of China depending on what part of the world they're in, what the media they consume says about it, and whether they've lived there or not. So even if we agreed with each other it'd still be worth going through, because we're in a public forum where it's helpful for others to be more illustrative.
"up to a point" is doing massive work here.
China has capitalism: in Taiwan, in Hong Kong, and in Macao. When you cross the boundary between Hong Kong and Shenzhen it's like night and day. Shenzhen ain't no backwater either, but the cost of living, quality of life, environment, air pollution (the prevalence of electric cars makes so much difference that despite being less strict about smoking in public, it's still cleaner in SZ), advertising, cleanliness of public facilities, parks in walking distance of everywhere... when they say 'take the bad with the good' that doesn't mean not doing anything about the bad, and the difference a government can make in dealing with the bad is huge.
I agree with you, however they still tend to act mostly after some damage has been done. That's the purpose of the socialist market economy, to allow development of productive forces but catch it when it goes wrong. That doesn't mean capitalists are prevented from making harmful decisions to begin with. The difference with Hong Kong is that there's basically no catching, it just gets worse and worse.
As I said, a typical recent example was the housing bubble the government had to step in to catch. It was allowed to go on for a while before it was stopped, and it did harm people during that time.
It's not getting worse here in Hong Kong, it's slowly getting better. The British brainrot is taking a long time to unfuck; there are full grown adults who were babies at the handover passing down romanticised stories about British rule to their kids that they were told by their boomer parents, and the eldest generation who remember how it was practically apartheid pre-Maclehose reforms are all elderly. But the education system and the reality checks from what's happening in the States, Ukraine and Palestine are all pushing back.
genAI tries to computerise the only thing we can truly call human: abstract creative thought. So it's bad because it feels cold and inhuman and doesn't even do its job that well.
I scanned the thread and am not sure if anyone else has mentioned that GenAI is obliterating the value of creators' (i.e., visual artists, musicians, writers, etc.) labor. This is ruining people's livelihoods and will drastically reduce the amount of new human artistic output.
This is on top of the various other economic, environmental, ethical, and inaccuracy issues with it. This is all why I won't touch the stuff.
Meh, if your art is matched by genAI, it's time to do something else. Also, at the same time that genAI destroys the livelihoods of certain people, it can increase the livelihoods of others; think of someone that recently started selling a tomato sauce - with genAI they don't have to pay thousands of dollars to a graphic designer for a brand logo or a label.
I must be missing something and would like to understand this line of thinking. Do communists really think that the availability of easy knock-off logos and labels outweighs artists getting paid for their labor?
One would have to explain why it is not capitalism but the technology itself that is at fault. Marxists should be for the socialisation and automation of all labour; under capitalism this may mean unemployment, so for artisans the only remaining source of income is the defense of proprietorship, which is again reactionary.
For marxists the above should be relatively straightforward, but for the uninitiated it may take more reading around. In practice, though, our opinions often reflect our relative class positions or aspirations, i.e. under capitalism, "emancipation" or just protecting one's income in the long run often ends up meaning seeking to become a labour aristocrat (which here could involve gatekeeping skilled labour) or harboring bourgeois aspirations (protection of intellectual property).
One should not lament the weaver for the loom; society should advance to give each person the freedom to enjoy non-paid activities, and should increasingly advance so that we do not have to be paid in order to survive, i.e. working towards the abolition of wage slavery. However, under capital there is no mechanism for this, which results in the increasing contradictions and immiseration of society.
(Marxism is a science, and in the above context it is the understanding of the mechanisms of capital and how we could potentially build socialism from that understanding)
Edited to add, in case you are not aware-
Thanks for the explanation.
One problem I see with comparing GenAI with earlier automation like the loom, etc. is that GenAI is in no way paying for itself so far, or any time in the foreseeable future. It currently loses money hand over fist despite massive breaks in the form of "free" source content and subsidized electricity. I imagine that those previous forms of automation were financially self-sustaining from fairly early after their invention.
Thank you!
There is an economic phenomenon called the Tendency of the rate of profit to fall (https://en.prolewiki.org/wiki/Tendency_of_the_rate_of_profit_to_fall) which correlates with what you are saying; under capitalism there is no clean way out of this, which is why you see AI market bubbles in the west that you don't see in a socialist country like China.
And it is also, as you have rightly pointed out, why the return on investment is generally significantly worse than it was before (as a general trend). If one does a course in business administration in the west, they do mental gymnastics about why the rate of operational profit (i.e. not the fictitious capital of share price inflation) in the IT sector as a whole is so abysmal, and they have ad-hoc theories with no predictive value to explain the phenomenon (i.e. they don't have a scientific approach).
You'll find, and pretty much every marxist will testify to this, that politico-economic phenomena and contradictions that seem puzzling under liberal economics not only have robust explanations in marxism but are also predicted by it, and the theory is continually refined to reflect observations better, as a science should be.
Edited to add - some further reading just in case you need them if you read the article in the above link:
Thank you for the additional analysis and readings! I need to read a lot more about all this and other relevant topics and theory.
Yes? I certainly prefer an abundance of stuff over scarcity. If you had asked me whether the availability of mass-produced furniture outweighs artisan carpenters getting paid for their labor, i also would've said yes.
GenAI can be used to generate the boring art that no one wants to do so artists can dedicate themselves to making novel art.
Fair enough, I get your point.
That's a loaded question; 'knock-off' is a value judgment.
Artists don't form a class - no profession does. Among artists, you will find proletarian, bourgeois and petit-bourgeois artists. So we must first ask: which artists are we talking about protecting, exactly?
Artistry is also not the first job/skill that's being automated. If the only problem was 'livelihood' (that is, per your usage of the word not the labor-power one is able to provide but the current work for which they are paid) then we might as well destroy any piece of technology that makes a job more accessible so that people can remain experts in their field and ensure they keep a job in their very specialized skill, and that would immediately revert us back to the feudal period where everyone worked the fields and the women also mended clothes on the side for some petty cash. And you died at 30 of dysentery.
But that's not how capitalism works; capitalism tends towards monopolization. None of what AI is doing right now is entirely new or unique. Capitalism itself started by obliterating many jobs (proletarianizing people in the process, getting them off their subsistence farms and looms and into factories working for a wage), and yet we still consider it progressive because it lays down the conditions for socialism and then communism. It created a proletariat for which the ideology could be laid down: marxism.
I think the question no one has asked yet is what would people who don't like AI want to see done about AI? You can certainly try to legislate, but we know how laws go. Get another party in at the next elections and the laws change entirely. You can try to ban AI, but other countries will keep using it and something else will eventually pop up and spark the same debate. The mechanical loom and the steam machine are just two historical examples. Internet was considered a novelty in the early 90s and people thought it wouldn't last.
But you can't undo the contradictions, and the wheel keeps turning. Whether AI will endure is something that is not dependent on our arguments, whether pro or against. As communists we understand automation will make communism possible, and it's only under capitalism that it leads to the destruction of jobs (AI hasn't really led to job destruction but that's not the main point here). Under communism automation replacing your job means you still get the result of this automation and also you don't have to work anymore, in a nutshell.
These are all contradictions of capitalism Marx highlighted before and this is why the solution is socialism.
I also highly recommend this essay which is easy to follow and highlights these contradictions about AI, artists and capitalism: https://polclarissou.com/boudoir/posts/2023-02-03-Artisanal-Intelligence.html
There is certainly a lot to say about AI (and technological advancements) in capitalism, but our sights are on socialism.
Thanks for the explanation and for the link.
Regarding this question, one possible answer for me is that I would like AI to pay its own way. Right now, AI is losing money hand over fist despite massive subsidies in the form of "free" source content and shared energy expenses, so in its current state it's completely unsustainable and leading toward a major rug pull from under everyone.
I can definitely agree to that, and I also think these are important questions to ask but a lot of the discussion I see revolves around artists specifically (and specifically illustrative artists), leaving little space for other stuff. So thanks for answering sincerely.
The function of the state is to reconcile class differences, but class differences are irreconcilable. How can you make a proletarian rationally agree that they should be paid less and the bourgeois more? All history of civilization is the history of the class struggle; in capitalism the state works for the bourgeoisie, and in socialism it works for the proletariat. So while I would also like for AI companies to struggle their own way on the market (they couldn't compete against China anyway), they wield enormous power over the state, especially the legacy companies, and get whatever they ask for. In the same way Musk gets tons of subsidies for SpaceX and Tesla - the point is to funnel money to the bourgeoisie.
Therefore, objectively speaking, I think China is the single biggest factor in deflating the oversized western AI industry. Project Stargate was announced ($500 billion for AI over the coming years), and literally a week later DeepSeek came out and demolished the premise before it even took off. It was built for a fraction of the cost of GPT with a fraction of the hardware, and suddenly a $500 billion investment didn't seem like such a good idea anymore. In fact, Chinese models are largely open source and so cheap to run that they barely charge anything. It's hard to compete against free.
What we'll see soon enough in the west, however, is monopolization of AI: fewer companies will remain, and it will be mostly controlled by one or two of them (probably Microsoft and Google, maybe Meta). I can't really say yet what the consequences of that monopolization will be.
It's not like there are a lot of new model creators out there either; the costs are prohibitive, and companies like OpenAI are hemorrhaging money. Their $20/month subscription tier is never going to fill that massive, bottomless hole (they'd need something like 35 million subscribers just to break even), so they rely on both private and public funding. Even after releasing the semi open source gpt-oss, which boasts 120 billion parameters, the most popular open source models remain z.ai, DeepSeek, and Qwen, all Chinese, and you can run at most a 20B model on consumer hardware (with a $1,500 GPU). So even that bet didn't take off; nobody seems to be using gpt-oss beyond the novelty.
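For anyone who wants to sanity-check those figures, here's a minimal back-of-envelope sketch in Python. The subscriber count is the claim above; the 4-bit quantization rule of thumb and the ~20% memory overhead factor are my own assumptions, not anything the vendors publish:

```python
# Back-of-envelope check of the numbers above. The quantization and
# overhead figures are rough assumptions, not published specs.

MONTHLY_PRICE = 20            # USD per subscriber per month
SUBSCRIBERS = 35_000_000      # break-even figure claimed above

annual_revenue = MONTHLY_PRICE * SUBSCRIBERS * 12
print(f"Implied annual revenue: ${annual_revenue / 1e9:.1f}B")  # ~$8.4B

# Rough VRAM needed to run a 20B-parameter model locally, assuming
# 4-bit quantized weights plus ~20% overhead for the KV cache and
# activations (a common rule of thumb, not a hard spec).
params = 20e9                 # 20 billion parameters
bytes_per_param = 0.5         # 4-bit quantization = half a byte
vram_gb = params * bytes_per_param * 1.2 / 1e9
print(f"Approx. VRAM for a 20B model: {vram_gb:.0f} GB")  # ~12 GB
```

Under those assumptions, 35 million subscribers works out to roughly $8.4 billion a year, and a 4-bit 20B model needs on the order of 12 GB of VRAM, which is exactly the territory of a $1,500 consumer GPU.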
Thanks again for the additional analysis. There's a lot to think about. I really need to study AI more so I can understand it better and make better critiques as well as know where it's actually most useful.
no problem, thanks for reading.
Humans like to think they're special somehow. So they need a way to define a human that excludes everything else.
And that's sentience!
Diogenes with a chicken "Behold, a human"
Whoops. Don't want them pesky animals in there. Let's try something else.
It's intelligence!
Diogenes with a LLM "Behold, a human"
Whoops. Well then...
It's creativity!
Well, what's that?
It's the ability to imagine whatever you want and then make that.
... Okay that sounds cool. Look what advances in AI enabled me to make.
Wait now billions of people can express their creativity and become competition. Let's dehumanize them: "you didn't make that, the machine made it, and the definition of creativity is now drawing skill."
Diogenes with a diffusion model "Behold, a human"
AI is a tool made for the accumulation of capital; it's like a factory. It's not a sentient being.
"Marxists" in the 21st century looking at a drill press: "behold, a human".
We're fucking doomed lmao. https://hexbear.net/pictrs/image/c6fbec04-59d4-42be-a2dd-f2a035aafa9c.png
It's not sentient, I never said it's sentient.
cringe take, sorry
People unironically say shit like LLMs are intelligent and it bums me out so much. It makes me wonder what they think other people are and what they believe they're doing when they have a conversation with someone.
Unironically I would rather people be conventionally religious in a toxic way than say things like this. At least even the most reactionary catholics don't (or shouldn't) think humans are just sophisticated computer programs. How are tech nerds able to do worse than the fucking catholic church?
AI Marxists are unironically becoming a Cult Mechanicus and all it took was the plagiarism machine telling them that they’re very special, very smart, good boys.
The Silicon is what gives Life; after all, it carries out all those thousands of floating point Instructions that are required for the MatrixBrain to work!
Humans are a specific species; it's not hard to define homo sapiens in a way that excludes everything else.
Humans (among others) are capable of generating new ideas. LLMs recycle. LLMs are like drill presses: they do no labor (and create no value) on their own.
Mindlessly regurgitating Wikipedia and Reddit comments is not “intelligence”.
There’s no thinking going on there. LLMs can’t think, reason, or utilize logic. They are fundamentally incapable of doing so.
Also, saying that diffusion models create art with good drawing skills is hilarious.
are you saying you believe chatgpt is sentient
No, it's not sentient.
Diogenes would have pissed on the keyboard and said something about the cops being pigs.