Crush+DeepSeek diagnosed problems with my Linux system, and as a bonus I learned it is prevented from running some commands (+ tips for Windows switchers and new Linux users)
Yet another "small" post to help people out.
Since DeepSeek released 3.2 -- you are automatically upgraded to it with the same API key; you can tell you're on it because it thinks even during tool calls -- I decided to try some more down-to-earth tests with it. The Math and IT Olympiad benchmarks are probably great, but they're not really how people like you or me are going to use an LLM.
I opened up crush in Documents/System report, and gave it the following prompt:
Can you find any problems, errors, mistakes or anything that could be fixed on my system? As you are on a LIVE system, you are NOT ALLOWED to make any changes or fixes yourself - you are only tasked with finding these problems out and writing a fixing_report.md file. This report will contain every problem you find as an individual entry. You will write: how you found it (including the commands to find it again), the problem you found, what this problem impacts, the severity of impact (your pick between Almost none, Low, High, Critical) and what the fix is. Again, no fixes yourself - just a list. To begin, confirm the directive and then run a system check to check what system you are on.
(the caps are probably not needed anymore with these new models but it doesn't hurt to do it. It generally makes them pay more attention to the term.)
And it did it perfectly well. Of course, I only want a list because I want to review the diagnosis point by point and determine myself whether each finding is actually a problem that I need/want to fix.
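For context, the read-only checks an agent can run without elevated permissions look something like this. These are my own illustrative picks, not a transcript of what it actually ran:

```shell
# A few read-only diagnostics of the kind an agent can run without sudo
# (illustrative examples only):
command -v systemctl >/dev/null 2>&1 && systemctl --failed --no-pager            # services that failed to start
command -v journalctl >/dev/null 2>&1 && journalctl -p err -b --no-pager | tail -n 20   # errors since boot
df -h     # disk usage: a nearly-full root partition is a classic finding
free -h   # memory and swap pressure
```

None of these change anything on the system, which is exactly why they pass through the command filter.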
As a bonus I noticed that it was simply prevented from running some commands. Sudo is just blocked, and some commands like 'apt' are blocked whether they use sudo or not. So that's generally good and shows it can't wreck your system... at least, I think it can't. I don't know the full extent of which commands it's allowed to just run.
For those commands I had it write them to another file, manual_verif.md, so I could review and run them myself. YMMV as to how you want to achieve that, because in my testing it seemed to just write commands it thought it could not run itself, without actually taking them from its previous output. I think this is a downgrade from 3.1 because it encloses more in thinking and less in actual terminal output; DeepSeek cannot "see" its own thinking or act on it later. Definitely wasted tokens on that, so I think you should tell it from the get-go in the first prompt: if you are prevented from running a command, write it to manual_verif.md for manual review later.
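The behavior I wanted from the agent can be sketched as a tiny shell helper. The checklist format and the example sudo commands below are my own illustration, not what it actually wrote:

```shell
#!/bin/sh
# Sketch of the desired behavior: instead of running a blocked command,
# the agent appends it to a review file as a checklist entry.
REVIEW_FILE="manual_verif.md"

log_for_review() {
    # $1 is the command the agent was prevented from running
    printf '%s `%s`\n' '- [ ]' "$1" >> "$REVIEW_FILE"
}

# Example blocked commands (illustrative; smartctl needs smartmontools installed):
log_for_review "sudo dmesg --level=err,warn"
log_for_review "sudo smartctl -H /dev/sda"

cat "$REVIEW_FILE"
```

The point is that the entries come from commands it actually tried and got blocked on, not commands it merely assumed it couldn't run.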
Btw if you're on Linux, install Timeshift. It saved me already (a Flathub PGP key error the other day - not sure if it was just Flathub being slow or what, but restoring to a snapshot from just a few hours earlier fixed it completely). Then use another backup app to back up your home folder, which is where your personal files live. Get an SSD just for it; it doesn't need to be big - even with hourly backups, Timeshift snapshots barely take any space on that SSD since they are incremental and only save the actual changes from one snapshot to the next. 100% peace of mind and you won't even notice it's running.
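If you want the command-line side of Timeshift, a rough starter looks like this - the flags are from my memory of the tool, so verify them against `timeshift --help` before relying on them:

```shell
# Rough Timeshift starter (flags from memory - check `timeshift --help`):
if command -v timeshift >/dev/null 2>&1; then
    echo "timeshift is installed"
else
    echo "timeshift not installed - on apt-based distros: sudo apt install timeshift"
fi
# sudo timeshift --create --comments "before tinkering"   # take a manual snapshot
# sudo timeshift --list                                   # list existing snapshots
# sudo timeshift --restore                                # interactive restore
```

The GUI does all of this too; the CLI is just handy if you want to snapshot from a script before risky changes.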
Linux with LLMs makes the switch from Windows achievable for anyone, and I'm slowly learning the new system day by day. This makes the learning curve a very gentle slope, esp. on Zorin, which is made for Windows switchers; I got my bearings very quickly on that OS, and in just a couple of days I was already tweaking the GNOME DE with extensions and giving wider perms to Flatpak apps so I could install Steam games outside of the Wine sandbox in ~/ -- and if you don't know these terms, I legitimately learned them in just a couple of days when I needed to.
No need for a long beginner's tutorial before you even touch the OS to get on Linux these days, with good distros and LLMs (though I already had minimal experience, e.g. using the app store to install apps and not downloading them from the internet like you'd do on steam). The installation was kind of a mess, though - not because of the partitioning, but because you actually don't want to connect to Wi-Fi and install updates during the install even if they say you can. You just do it yourself with sudo apt update and then sudo apt upgrade once it's installed.
optissima @lemmy.ml - 2w
Okay but what "issues" did it find?
CriticalResist8 - 1w
I won't post the actual issues, but it was very in-depth, and although some of it was overkill (if it ain't broke don't fix it, as they say), it did find a bunch of varied stuff -- all from terminal commands, so everything is reproducible and it didn't have to invent tools for the job.
CriticalResist8 - 2w
As for the diagnosis, if you want to go further: I decided not to waste more context window and tokens on this (although it only cost something like 5 cents on the API, since a lot of it was cache hits, i.e. repeated tokens sent to the API). It got completely bonked; I could have started a new session from scratch, but eh, I just pressed escape to stop it from running commands over and over again. And tbh that is a blemish: it started running and re-running commands instead of using both files. This can assuredly be fixed by a tweak in the prompt, but it still bothers me that I spent so many tokens on something that ultimately 50% failed (the manual diagnosis part). It's not a huge cost or anything; it's just something it should be able to take into account itself, since it knew the automated diagnosis file existed.
Anyway, at this time I have a list of diagnoses as per the initial prompt - both diagnoses it could run itself, and diagnosis commands I need to run myself. I won't post an example here because that's just good security practice, but it followed the prompt when making the list: how it found the problem, what the problem is, the impact, the severity as per my 4 categories, and how to fix it.
From there it's up to your preference how you want to go through this list: your own Linux knowledge, Google, or more LLM. You can just go back into crush and tell it "fix error number X" and I'm sure it will do it, unless of course it runs into permission issues running commands. But personally I want some manual oversight, to learn the system and what I'm doing.
And let's be honest, for some of these commands I have no idea how many years of Linux you'd need before you could write them yourself. Or how many hours of googling just to find them in the first place! With agents you can do something else, like watch a video or play a video game, while it writes the file.
manual_verif.md was more of a mess, because it didn't really understand that I wanted it to write down the commands it had been unable to run earlier, not "assume" commands it couldn't run out of thin air. I can still use this file since it collects sudo diagnosis commands, but yeah, it kinda failed on that part and I decided to stop wasting tokens. What I'm going to do now is review and run these commands, write down the results, and have crush analyze just this output and tell me what it can about it, using the first prompt.
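That plan can be sketched as a small script - the filename and the example commands here are my own choices, not from the agent's output:

```shell
#!/bin/sh
# Run the manually-reviewed commands and collect their output into one
# file to paste back into crush for analysis.
OUT="manual_results.md"   # my own filename choice

run_and_log() {
    printf '\n## %s\n\n' "$1" >> "$OUT"
    eval "$1" >> "$OUT" 2>&1 || true   # keep going even if a command fails
}

run_and_log "uname -r"
run_and_log "df -h"
run_and_log "systemctl --failed"   # fails harmlessly on non-systemd systems

cat "$OUT"
```

One file with headed sections means the model gets the command and its output side by side, which is exactly what it needs to analyze the results.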
haui - 2w
I can see the appeal of this to a user who doesn't have essentially the same mind as the machine (I do somehow; I understand computers a lot better than people), and I have come around to legitimate uses of AI, like getting through to people on an emotional level, for example by making it print propaganda or making Marx and Lenin sing a song together.
I do however think that the more interesting find is how the core problem of "normal" users is that they don't fit with the machine's way of doing things, which could ultimately stem from capitalist exclusionism. Hear me out:
This is a thesis, not against AI but for a different view on its inception and use in the West.
Machines normally use binary (I always compare it to light switches in a row) and above that they use assembler, a really interesting but infinitely verbose way of writing code. This is even too bothersome for me, as it really means you need to repeat a ton of stuff.
But there lies the kicker: the machine code (assembler) is not just logical. It was thought up by a human (a woman, iirc) and therefore does represent the inner workings of that person's mind. If you go down the autism road with this and accept that a ton of programmers are in fact gifted autistics, this kind of gives a hint at what we are looking at.
The rise and hype of AI (besides billionaires pushing for it with insane amounts of money) might in fact be based on a messy interface that has not really been developed in a sane way, but through infinite abstractions which help more "normal" users and devs understand how things work. If the language concepts and education were instead more geared towards "typical" users and their way of perceiving the world, all these abstractions would not be needed and a more wholesome way of interacting would surface.
I think the underlying reason why this hasn't been done (why, for example, alternative and less purist shells like fish or zsh haven't found more widespread adoption) is that the vast majority is being kept stupid for the exact purpose of extracting the maximum amount of capital from them. Because if you taught children to use Linux, which is essentially a magic wand, you would have a really bad time trying to make them use your hastily thrown-together app which is slow and messy and extracts all their data.
And I think AI is kind of being used to get around this issue without actually tackling its core (diseducation). This is done with a lot of issues that capitalism produces and socialism actually solves. Instead of proper education, we have (badly) artificed the part of the brain that we don't allow people to actually develop; it has been outsourced to a computer that can be turned off by the wealthy. There is a movie that portrays this in a different way: "In Time".
The core reason I came up with this is that my first thought when I read this post was "why is this person spending so much time, effort and resources on so benign an issue?" and then it hit me: because for a grown-up, learning about computers can be impossible or at least insanely boring, the core concepts are missing (same with communism, btw), and finding quality education under the mountains of (not AI, this time) slop is also a huge issue.
This also perfectly explains why China is on a whole different plane of existence in terms of AI. Since education is not being kept away from people so they can't revolt or see their misery, the people are not as helpless, and AI is only used to meet their advanced needs, because their basic needs are being met easily.
TL;DR: Not saying AI bad. I'm saying I might be seeing the reason why AI is so successful in parts of society where it absolutely shouldn't be (imo) and where its application is rarely a net positive. The answer is capitalist diseducation (imo).
CriticalResist8 - 1w
I started typing up a huge post but I want to keep it more to the point. The tl;dr is: these tools allow us to solve problems differently than we could before, and this is where they find their place; everything else is on us to figure out. I don't think frustrations and bottlenecks will disappear under communism; they might look different than they do under capitalism, but they won't go away entirely. We've had word-of-mouth, then books, then web pages, then search engines, and now LLMs. And whereas books and web pages only contain the information they do and nothing else (and web pages can be updated in real time but books cannot), LLMs can present that information in a way that speaks to you personally, and let you make further queries on it.
As designers, we are trained to identify problems and think of solutions we can enact to solve them: this is the essence of design work. And eventually, we learn that there is always at least one solution to a problem. If there really isn't, it means the problem needs to be reframed. Delivering information is one thing LLMs can do, but they can also do translation work, statistical work - or as we see here, help people switch from Windows to Linux much more easily than was previously possible.
Personally, at this point the way I use LLMs and AI models is mostly to try and push them to their limits. The usual arguments I read, like "they think for you so you stop thinking", or "but you could have found the answer on Google", or "it's not actually art", are so far behind where the goalposts currently are! They're already obsolete arguments: when I generate an AI image, I know it has errors, but I generate it to test a specific thing - like a way to prompt, or whether it can do pixel art, or whether it understands complex prompting (e.g. a picture within a picture within a picture, all in different styles). I don't even care if it's art or not; that doesn't make the image "unexist", and I'm trying out a tech task, so questions such as "is this illustration art, did I put effort into it" don't even enter my mind.
Some people might see the image and dismiss it with "it looks bad here or there" or "I don't care because it's not actually art", but imo they are closing themselves off. What can you do knowing that you can fix a Linux system? What ideas does it give you to solve your problems with? This is the real value. And ultimately we see that it doesn't matter where a given piece of information came from, so long as we have it in our arsenal. It's like how some comrades close themselves off from learning from liberals because they're liberals, when we see that the US army makes its officers read Mao and Che, among other generals.
Before LLMs, some people probably said "I don't care about Google, I have my books" (or even "I don't care about these new car things, I have a horse"). But Google can help you find more books, and can also do a lot of other stuff I can't think of right now - just because we can't conceive of these uses yet doesn't mean they don't objectively exist: our job is to find them.
Finally, to close this off, speaking generally -- I mean that I didn't get this impression from your comment -- I think there is a tendency among even Marxists to actually 'shut off their brain' as soon as anything AI pops up. "It's AI so I don't like it, goodbye". Like a tendency to scold comrades for using AI instead of directing that anger towards the meat or fossil fuel industries (the biggest polluters on the planet, if we care about the environment), or towards proprietary models that all have to compete against each other instead of joining forces. This is what I mean by "closing ourselves off". We leave this technology in the hands of the bourgeoisie because we think we're too good for it. They don't seem to think they're too good for it though, and they control the state.
Maeve - 1w
I actually agree with you. It's even in the way we are educated in basic arithmetic. The first time I saw a 10-minute video of a primary school teacher teaching multiplication using common core concepts, which teach children the reasoning behind each step, my immediate reaction was "I might be better at math if I'd been taught this way!" Instead I learned multiplication by rote, and of course common core takes longer to teach than "just add this amount each iteration until 12, then memorize the outcome by Friday's quiz."
In college (university) Soc 101, of course, we learned that the actual goal of capitalist education was to educate people enough to follow orders, but not enough to question those orders. Don't get me started on the so-called "gifted" programs, which in my area at least did nothing to further develop budding intelligence, but granted a couple of hours a week to do paper crafts, so it broke up the insane boredom.