Thaura is not just more energy efficient than other AI platforms, it also uses less energy than humans doing the same task. While traditional web research might take you 10 minutes of clicking through pages and consuming cookies from major search engines, Thaura uses a fraction of the energy and provides the same information.
35
RedWizard [he/him, comrade/them] - 3day
Wait, you guys are getting cookies!?
8
WokePalpatine [he/him] - 3day
'It's not X, it's Y' two times in a row.
4
ZeroHora @lemmy.ml - 3day
This shit screams scam. A German-based company with 3 people that made an ethical AI that consumes less energy than the competitors.
27
gay_king_prince_charles [she/her, he/him] - 3day
This is absolutely a scam. It's just a GLM-4.5 Air fine-tune (or maybe just a wrapper) for $0.5/$2 (input/output, per million tokens). The base model they used is free on OpenRouter, and the decision to keep the weights closed makes me suspicious. Also, anyone who pays for a 12B active parameter model has more dollars than sense.
8
BountifulEggnog [she/her] - 3day
Double the price of z.ai's official API. GLM 4.6 (non-Air) is only a little more than this at $0.6/$2.2. What a meme.
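The price gap per request is easy to put in concrete terms. A quick sketch using the per-million-token prices quoted in this thread (the token counts are just an illustrative example):

```python
# Prices as quoted in the thread, USD per million tokens:
#   Thaura:          $0.5 input / $2.0 output
#   GLM-4.6 (z.ai):  $0.6 input / $2.2 output
def request_cost(input_tokens, output_tokens, in_price, out_price):
    """Cost in USD for one request, given per-million-token prices."""
    return (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price

# Hypothetical request: 10k input tokens, 2k output tokens.
thaura = request_cost(10_000, 2_000, 0.5, 2.0)
glm46 = request_cost(10_000, 2_000, 0.6, 2.2)
print(f"Thaura:  ${thaura:.4f}")   # $0.0090
print(f"GLM-4.6: ${glm46:.4f}")    # $0.0104
```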
2
unmagical @lemmy.ml - 3day
Where does its training data come from?
18
gay_king_prince_charles [she/her, he/him] - 3day
GLM
2
ClathrateG [none/use name] - 3day
Sorry kid I already have DeepSeek
13
SouffleHuman @lemmy.ml - 3day
Most AI platforms use massive models with trillions of parameters that activate all their computational power for every single query.
The first part is probably right: frontier models are likely around a trillion parameters total, though we don’t know for sure. But I don’t think the second part is correct. It’s almost certain the big proprietary models, like all the recent large open models, use a Mixture-of-Experts system like Thaura, because it’s just cheaper to train and run.
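The "only activates a fraction per query" idea is just sparse routing: a Mixture-of-Experts layer holds many expert networks but sends each token through only the top-k of them, so most weights sit idle on any given forward pass. A toy NumPy illustration (sizes and routing here are purely hypothetical, not Thaura's actual configuration):

```python
import numpy as np

# Toy sparse Mixture-of-Experts layer: 8 experts exist, but the router
# activates only the top 2 per token, so 6/8 of the expert weights
# contribute nothing to this forward pass.
rng = np.random.default_rng(0)
n_experts, top_k, d = 8, 2, 16
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
router = rng.normal(size=(d, n_experts))

def moe_forward(x):
    scores = x @ router                   # router logit per expert
    active = np.argsort(scores)[-top_k:]  # indices of the top-k experts
    weights = np.exp(scores[active])
    weights /= weights.sum()              # softmax over the chosen experts
    # Only the selected expert matrices are ever multiplied:
    return sum(w * (x @ experts[i]) for w, i in zip(weights, active))

x = rng.normal(size=d)
y = moe_forward(x)  # shape (16,); only 2 of 8 expert matrices ran
```

Scaled up, that is the same trick behind "100 billion total parameters, 12 billion active": the full weight set must be held in memory, but each query pays compute only for the routed slice.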
While traditional web research might take you 10 minutes of clicking through pages and consuming cookies from major search engines, Thaura uses a fraction of the energy and provides the same information.
This part is pretty misleading. It is very unclear how the energy use of an LLM query compares to a search, but even assuming they’re about equal (most estimates put LLM queries higher), the LLM also has to do its own web searches to find the information if you’re using it for research purposes, so that point is fairly moot. Also, the “consuming cookies” part isn’t really an energy problem but a privacy problem, so I’m not sure why it’s used in this context.
Thaura uses a “mixture of expert” models with 100 billion total parameters, but only activates 12 billion per query.
Going to the actual website, it does credit the “GLM-4.5 Air architecture”, but the article doesn’t mention GLM, or the company behind it (Z.ai) at all. Given that this is likely a finetune of the GLM model that was freely released, it feels weird how the Thaura team seem reluctant to give credit to the Z.ai team.
These companies are often controlled by US-based corporations whose political stance supports occupation, apartheid, and Western hegemony - not human rights or global justice.
Reading below and also looking at their website, the hosting and inference is done by US firms (DigitalOcean, TogetherAI) in datacenters hosted in the EU. That’s not inherently bad from a privacy standpoint due to encryption, but it does feel disjointed that they are railing against US firms and western hegemony while simultaneously using their services for Thaura.
While I don’t think the Thaura team had bad intentions in fine-tuning their model and building the service, I feel that this is a pretty misleading article that also doesn’t give any significant details on Thaura, like its performance. They also haven’t given back to the community by releasing their model weights, despite building on an open model themselves. Personally, I think it’s better to stick to Z.ai, Qwen, DeepSeek, etc., who actually release their models to the community and pretrain their models themselves.
12
abc [he/him, comrade/them] - 3day
ethical AI is an oxymoron
11
9to5 [any, comrade/them] - 3day
I don't really doubt that this model "might" be more ethical than big models. What I doubt, though, is that it tells the truth.
9
21Gramsci [he/him] - 3day
Really fucking disappointed that this is what Tech4Palestine is doing with their resources. I had a lot of respect for Paul Biggar (the biggest name associated with T4P) being one of the few tech execs who actually sacrificed his job and a lot of his network to speak up for Palestine. I liked the idea of getting volunteer tech workers together to make useful resources for pro-Pal activism, and I was even thinking of reaching out to volunteer when I had some spare time...
But this is what they're doing? Another fucking chatbot? Fuck this shit man. How the absolute fuck does this help anybody in Palestine? I'm going to lose my mind.
It's incredible how LLMs have completely colonized the imagination space of innovation. Everyone in the tech field, even the most well-meaning people, seem to have completely forgotten how to imagine anything new that isn't somehow a chatbot. It's a situation so stupid and bleak that I don't know of any work of sci-fi that posited such a future, no matter how dystopian.
Burn this industry to the ground, salt the ruins, and send every programmer to the re-education camps, me included. Every tech worker should be forced to read books until they can come up with a single innovative idea that isn't a chatbot.
ahrienby in technology
Announcing: Thaura - Your Ethical ChatGPT Alternative
https://updates.techforpalestine.org/announcing-thaura-your-ethical-chatgpt-alternative/
All I need it to do is understand SQL