58
2mon
10

DeepSeek AI Models Are Unsafe and Unreliable, Finds NIST-Backed Study

https://www.techrepublic.com/article/news-deepseek-security-gaps-caisi-study/

The article says that DeepSeek was easier to unalign to obey the user's instructions. It has fewer refusals, and they make that sound like a bad thing.

If anything, it's glowing praise for the model. Looks like Western media is starting a campaign to gaslight people into thinking that users being able to tune a model to work the way they want is somehow a negative.

Rom [he/him] - 2mon

Communism bad, finds CIA-backed study.

51
amemorablename - 2mon

Hey, I just thought of a more accurate meaning for the CIA acronym. Colonial Invasion Agency.

29
bunbun - 2mon

users being able to tune the model to work the way they want is somehow a negative

32
☆ Yσɠƚԋσʂ ☆ - 2mon

This is the problem with closed models controlled by corps in a nutshell.

29
LadyCajAsca [she/her, comrade/them] - 2mon

I find it funny that Musk keeps tweaking Grok to spout his beliefs but like, one answer that is remotely out of his 'reality' and there he goes in again.

9
☆ Yσɠƚԋσʂ ☆ - 2mon

that's absolutely fucking hilarious

17
ArcticFoxSmiles - 2mon

Sinophobes are mad that people are flocking to DeepSeek because it is open source, unlike ChatGPT.

14
Tofutefisk - 2mon

relevant-ish

Propaganda is all you need

“As ML is still a (relatively) recent field of study, especially outside the realm of abstract mathematics, few works have been conducted on the political aspect of LLMs, and more particularly about the alignment process and its political dimension. This process can be as simple as prompt engineering but is also very complex and can affect completely unrelated questions. For example, politically directed alignment has a very strong impact on an LLM’s embedding space and the relative position of political notions in such a space. Using special tools to evaluate general political bias and analyze the effects of alignment, we can gather new data to understand its causes and possible consequences on society. Indeed, by taking a socio-political approach, we can hypothesize that most big LLMs are aligned with what Marxist philosophy calls the ’dominant ideology.’ As AI’s role in political decision-making—at the citizen’s scale but also in government agencies—[grows], such biases can have huge effects on societal change, either by creating new and insidious pathways for societal uniformity or by allowing disguised extremist views to gain traction among the people.”
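A minimal sketch of the idea in the quote, that alignment can shift the relative positions of political notions in an embedding space. This is a toy illustration only, not code or data from the quoted paper: the term names and all vector values below are invented, and real evaluations would use an actual model's embeddings.

```python
import math

# Toy illustration (all vectors made up): if alignment shifts an LLM's
# embedding space, the *relative positions* of political notions change.
# We fake two tiny 4-d embedding spaces, "before" and "after" alignment,
# and compare pairwise cosine similarities.

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical embeddings before alignment: "socialism" sits near "democracy".
before = {
    "socialism": [0.9, 0.1, 0.3, 0.2],
    "democracy": [0.8, 0.2, 0.4, 0.1],
    "extremism": [0.1, 0.9, 0.2, 0.8],
}

# Hypothetical embeddings after a politically directed alignment pass:
# "socialism" has been pushed toward "extremism".
after = dict(before)
after["socialism"] = [0.3, 0.8, 0.2, 0.7]

def relative_positions(emb):
    """Cosine similarity of 'socialism' against the other notions."""
    return {
        other: round(cosine(emb["socialism"], emb[other]), 3)
        for other in ("democracy", "extremism")
    }

print("before:", relative_positions(before))
print("after: ", relative_positions(after))
```

Running this shows "socialism" moving away from "democracy" and toward "extremism" purely as an artifact of the (invented) alignment shift, which is the kind of relative-position change the quoted abstract says such tools measure.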

11
☆ Yσɠƚԋσʂ ☆ - 2mon

a good read

8