Comments
Ask it about Tiananmen Square. Ask it about anything that happened in China between 1980 and 1990.
So does it talk about dancing Israelis?
Maybe if the US spent more time on innovation, mathematics, etc., rather than pronouns...
retoor: @fruitfcker what happened to your user score? You had a lot of points, right? Or aren't you the only one with a name like fruitfucker? I remember a user being called that. I assume an apple fucker.
Haha, it would be amazing if all the models together actually made an honest model. So, what does it say about 9/11? I would've expected it to come to the same conclusions as GPT, because it can reason all it wants, but all its information favours that certain narrative where a black box and the complete wings of an airplane are missing but the passports of the terrorists were found.
@retoor I went to China for a project. I had to delete all traces of my online shenanigans. ;-)
That's a lie.
retoor: I just tried, fake news dude. Did you try it yourself, or did you believe someone on the internet? It gives just the expected answer. But I expected that, see my post above. With the info it has, it can reason or whatever, but it will conclude this. Censorship is its core. Its essence.
DeepSeek R1 has a nice design too. It feels a bit familiar and is very comfortable.
@retoor Why are you blaming the model for your poor prompting skills? Try to reason with it.
retoor: @kanyewest I do not think it can actually reason the way OpenAI can. Tell me what to prompt to get a different narrative, because I doubt it in general. I doubt that more than my 'prompting skills'. Omg, it's a word now, huh. I would maybe call instructing a GPT bot a kind of skill, but just prompting is putting the bar too low.
Hazarth: Hot take: no models can reason.
The point of the <think> sequence is to rewrite your poor prompt in the first place, expanding it into a proper, detailed format.
It fills in the details and assumptions that were missing from the original prompt so that the rest of the generation is better. It's a built-in chain of thought, except instead of you telling it "first reason about all the details", it's the default behaviour.
It saves time and gives better answers, but that's just because most people can't be bothered to write a tech spec as the prompt.
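Hazarth's point about the <think> stage being automatic prompt expansion can be sketched by hand. The snippet below is only an illustration of that idea; the wrapper wording and the call_model stub are assumptions, not DeepSeek's actual template or API.

def expand_prompt(user_prompt: str) -> str:
    """Rewrite a terse prompt into the detailed format a <think> stage
    would build implicitly: restate, fill in assumptions, then reason."""
    return (
        "Before answering, restate the request in your own words, "
        "list the details and assumptions that are missing, "
        "then reason step by step and give the final answer.\n\n"
        f"Request: {user_prompt}"
    )

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for whatever chat API you actually use.
    raise NotImplementedError

if __name__ == "__main__":
    # Prints the kind of expanded prompt a <think> stage produces on its own.
    print(expand_prompt("why is my docker build slow"))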
jestdotty: @retoor all models together = objective; that's how humans figure out truth
you go to multiple news sites, to multiple people in the village to verify the rumour
this is why propaganda also works though. it's about repeating something enough times and from enough slightly different sources that everyone thinks it's real and never really critically analyzes it
and "powerful" people know this, so they pollute the data with one angle across multiple "independent" sources. not all data gets this treatment, but the more advanced campaigns do. it's effectively a "campaign", but they don't have to disclose that they're an ad / shill. cults do this too, and tech took note
---
it's more odd to me that people believe a rumour instead of going to verify something with their own eyes. the urge for such a thing just doesn't exist in them. hence everyone still saying dumb shit about trump. you'd think it would stop by now, cuz you can readily find clips of him. guess humans will keep embarrassing themselves instead though
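The "check multiple sources" idea above translates to a simple cross-check over several models. A rough sketch under obvious assumptions: ask() is a placeholder client, and the exact string match stands in for real answer comparison, which would need normalisation or a semantic check.

from collections import Counter

def ask(model: str, question: str) -> str:
    # Placeholder: wire up each model's actual client here.
    raise NotImplementedError

def cross_check(question: str, models: list[str]) -> tuple[str, float]:
    """Ask every model the same question and return the most common answer
    plus the fraction of models that agreed with it."""
    answers = [ask(m, question) for m in models]
    top_answer, votes = Counter(answers).most_common(1)[0]
    return top_answer, votes / len(models)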
DeepSeek R1 shows clear reasoning instead of hardcoded answers on topics like 9/11.
Never thought I'd see the day I would say this, but well done, China.
Big power countries are shitty, but at least we have a handful of them keeping each other in check. Imagine if the USA was the ONLY power. Fucking dystopia.