

They’re already offering 6-day certs, so capacity isn’t a problem.
How does it compare to NixOS?
I’d be very skeptical of claims that Debian maintainers actually audit the code of each piece of software they package. Perhaps they do some brief reviews, but actually scrutinizing every line for hidden backdoors is just not feasible.
Any accessibility service will also see the “hidden links”, and while a blind person with a screen reader will notice if they wander off into generated pages, it will waste their time too. Especially if they don’t know about such a “feature”, they’ll be very confused.
Also, I don’t know about you, but I absolutely have a use for crawling X, Google maps, Reddit, YouTube, and getting information from there without interacting with the service myself.
I would love to think so. But the word “verified” suggests more.
while allowing legitimate users and verified crawlers to browse normally.
What is a “verified crawler” though? What I worry about is, is it only big companies like Google that are allowed to have them now?
I agree that it’s difficult to enforce such a requirement on individuals. That said, I don’t agree that nobody cares for the content they post. If they have “something cool they made with AI generation” - then it’s not a big deal to have to mark it as AI-generated.
An intelligence service monitors social media. They may as well have said, “The sky is blue.”
More interesting is,
Sharing as a force multiplier
– OpenAI
Do you know of a provider that is actually private? The few privacy policies I checked all had something like “We might keep some of your data for some time for anti-abuse or other reasons”…
It works fine for me on Hyprland.
I don’t think any kind of “poisoning” actually works. It’s well known by now that data quality is more important than data quantity, so nobody just feeds training data in indiscriminately. At best it would hamper some FOSS AI researchers that don’t have the resources to curate a dataset.
What makes these consumer-oriented models different is that rather than being trained on raw data, they are trained on synthetic data from pre-existing models. That’s what the “Qwen” or “Llama” parts of the name mean. The 7B model is trained on synthetic data produced by Qwen, so it is effectively a compressed version of Qwen. However, neither Qwen nor Llama can “reason”; they do not have an internal monologue.
You got that backwards. They’re other models - qwen or llama - fine-tuned on synthetic data generated by Deepseek-R1. Specifically, reasoning data, so that they can learn some of its reasoning ability.
But the base model - and so the base capability there - is that of the corresponding qwen or llama model. Calling them “Deepseek-R1-something” doesn’t change what they fundamentally are, it’s just marketing.
There are already other providers like Deepinfra offering DeepSeek. So while the average person (like me) couldn’t run it themselves, they do have alternative options.
A server-grade CPU with a lot of RAM and memory bandwidth would work reasonably well, and cost “only” ~$10k rather than $100k+…
To be fair, most people can’t actually self-host Deepseek, but there already are other providers offering API access to it.
I’m confused, isn’t Fedora atomic immutable? Shouldn’t that make it stateless automatically?
Why are you surprised? They are called cock-roaches, after all…
Why is this downvoted? If it’s true it’s a valid criticism, and if it’s false, I couldn’t find a mention of anonymity either.
Wary reader, learn from my cautionary tale
I’m not sure what to learn exactly. I don’t get what went wrong or why, just that the files got deleted somehow…
What a great way to reduce external dependencies and mitigate supply chain attacks!