Doing the Lord’s work in the Devil’s basement

  • 0 Posts
  • 68 Comments
Joined 1 year ago
Cake day: May 8th, 2024



  • In French, the most common name is probably “beu”, a contraction of “beuher”, which is verlan (a type of slang where you reverse a word’s syllables) for “herbe”.

    This was distorted in a movie to “beuze”, which was then given the verlan treatment again (this happens often) to “zeb”, hence my personal favorite, “zebuline”. Mind you, that’s not a common name; it’s a personal one I use often.





  • When you read that stuff on Reddit, there’s a parameter you need to keep in mind: these people are not really discussing Lemmy. They’re rationalizing and justifying why they are not on Lemmy. Totally different conversation.

    Nobody wants to come out and say “I know mainstream platforms are shit and destroying the fabric of reality, but I can’t bring myself to be on a platform unless it is the Hip Place to Be”. So they’ll invent stuff that paints them in a good light.

    You’ll still see people claiming that Mastodon is unusable because you have to select an instance, even though you don’t: you can just type “Mastodon” into Google, click the first link, and create an account in two clicks. It’s been that way for ages. But the people still using Twitter need the excuse, because otherwise what does that make them?



  • Zos_Kia@lemmynsfw.com to Microblog Memes@lemmy.world · deepseek · 3 months ago

    If you take into account the optimizations described in the paper, then the cost they announce is in line with the rest of the world’s research into sparse models.

    Of course, the training cost is not the whole picture, which the DS paper readily acknowledges. Before arriving at one successful model, you have to train and throw away n unsuccessful attempts. That’s true of any other LLM provider too, though: the training cost is used to compare technical trade-offs that alter training efficiency, not business models.
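As a toy illustration of that point, here is a sketch of how a headline per-run training cost relates to total compute spent. Every figure below is made up for illustration; these are not DeepSeek’s actual numbers:

```python
# Toy illustration: a headline training cost covers only the one successful
# run, while total R&D compute also includes the discarded attempts.
# All figures are hypothetical, not DeepSeek's actual numbers.
headline_run_cost = 5.0           # $M for the single successful training run
discarded_runs = [3.0, 4.0, 2.5]  # $M each, the n unsuccessful attempts

total_compute = headline_run_cost + sum(discarded_runs)
print(total_compute)  # 14.5
```

The headline number still matters for comparing training efficiency between labs, since every provider pays for its own pile of discarded runs.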




  • “So for the first 20 years, or two-thirds of the entire history of the company, they were unprofitable or barely profitable.”

    We must have a wildly different definition of “barely profitable”. Half a billion in 2004 money is a lot of profit, a billion back to back in 2009 and 2010 is a lot of profit.

    I think you’re confusing Amazon with the next generation of loss-leader companies. Let’s talk Uber, let’s talk Twitter, if we want to point at “hugely unprofitable” companies. But Amazon is a beast of its own, they have a very coherent financial story. Even during their money-losing decade they posted insane results, frequently multiplying revenue while barely increasing operating costs.


  • Oh, thanks for clarifying in even more excruciating detail how a subtraction works; that is really helpful.

    Why would you repeat the lie that they’re “usually unprofitable” when the information is publicly available in a million places on the internet? In 2023, Amazon made:

    • $575B in sales
    • If you remove cost of goods sold, that’s $270B in gross profit
    • If you remove operating expenses (including R&D), that’s $30B in net income

    Amazon is factually not “usually unprofitable”: they have in fact made a profit (as in money that actually goes into your pocket after discounting all expenses) every year for the last 15 years, except for a loss in 2022 and tiny losses in 2014 and 2012.
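The income-statement arithmetic behind those bullet points can be checked in a few lines. The sales and profit figures are the rounded numbers from the comment above, and the two expense lines are backed out from them, not taken from the actual 10-K:

```python
# Rough 2023 Amazon income statement, in $B, using the rounded figures above.
net_sales = 575
cost_of_goods_sold = 305   # assumed: sales minus the stated gross profit
gross_profit = net_sales - cost_of_goods_sold

operating_and_other = 240  # assumed: gross profit minus the stated net income
net_income = gross_profit - operating_and_other

print(gross_profit, net_income)  # 270 30
```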



  • I’m not much of a Reddit defender, as I pretty much left the place at the same time as everybody else. However, there has always been a very clear trend in these kinds of subreddits. Places like “noahgettheboat” or “iamapieceofshit” or “thatsinsane” systematically attract the worst kind of misanthropic lowlifes. People will watch the most abject violence and laugh, “haha, he fucked around and found out”, and of course this makes the most fascistic types feel at home.

    I don’t think they are representative of the overall slant of the community. Most places are progressive by default.



  • There’s absolutely no doubt that lower-end models are going to keep improving and that inference will keep getting cheaper. It won’t run on a Raspberry Pi, but my money’s with you: in six years you’ll be able to buy some cheap-ish specialized hardware to run open models on, and they’re going to be at least as capable as today’s frontier models while burning a fraction of the energy.

    In fact, I wouldn’t be surprised if frontier models were overtaken by vastly cheaper models in the long run. The whole “trillion parameter count” paradigm feels very hacky and ripe for radical simplification. And wouldn’t it be hilarious? All those suckers spending billions building a moat, only to see it swept out from under their feet.



  • I have to keep my LinkedIn for business reasons, but recently I noticed a big uptick in fascist-adjacent posting. At first it depressed me big time, but then I started blocking, and a couple of weeks and a dozen blocks later it was over. Turns out a small number of people can really give the impression of a crowd and fuck with the whole experience.