• 1 Post
  • 15 Comments
Joined 1 year ago
Cake day: December 11th, 2023

  • Yes, it's clear that the path of throwing more and more resources at LLMs to improve quality has been a lazy, growth-focused approach, and that we could do better if we actually tried a design-focused approach.

    For me, though, it comes back to the fact that we are facing a polycrisis, and most of our resources should be focused on looking for solutions to that. I'm not sure what problem* this technology solves, let alone what problem relating to the polycrisis.

    *I realise what they are designed to solve is a capitalist problem: how can we avoid paying staff for service and creative-type jobs, in order to increase profit?



  • Yes, that's fair. I guess my comment wasn't a direct response to yours, so much as it made me think: the hope that all the difficult issues (like bias) just disappear if you remove all the humans from the process* is flawed, and any anticapitalist society should really start from that understanding; one that accepts that conflict will emerge and that pro-social "convivial" systems and structures need to emerge to handle it.

    *You are right to point out that the "AI" we are talking about is statistical models built from human data, bias included, whereas the hype is that we have Data from Star Trek. These systems hide the human inputs; they don't remove them.



  • zerakith@lemmy.ml to Fuck AI@lemmy.world · Protestation · 13 days ago

    I get the sentiment and I wish it were true.

    Some of the issues stem from material and energy limitations, regardless of how humans organise. Fossil fuels are sunlight stored over a long period of time, which means burning them has a high yield; that has given us a very high-EROI (energy return on investment) society, one where there's an abundance of energy for purposes beyond basic functioning.

    I recommend reading The Collapse of Complex Societies by Tainter, who discusses the energy limitations of societies. It predates our current understanding of the energy limitations of technology, and he's by no means a leftist, but it is still a good introductory text.



  • zerakith@lemmy.ml to Fuck AI@lemmy.world · Protestation · 13 days ago

    Let's say it's true that doing that would stop the problem getting worse (e.g. no more emissions after five years)*.

    We still have the legacy issues to deal with, and I need anticaps who are thinking seriously about what can replace capitalism to take seriously how dependent we are on natural systems that are very close to collapse. We are already past the point where just stopping the harm means the job is done. The climate is no longer the one we evolved under and developed civilisation in; it's far less stable.

    *There are material and energy constraints that aren't instantly solvable; electricity production is far from the only cause of climate harm (land use and manufacturing matter too), and major question marks remain over how some of those can be removed or electrified.

  • zerakith@lemmy.ml to Fuck AI@lemmy.world · Protestation · (edited) 13 days ago

    They don't disappear if capitalism disappears. I agree with you that capitalism needs to end in order to deal with them, but there are hard issues we will have to face even with capitalism gone.

    Even if the causes ceased, we would still be left with residual emissions and degraded natural systems to deal with, and a lower-EROI society to do it with.



  • I think this inherently accepts the narrative that the work women were doing before had little or no value.

    Care and emotional labour should not fall solely on women, and we should all have the opportunity to partake in meaningful work, but we shouldn't accept less time for care (and leisure) based on some trumped-up definition of what counts as productive or economic.


  • According to that site you can downgrade the firmware (some people really disliked the various UI changes, and the firmware is getting quite prescriptive).

    You can also run your own homebrew apps; I found someone who installed KOReader, which they claim is a better experience than the default reader, especially for PDFs, and also links better to personal cloud storage.

    There's also the ability to use locally stored web application frameworks, but I'm not 100% sure what the use case would be.


  • Agreed that the studios need to be held more accountable, and their usage of AI is more problematic than open-source, last-resort-type work. I have noticed a degradation of quality on mainstream sources over the last five years.

    However, the existence of this last-resort tool will shift the dynamics of the "market" for the work that should be being done, even in the open-source community. There used to be an active community of people giving their voluntary labour to writing subtitles for media that lacked them (they may still be active, I don't know). Are they as likely to do that if they think it can just be done automatically now?

    The real challenge with the argument that it helps editors is the same as the challenge for automated driving: if something performs at 95%, you end up deskilling the operator and dialling down their attention, making it more likely that the 5% requiring manual intervention gets missed. I think it also has a material impact on the wellbeing of those doing the labour.

    To be clear, I'm not against this at all, but I think we need to think carefully about the structures and processes around it to ensure it leads to an improvement in quality, not just an improvement in quantity at the cost of quality.


  • It is probably good that the open-source community is exploring this; however, I'm not sure the technology is ready (or ever will be), and it potentially undermines the labour-intensive work of producing high-quality subtitling for accessibility.

    I use them quite a lot, and I've noticed they really struggle with key things like regional and national dialects, subject-specific words, and situations where context would allow improvement (e.g. a word invented solely in the universe of the media). So it's probably managing 95% accuracy, which is that danger zone where it's good enough that no one checks it but bad enough that it can be really confusing if you are reliant on it. If we care about accessibility, we need to care about it being high quality.



  • I won’t rehash the arguments around “AI” that others are best placed to make.

    My main issue is that "AI" as a term is basically a marketing one, used to convince people that these tools do something they don't, and it's causing real harm. It's redirecting resources and attention onto a very narrow subset of tools, displacing other, less intensive ones, and these tools have significant impacts (during an existential crisis around our use and consumption of energy). There are some really good targeted uses of machine learning techniques, but they are being drowned out by a hype train determined to make the general public think we have, or are near, Data from Star Trek.

    Additionally, as others have said, the current state of "AI" has a very anti-FOSS ethos, with big firms using and misusing their monopolies to steal, borrow, and co-opt data that isn't theirs to build something that contains that data but is their copyright. Some of this data is intensely personal and sensitive, and the original intent behind sharing it was not to train a model, which may in certain circumstances spit that data back out verbatim.

    Lastly, since you use the term Luddite: it's worth actually engaging with what that movement was about. Whilst it's pitched now as a generic anti-technology backlash, in fact it was a movement of people who saw what the priorities and choices embedded in the new technology meant for them: the people who didn't own the technology and would get worse living and working conditions as a result. As it turned out, they were almost exactly correct in their predictions. They are indeed worth thinking about as an allegory for the moment we find ourselves in. How do ordinary people want this technology to change our lives? Who do we want to control it? Given its implications for our climate needs, can we afford to use it now, and if so, for what purposes?

    Personally, I can’t wait for the hype train to pop (or maybe depart?) so we can get back to rational discussions about the best uses of machine learning (and computing in general) for the betterment of all rather than the enrichment of a few.