• yarr@feddit.nl
    link
    fedilink
    English
    arrow-up
    6
    ·
    7 days ago

    These “AI Computers” are a solution looking for a problem. The marketing people naming these “AI” computers think that AI is just some magic fairy dust term you can add to a product and it will increase demand.

    What are the “killer features” of these new laptops, and what % price increase are they worth?

  • merdaverse@lemmy.world
    link
    fedilink
    English
    arrow-up
    5
    ·
    7 days ago

    What is even the point of an AI coprocessor for an end user (excluding ML devs)? Most of the AI features run in the cloud, and even if they could run locally, companies are very happy to charge you rent for services and keep you vendor locked in.

  • Mwa@lemm.ee
    link
    fedilink
    English
    arrow-up
    4
    ·
    edit-2
    7 days ago

    Please stop shoving AI into everything. Please give us an opt-out from AI icons and stuff /srs

  • yesman@lemmy.world
    link
    fedilink
    English
    arrow-up
    168
    ·
    9 days ago

    Even non tech people I talk to know AI is bad because the companies are pushing it so hard. They intuit that if the product was good, they wouldn’t be giving it away, much less begging you to use it.

    • jonhendry@awful.systems
      link
      fedilink
      English
      arrow-up
      4
      ·
      7 days ago

      It’s partly that and partly a mad dash for market share in case they get it to work usefully. Although this is kind of pointless, because AI isn’t very sticky. There’s not much to keep you from using another company’s AI service. And only the early-adopter nerds are figuring out how to run it on their own hardware.

    • lev@slrpnk.net
      link
      fedilink
      English
      arrow-up
      86
      ·
      9 days ago

      You’re right - and even if the user is not conscious of this observation, many are subconsciously behaving in accordance with it. Having AI shoved into everything is off-putting.

      • k0e3@lemmy.ca
        link
        fedilink
        English
        arrow-up
        18
        ·
        8 days ago

        Speaking of off-putting, that friggin copilot logo floating around on my Word document is so annoying. And the menu that pops up when I paste text — wtf does “paste with Copilot” even mean?

        • Rekorse@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          11
          ·
          edit-2
          8 days ago

          They are trying to saturate the user base with the word Copilot. At least Microsoft isn’t very sneaky about anything.

    • Ledericas@lemm.ee
      link
      fedilink
      English
      arrow-up
      11
      ·
      8 days ago

      Customers don’t want AI, but only the corporation heads seem obsessed with it.

  • magnetosphere@fedia.io
    link
    fedilink
    arrow-up
    125
    ·
    9 days ago

    One of the mistakes they made with AI was introducing it before it was ready (I’m making a generous assumption by suggesting that “ready” is even possible). It will be extremely difficult for any AI product to shake the reputation that AI is half-baked and makes absurd, nonsensical mistakes.

    This is a great example of capitalism working against itself. Investors want a return on their investment now, and advertisers/salespeople made unrealistic claims. AI simply isn’t ready for prime time. Now they’ll be fighting a bad reputation for years. Because of the situation tech companies created for themselves, getting users to trust AI will be an uphill battle.

    • wise_pancake@lemmy.ca
      link
      fedilink
      English
      arrow-up
      66
      ·
      edit-2
      9 days ago

      Apple Intelligence and the first versions of Gemini are the perfect examples of this.

      iOS still doesn’t do what was sold in the ads, almost a full year later.

      Edit: also things like email summary don’t work, the email categories are awful, notification summaries are straight up unhinged, and I don’t think anyone asked for image playground.

      • SomeoneSomewhere@lemmy.nz
        link
        fedilink
        English
        arrow-up
        50
        ·
        edit-2
        9 days ago

        Insert ‘Full Self Driving’ Here.

        Also, Outlook’s auto alt text function told me that a conveyor belt was a picture of someone’s screen today.

      • Buelldozer@lemmy.today
        link
        fedilink
        English
        arrow-up
        13
        ·
        9 days ago

        Apple Intelligence and the first versions of Gemini are the perfect examples of this.

        Add Amazon’s Alexa+ to that list. It’s nearly a year overdue and still nowhere in sight.

    • swlabr@awful.systems
      link
      fedilink
      English
      arrow-up
      49
      ·
      9 days ago

      capitalism working against itself

      More like: capitalism reaching its own logical conclusion

    • snooggums@lemmy.world
      link
      fedilink
      English
      arrow-up
      24
      ·
      9 days ago

      (I’m making a generous assumption by suggesting that “ready” is even possible)

      It was ready for some specific purposes, but it is being jammed into everything. The problem is they are marketing it as AGI when it is still in the “random fun, but not expected to be accurate” phase.

      The current marketing for AI won’t apply to anything that meets the marketing in the foreseeable future. The desired complexity isn’t going to exist in silicon at a reasonable scale.

    • luciole (he/him)@beehaw.org
      link
      fedilink
      English
      arrow-up
      19
      ·
      9 days ago

      I’m making a generous assumption by suggesting that “ready” is even possible

      To be honest it feels more and more like this is simply not possible, especially regarding the chatbots. Under those are LLMs, which are built by training neural networks, and for the pudding to stick there absolutely needs to be this emergent magic going on where sense spontaneously generates. Because any entity lining up words into sentences will charm unsuspecting folks horribly efficiently, it’s easy to be fooled into believing it has happened. But whenever, in a moment of despair, I try to get Copilot to do any sort of task, it becomes abundantly clear it’s unable to reliably respect any form of requirement or directive. It just regurgitates some word soup loosely connected to whatever I’m rambling about. LLMs have been shoehorned into an ill-fitting use case. Their sole proven usefulness so far is fraud.

      • Soyweiser@awful.systems
        link
        fedilink
        English
        arrow-up
        19
        ·
        9 days ago

        There was research showing that every linear jump in capabilities needed exponentially more data fed into the models, so it seems likely it isn’t going to be possible to get where they want to go.

    • Jimmycakes@lemmy.world
      link
      fedilink
      English
      arrow-up
      17
      ·
      edit-2
      9 days ago

      Yeah but first to market is sooooo good for stock price. Then you can sell at the top and gtfo before people find out it’s trash

    • calcopiritus@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      ·
      9 days ago

      If they didn’t overpromise, they wouldn’t have had mountains of money to burn, so they wouldn’t have advanced the technology as much.

      Tech giants can’t wait decades until the technology is ready; they want their VC money now.

      • sexy_peach@feddit.org
        link
        fedilink
        English
        arrow-up
        3
        ·
        8 days ago

        Sure, but if the tech in the end doesn’t deliver, all that money is burnt.

        If it does deliver it’s still oligarchs deciding what tech we get.

        • calcopiritus@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          8 days ago

          Yes. The ones that have power are the ones that decide. And oligarchs by definition have a lot of power.

    • UltraGiGaGigantic@lemmy.ml
      link
      fedilink
      English
      arrow-up
      7
      ·
      9 days ago

      The battle is easy. Buy out and collude with the competition so the customer has no choice but to purchase an AI device.

  • TheThrillOfTime@lemmy.ml
    link
    fedilink
    English
    arrow-up
    63
    arrow-down
    1
    ·
    9 days ago

    AI is going to be this era’s Betamax, HD-DVD, or 3D TV glasses. It doesn’t do what was promised and nobody gives a shit.

    • snooggums@lemmy.world
      link
      fedilink
      English
      arrow-up
      46
      arrow-down
      1
      ·
      9 days ago

      Betamax had better image and sound but was limited by running time, and then VHS doubled down with even lower quality to increase how many hours would fit on a tape. VHS was simply more convenient without being that much lower quality at normal tape lengths.

      HD-DVD was comparable to BluRay and just happened to lose out because the industry won’t allow two similar technologies to exist at the same time.

      Neither failed to do what they promised. They were both perfectly fine technologies that lost in a competition that only allows a single winner.

      • xkbx@startrek.website
        link
        fedilink
        English
        arrow-up
        20
        arrow-down
        1
        ·
        9 days ago

        BluRay was slightly better if I recall correctly. With the rise in higher definition televisions, people wanted to max out the quality possible, even if most people (still) can’t tell the difference

        • philycheeze@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          30
          arrow-down
          1
          ·
          9 days ago

          Blu-ray also had the advantage of PS3 supporting the format without the need for an external disc drive.

        • bus_factor@lemmy.world
          link
          fedilink
          English
          arrow-up
          11
          arrow-down
          1
          ·
          9 days ago

          That’s not why it won, though. It won because the industry wanted zone restrictions, which only Blu-Ray supported. They suck for the user, but they allow the industry to stagger releases in different markets. In reality it just means that I can’t get discs of most foreign films, because they won’t work in my player.

          • Revan343@lemmy.ca
            link
            fedilink
            English
            arrow-up
            6
            arrow-down
            1
            ·
            9 days ago

            I’m sure that was a factor, but Blu-ray won because the most popular Blu-ray player practically sold itself

            • bus_factor@lemmy.world
              link
              fedilink
              English
              arrow-up
              4
              arrow-down
              1
              ·
              9 days ago

              It’s hard to say what was the final nail in the coffin, but it is true that Blu-Ray went from underdog to outselling HD-DVD around the time the PlayStation 3 came out. I’m not sure how much those early sales numbers matter, though, because I’m sure both were still miniscule compared to DVD.

              When 20th Century Fox dropped support for HD-DVD, they cited superior copy protection as the reason. Lionsgate expressed a similar sentiment.

              When Warner later announced they were dropping HD-DVD, they did cite customer adoption as the reason for their choice, but they also did it right before CES, so I’m pretty sure there were some backroom deals at play as well.

              I think the biggest impact of the PlayStation 3 was accelerating adoption of Blu-Ray over DVD. Back when DVD came out, VHS remained a major player for years, until the year there was a DVD player so dirt cheap that everyone who didn’t already have a player got one for Christmas.

          • Gerudo@lemm.ee
            link
            fedilink
            English
            arrow-up
            5
            arrow-down
            1
            ·
            9 days ago

            The big plus for HD-DVD was that it was far cheaper to produce; it didn’t need massive retooling for manufacturing.

        • BeNotAfraid@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          9 days ago

          Not just that: space. BluRays have way more space than DVDs. Remember how many 360 games came with multiple discs? Not a single PS3 game did, unless it was a bonus behind-the-scenes type thing.

          • Rose@slrpnk.net
            link
            fedilink
            English
            arrow-up
            2
            ·
            edit-2
            8 days ago

            The Xbox 360 used DVDs for game discs and could play video DVDs. It “supported” HD-DVDs - you needed an add-on with a separate optical drive in it. Unsurprisingly, this didn’t sell well.

      • GenosseFlosse@feddit.org
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        9 days ago

        AFAIK Betamax did not have any porn content, which might have contributed to the sale of VHS systems.

    • RedSnt 👓♂️🖥️@feddit.dk
      link
      fedilink
      English
      arrow-up
      13
      arrow-down
      1
      ·
      9 days ago

      I was just about to mention porn and how each new format of the past came down to that very same factor.
      If AI computers were incredible at making AI porn I bet you they’d be selling a lot better haha

    • cubism_pitta@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      1
      ·
      9 days ago

      Betamax actually found use in television broadcasting until the switch to HDTV occurred in 2009.

    • blarth@thelemmy.club
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      7
      ·
      8 days ago

      No, I’m sorry. It is very useful and isn’t going away. This thread is either full of Luddites or disingenuous people.

      • self@awful.systems
        link
        fedilink
        English
        arrow-up
        5
        ·
        8 days ago

        nobody asked you to post in this thread. you came and posted this shit in here because the thread is very popular, because lots and lots of people correctly fucking hate generative AI

        so I guess please enjoy being the only “non-disingenuous” bootlicker you know outside of work, where everyone’s required (under implicit threat to their livelihood) to love this shitty fucking technology

        but most of all: don’t fucking come back, none of us Luddites need your mid ass

      • TheThrillOfTime@lemmy.ml
        link
        fedilink
        English
        arrow-up
        1
        ·
        7 days ago

        I have friends who are computer engineers and they say that it does a pretty good job of generating code, but that’s not a general population use case. For most people, AI is a nearly useless product. It makes Google searches worse. It makes your phone voice assistant worse. It’s not as good as human artists. And it’s mostly used to create dumbass posts on Reddit to farm engagement. In my life, AI has not made anything better.

  • RvTV95XBeo@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    60
    ·
    8 days ago

    Maybe I’m just getting old, but I honestly can’t think of any practical use case for AI in my day-to-day routine.

    ML algorithms are just fancy statistics machines, and to that end, I can see plenty of research and industry applications where large datasets need to be assessed (weather, medicine, …) with human oversight.

    But for me in my day to day?

    I don’t need a statistics bot making decisions for me at work, because if it was that easy I wouldn’t be getting paid to do it.

    I don’t need a giant calculator telling me when to eat or sleep or what game to play.

    I don’t need a Roomba with a graphics card automatically replying to my text messages.

    Handing over my entire life’s data just so an ML algorithm might be able to tell me what that one website I visited 3 years ago that sold kangaroo testicles was isn’t a filing system. There’s nothing I care about losing enough to go to the effort of setting up Copilot, but not enough to just, you know, bookmark it, or save it with a clear enough file name.

    Long rant, but really, what does copilot actually do for me?

    • Don_alForno@feddit.org
      link
      fedilink
      English
      arrow-up
      15
      ·
      8 days ago

      Our boss all but ordered us to have IT set this shit up on our PCs. So far I’ve been stalling, but I don’t know how long I can keep doing it.

    • Ledericas@lemm.ee
      link
      fedilink
      English
      arrow-up
      10
      ·
      8 days ago

      Same here, I mostly don’t even use it on the phone. My bro is into it though, thinking AI-generated pictures are good.

      • RvTV95XBeo@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        12
        ·
        8 days ago

        It’s a fun party trick for like a second, but at no point today did I need a picture of a goat in a sweater smoking three cigarettes while playing tic-tac-toe with a llama dressed as the Dalai Lama.

        • bampop@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          11
          ·
          8 days ago

          It’s great if you want to do a kids’ party invitation or something like that

          • meowMix2525@lemm.ee
            link
            fedilink
            English
            arrow-up
            9
            ·
            8 days ago

            That wasn’t that hard to do in the first place, and certainly isn’t worth the drinking water to cool whatever computer made that calculation for you.

    • AbsentBird@lemm.ee
      link
      fedilink
      English
      arrow-up
      9
      ·
      8 days ago

      The only feature that actually seems useful for on-device AI is voice to text that doesn’t need an Internet connection.

      • RvTV95XBeo@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        9
        ·
        8 days ago

        As someone who hates orally dictating my thoughts, that’s a no from me dawg, but I can kinda understand the appeal (though I’ll note offline speech-to-text has been around for like a decade pre-AI)

        • froztbyte@awful.systems
          link
          fedilink
          English
          arrow-up
          4
          ·
          8 days ago

          longer: dragon dictate and similar go back to the mid 90s (and I bet the research goes back slightly earlier, not gonna check now)

          similar for TTS

    • sem@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      9
      ·
      8 days ago

      Before ChatGPT was invented, everyone kind of liked how you could type in “bird” into Google Photos, and it would show you some of your photos that had birds.

    • ByteJunk@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      8
      ·
      8 days ago

      I use it to speed up my work.

      For example, I can give it a database schema and ask it for what I need to achieve and most of the time it will throw out a pretty good approximation or even get it right on the first go, depending on complexity and how well I phrase the request. I could write these myself, of course, but not in 2 seconds.

      Same with text formatting, for example. I regularly need to format long strings in specific ways, adding brackets and changing upper/lower capitalization. It does it in a second, and really well.

      Then there’s just convenience things. At what date and time will something end if it starts in two weeks and takes 400h to do? There’s tools for that, or I could figure it out myself, but I mean the AI is just there and does it in a sec…
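
      (For comparison, the end-date arithmetic described here is a couple of lines with Python’s standard `datetime` module. A minimal sketch; the two-week start and 400h duration are taken from the example above:)

```python
from datetime import datetime, timedelta

# Hypothetical task: starts in two weeks, takes 400 elapsed hours
start = datetime.now() + timedelta(weeks=2)
end = start + timedelta(hours=400)  # 400h = 16 days and 16 hours
print(end.strftime("%Y-%m-%d %H:%M"))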

      • self@awful.systems
        link
        fedilink
        English
        arrow-up
        32
        ·
        8 days ago

        it’s really embarrassing when the promptfans come here to brag about how they’re using the technology that’s burning the earth and it’s just basic editor shit they never learned. and then you watch these fuckers “work” and it’s miserably slow cause they’re prompting the piece of shit model in English, waiting for the cloud service to burn enough methane to generate a response, correcting the output and re-prompting, all to do the same task that’s just a fucking key combo.

        Same with text formatting, for example. I regularly need to format long strings in specific ways, adding brackets and changing upper/lower capitalization. It does it in a second, and really well.

        how in fuck do you work with strings and have this shit not be muscle memory or an editor macro? oh yeah, by giving the fuck up.

        • CarrotsHaveEars@lemmy.ml
          link
          fedilink
          English
          arrow-up
          12
          ·
          edit-2
          8 days ago

          (100% natural rant)

          I can change a whole fucking sentence to FUCKING UPPERCASE by just pressing vf.gU in fucking vim, using a fraction of the energy it takes to run a fucking marathon, which in turn is only a fraction of the energy the fucking AI cloud cluster uses to spit out the same shit. The comparison is like a ping pong ball to the Earth, then to the fucking sun!

          Alright, bros, listen up. All these great tasks you claim AI does faster and better - I can write up a script or something to do them even faster and better. Fucking A! This surge of high when you use AI comes from you not knowing how to do it, or whether it’s even possible. You!

          You prompt bros are blasting shit tons of energy just to achieve the same quality of work, if not worse, in a much fucking longer time.

          And somehow these executives claim AI improves fucking productivity‽
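
          (The kind of script meant here really is a few lines. A hypothetical sketch, assuming the task from upthread - wrap a string in brackets and uppercase it - and a made-up function name:)

```python
def bracket_upper(s):
    """Wrap a string in brackets and convert it to uppercase."""
    return "[" + s.upper() + "]"

print(bracket_upper("hello world"))  # → [HELLO WORLD]
```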

          • self@awful.systems
            link
            fedilink
            English
            arrow-up
            10
            ·
            8 days ago

            exactly. in Doom Emacs (and an appropriately configured vim), you can surround the word under the cursor with brackets with ysiw] where the last character is the bracket you want. it’s incredibly fast (especially combined with motion commands, you can do these faster than you can think) and very easy to learn, if you know vim.

            and I think that last bit is where the educational branch of our industry massively fucked up. a good editor that works exactly how you like (and I like the vim command language for realtime control and lisp for configuration) is like an electrician’s screwdriver or another semi-specialized tool. there’s a million things you can do with it, but we don’t teach any of them to programmers. there’s no vim or emacs class, and I’ve seen the quality of your average bootcamp’s vscode material. your average programmer bounces between fad editors depending on what’s being marketed at the time, and right now LLMs are it. learning to use your tools is considered a snobby elitist thing, but it really shouldn’t be — I’d gladly trade all of my freshman CS classes for a couple semesters learning how to make vim and emacs sing and dance.

            and now we’re trapped in this industry where our professionals never learned to use a screwdriver properly, so instead they bring their nephew to test for live voltage by licking the wires. and when you tell them to stop electrocuting their nephew and get the fuck out of your house, they get this faraway look in their eyes and start mumbling about how you’re just jealous that their nephew is going to become god first, because of course it’s also a weirdo cult underneath it all, that’s what happens when you vilify the concept of knowing fuck all about anything.

          • Hexarei@programming.dev
            link
            fedilink
            English
            arrow-up
            4
            ·
            edit-2
            8 days ago

            The only things I’ve seen it do better than I could manage with a script or in Vim are things that require natural language comprehension. Like, “here’s an email forwarded to an app, find anything that sounds like a deadline” or “given this job description, come up with a reasonable title summary for the page it shows up on”… But even then, those are small things that could be entirely omitted from an app’s functionality without any trouble for the user. And there’s also the hallucinations and being super wrong sometimes.

            The whole thing is a mess

      • Samskara@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        12
        ·
        8 days ago

        adding brackets and changing upper/lower capitalization

        I have used a system wide service in macOS for that for decades by now.

      • V0ldek@awful.systems
        link
        fedilink
        English
        arrow-up
        9
        ·
        8 days ago

        changing upper/lower capitalization

        That’s literally a built-in VSCode command my dude, it does it in milliseconds and doesn’t require switching a window or even a conscious thought from you

      • morbidcactus@lemmy.ca
        link
        fedilink
        English
        arrow-up
        9
        arrow-down
        1
        ·
        8 days ago

        Gotta be real, LLMs for queries make me uneasy. We’re already in a place where data modeling isn’t as common and people don’t put indexes or relationships between tables (and some tools didn’t really support those either). They might be alright at describing tables (Databricks has it baked in, for better or worse, for example; it’s usually pretty good at a quick summary of what a table is for), but throwing an LLM on that doesn’t really inspire confidence.

        If your data model is highly normalised, with FKs everywhere, good naming, and good documentation, then yeah, totally, I could see that helping - but if that’s the case you already have good governance practices (which all ML tools benefit from, AFAIK). Without that, I’m totally dreading the queries. People are already totally capable of generating stuff that gives DBAs a headache; simple cases, yeah maybe, but complex queries, idk, I’m not sold.

        Data understanding is part of the job anyhow; that’s largely conceptual, which maybe LLMs could work as an extension for, but I really wouldn’t trust them to generate full-on queries in most of the environments I’ve seen. Data is overwhelmingly super messy, and orgs don’t love putting effort towards governance.

        • jacksilver@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          2
          ·
          8 days ago

          I’ve done some work on natural language to SQL, both with older models (like BERT) and current LLMs. They can do alright if there is a good schema and reasonable column names, but otherwise they can break down pretty quickly.

          That’s before you get into the fact that SQL dialects are a really big issue for LLMs to begin with. They all look so similar that I’ve found it common for them to switch between dialects without warning.

          • morbidcactus@lemmy.ca
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            4
            ·
            7 days ago

            Yeah, I can totally understand that. Genie is Databricks’ one, and apparently it’s surprisingly decent at that, but it has access to a governance platform that traces column lineage on top of whatever descriptions and other metadata you give it. I was pretty surprised with the accuracy of some of its auto-generated descriptions, though.

            • jacksilver@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              6
              ·
              7 days ago

              Yeah, the more data you have around the database the better, but that’s always been the issue with data governance - you need to stay on top of it or things start to degrade quickly.

              When the governance is good, the LLM may be able to keep up, but will you know when things start to slip?

      • sem@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        8 days ago

        The first two examples I really like, since you’re able to verify the results easily before using them, but for the math one, how do you know it gave you the right answer?

      • Hudell@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        7
        ·
        7 days ago

        I use it to parse log files, compare logs from successful and failed requests and that sort of stuff.

      • zurohki@aussie.zone
        link
        fedilink
        English
        arrow-up
        14
        ·
        8 days ago

        I tried feeding Japanese audio to an LLM to generate English subs and it started translating silence and music as requests to donate to anime fansubbers.

        No, really. Fansubbed anime would put their donation message over the intro music or when there wasn’t any speech to sub and the LLM learned that.

      • Dragonstaff@leminal.space
        link
        fedilink
        English
        arrow-up
        9
        ·
        8 days ago

        We’ve had speech to text since the 90s. Current iterations have improved, like most technology has improved since the 90s. But, no, I wouldn’t buy a new computer with glaring privacy concerns for real time subtitles in movies.

      • Bytemeister@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        3
        ·
        8 days ago

        You’re thinking too small. AI could automatically dub the entire movie while mimicking the actors’ voices and simultaneously moving their lips and mouths to form the words correctly.

        It would just take your daily home power usage to do a single 2hr movie.

    • wetbeardhairs@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      6
      ·
      edit-2
      8 days ago

      They’re great for document management. You can let one build indices, locally on your machine with no internet connection. Then when you want to find things, you can ask it in human terms. I’ve got a few GB of documents and finding things is a bitch - I’m actually waiting on the miniforums a1 pro whatever the fuck to be released with an option to buy it without Windows (because fuck M$) to do exactly this for our home documents.

      • self@awful.systems
        link
        fedilink
        English
        arrow-up
        10
        ·
        8 days ago

        a local search engine but shitty, stochastic, and needs way too much compute for “a few gb of documents”, got it, thanks for chiming in

      • RvTV95XBeo@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        9
        ·
        7 days ago

        Offline indexing has been working just fine for me for years. I don’t think I’ve ever needed to search for something esoteric like “the report with the blue header and the photo of 3 goats having an orgy”; if I really can’t remember the file name, or what it’s associated with in my filing system, I can still remember some key words from the text.

        Better indexing / automatic tagging of my photos could be nice, but that’s a rare occurrence, not an “I NEED a button for this POS on my keyboard and also want it always listening to everything I do” kind of situation
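
        (The keyword lookup described here is basically an inverted index. A toy sketch under made-up file names - not any particular indexer’s implementation:)

```python
import re
from collections import defaultdict

def build_index(docs):
    """Map each lowercased word to the set of document names containing it."""
    index = defaultdict(set)
    for name, text in docs.items():
        for word in re.findall(r"\w+", text.lower()):
            index[word].add(name)
    return index

# Hypothetical files: look up documents by a remembered keyword
docs = {
    "report.txt": "Quarterly report with the blue header",
    "memo.txt": "Reminder: report deadline is Friday",
}
index = build_index(docs)
print(sorted(index["report"]))  # → ['memo.txt', 'report.txt']
```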

        • self@awful.systems
          link
          fedilink
          English
          arrow-up
          5
          ·
          7 days ago

          I wish that offline indexing and archiving were normalized and more accessible, because it’s a fucking amazing thing to have

    • Flipper@feddit.org
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      7
      ·
      8 days ago

      Apparently it’s useful for extracting information from a text into a format you specify. A friend is using it to extract transactions out of 500-year-old texts. However, to get rid of hallucinations the temperature needs to be 0, so the only way is to self-host.

      • OhNoMoreLemmy@lemmy.ml
        link
        fedilink
        English
        arrow-up
        13
        ·
        8 days ago

        Setting the temperature to 0 doesn’t get rid of hallucinations.

        It might slightly increase accuracy, but it’s still going to go wrong.
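To see why: temperature 0 just makes decoding greedy and deterministic; if the model’s highest-scoring token is wrong, it is wrong every single time. A minimal sketch of temperature sampling (the logits here are toy values, not from any real model):

```python
import math
import random

def sample(logits, temperature):
    """Pick a token index from logits; temperature 0 means pure argmax."""
    if temperature == 0:
        # Greedy decoding: always the single most likely token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise: softmax over temperature-scaled logits, then sample.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

# If the model is most confident in a factually wrong token (index 2 here),
# temperature 0 returns that wrong token deterministically.
logits = [1.0, 0.5, 2.0]
```

Determinism is not the same thing as accuracy; temperature only controls how randomness is injected, not whether the underlying distribution is right.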

      • daellat@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        ·
        8 days ago

        Well, LLMs are capable (but prone to hallucination) and cost an absolute fuckton of energy. There have been purpose-trained, efficient ML models that we’ve used for years. Document understanding and computer vision are great; just don’t use an LLM for them.

    • froztbyte@awful.systems
      link
      fedilink
      English
      arrow-up
      10
      ·
      8 days ago

      now that you mention it, kinda surprised I haven’t ever seen a spate of custom 3D-printed turbo buttons from overclocker circles

      • self@awful.systems
        link
        fedilink
        English
        arrow-up
        13
        ·
        8 days ago

        it could turn on the RGB! though that would imply that the RGB could be turned off in the first place, which is optimistic on my part

          • Korhaka@sopuli.xyz
            link
            fedilink
            English
            arrow-up
            7
            ·
            8 days ago

            Better option: An array of flip switches for throttling to different speeds.

            Best option: Mount these flip switches above you on an overhead control panel.

            • sem@lemmy.blahaj.zone
              link
              fedilink
              English
              arrow-up
              2
              ·
              8 days ago

              I thought it makes the game tick faster or slower, such that you have to have it set correctly or it’s unplayable.

              • Hexarei@programming.dev
                link
                fedilink
                English
                arrow-up
                4
                ·
                7 days ago

                Kind of, though it’s about the CPU’s clock speed rather than the details of the game.

                So, pedantically? no.

                Experientially? yes.

              • toddestan@lemm.ee
                link
                fedilink
                English
                arrow-up
                4
                ·
                edit-2
                7 days ago

                Some early PC software, mostly games, was written expecting the computer to run at a fixed speed: that of the original IBM PC, which used an Intel 8088 running at 4.77 MHz. If the IBM PC had been more like computers such as the Commodore 64, which changed little during its production run, that would have been fine. But eventually faster PCs were released with 286, 386, 486, etc. CPUs that were considerably faster, and software that expected the original IBM PC hardware ran way too fast on them.

                The turbo button was a bit of a misnomer, since you would normally leave it on, only turning it off as a sort of compatibility mode to run older software. How effective it was varied quite a bit: on some computers, turning it off would get you pretty close to the original IBM PC in terms of speed, but on others it would slow the computer down, though not nearly enough, making it mostly useless for its intended purpose.
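The underlying bug those old games had can be sketched in a few lines: if movement happens per *frame* (i.e. per loop iteration), a faster CPU runs more frames per second and the whole game speeds up, whereas scaling movement by elapsed time per frame makes it CPU-independent. A toy model (function names and numbers are illustrative, not from any real game):

```python
def distance_moved(cpu_frames_per_second, seconds, speed_per_frame=1.0):
    """Old-style loop: movement is per frame, so a faster CPU
    runs more frames and the game speeds up proportionally."""
    frames = int(cpu_frames_per_second * seconds)
    return frames * speed_per_frame

def distance_moved_delta(cpu_frames_per_second, seconds, speed_per_second=60.0):
    """Delta-timed loop: movement is scaled by elapsed time per
    frame (dt), so CPU speed no longer matters."""
    frames = int(cpu_frames_per_second * seconds)
    dt = 1.0 / cpu_frames_per_second
    return sum(speed_per_second * dt for _ in range(frames))
```

Quadrupling the "clock" quadruples the first function's result but leaves the second unchanged, which is why delta timing became standard once CPU speeds diverged.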

  • RaptorBenn@lemmy.world
    link
    fedilink
    English
    arrow-up
    52
    ·
    8 days ago

    Imagine that: a fledgling new technology ham-fistedly inserted into every part of the user experience, offering meager functionality in exchange for the most aggressive data-privacy invasion ever attempted at this scale, and no one likes it.

  • smiletolerantly@awful.systems
    link
    fedilink
    English
    arrow-up
    52
    ·
    edit-2
    9 days ago

    That’s not fair! I care! A lot!

    Just had to buy a new laptop for new place of employment. It took real time, effort, and care, but I’ve finally found a recent laptop matching my hardware requirements and sense of aesthetics at a reasonable price, without that hideous copilot button :)

      • thermal_shock@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        ·
        edit-2
        9 days ago

        How are they bootlocked? You just need the right ISO. I’ve done it: I didn’t know they came with Linux for this particular client, and they put Windows on it, so I had to get a specific ISO to reinstall when they borked it.

      • smiletolerantly@awful.systems
        link
        fedilink
        English
        arrow-up
        8
        ·
        8 days ago

        Decided on this:

        Still had some issues under Linux / NixOS a couple of weeks ago (hardware-wise everything worked, but specific programs, esp. Librewolf, will randomly start eating CPU and battery out of nowhere, with what look like no-ops). Haven’t investigated further yet.

        • JayGray91@piefed.social
          link
          fedilink
          English
          arrow-up
          5
          ·
          8 days ago

          Sweet, glad to know it generally works with Linux. This is available in my part of the world. I’ve been shopping around for a personal for-work laptop since my company is stingy, and I plan to move on anyway.

          • smiletolerantly@awful.systems
            link
            fedilink
            English
            arrow-up
            3
            ·
            8 days ago

            It generally works, yes, but I’d hold off for another month or two in the hope that the issues get resolved in the kernel.

        • psivchaz@reddthat.com
          link
          fedilink
          English
          arrow-up
          3
          ·
          8 days ago

          I really wanted to like that laptop but the screen is so incredibly glossy that unless you’re in a totally dark room it becomes a mirror.

          • smiletolerantly@awful.systems
            link
            fedilink
            English
            arrow-up
            3
            ·
            8 days ago

            I think it’s a matter of preference. Haven’t noticed the screen being a mirror yet, but then again I feel like any even mildly matte screen looks like it’s being viewed through a veil…

            I am a bit worried/curious about how the oled will deal with my very static waybars though, lol

          • smiletolerantly@awful.systems
            link
            fedilink
            English
            arrow-up
            3
            ·
            7 days ago

            Numpad/PIN input. Utterly useless, in my opinion. It also apparently activates itself pretty regularly by accident from palms resting on it while typing. YouTube comments are full of people desperate for a Windows/driver update that lets you deactivate this thing.

            Oh, btw, I did not go through the trouble of enabling support under Linux (you can, but it’s optional, because, well… Linux)