- cross-posted to:
- [email protected]
How did this thread blow up so much?
These “AI Computers” are a solution looking for a problem. The marketing people naming these “AI” computers think that AI is just some magic fairy dust term you can add to a product and it will increase demand.
What are the “killer features” of these new laptops, and what % price increase are they worth?
What is even the point of an AI coprocessor for an end user (excluding ML devs)? Most of the AI features run in the cloud, and even if they could run locally, companies are very happy to charge you rent for services and keep you vendor locked in.
Please stop shoving AI into everything, please give us an opt-out from AI icons and stuff /srs
What the hell is an AI computer? Like one with a beefy GPU?
No thanks. I’m perfectly capable of coming up with incorrect answers on my own.
you’re right tho
Even non tech people I talk to know AI is bad because the companies are pushing it so hard. They intuit that if the product was good, they wouldn’t be giving it away, much less begging you to use it.
It’s partly that and partly a mad dash for market share in case they get it to work usefully. Although this is kind of pointless because AI isn’t very sticky. There’s not much to keep you from using another company’s AI service. And only the early adopter nerds are figuring out how to run it on their own hardware.
You’re right - and even if the user is not conscious of this observation, many are subconsciously behaving in accordance with it. Having AI shoved into everything is off-putting.
Speaking of off-putting, that friggin copilot logo floating around on my Word document is so annoying. And the menu that pops up when I paste text — wtf does “paste with Copilot” even mean?
They are trying to saturate the user base with the word Copilot. At least Microsoft isn’t very sneaky about anything.
Customers don’t want AI; only the corporation heads seem obsessed with it.
One of the mistakes they made with AI was introducing it before it was ready (I’m making a generous assumption by suggesting that “ready” is even possible). It will be extremely difficult for any AI product to shake the reputation that AI is half-baked and makes absurd, nonsensical mistakes.
This is a great example of capitalism working against itself. Investors want a return on their investment now, and advertisers/salespeople made unrealistic claims. AI simply isn’t ready for prime time. Now they’ll be fighting a bad reputation for years. Because of the situation tech companies created for themselves, getting users to trust AI will be an uphill battle.
Apple Intelligence and the first versions of Gemini are the perfect examples of this.
iOS still doesn’t do what was sold in the ads, almost a full year later.
Edit: also things like email summary don’t work, the email categories are awful, notification summaries are straight up unhinged, and I don’t think anyone asked for image playground.
Insert ‘Full Self Driving’ Here.
Also, Outlook’s auto alt text function told me that a conveyor belt was a picture of someone’s screen today.
Calling it “Full Self Driving” is such blatant false advertising.
Apple Intelligence and the first versions of Gemini are the perfect examples of this.
Add Amazon’s Alexa+ to that list. It’s nearly a year overdue and still nowhere in sight.
capitalism working against itself
More like: capitalism reaching its own logical conclusion
(I’m making a generous assumption by suggesting that “ready” is even possible)
It was ready for some specific purposes, but it is being jammed into everything. The problem is they are marketing it as AGI when it is still at the “random fun, but not expected to be accurate” phase.
Nothing in the foreseeable future is going to live up to the current AI marketing. The desired complexity isn’t going to exist in silicon at a reasonable scale.
I’m making a generous assumption by suggesting that “ready” is even possible
To be honest it feels more and more like this is simply not possible, especially regarding the chatbots. Under those are LLMs, which are built by training neural networks, and for the pudding to stick there absolutely needs to be this emergent magic going on where sense spontaneously generates. Because any entity lining up words into sentences will charm unsuspecting folks horribly efficiently, it’s easy to be fooled into believing that’s happened. But whenever, in a moment of despair, I try to get Copilot to do any sort of task, it becomes abundantly clear it’s unable to reliably respect any form of requirement or directive. It just regurgitates some word soup loosely connected to whatever I’m rambling about. LLMs have been shoehorned into an ill-fitting use case. Their sole proven usefulness so far is fraud.
There was research showing that every linear jump in capabilities needed exponentially more data fed into the models, so it seems likely it isn’t going to be possible to get where they want to go.
OpenAI admitted that with o1! They included graphs directly showing gains taking exponential effort.
do you have any articles on this? i have heard this claim quite a few times, but i’m wondering how they put numbers on the capabilities of those models.
Sorry, nope, didn’t keep a link.
Yeah but first to market is sooooo good for stock price. Then you can sell at the top and gtfo before people find out it’s trash
If they didn’t overpromise, they wouldn’t have had mountains of money to burn, so they wouldn’t have advanced the technology as much.
Tech giants can’t wait decades until the technology is ready, they want their VC money now.
Sure, but if the tech doesn’t deliver in the end, all that money is burnt.
If it does deliver it’s still oligarchs deciding what tech we get.
Yes. The ones that have power are the ones that decide. And oligarchs by definition have a lot of power.
The battle is easy. Buy out and collude with the competition so the customer has no choice but to purchase an AI device.
This would only work for a service that customers want or need
Ah, like with the TPM blackbox?
I think people care.
They care so much they actively avoid them.
Oh we care alright. We care about keeping it OUT of our FUCKING LIVES.
AI is going to be this era’s Betamax, HD-DVD, or 3D TV glasses. It doesn’t do what was promised and nobody gives a shit.
Betamax had better image and sound, but was limited by running time and then VHS doubled down with even lower quality to increase how many hours would fit on a tape. VHS was simply more convenient without being that much lower quality for normal tape length.
HD-DVD was comparable to BluRay and just happened to lose out because the industry won’t allow two similar technologies to exist at the same time.
Neither failed to do what they promised. They were both perfectly fine technologies that lost in a competition that only allows a single winner.
BluRay was slightly better if I recall correctly. With the rise in higher definition televisions, people wanted to max out the quality possible, even if most people (still) can’t tell the difference
Blu-ray also had the advantage of PS3 supporting the format without the need for an external disc drive.
@philycheeze @xkbx yes, I think Microslop’s fumble of selling the HD DVD drive only as an external add-on really hindered the format
@philycheeze @xkbx I bought one anyway. 10 years later, mind you :p
They’re not necessarily bad, it’s just an extra barrier to entry.
Blu-ray also had the advantage of not having multiple D’s in its name.
That’s not why it won, though. It won because the industry wanted zone restrictions, which only Blu-ray supported. They suck for the user, but allow the industry to stagger releases in different markets. In reality it just means that I can’t get discs of most foreign films, because they won’t work in my player.
I’m sure that was a factor, but Blu-ray won because the most popular Blu-ray player practically sold itself
It’s hard to say what the final nail in the coffin was, but it is true that Blu-ray went from underdog to outselling HD-DVD around the time the PlayStation 3 came out. I’m not sure how much those early sales numbers matter, though, because I’m sure both were still minuscule compared to DVD.
When 20th Century Fox dropped support for HD-DVD, they cited superior copy protection as the reason. Lionsgate gave similar sentiment.
When Warner later announced they were dropping HD-DVD, they did cite customer adoption as the reason for their choice, but they also did it right before CES, so I’m pretty sure there were some backroom deals at play as well.
I think the biggest impact of the PlayStation 3 was accelerating adoption of Blu-Ray over DVD. Back when DVD came out, VHS remained a major player for years, until the year there was a DVD player so dirt cheap that everyone who didn’t already have a player got one for Christmas.
Nah, Blu-ray was significantly better: 50 GB capacity vs 30 GB.
The big plus for HD-DVD was that it was far cheaper to produce; it didn’t need massive retooling for manufacturing.
Not just that: space. Blu-rays have way more space than DVDs. Remember how many 360 games came with multiple discs? Not a single PS3 game did, unless it was a bonus behind-the-scenes type thing.
Xbox 360 used DVDs for game discs and could play video DVDs. They “supported” HD-DVDs - you needed an add-on which had a separate optical drive in it. Unsurprisingly this didn’t sell well.
AFAIK Betamax did not have any porn content, which might have contributed to the sale of VHS systems.
Dude don’t throw Betamax in there, that was a better product than the VHS. AI is just ass.
I was just about to mention porn and how each new format of the past came down to that very same factor.
If AI computers were incredible at making AI porn, I bet you they’d be selling a lot better haha
Betamax actually found use in television broadcast until the switch to HDTV occurred in 2009.
the later digital variants of beta weren’t retired by Sony until ~2016.
I had no clue that they did digital betamax…
That would make sense though…
There was at one point an HD-VHS as well; it was essentially a 1080p MPEG stream on a VHS tape.
No, I’m sorry. It is very useful and isn’t going away. This thread is either full of Luddites or disingenuous people.
nobody asked you to post in this thread. you came and posted this shit in here because the thread is very popular, because lots and lots of people correctly fucking hate generative AI
so I guess please enjoy being the only “non-disingenuous” bootlicker you know outside of work, where everyone’s required (under implicit threat to their livelihood) to love this shitty fucking technology
but most of all: don’t fucking come back, none of us Luddites need your mid ass
@blarth @TheThrillOfTime huh. You totally named at least one use case then, huh?
You only didn’t because it’s so blindingly obvious (it’s BS)
Also, learn about Luddites, man
I have friends who are computer engineers and they say that it does a pretty good job of generating code, but that’s not a general-population use case. For most people, AI is a nearly useless product. It makes Google searches worse. It makes your phone voice assistant worse. It’s not as good as human artists. And it’s mostly used to create dumbass posts on Reddit to farm engagement. In my life, AI has not made anything better.
Maybe I’m just getting old, but I honestly can’t think of any practical use case for AI in my day-to-day routine.
ML algorithms are just fancy statistics machines, and to that end, I can see plenty of research and industry applications where large datasets need to be assessed (weather, medicine, …) with human oversight.
But for me in my day to day?
I don’t need a statistics bot making decisions for me at work, because if it was that easy I wouldn’t be getting paid to do it.
I don’t need a giant calculator telling me when to eat or sleep or what game to play.
I don’t need a Roomba with a graphics card automatically replying to my text messages.
Handing over my entire life’s data just so an ML algorithm might be able to tell me what that one website I visited 3 years ago that sold kangaroo testicles was isn’t a filing system. There’s nothing I care about losing enough to go to the effort of setting up Copilot, but not enough to just, you know, bookmark it or save it with a clear enough file name.
Long rant, but really, what does copilot actually do for me?
Our boss all but ordered us to have IT set this shit up on our PCs. So far I’ve been stalling, but I don’t know how long I can keep doing it.
Tell your boss you talked to legal and they caution that all copilot data is potentially discoverable.
Set it up. People have to find out by themselves.
same here, i mostly don’t even use it on the phone. my bro is into it though, thinking AI-generated pictures are good.
It’s a fun party trick for like a second, but at no point today did I need a picture of a goat in a sweater smoking three cigarettes while playing tic-tac-toe with a llama dressed as the Dalai Lama.
It’s great if you want to do a kids party invitation or something like that
That wasn’t that hard to do in the first place, and certainly isn’t worth the drinking water to cool whatever computer made that calculation for you.
The only feature that actually seems useful for on-device AI is voice to text that doesn’t need an Internet connection.
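And you don’t need an “AI computer” for that; a minimal sketch using the open-source Whisper model (assuming the openai-whisper package and ffmpeg are installed; the audio file name is just an example):

```python
import whisper  # pip install openai-whisper (also needs ffmpeg)

# The model weights are downloaded once, then cached locally;
# after that, transcription runs with no internet connection.
model = whisper.load_model("base")
result = model.transcribe("movie_audio.mp3")  # example file name
print(result["text"])
```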
As someone who hates orally dictating my thoughts, that’s a no from me dawg, but I can kinda understand the appeal (though I’ll note offline TTS has been around for like a decade pre-AI)
longer: dragon dictate and similar go back to the mid 90s (and I bet the research goes back slightly earlier, not gonna check now)
similar for TTS
deleted by creator
Before ChatGPT was invented, everyone kind of liked how you could type in “bird” into Google Photos, and it would show you some of your photos that had birds.
I use it to speed up my work.
For example, I can give it a database schema and describe what I need to achieve, and most of the time it will throw out a pretty good approximation or even get it right on the first go, depending on complexity and how well I phrase the request. I could write these myself, of course, but not in 2 seconds.
Same with text formatting, for example. I regularly need to format long strings in specific ways, adding brackets and changing upper/lower capitalization. It does it in a second, and really well.
Then there’s just convenience things. At what date and time will something end if it starts in two weeks and takes 400h to do? There’s tools for that, or I could figure it out myself, but I mean the AI is just there and does it in a sec…
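(For what it’s worth, that last one is also a three-line script; a minimal sketch, assuming “in two weeks” counts from the moment you run it:)

```python
from datetime import datetime, timedelta

# Starts two weeks from now and takes 400 hours: when does it end?
end = datetime.now() + timedelta(weeks=2) + timedelta(hours=400)
print(end.strftime("%Y-%m-%d %H:%M"))
```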
it’s really embarrassing when the promptfans come here to brag about how they’re using the technology that’s burning the earth and it’s just basic editor shit they never learned. and then you watch these fuckers “work” and it’s miserably slow cause they’re prompting the piece of shit model in English, waiting for the cloud service to burn enough methane to generate a response, correcting the output and re-prompting, all to do the same task that’s just a fucking key combo.
Same with text formatting, for example. I regularly need to format long strings in specific ways, adding brackets and changing upper/lower capitalization. It does it in a second, and really well.
how in fuck do you work with strings and have this shit not be muscle memory or an editor macro? oh yeah, by giving the fuck up.
(100% natural rant)
I can change a whole fucking sentence to FUCKING UPPERCASE by just pressing vf.gU in fucking vim, with a fraction of the energy that’s enough to run a fucking marathon, which in turn is only a fraction of the energy the fucking AI cloud cluster uses to spit out the same shit. The comparison is like a ping pong ball to the Earth, then to the fucking sun!

Alright, bros, listen up. All these great tasks you claim AI does faster and better, I can write up a script or something to do even faster and better. Fucking A! That surge of high when you use AI comes from you not knowing how to do it, or whether it’s even possible. You!
You prompt bros are blasting shit tons of energy just to achieve the same quality of work, if not worse, in a much fucking longer time.
And somehow these executives claim AI improves fucking productivity‽
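Case in point: the bracket-and-capitalization “task” from upthread is a couple of lines of Python. A minimal sketch (the comma-separated input format is my assumption, adapt to whatever the strings actually look like):

```python
# Wrap each comma-separated item in brackets and uppercase it.
def bracket_and_upper(line: str) -> str:
    return ", ".join(f"[{item.strip().upper()}]" for item in line.split(","))

print(bracket_and_upper("foo bar, baz qux"))  # -> [FOO BAR], [BAZ QUX]
```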
exactly. in Doom Emacs (and an appropriately configured vim), you can surround the word under the cursor with brackets with ysiw] where the last character is the bracket you want. it’s incredibly fast (especially combined with motion commands, you can do these faster than you can think) and very easy to learn, if you know vim.

and I think that last bit is where the educational branch of our industry massively fucked up. a good editor that works exactly how you like (and I like the vim command language for realtime control and lisp for configuration) is like an electrician’s screwdriver or another semi-specialized tool. there’s a million things you can do with it, but we don’t teach any of them to programmers. there’s no vim or emacs class, and I’ve seen the quality of your average bootcamp’s vscode material. your average programmer bounces between fad editors depending on what’s being marketed at the time, and right now LLMs are it. learning to use your tools is considered a snobby elitist thing, but it really shouldn’t be — I’d gladly trade all of my freshman CS classes for a couple semesters learning how to make vim and emacs sing and dance.
and now we’re trapped in this industry where our professionals never learned to use a screwdriver properly, so instead they bring their nephew to test for live voltage by licking the wires. and when you tell them to stop electrocuting their nephew and get the fuck out of your house, they get this faraway look in their eyes and start mumbling about how you’re just jealous that their nephew is going to become god first, because of course it’s also a weirdo cult underneath it all, that’s what happens when you vilify the concept of knowing fuck all about anything.
The only things I’ve seen it do better than I could manage with a script or in Vim are things that require natural language comprehension. Like, “here’s an email forwarded to an app, find anything that sounds like a deadline” or “given this job description, come up with a reasonable title summary for the page it shows up on”… But even then those are small things that could be entirely omitted from the functionality of an app without any trouble on the user. And there’s also the hallucinations and being super wrong sometimes.
The whole thing is a mess
presumably everyone who has to work with you spits in your coffee/tea, too?
adding brackets and changing upper/lower capitalization
I have used a system wide service in macOS for that for decades by now.
changing upper/lower capitalization
That’s literally a built-in VSCode command my dude, it does it in milliseconds and doesn’t require switching a window or even a conscious thought from you
Gotta be real, LLMs for queries make me uneasy. We’re already in a place where data modeling isn’t as common and people don’t put indexes or relationships between tables (and some tools didn’t really support those either). They might be alright at describing tables (Databricks has it baked in, for better or worse; it’s usually pretty good at a quick summary of what a table is for), but throwing an LLM on top of that doesn’t really inspire confidence.
If your data model is highly normalised, with FKs everywhere, good naming, and good documentation, yeah, totally, I could see that helping; but if that’s the case you already have good governance practices (which all ML tools benefit from AFAIK). Without that, I’m totally dreading the queries. People are already capable of generating stuff that gives DBAs a headache; simple cases, yeah, maybe, but complex queries, idk, I’m not sold.
Data understanding is part of the job anyhow. That’s largely conceptual, which maybe LLMs could work as an extension for, but I really wouldn’t trust them to generate full-on queries in most of the environments I’ve seen; data is overwhelmingly super messy and orgs don’t love putting effort towards governance.
I’ve done some work on natural language to SQL, both with older models (like BERT) and current LLMs. It can do alright if there is a good schema and reasonable column names, but otherwise it can break down pretty quickly.
That’s before you get into the fact that SQL dialects are a really big issue for LLMs to begin with. They all look so similar that I’ve found it common for models to switch between dialects without warning.
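A concrete example of what I mean: the same “top 10 rows” query across dialect families (table and column names are made up):

```python
# Same intent, three dialect families -- exactly the kind of detail
# models blur together mid-answer.
top_10_orders = {
    "postgres/mysql/sqlite": "SELECT * FROM orders ORDER BY total DESC LIMIT 10",
    "sql server (t-sql)": "SELECT TOP 10 * FROM orders ORDER BY total DESC",
    "oracle/standard sql": (
        "SELECT * FROM orders ORDER BY total DESC FETCH FIRST 10 ROWS ONLY"
    ),
}
```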
Yeah, I can totally understand that. Genie is Databricks’ one, and apparently it’s surprisingly decent at that, but it has access to a governance platform that traces column lineage on top of whatever descriptions and other metadata you give it. I was pretty surprised by the accuracy of some of its auto-generated descriptions though.
Yeah, the more data you have around the database the better, but that’s always been the issue with data governance - you need to stay on top of that or things start to degrade quickly.
When the governance is good, the LLM may be able to keep up, but will you know when things start to slip?
what in the utter fuck is this post
The first two examples I really like since you’re able to verify them easily before using them, but for the math one, how do you know it gave you the right answer?
they don’t verify any of it
I use it to parse log files, compare logs from successful and failed requests and that sort of stuff.
and now we’re up to inaccurate, stochastic diff. fucking marvelous.

Stay tuned for inaccurate, stochastic ls.
How about real-time subtitles on movies in any language you want that are always synced?
VLC is working on that with the use of LLMs
I tried feeding Japanese audio to an LLM to generate English subs and it started translating silence and music as requests to donate to anime fansubbers.
No, really. Fansubbed anime would put their donation message over the intro music or when there wasn’t any speech to sub and the LLM learned that.
All according to k-AI-kaku!
We’ve had speech to text since the 90s. Current iterations have improved, like most technology has improved since the 90s. But, no, I wouldn’t buy a new computer with glaring privacy concerns for real time subtitles in movies.
You’re thinking too small. AI could automatically dub the entire movie while mimicking the actors’ voices and simultaneously moving their lips and mouths to form the words correctly.
It would just take your daily home power usage to do a single 2hr movie.
They’re great for document management. You can let it build indices, locally on your machine with no internet connection. Then when you want to find things you can ask it in human terms. I’ve got a few GB of documents and finding things is a bitch - I’m actually waiting on the miniforums a1 pro whatever the fuck to be released with an option to buy it without Windows (because fuck M$) to do exactly this for our home documents.
a local search engine but shitty, stochastic, and needs way too much compute for “a few gb of documents”, got it, thanks for chiming in
Offline indexing has been working just fine for me for years. I don’t think I’ve ever needed to search for something esoteric like “the report with the blue header and the photo of 3 goats having an orgy”, if I really can’t remember the file name, or what it’s associated with in my filing system, I can still remember some key words from the text.
Better indexing / automatic tagging of my photos could be nice, but that’s a rare occurrence, not a “I NEED a button for this POS on my keyboard and also want it always listening to everything I do” kind of situation
I wish that offline indexing and archiving were normalized and more accessible, because it’s a fucking amazing thing to have
Apparently it’s useful for extracting information out of a text into a format you specify. A friend is using it to extract transactions out of 500-year-old texts. However, to get rid of hallucinations the temperature needs to be 0. So the only way is to self-host.
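Roughly this kind of setup, as far as I understand it; a minimal sketch against a local Ollama server (the model name and prompt are placeholders, not what my friend actually runs):

```python
import requests  # assumes an Ollama server listening on localhost:11434

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # placeholder model name
        "prompt": (
            "Extract every transaction (date, parties, amount) as JSON "
            "from the following text:\n<document text here>"
        ),
        "stream": False,
        "options": {"temperature": 0},  # deterministic sampling
    },
)
print(resp.json()["response"])
```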
Setting the temperature to 0 doesn’t get rid of hallucinations.
It might slightly increase accuracy, but it’s still going to go wrong.
Well, LLMs are capable (but prone to hallucination) and cost an absolute fuckton of energy. There have been purpose-trained, efficient ML models that we’ve used for years. Document Understanding and Computer Vision are great, just don’t use an LLM for them.
Reducing computer performance:
Turbo button 🤝 AI button
now that you mention it, kinda surprised I haven’t ever seen a spate of custom 3D-printed turbo buttons from overclocker circles
it could turn on the RGB! though that would imply that the RGB could be turned off in the first place, which is optimistic on my part
it’s the button for more RGB
saw a microphone with RGB and i’m like wtf is this thing supposed to do, flash disco lights when you’re on stream shouting slurs at your esteemed fellow gamers
shouting slurs at your esteemed fellow gamers
They’re called “heated gaming moments” /j
reheated gaming moments
Fresh(?) off the PUBG Bridge
nah, just call a fuckwit a fuckwit. even jokingly giving them breathing room is something they know how to abuse.
Same issue from when we had turbo buttons: why have a button for something you don’t turn off?
your comment demonstrates a remarkable lack of imagination
Better option: An array of flip switches for throttling to different speeds.
Best option: Mount these flip switches above you on an overhead control panel.
And a clear lack of understanding of what the turbo button actually did
I thought it makes the game tick faster or slower, such that you have to have it set correctly or it’s unplayable.
Kind of, though it’s about the CPU’s clock speed rather than the details of the game.
So, pedantically? no.
Experientially? yes.
Some early PC software, mostly games, was written expecting the computer to run at a fixed speed: that of the original IBM PC, which used an Intel 8088 running at 4.77 MHz. If the IBM PC had been more like computers such as the Commodore 64, which changed little during its production run, that would have been fine. But eventually faster PCs were released that ran on 286, 386, 486, etc. CPUs that were considerably faster, and hence software that expected the original IBM PC hardware ran way too fast.
The turbo button was a bit of a misnomer, since you would normally have it on and leave it on, only turning it off as a sort of compatibility mode to run older software. How effective it was varied quite a bit: on some computers, turning it off would get you pretty close to the original IBM PC in terms of speed, but on others it would just slow the computer down, not nearly enough, making it mostly useless for what it was intended for.
I had one on my PC in the late 90s, early 2000s.
Imagine that: a new fledgling technology ham-fistedly inserted into every part of the user experience, while offering meager functionality in exchange for the most aggressive data privacy invasion ever attempted on this scale, and no one likes it.
That’s not fair! I care! A lot!
Just had to buy a new laptop for new place of employment. It took real time, effort, and care, but I’ve finally found a recent laptop matching my hardware requirements and sense of aesthetics at a reasonable price, without that hideous copilot button :)
quite annoyed that the Snapdragon laptops are bootlocked cos they’d make great Linux boxes
How are they bootlocked? You just need the right ISO. I have done it: I didn’t know they came with Linux for this particular client, and they put Windows on it; I had to get a specific ISO to reinstall when they borked it.
oh really? I thought MS had demanded boot locking for the ARM laptops.
I’m not 100% sure, I just know I did it once. Let me see if I can get the iso I used for Linux.
yeah looks like I’m thankfully wrong!
Which laptop did you buy if you don’t mind sharing?
Decided on this:
Still had some issues under Linux / NixOS a couple of weeks ago (hardware-wise everything worked, but specific programs, esp. Librewolf, will randomly start eating CPU and battery out of nowhere, with what look like no-ops). Haven’t investigated further yet.
sweet, glad to know it generally works with linux. this is available in my part of the world. been shopping around for a personal for-work laptop since my company is stingy. And I plan to move on anyways.
It generally works, yes, but I’d hold off for another month or two in the hopes of the issues being resolved in the kernel
I really wanted to like that laptop but the screen is so incredibly glossy that unless you’re in a totally dark room it becomes a mirror.
I think it’s a matter of preference. Haven’t noticed the screen being a mirror yet, but then again I feel like any even mildly matte screen looks like it’s being viewed through a veil…
I am a bit worried/curious about how the oled will deal with my very static waybars though, lol
wtf is going on with that touchpad - is it a tap calculator input?
Numpad/PIN input. Utterly useless in my opinion. Also, it apparently activates itself pretty regularly by accident from palms resting while typing. YouTube comments are full of people desperate for a Windows/driver update which lets you deactivate this thing.
Oh, btw, I did not go through the trouble of enabling support under Linux (you can, but it’s optional, because, well… Linux)