you could just do it yourself.
Personally, I think that wholly depends on the context.
For example, if someone’s having part of an email rewritten because they feel the tone was a bit off, it’s usually because their own attempts weren’t working for them, and they wanted a second… not exactly opinion, since it’s obviously a machine, but at least an attempt from outside whatever their brain is currently locked into trying to do.
I know I’ve gotten stuck for way too long wondering why my writing felt off, only for someone’s quick suggestion to clear it all up. So I can see how this would be helpful, and it’s not always something people can easily or quickly do themselves.
Also, there are plenty of legitimate use cases where an application uses an LLM to parse small pieces of data on its behalf, doing a better job than simple regular expressions could.
For example, Linkwarden, a popular open source link management tool, uses LLMs (on an opt-in basis) to automatically tag your links based on the contents of each page. When I’m importing thousands of bookmarks for the first time, each individual task is quick: just look at the link and assign the proper tags. It takes no significant mental effort on its own, but I don’t want to do it thousands of times when an LLM will get it done much faster, with accuracy that’s good enough for my use case.
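To make the regex-vs-LLM point concrete, here’s a minimal sketch (purely hypothetical, not Linkwarden’s actual implementation; `complete` is a stand-in for whatever LLM completion call you’d use) of why keyword matching breaks down for tagging:

```python
import re

def regex_tags(page_text: str) -> set[str]:
    """Naive keyword matching: brittle, misses synonyms and context."""
    rules = {
        "python": r"\bpython\b",
        "security": r"\b(security|vulnerability|exploit)\b",
        "recipes": r"\brecipe\b",
    }
    return {tag for tag, pattern in rules.items()
            if re.search(pattern, page_text, re.IGNORECASE)}

def llm_tags(page_text: str, complete) -> set[str]:
    """`complete` is any prompt -> text LLM call; the prompt asks for
    comma-separated tags based on meaning, not exact keywords."""
    prompt = f"Suggest short topic tags, comma-separated, for:\n{page_text}"
    return {t.strip().lower() for t in complete(prompt).split(",") if t.strip()}

# A regex tagger can't tag this page "python" -- the keyword never appears,
# even though any LLM (or human) would tag it correctly from context:
page = ("A guide to list comprehensions and decorators in the "
        "language created by Guido van Rossum.")
print(regex_tags(page))  # set() -- no keyword hits
```

The gap only grows with real pages: synonyms, other languages, and topical context all sail past fixed patterns, which is exactly the kind of small fuzzy task the parent comment is describing.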
I can definitely agree with you in a broader sense, though. At this point I’ve seen people write two-sentence emails and short comments with AI, using prompts even longer than the output, and that I can 100% agree is entirely pointless.
While true, it doesn’t keep you safe from sleeper agent attacks.
These essentially allow the creator of your model to inject behaviors (seamlessly, and undetectably until the desired response is triggered) that only activate when the model is given a specific prompt, or when a certain condition is met (such as a particular date having passed).
https://arxiv.org/pdf/2401.05566
It’s obviously less likely than a company simply tweaking their models whenever they feel like it, and it prevents them from changing anything on the fly once training is complete and the model is distributed. (Although I could see a model designed to pull from the internet being given a vulnerability where it queries a specific URL on the company’s servers, which could then be updated with any additional payload.) But I personally think we’ll see vulnerabilities like this become evident over time. I have no doubt it will become a target, especially for nation-state actors, to simply slip some faulty data into training datasets or fine-tuning processes that get picked up by many models.