

Perhaps true, but given the nature of the errors involved (it generates something plausible rather than flagging that it lacks the information) and the reviewing that output then requires, which itself demands research (the very thing it was being used to shortcut in this context), isn’t it still something of an ill fit for this?
I’ve not used retrieval-augmented generation (RAG) as far as I’m aware, so my reference point is what’s been pushed to the masses so far (dunno if any of it incorporates RAG, correct me if I’m mistaken).
Looking it up, I can see how it may mitigate some issues, but I still don’t have much confidence that this is a wise application, since at base it’s still generative text. What I’ve tried so far has reinforced this view, as it hasn’t served as a good research aid.
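
For what it’s worth, my rough understanding from reading up on it is that a RAG setup retrieves some relevant passages and pastes them into the prompt before generation, something like this toy sketch (every name here is made up for illustration, not any real library’s API):

```python
# Toy sketch of a RAG pipeline as I understand it; all names here are
# hypothetical, not taken from any particular library.

def retrieve(query, corpus, top_k=3):
    # Crude relevance score: how many words the query and a passage share.
    query_words = set(query.lower().split())
    def score(passage):
        return len(query_words & set(passage.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:top_k]

def answer(query, corpus, generate):
    # 'generate' stands in for whatever text-generation model is in use.
    passages = retrieve(query, corpus)
    prompt = ("Answer using only the sources below.\n\n"
              + "\n\n".join(passages)
              + "\n\nQuestion: " + query)
    return generate(prompt)  # the output is still free-form generated text
```

Which is why I say it’s still generative text at base: the retrieval step narrows what goes into the prompt, but the final answer is still whatever the model chooses to generate from it.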
Anything it has generated for me has typically been superficial, i.e. information I can easily find on my own, because it’s right there on the sites in the first page of search results. In other cases, the source articles it cites seem not to exist, as attempts to verify them turn up nothing.