• scruiser@awful.systems · 13 points · 5 days ago

    Unlike with coding, there are no simple “tests” to try out whether an AI’s answer is correct or not.

    So for most actual practical software development, writing tests is in fact an entire job in and of itself, and it’s a tricky one, because covering even a fraction of the use cases and complexity the software will actually face when deployed is really hard. So simply letting LLMs brute-force trial-and-error their code through a bunch of tests won’t actually get you good working code.
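
    To make the testing point concrete, here is a made-up toy example (the function and the tests are hypothetical, purely for illustration): a suite like this passes, so an LLM grinding against it would be told its code “works”, yet the function falls over on perfectly ordinary real-world input.

        def parse_price(text: str) -> float:
            # Handles exactly the happy path the tests below exercise.
            return float(text.replace("$", ""))

        def test_parse_price():
            assert parse_price("$10") == 10.0
            assert parse_price("$3.50") == 3.5

        # Both assertions pass, but "1,299.00 USD", "€10" or "" all raise
        # ValueError, and nothing in this suite would ever catch that.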

    AlphaEvolve kind of did this, but it was testing very specific, well-defined, well-constrained algorithms that could have very precise evaluation functions written for them, and it was using an evolutionary algorithm to guide the trial-and-error process. They don’t say exactly in their paper, but that probably meant generating code hundreds, thousands, or even tens of thousands of times to produce relatively short sections of code.
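
    The paper doesn’t spell the loop out, but evaluation-guided evolutionary search generally has roughly this shape (a toy sketch in Python, not AlphaEvolve’s actual code; mutate, evaluate and the population/generation numbers are stand-ins):

        import random

        def evolve(seed_program, mutate, evaluate, generations=1000, pop_size=50):
            # Toy evolutionary loop: it only works because `evaluate` is a
            # precise, automated fitness function for a narrow, well-defined
            # problem -- exactly the constraint AlphaEvolve operated under.
            population = [seed_program]
            for _ in range(generations):
                # Ask the code generator (the LLM) for variants of current candidates.
                children = [mutate(random.choice(population)) for _ in range(pop_size)]
                # Keep only the best-scoring programs for the next round.
                population = sorted(population + children, key=evaluate, reverse=True)[:pop_size]
            return population[0]

    Note how many candidate programs get generated and discarded along the way, which is where the hundreds or thousands of attempts come from.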

    I’ve noticed a trend where people assume other fields have problems LLMs can handle, but the actually competent experts in that field know why LLMs fail at key pieces.

    • buddascrayon@lemmy.world · 6 points · 4 days ago

      I think they meant that coders can run their code and find out whether it works or not, but lawyers would have to stand in front of a judge or some other legally powerful entity and discover the hard way that the LLM-outputted statements were essentially gobbledygook.

      • zogwarg@awful.systems · 6 points · 4 days ago

        But code that doesn’t crash isn’t necessarily code that works. And even for code made by humans, we sometimes do find out the hard way, and it can sometimes impact an arbitrarily large number of people.
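
        A made-up one-liner of the kind of thing I mean: this runs without any error, and still quietly gives wrong answers.

            def average(values):
                # No crash, plausible-looking output -- but floor division
                # silently truncates, so average([1, 2]) returns 1 instead of 1.5.
                return sum(values) // len(values)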

    • HedyL@awful.systems · 6 points · 5 days ago

      I’ve noticed a trend where people assume other fields have problems LLMs can handle, but the actually competent experts in that field know why LLMs fail at key pieces.

      I am fully aware of this. However, in my experience, it is sometimes the IT departments themselves that push these chatbots onto others in the most aggressive way. I don’t know whether they found them to be useful for their own purposes (and therefore assume this must apply to everyone else as well) or whether they are just pushing LLMs because this is what management expects them to do.

      • mountainriver@awful.systems · 4 points · 3 days ago

        From experience in an IT department, I would say it’s mainly a combination of management pressure and the need to keep security problems manageable by choosing which AI tools to push on users before too many users start using third-party tools on their own.

        Yes, they will create security problems anyway, but maybe, just maybe, users won’t copy-paste sensitive business documents into third-party web pages?

        • HedyL@awful.systems · 4 points · 3 days ago

          Yes, they will create security problems anyway, but maybe, just maybe, users won’t copy-paste sensitive business documents into third-party web pages?

          I can see that. It becomes kind of a protection racket: Pay our subscription fees, or data breaches are going to befall you, and you will only have yourself (and your chatbot-addicted employees) to blame.