cracks knuckles

okay so you know how the media has a problem with taking snippets of scientific studies out of context and writing articles which mislead the public?

now imagine a computer program is doing that when you ask it questions about scientific literature, but it also makes stuff up.

those who have made this connection and understand its implications are the ones most opposed to LLMs invading everything.

setting aside what could be done to improve the tech, we're always back to it being a shortcut for thinking.

Replies

  1. Me and my wife are trying to stop just using the internet every time we have a question. I (gulp) may buy an encyclopedia

  2. Tbf they are improving the tech all the time

    The thing is, with chatbots, getting better just means being a more convincing liar

  3. On the one hand, this is entirely correct, poorly understood, and the subject of a growing body of evidence pointing to LLM users losing capacity for certain kinds of cognitive tasks. On the other hand, if we can make things easy, we're going to, despite the entirely predictable costs.

  4. As someone who is both fascinated by the technology and resentful of the foolhardy way it is being injected into our lives, what I find more interesting is how likely it is that future tech derived from all this will cure cancer, or something equally terrifying in its insurmountability, just by "thinking".

  5. People are offloading what little capacity they have to think and reason, ceding the whole process to an AI. It’s crazy how fast it has happened. It comes up all the time, especially with young people, where the refrain of “I asked ChatGPT” is all but ubiquitous.

  6. Rob T. @r6e.co

    Fancy pattern-recognition and synthesis, horrifically applied.

  7. It is so obvious when you give it even a cursory thought. How could anyone (let alone entire institutions of "higher learning") get duped into believing this is a viable way forward :/

  8. I want to clarify that my example here is actually what I think is one of the best use cases of LLMs (as a sort of search and parsing tool).

    The problem is handing everyone the keys to a very confident-sounding implementation of that. It takes subject matter experts to verify the results.

  9. The pool of future training data is already becoming polluted with these 'hallucinations', and they'll soon get reinforced as primary sources. Within a few years it'll be impossible to tell what's accurate and what is LLM BS.
