cracks knuckles

okay so you know how the media has a problem with taking snippets of scientific studies out of context and writing articles which mislead the public?

now imagine a computer program is doing that when you ask it questions about scientific literature but also it makes stuff up.

Replies

  1. I think it was AR4 (2007) in which the IPCC made a very judicious & nuanced projection (with several caveats) of the future potential for tropical storm frequencies & intensities in terms of decade averages.

    The breathless MSM headline was, "Scientists predict lots more hurricanes!!"

    0
  2. Take out the middleman!

    [Potential client] So it'll be cheaper then!

    Oh no, it'll be incredibly expensive, ruin our air and water, and destroy students' ability to think.

    [Pc] interesting... strokes chin

    0
  3. Imagine -

    LLMs with a functional conscience

    Developed from a culture which seems to breed so many sociopaths along with its inordinate number of “convenient truths”

    0
  4. Always verify the underlying data before you blindly follow anything. Too many people just take it as fact if it "sounds plausible" to their viewpoint lol.

    0
  5. Now imagine some grifting asshole like Andreessen convincing doctors it can help make medical decisions.

    0
  6. It makes up for these minor shortcomings by inventing fun new quotes from people who don’t even exist

    0
  7. Computers are incapable of experimentation and the abstract thought required to form a cohesive hypothesis. They can't tell what's real because they lack senses in order to distinguish reality from fantasy.

    We would do well to remember that.

    0
  8. Omg a Tech Connections video about how when you ask ChatGPT why it messed up and it says "I panicked" it didn't actually panic, it's just that the algorithm decided that the most likely reply a human would make given the context is to say they panicked?

    0
  9. I have a favorite fraudulent scientific paper that deliberately misleads the reader. Published in JAMA in 2019. The fun thing is that anyone can spot the fraud. Still not withdrawn. Shall I link it? If 3 people LIKE this comment I'll link the paper along with a clue to help spot the fraud.

    1
  10. @nnnilabs.net

    day number (I don't remember) of waiting for LLMs (and whatever else people choose to call "AI") to just die

    0
  11. The worst part is that we humans have a bias toward trusting a source of information because "it looks legit"; we can't be skeptical of every claim we read or hear. So if the LLM tells you a few facts that are true, it primes you to believe the future lies it hallucinates.

    3
  12. That's the wild card in all of this, our corporate overlords have their machine learning black boxes and they have a lot of power and resources, but this is very much a GIGO situation, Garbage In Garbage Out.

    If your premises are wrong you can only be right by accident.

    1
  13. In a lecture about using AI in my business, I was told not to ask it for facts. Use it for creative work. For example, I asked it, "You're William Goldman. Why should a person buy travel from me?" The results were gold.

    0
  14. Rob T. @r6e.co

    Machine learning, LLMs, "AI" or whatever you call it, can be an incredibly useful tool if you understand how it works, and what its weaknesses are. But it seems many are promoting it for uses that are particularly harmful due to those weaknesses. Another dangerous tech trend powered by hucksters.

    2
  15. You forgot "...and it does so with the utmost confidence in its answer. At least, that is, until you point out that it's completely wrong, at which point it'll apologize then make something new and freshly incorrect up, confidently."

    1
  16. I've found a new thing I hate about how they're implemented:

    Not only will it confidently argue with me for no reason;

    Now it will confidently tell me there are no results after not searching.

    0
  17. That's because it's modelling language. It's the hollow shell of surface communication. It has nothing below it - such as thought, intellect, emotion, learning, cognition, etc.

    An LLM is an examination of language structure in order to model results. It's just a probability engine.

    0
  18. A good start would be forcing them to preface their every response with:

    “This response is from an AI that can easily get things wrong: …”

    0
  19. I have spent my life looking up facts. First in dictionaries and atlases. Then Google when it became available. I love learning new things. In the age of AI, nothing I read on the Internet now can be trusted. I find this terribly depressing. Guess I'm back to paper dictionaries. I'll need a new atlas.

    0
  20. I also find the bitterest part of this whole LLM debacle to be its impact on coding. It should be a great tool for autocomplete, debugging, and natural language programming! The holy grail! And yet, that's all been poisoned. Copilot code is dogshit!

    0
  21. Science: Let's invent a smart program that can iterate through trillions of changing combinations in a few hours, to help with tedious tasks and extremely large datasets. 💡

    OpenAI: Best I can do is a crappy copy-painter that draws too many fingers. Here's a cat on the moon. 🤷‍♂️

    0
  22. I had a job writing scripts for online ads, and the bosses were actively telling us to try out AI to get our assignments done... then telling us we had to fact-check because AI will just make stuff up sometimes. Oh, and also, we had to stop using the em dash because clients were convinced it was AI.

    0
  23. Wow that sounds awful, hopefully nobody focused on short term profits made that the focus of every tech and financial investment for the last like 5 years with no signs of stopping.

    1
  24. That's why I don't trust AI at all. Too much hallucinating and BS from what it spews.

    I've been a software engineer for 40+ years and as such I treat everything any computer tells me as suspect until I have independent verification.

    0
  25. First thing I did when I got my hands on an LLM was to ask it very specific questions about things I knew very well.

    They still continue to get the details wrong.

    0
  26. That reminds me of the phrase "vegetative electron microscopy", which started as either a text-recognition error or a translation mistake and was spread into many scientific papers and articles by LLMs used by journalists, students, and scientists. Disturbing stuff.

    0
  27. It’s fucking insane my college professors are out here this year having to accommodate it in the rules while trying to not give students a free pass and I’m just like WHY ARE YOU ALLOWING THIS WHAT

    0
  28. those who have made this connection and understand its implications are the ones most opposed to LLMs invading everything.

    setting aside what could be done to improve the tech, we're always back to it being a shortcut for thinking.

    11
  29. It's so annoying. People should talk to chatGPT about something they're an expert in. Quiz it thoroughly and it's easy to see that it makes stuff up constantly. I don't even trust it to tell me what armor I should use in RuneScape because it just says anything.

    2
  30. Honestly, ChatGPT is just the modern day Wikipedia problem, except it has its own internal editors with no accountability.

    I'd be fine with AI if people used it in the same vein. It is a starting point to research, but you still need to do your own damned work from there.

    0
  31. I try to explain to people that it's a text-prediction model, meaning it's very good at generating output that sounds like a real human response. Note that being accurate is not required for that goal.

    0
  32. Even the idea that it "makes stuff up" is giving it too much credit. It literally can't think. Calling it "intelligence" of any kind is ceding ground to its misleading marketing.

    It's all just fancy autocomplete. Nobody in their right mind would ask their phone's autocomplete for advice.

    1
  33. I have noticed that Google's AI answers to searches have become extremely error prone in recent weeks. It just keeps learning more false information.

    0
  34. This is the base level concern. The real paranoia comes in when we realize these are all corporate controlled and subject to manipulation. We were lucky for a few years there when Google gave us non-manipulated answers to searches, and we assume that will continue in an LLM world? Nope.

    0
  35. A.I. is THE MOST DANGEROUS INVENTION TO DATE. In my world, A.I. would be the savior of humanity, because I would utilize this powerful tool to attenuate the financial economic model of America to where this system serves 90% instead of the greedy 30% of the present system. I can do this in one generation.

    0
  36. If one believes AI always has the correct answers, remember this: just as the Bible is manmade, so is AI.

    Both are written, shaped, and interpreted by people. And wherever people are involved, there will always be flaws, bias, and limits.

    Truth isn’t found in blind trust.

    0
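The "probability engine" point in reply 17 (and reply 31's "text prediction model") can be made concrete with a toy sketch. This is an illustration only, with a made-up three-sentence corpus and hypothetical names (`follows`, `generate`); real LLMs are neural networks conditioning on long contexts, but the core move is the same: emit a statistically likely next word, with no check anywhere on whether the result is true.

```python
import random
from collections import defaultdict

# Toy bigram "language model": each next word is drawn from the words
# that followed the current word in the training text. Nothing here
# represents meaning or truth; it is frequency-matching all the way down.

corpus = (
    "the study found the effect was small "
    "the study found the effect was large "
    "the study found the effect was not significant"
).split()

# Record which words were observed following each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, n: int = 8, seed: int = 0) -> str:
    """Emit up to n words after `start`, sampling by observed frequency."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word never had a successor in the corpus
        out.append(random.choice(options))  # likely, not necessarily true
    return " ".join(out)

print(generate("the"))
```

Depending on the seed, the same model will fluently report that the effect was "small", "large", or "not significant"; the adjacency statistics are entirely agnostic about which of those is the case.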