Edit: After reading the discussion, I figured I’d let GPT4 speak for itself:

The quest to identify true artificial intelligence (AI) indeed presents challenges, especially as AI models become more sophisticated. Let’s explore some indicators that researchers and practitioners consider when assessing AI systems:

  1. Understanding Context and Meaning:

    • True AI should demonstrate an understanding of context and meaning. It should not merely generate plausible-sounding sentences but also comprehend the content it produces.
    • A system that can engage in nuanced conversations, infer intent, and provide relevant responses based on context would be a strong indicator of advanced AI.
  2. Generalization and Adaptability:

    • An AI system that can generalize from its training data and adapt to novel situations is valuable.
    • True AI should not be limited to memorizing specific examples but should exhibit flexibility in handling diverse scenarios.
  3. Creativity and Novelty:

    • Creativity is a hallmark of human intelligence. An AI system that generates novel ideas, art, or solutions beyond its training data suggests progress.
    • For instance, AI-generated poetry, music, or visual art that resonates with human emotions could be an indicator.
  4. Ethical Decision-Making:

    • True AI should navigate ethical dilemmas. It should consider consequences, fairness, and societal impact.
    • An AI system that can weigh trade-offs and make ethical choices aligning with human values is a positive sign.
  5. Self-Awareness and Self-Improvement:

    • While self-awareness akin to human consciousness remains elusive, an AI system that can self-assess its performance and seek improvement is promising.
    • Continuous learning, adaptation, and self-correction are essential traits.
  6. Transparency and Explainability:

    • True AI should be transparent about its decision-making process. It should provide explanations for its outputs.
    • Efforts to develop interpretable AI models and techniques for explaining their predictions contribute to trustworthiness.
  7. Robustness and Safety:

    • AI systems must handle adversarial inputs, noisy data, and edge cases gracefully.
    • Robustness against attacks and safety precautions to prevent harmful behavior are critical indicators.
  8. Human-AI Collaboration:

    • Rather than replacing humans, true AI should complement human abilities.
    • Systems that collaborate effectively with humans—such as AI assistants, co-creative tools, or medical diagnostics—demonstrate progress.
  9. Quantifiable Metrics:

    • Researchers propose various metrics to assess AI capabilities, such as perplexity, BLEU score, F1 score, and accuracy.
    • However, no single metric captures the full spectrum of intelligence.
  10. Detection Tools:

    • Researchers are developing tools to differentiate between human- and AI-generated text.
    • These tools analyze features like fluency, word frequency, punctuation patterns, and sentence length¹.
    • While imperfect, they contribute to identifying AI-generated content.
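As a rough illustration of point 10, the surface features mentioned above (word frequency, punctuation patterns, sentence length) can be computed in a few lines of Python. This is only a sketch with invented names; real detectors are far more sophisticated than these shallow statistics:

```python
import re
from collections import Counter

def stylometric_features(text):
    """Compute a few shallow stylometric features of the kind
    detection tools reportedly look at: average sentence length,
    punctuation density, and the most frequent words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    punct = re.findall(r"[.,;:!?]", text)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "punct_per_word": len(punct) / max(len(words), 1),
        "top_words": Counter(words).most_common(3),
    }
```

A classifier would compare such feature vectors against distributions measured on known human- and AI-written corpora; on their own, the numbers prove nothing.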

In summary, the arrival of true AI may not be a singular event but a gradual evolution marked by advancements in the above dimensions. As AI continues to progress, our understanding of its capabilities will evolve, and new indicators may emerge.

Source: Conversation with Bing, 2/29/2024.
(1) How to spot AI-generated text | MIT Technology Review. https://www.technologyreview.com/2022/12/19/1065596/how-to-spot-ai-generated-text/
(2) Intelligent Supertrend (AI) - Buy or Sell Signal — Indicator by … https://www.tradingview.com/script/q9244PAH-Intelligent-Supertrend-AI-Buy-or-Sell-Signal/
(3) Indicators - True ALGO. https://truealgo.com/indicators/
(4) Improve Key Performance Indicators With AI - MIT Sloan Management Review. https://sloanreview.mit.edu/article/improve-key-performance-indicators-with-ai/
(5) New AI classifier for indicating AI-written text - OpenAI. https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/

  • zcd@lemmy.ca · 9 months ago

    You reach down and you flip the tortoise over on its back, Leon. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t. Not without your help. But you’re not helping… why is that Leon?

  • givesomefucks@lemmy.world · 9 months ago

    If you come up with a test, people develop something that does exactly what the test needs, and ignores everything else.

    But we can’t even say what human consciousness is yet.

    Like, legitimately, we don’t know what causes it, and we don’t know how anaesthesia interferes with it either.

    One of the guys who finished up Einstein’s work (Roger Penrose) thinks it has to do with quantum collapse. But there’s a weird twilight zone where anesthesia has stopped consciousness but hasn’t stopped that quantum process yet.

    So we’re still missing something, and the dude’s in his 90s. He’s been working on this for decades, but he’ll probably never live to see it finished. Someone else will have to finish it later, like he and Hawking did for Einstein.

    • SpaceNoodle@lemmy.world · 9 months ago

      “Because quantum” always feels like new-age woo-woo bullshit.

      It’s more likely just too vague to define.

      • teawrecks@sopuli.xyz · 9 months ago

        It’s good to be skeptical of people who throw the word quantum around, but in this case you’d be wrong. Penrose is the real deal.

  • bionicjoey@lemmy.ca · 9 months ago

    IMO the Turing test is fine, as long as you allow an indefinite length of conversation.

    It’s not simply about there existing some conversation with a computer where you can’t tell it’s a computer. It’s about there not existing any conversation where you can tell it’s a computer.
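The distinction is the quantifier. A minimal sketch (all names invented for illustration) of the difference between "some conversation fools the judge" and "no conversation exposes the machine":

```python
def fools_in_some_conversation(judge, conversations):
    # Weak reading: SOME conversation exists where the judge
    # cannot tell it's a machine.
    return any(not judge(c) for c in conversations)

def passes_strict_turing_test(judge, conversations):
    # Strict reading: NO conversation exists where the judge
    # can tell it's a machine.
    return all(not judge(c) for c in conversations)

# Toy judge: spots the machine only when there's an obvious tell.
def judge(conversation):
    return "as an AI language model" in conversation

chats = [
    "nice weather today",
    "as an AI language model, I cannot discuss the weather",
]
```

With these toy inputs the machine fools the judge in some conversation but fails the strict, for-all version of the test.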

    • CanadaPlus@lemmy.sdf.org · 9 months ago

      It’s an interesting point. I think a skilled examiner is necessary, though, because these systems are really good at basic chit-chat. Even pre-LLM bots could fool laymen sometimes.

      • bionicjoey@lemmy.ca · 9 months ago

        Yes, that’s part of it too. Basically there cannot be any possible exchange between the machine and any human where the human would determine they were talking to a machine.

        FWIW, I think this was Turing’s original idea as well. The Turing test is meant to be idealistic: it defines machine intelligence in terms of whether humans could be brought to agree that it is intelligent.

  • Tartas1995@discuss.tchncs.de · 9 months ago

    The difference between “AI” and “true AI” is as vague as it gets. Are you a truly intelligent agent, or just an “intelligent agent”? Like, seriously, how are you different from a machine with inputs, outputs, and a bunch of seemingly “random” things happening in between?

      • Tartas1995@discuss.tchncs.de · 9 months ago

        Qualia are, if I’m not mistaken, totally subjective. My argument is: how could you tell that a computer doesn’t have qualia, and how could you prove to me that you have qualia? And I wouldn’t limit it to qualia. What can you detect in other people that an AI couldn’t replicate? As long as it can replicate all those qualities, you can’t tell whether an AI is “true” or not, since it might genuinely have them or might just be replicating them.

        • pmk@lemmy.sdf.org · 9 months ago

          I see, I thought you were asking me how I know I experience things in a qualia way. I suspect it can’t be proven to someone else.

    • Ekky@sopuli.xyz · 9 months ago

      That’s one of my favorite theories as to what “sentience” is.

      We humans might just be so riddled with mutations and barely functional genetic traits, which tend to be more in our way than of any help, that we just might have succeeded in banging together a “mundane sentience” by sheer amount of error processing alone.

      Whether this is true is of course up for debate, but it would mean that we can achieve AGI just by feeding it enough trash and giving it enough processing power. Bonus if the head engineer sometimes takes a hammer to the mainframe.

      • Thorny_Insight@lemm.ee · 9 months ago

        By sentience I assume you’re talking about consciousness: the fact that it feels like something to be. I think it’s somewhat safe to assume a true AGI system would also be conscious (that it feels like something to be that system), but I don’t think it needs to be, and even if it was, we couldn’t know for sure. Consciousness is entirely a subjective experience. We can’t even prove other people are conscious; it’s just a safe assumption. I can also imagine a conscious system that might not be generally intelligent. Does it feel like something to be a fish? Probably. Are they generally intelligent? Probably not.

  • ShittyBeatlesFCPres@lemmy.world · 9 months ago

    I’ll believe it’s true A.I. when it can beat me at Tecmo Super Bowl. No one in my high school or dorm could touch me because they misunderstood the game. Lots of teams can score at any time. Getting stops and turnovers is the key. Tecmo is like Go where there’s always a counter and infinite options.

    • Godthrilla@lemmy.world · 9 months ago

      This is a scientific paper I would honestly like to see submitted. A simple game, but still with plenty of nuance: how would an AI develop a winning strategy?

  • HopeOfTheGunblade@kbin.social · 9 months ago

    What do you mean when you say “true AI”? The question isn’t answerable as asked, because those words could mean a great many things.

  • Thorny_Insight@lemm.ee · 9 months ago

    By “true AI” I assume OP is talking about Artificial General Intelligence (AGI)

    I hate reading these discussions when we can’t even settle on common terms and definitions.

    • Melatonin@lemmy.dbzer0.com (OP) · 9 months ago

      That’s kind of the question that’s being posed. We thought we knew what we wanted until we found out that wasn’t it. The Turing test ended up being a bust. So what exactly are we looking for?

      • Thorny_Insight@lemm.ee · edited · 9 months ago

        The goal of AI research has almost always been to reach AGI. The bar for this has basically been human-level intelligence, because humans are generally intelligent. Once an AI system reaches “human level intelligence,” you no longer need humans to develop it further, as it can do that by itself. That’s where the threat of the singularity, i.e. an intelligence explosion, comes from: any further advancement happens so quickly that it gets away from us and almost instantly becomes a superintelligence. That’s why many people think “human level” artificial intelligence is a red herring: it doesn’t stay that way for more than a tiny moment.

        What’s ironic about the Turing test and LLMs like GPT4 is that they fail the test by being so competent across such a wide range of fields that you can know for sure it’s not a human, because a human could never possess that amount of knowledge.

        • 8ace40@programming.dev · 9 months ago

          I was thinking… What if we do manage to make an AI as intelligent as a human, but can’t make it any better than that? Then a human-level AI won’t be able to make itself better, since it only has human intelligence, and humans can’t make it better either.

          Another thought: what if making AI better gets exponentially harder each time? Then at some point further improvement would become impossible, since there wouldn’t be enough resources on a finite planet.

          Or if it takes super-human intelligence to make human-intelligence AI. So the singularity would be impossible there, too.

          I don’t think we will see the singularity, at least in our lifetime.

    • FaceDeer@kbin.social · edited · 9 months ago

      But now that AI has become advanced enough to get uncomfortably close to us, we need to move the goalposts farther away so everyone can relax again.

    • Alex@lemmy.ml · 9 months ago

      Have any actually passed yet? Sure, LLMs can generate plausible text far better than previous generations of bots, but they still tend to give themselves away with their style of answering and random hallucinations.

  • GolfNovemberUniform@lemmy.ml · 9 months ago

    There are no completely accurate tests, and there never will be. Also, if an AI is conscious, it can easily fake its behavior to pass a test.

  • CanadaPlus@lemmy.sdf.org · 9 months ago

    The ultimate test would be application. Can it replace humans in all situations (or at least all intellectual tasks)?

    GPT4 sets pretty strong conditions. Ethics in particular is tricky, because I doubt a self-consistent set of mores that most people would agree with even exists.

  • linearchaos@lemmy.world · 9 months ago

    There simply isn’t any reliable way. Forget full AI; LLMs will eventually be indistinguishable.

    A good tell would be real-time communication with perfect grammar and diction. If you have a couple of solid minutes of communication and it sounds like something out of a pamphlet, you might be talking to an AI.

    • bpalmerau@aussie.zone · 9 months ago

      What about semantics?

      “Nothing is better than cake.”

      “But bread is better than nothing.”

      “Does that mean that bread is better than cake?”
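That riddle trades on an equivocation: “nothing” is a quantifier in the first sentence and a noun in the second. A tiny sketch, with entirely made-up preference scores, of why the inference doesn’t go through:

```python
# Toy model of the equivocation (scores are invented for illustration).
foods = {"cake": 10, "bread": 6, "nothing": 0}

def better(a, b):
    return foods[a] > foods[b]

# Premise 1 reads "nothing" as a quantifier: no food beats cake.
no_food_beats_cake = not any(
    better(f, "cake") for f in foods if f != "nothing"
)
# Premise 2 reads "nothing" as a term: bread beats having nothing at all.
bread_beats_nothing = better("bread", "nothing")
# Both premises hold, yet bread is still not better than cake,
# because "nothing" plays two different roles.
```

A chatbot that accepts the chained inference is treating the two readings as the same, which is exactly the kind of semantic slip the comment is probing for.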

      • linearchaos@lemmy.world · 9 months ago

        Right now there are enough logical holes that you can tell easily, even without trickery.

        If you just tell GPT it’s wrong, it will backpedal and change its answer even if it was right.

        At some point that won’t be the case.