• M0oP0o@mander.xyz · 6 months ago

    We really need a whole community just for the very funny AI errors like this. I could spend all day reading about leaving a dog in a hot car, jumping off a bridge and eating at least one rock a day.

  • RGB3x3@lemmy.world · 6 months ago

    Google has been bad for a long time, but they’ve shit the bed so hard lately. Seriously, look at this:

    I actually run out of screenshot space before I can get to an actual regular search result!

  • weew@lemmy.ca · 6 months ago

    Well, we know Google won’t get rid of this.

    They’ll only cancel it after it actually works and becomes useful

  • kingthrillgore@lemmy.ml · 6 months ago

    I spent most of today looking at places to rent in Denver, and I come home to Google having killed its fucking search engine. What the hell is going on?

      • markon@lemmy.world · 6 months ago

        I don’t get the doom, but idk, I’ve been watching this stuff closely for over a decade. I think it’s exciting, and people suddenly have all these strange expectations of these systems just because they’re smart. Well, they were smart before any of this generative AI stuff. There are also scientific breakthroughs in medicine, and blind people now have something that can assist them. As someone with some disabilities, and knowing a lot of people who also have disabilities, it seems to be the privilege of the healthy and comfortable to want to keep the status quo.

        Also, if we want to play that game: we were already so fucked by climate change that I had no hope. Now I have a little. It’s not going away, so let’s push for open, open, open free software (and model weights).

        • vimdiesel@lemmy.world · 6 months ago

          We aren’t talking about using it to benefit human society, like discovering new proteins or vaccines. We’re talking about it fucking up search results on Google and generating billions of new sites full of fucking spam. It’s a tool, but it’s being completely misused and ruining the internet.

  • Infynis@midwest.social · 6 months ago

    I asked Google today for the release date of the new Final Fantasy XIV expansion, which comes out June 28th. It told me March 26th.

      • andrew@lemmy.stuart.fun · 6 months ago

        But not from a knowledge engine. It would make sense if some rando just spouted off a date off the top of their head, but this is the former world leader in knowledge capture and search.

  • Lad@reddthat.com · 6 months ago

    What the hell is going on with Google search? Has it completely shit itself after the AI implementation? I know it’s been bad for a while, but this is another level.

    • lemmyvore@feddit.nl · 6 months ago

      Short answer: yes. The ratio of LLM-generated noise to actual content is increasing exponentially as we speak. To us it seems like it happened overnight because the increase is so steep, but it’s been building for several years. And it’s going to get a lot worse.

      Honestly, I think we’ll have to go back to 90s methods like web rings and human curated link directories.

  • fossilesque@mander.xyz · 6 months ago

    I like to use the Void from r/place as a metaphor for the Internet’s gremlins. Google called into the void, didn’t bother to filter what came back, and isn’t happy with what it found. To me that signals that Google no longer understands internet culture.

    • Kokesh@lemmy.world (OP) · 5 months ago

      Maybe they know something we don’t? What if it’s a crime series following the “Fall Guy” case, about a man who was a Boeing whistleblower and got sucked out of the fuselage mid-flight. Was it the usual door falling off, or was it murder? Maybe it’s being filmed right now and Google leaked the information?

  • vimdiesel@lemmy.world · 6 months ago

    How do you guys get these things? I never see any summaries like that. I wonder if one of my ad blockers is killing Google AI lmao. Do you have to be logged into your Google account? I never log into Google any more.

    • ChocoLemming@lemmy.world · 6 months ago

      I get the same description, but on desktop it’s in the “About” section that appears on the right side of the results, so a different spot than in the OP’s image. Haven’t tried recreating any of the other flops yet though, haha.

    • Sidyctism@feddit.de · 6 months ago

      That’s because LLMs have a certain level of randomisation built in. You won’t always get the same result for any given query.
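
      As an aside (an illustrative sketch only, not anything from Google or this thread): “randomisation built in” usually means the model scores candidate tokens and then samples from those scores instead of always taking the top one. The scores below are made up.

      ```python
      # Hypothetical sketch: temperature sampling over made-up token scores.
      import numpy as np

      def sample_token(logits, temperature=0.8, rng=None):
          """Sample one token index from the temperature-scaled softmax."""
          rng = rng or np.random.default_rng()
          scaled = np.array(logits, dtype=float) / temperature
          probs = np.exp(scaled - scaled.max())
          probs /= probs.sum()
          return int(rng.choice(len(probs), p=probs))

      logits = [2.1, 1.9, 0.3]  # pretend scores for three candidate tokens
      print([sample_token(logits) for _ in range(10)])  # varies between runs
      ```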

      • vimdiesel@lemmy.world · 6 months ago

        I literally don’t see any AI blurbs at all in my searches. I wonder if one of my 4 ad blockers is killing the JavaScript element.

    • Appoxo@lemmy.dbzer0.com · 6 months ago

      I read somewhere that they rolled it out in the US only, and for now more countries are on the yet-to-do list, aka soon™.

        • Appoxo@lemmy.dbzer0.com · 6 months ago

          No idea how fast they want to roll it out globally, but going by their recent track record, I’d wager they’ll do it fast rather than slow.

      • Princeali311@lemm.ee · 6 months ago

        I’m in the US and opted into the beta for the AI stuff, but so far my experience has been generally positive.

  • isles@lemmy.world · 5 months ago

    I kinda like the new google. It’s strong and wrong and doesn’t afraid of anything.

    • voracitude@lemmy.world · 6 months ago

      On the one hand, generative AI doesn’t have to give deterministic answers, i.e. it won’t necessarily generate the same answer even when asked the same question in the same way.

      But on the other hand, editing the HTML of any page to say whatever you want and then taking a screenshot of it is very easy.

      • Otter@lemmy.ca · 6 months ago

        It could also be A/B testing, so not everyone will have the AI running in general

            • halcyoncmdr@lemmy.world · 6 months ago

            Google runs passive A/B testing all the time.

            If you’re using a Google service there’s a 99% chance you’re part of some sort of internal test of changes.

            • Otter@lemmy.ca · 6 months ago

            Wouldn’t they be? They could measure how likely it is that someone clicks on the generated link/text

              • credo@lemmy.world · 6 months ago

              Just because you click on it doesn’t make it accurate. More importantly, that text isn’t “clickable”, so they can’t be measuring raw engagement either.

                • IllNess@infosec.pub · 6 months ago

                What this would measure is how long you would stay on the page without scrolling. Less scrolling means more time looking at ads.

                This is the influence of Prabhakar Raghavan.

                • RvTV95XBeo@sh.itjust.works · 6 months ago

                Just because you click on it that doesn’t make it accurate.

                Given the choice between clicks/engagement and accuracy, it’s pretty clear Google going for the former is what got us into this hellhole.

                  • sugar_in_your_tea@sh.itjust.works · 5 months ago

                  Yup, if you have to repeat your search 3 times, you’re seeing 3x the ads. If you control most of the market, where are your customers going to go? Most will just deal with it and search more.

      • QuadratureSurfer@lemmy.world · 6 months ago

        Technically, generative AI will always give the same answer when given the same input. But what happens is that a “seed” is mixed in to help randomize things; that way it can give different answers every time, even if you ask it the same question.
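
        To make the seed idea concrete, here is a minimal sketch under the assumption of a plain pseudo-random sampler; the “model” and its vocabulary are toy stand-ins, not a real LLM:

        ```python
        # Hypothetical sketch: same input + same seed => identical output.
        import numpy as np

        def generate(prompt, seed):
            # prompt is ignored in this toy; a real model would condition on it
            rng = np.random.default_rng(seed)  # the "seed" mixed into sampling
            vocab = ["rock", "glue", "pizza", "bridge", "dog"]  # toy vocabulary
            return " ".join(str(rng.choice(vocab)) for _ in range(5))

        print(generate("same question", seed=42))  # always the same output
        print(generate("same question", seed=42))  # identical to the line above
        print(generate("same question", seed=7))   # new seed, different output
        ```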

        • jyte@lemmy.world · 6 months ago

          What happened to my computers being reliable, predictable, idempotent? :'(

            • jyte@lemmy.world · 5 months ago

              Technically they still are, but since you don’t have a hand on the seed, practically they are not.

              • QuadratureSurfer@lemmy.world · 5 months ago

                OK, but we’re discussing whether computers are “reliable, predictable, idempotent”. Statements like this about computers are generally made when discussing the internal workings of a computer among developers or at even lower levels among computer engineers and such.

                This isn’t something you would say at a higher level for end-users because there are any number of reasons why an application can spit out different outputs even when seemingly given the “same input”.

                And while I could point out that Llama.cpp is open source (so you could just go in and test this by forcing the same seed every time…) it doesn’t matter because your statement effectively boils down to something like this:

                “I clicked the button (input) for the random number generator and got a different number (output) every time, thus computers are not reliable or predictable!”

                If you wanted to make a better argument about computers not always being reliable/predictable, you’re better off pointing at how radiation can flip bits in our electronics (which is one reason why we have implemented checksums and other tools to verify that information hasn’t been altered over time or in transition). Take, for instance, the example of what happened to some voting machines in Belgium in 2003: https://www.businessinsider.com/cosmic-rays-harm-computers-smartphones-2019-7
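
                As a tiny illustration of the checksum point (a hedged sketch, not anything from the linked article): flip a single bit in some data and a hash immediately reveals that it changed.

                ```python
                # Hypothetical sketch: a single flipped bit is caught by a SHA-256 checksum.
                import hashlib

                data = bytearray(b"some important payload")
                original = hashlib.sha256(data).hexdigest()

                data[0] ^= 0b00000100          # simulate one flipped bit
                corrupted = hashlib.sha256(data).hexdigest()

                print(original == corrupted)   # False: the alteration is detected
                ```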

                Anyway, thanks if you read this far, I enjoy discussing things like this.

                • jyte@lemmy.world · 5 months ago

                  You are taking all my words way too strictly as to what I intended :)

                  It was more along the lines of: me, a computer user, could until now expect the tools I use (software/websites) to behave in a relatively consistent manner (even down to reproducing a crash after the same sequence of actions). Doing the same thing twice would (mostly) get me the same result/behaviour. For instance, an Excel feature applied to given data should behave the same the next time I show it to a friend. Or if I found a result on Google with a given query, I can hopefully find that website again easily enough with the same query (even though it might have ranked up or down a little).

                  It’s not strictly “reliable, predictable, idempotent”, but consistent enough that people (users) will say it is.

                  But with those tools (e.g. ChatGPT), you get an answer, yet you’re unable to get that initial answer back with the same initial query; it’s basically impossible to get the same* output because you have no control over the seed.

                  The random number generator comparison is a bit stretched: you expect it to be different, that’s by design. As a user, you expect the LLM to give you the correct answer, but it’s actually never the same* answer.

                  *and here I mean “same” as in “it might be worded differently, but the meaning is close to the previous answer”. Just like if you ask someone a question twice, they won’t use the exact same wording, but will essentially say the same thing. Which is something those tools (or rather “end-user services”) don’t give me. Which is what I wanted to point out in far fewer words :)

        • lucas@fitt.au · 6 months ago

          @RecursiveParadox @voracitude it absolutely has become a meme; there are (or were) a bunch of repeatable results.

          Google is probably whack-a-mole’ing them now, because “Google’s AI search results are trying to kill people” has entered the collective consciousness.

          • vimdiesel@lemmy.world · 6 months ago

            I have no doubt some of their AI answers have anti-vax and bleach-injection recommendations from all over the web as part of their training data.

        • thegreatgarbo@lemmy.world · 6 months ago

          If you read the Ars Technica article, Google is correcting these errors on the fly, so the search results can change rapidly.

    • dutchkimble@lemy.lol · 6 months ago

      But the real question is: is the colour blue that you see the colour blue that I see?