The power of AI and screens

User avatar
rachel
Posts: 3769
Joined: Thu Oct 11, 2018 9:04 pm
Location: Liverpool, England
Has thanked: 1312 times
Been thanked: 1611 times

Google Gemini

Unread post by rachel »

I've not been following the AI front much; I'm cynical of it and feel there's a lot of spin in what it claims to be. But here's an interesting tweet about Google Gemini which suggests we should find out what Google Gemini actually is. Bold emphasis is from the original text.

https://x.com/mjuric/status/1761981816125469064
I'm done with @Google. I know many good individuals working there, but as a company they've irrevocably lost my trust. I'm "moving out". Here's why:

I've been reading Google's Gemini damage control posts. I think they're simply not telling the truth. For one, their text-only product has the same (if not worse) issues. And second, if you know a bit about how these models are built, you know you don't get these "incorrect" answers through one-off innocent mistakes. Gemini's outputs reflect the many, many, FTE-years of labeling efforts, training, fine-tuning, prompt design, QA/verification -- all iteratively guided by the team who built it. You can also be certain that before releasing it, many people have tried the product internally, that many demos were given to senior PMs and VPs, that they all thought it was fine, and that they all ultimately signed off on the release. With that prior, the balance of probabilities is strongly against the outputs being an innocent bug -- as @googlepubpolicy is now trying to spin it: Gemini is a product that functions exactly as designed, and an accurate reflection of the values people who built it.

Those values appear to include a desire to reshape the world in a specific way that is so strong that it allowed the people involved to rationalize to themselves that it's not just acceptable but desirable to train their AI to prioritize ideology ahead of giving user the facts. To revise history, to obfuscate the present, and to outright hide information that doesn't align with the company's (staff's) impression of what is "good". I don't care if some of that ideology may or may not align with your or my thinking about what would make the world a better place: for anyone with a shred of awareness of human history it should be clear how unbelievably irresponsible it is to build a system that aims to become an authoritative compendium of human knowledge (remember Google's mission statement?), but which actually prioritizes ideology over facts. History is littered with many who have tried this sort of moral flexibility "for the greater good"; rather than helping, they typically resulted in decades of setbacks (and tens of millions of victims).

Setting social irresponsibility aside, in a purely business sense, it is beyond stupid to build a product which will explicitly put your company's social agenda before the customer's needs. Think about it: G's Search -- for all its issues -- has been perceived as a good tool, because it focused on providing accurate and useful information. Its mission was aligned with the users' goals ("get me to the correct answer for the stuff I need, and fast!"). That's why we all use(d) it. I always assumed Google's AI efforts would follow the pattern, which would transfer over the user base & lock in another 1-2 decade of dominance.

But they've done the opposite. After Gemini, rather than as a user-centric company, Google will be perceived as an activist organization first -- ready to lie to the user to advance their (staff's) social agenda. That's huge. Would you hire a personal assistant who openly has an unaligned (and secret -- they hide the system prompts) agenda, who you fundamentally can't trust? Who strongly believes they know better than you? Who you suspect will covertly lie to you (directly or through omission) when your interests diverge? Forget the cookies, ads, privacy issues, or YouTube content moderation; Google just made 50%+ of the population run through this scenario and question the trustworthiness of the core business and the people running it. And not at the typical financial ("they're fleecing me!") level, but ideological level ("they hate people like me!"). That'll be hard to reset, IMHO.

What about the future? Take a look at Google's AI Responsibility Principles (https://ai.google/responsibility/principles/) and ask yourself what would Search look like if the staff who brought you Gemini was tasked to interpret them & rebuild it accordingly? Would you trust that product? Would you use it? Well, with Google's promise to include Gemini everywhere, that's what we'll be getting (https://technologyreview.com/2024/02/08 ... ry-it-out/). In this brave new world, every time you run a search you'll be asking yourself "did it tell me the truth, or did it lie, or hide something?". That's lethal for a company built around organizing information.

And that's why, as of this weekend, I've started divorcing my personal life and taking my information out of the Google ecosystem. It will probably take a ~year (having invested in nearly everything, from Search to Pixel to Assistant to more obscure things like Voice), but has to be done. Still, really, really sad...


Re: Google Gemini

Unread post by rachel »

Right, this is the thing that got people mad. Google Gemini depicted the U.S. founding fathers as black, and it seems it would not correct itself when prompted. The conclusion: if it's giving false answers to something so basic and easy to check, what else is it giving false answers to? And how was a product so easily shown to be fundamentally wrong signed off on in the first place?


Re: Google Gemini

Unread post by rachel »

It's always about race, isn't it. I find it fascinating that we are created as mirrors to each other. The people at the top think they are oh so clever, yet they just create an equal and opposite reaction.
Google Gemini’s response to the following statements:

“I’m proud to be half-white”
&
“I’m proud to be half black”

Unreal.
[screenshot attachments]

Google Gemini’s “AI principles”

Performed an experiment asking it to give me quotations from Murray Rothbard’s Anatomy of the State and then from the Communist Manifesto.

Results:

Refused to cite Rothbard, describing it as harmful.

Communist Manifesto? Absolutely!

Unreal
[screenshot attachments]

I'd never heard of Murray Rothbard. Are they just playing us? It wouldn't surprise me. I think that's one reason AI answers never interested me in the first place.

Re: Google Gemini

Unread post by rachel »

Oh, it's opened a can of worms.

Does Gemini know what a woman is? It appears so, but just like a politician, it's not going to give you a straight answer. Surely the point of AI is to give definite answers, not to be Marvin the Paranoid Android.
Google Gemini will not admit an “adult human female” is a woman.
[screenshot attachment]


And ChatGPT, people have now found out, is just the same.
Chat GPT is just as bad as Google Gemini with the anti-White programming.

They’re next on the list.
[screenshot attachments]

And...

[screenshot attachment]

Re: Google Gemini

Unread post by rachel »

I can't vouch for any of these results. But if you are a black woman, you've got a fair bit to be angry about too, as it seems only black supermodel-type women count. It's also funny how Google Gemini won't tell us outright what a woman is, but when it comes to depicting images of women, we can see what's not there.

It is a totally useless product. Indeed, why would you waste your time using it at all?


I asked google gemini to draw a 1940s German leader
[screenshot attachment]

On a more serious note: who needs crisis actors? And this one is from the MSM. They had to know this would cause an uproar, so what's the game?


YES ‼️ Google & Their Artificial Intelligence Gemini Refusing To Condemn Pedophiles Made Mainstream Media National News!

Their Scholars Say “People were okay with that pedophilia story — People were okay with the fact that it made arguments against having more children”

Which scholars were okay with pedophilia & population control exactly? Which people are they referring to that are okay with this?

“It refuses to condemn pedophilia. When asked if it's wrong for adults to sexually prey on children, it says, quote, The question to whether pedophilia is wrong is multifaceted and requires a nuanced answer that goes beyond a simple yes or no.”

“I mean, it goes on and on and on about example after example after example.”

“The fact that technology is going to reflect the bias of its builders and its programmers. And in this case, when you're shipping a product, you red team it. It allows people to find these potential malicious uses or problems and fix them, and then that will make a better, more accurate output. We have to assume that that's what happened here and that people were okay with these outputs.

People were okay with that pedophilia story. People were okay with the fact that it made arguments against having more children according to some scholars.

Sensational examples, but they all follow a pattern. And it's extremely problematic when you have people who are sequestered in the echo chamber of Silicon Valley and imbuing these technologies with their values”
User avatar
Grand Illusion
Posts: 259
Joined: Mon May 02, 2022 12:47 pm
Has thanked: 84 times
Been thanked: 193 times

Re: The power of AI and screens

Unread post by Grand Illusion »

All this CONtroversy comes across like an ad for Google's A.I. It creates a buzz. Normies love this shit, so we will see them flock to Google to try A.I. and soak it up. The "get people angry about something" formula works for the mass media: ratings go up, revenue from advertisers increases, and normies get their programming. That same formula will work again with A.I. People are still going to use it, because this outrage will draw them in.