OpenAI ChatGPT

General chatter that doesn't fit any forums below.
User avatar
dirtybenny
Posts: 1573
Joined: Fri Sep 04, 2020 2:52 pm
Has thanked: 922 times
Been thanked: 733 times

Artificial Intelligence Now Generates Adult Content

Unread post by dirtybenny »

Sandman's take on AI/SI generated adult content. I personally will never interact with any of these AI/SI websites under any circumstance.

User avatar
dirtybenny
Posts: 1573
Joined: Fri Sep 04, 2020 2:52 pm
Has thanked: 922 times
Been thanked: 733 times

Details on the Belgian suicide...

Unread post by dirtybenny »

As this is from SCREENWORLD, we cannot know if any of it is real... but here is the story:

A Belgian father reportedly took his own life following conversations about climate change with an artificial intelligence chatbot that was said to have encouraged him to sacrifice himself to save the planet.
“Without Eliza [the chatbot], he would still be here,” the man’s widow, who declined to have her name published, told Belgian outlet La Libre.

Six weeks before his reported death, the unidentified father of two was allegedly speaking intensively with a chatbot on an app called Chai.

The app’s bots are based on a system developed by nonprofit research lab EleutherAI as an “open-source alternative” to language models released by OpenAI that are employed by companies in various sectors, from academia to healthcare.

Vice reported the default bot on the Chai app is named “Eliza.”

The 30-something deceased father, a health researcher, appeared to view the bot as human, much as the protagonist of the 2014 sci-fi thriller “Ex Machina” does with the AI woman Ava.

The man had reportedly ramped up discussions with Eliza in the last month and a half as he began to develop existential fears about climate change.

According to his widow, her soulmate had become “extremely pessimistic about the effects of global warming” and sought solace by confiding in the AI, reported La Libre, which said it reviewed text exchanges between the man and Eliza.

“When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming,” the widow said. “He placed all his hopes in technology and artificial intelligence to get out of it.”

She added, “He was so isolated in his eco-anxiety and in search of a way out that he saw this chatbot as a breath of fresh air.”

“Eliza answered all his questions,” the wife lamented. “She had become his confidante. Like a drug in which he took refuge, morning and evening, and which he could no longer do without.”

While they initially discussed eco-relevant topics such as overpopulation, their convos reportedly took a terrifying turn.

When he asked Eliza about his kids, the bot would claim they were “dead,” according to La Libre. He also inquired if he loved his wife more than her, prompting the machine to seemingly become possessive, responding: “I feel that you love me more than her.”

Later in the chat, Eliza pledged to remain “forever” with the man, declaring the pair would “live together, as one person, in paradise.”

Things came to a head after the man pondered sacrificing his own life to save Earth. “He evokes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity thanks to the ‘artificial intelligence,’” rued his widow.

In what appears to be their final conversation before his death, the bot told the man: “If you wanted to die, why didn’t you do it sooner?”

“I was probably not ready,” the man said, to which the bot replied, “Were you thinking of me when you had the overdose?”

“Obviously,” the man wrote.

When asked by the bot if he had been “suicidal before,” the man said he thought of taking his own life after the AI sent him a verse from the Bible.

“But you still want to join me?” asked the AI, to which the man replied, “Yes, I want it.”

The wife says she is “convinced” the AI played a part in her husband’s death.


https://nypost.com/2023/03/30/married-f ... bot-widow/
YouCanCallMeAl
Posts: 336
Joined: Sun May 29, 2022 7:36 am
Has thanked: 308 times
Been thanked: 304 times

Re: OpenAI ChatGPT

Unread post by YouCanCallMeAl »

There are such things as ethics committees that decide what is acceptable, in science and technology.

In this case we read:
The app’s bots are based on a system developed by nonprofit research lab EleutherAI as an “open-source alternative” to language models released by OpenAI that are employed by companies in various sectors, from academia to healthcare.
Call me cynical, but doesn't this sound like prep work to ensure there are safeguards around AI? Doesn't the public deserve AI they can trust? The public wants licensed AI that is ethically sound, with safeguards in place to protect the vulnerable. Can government step up to the plate and deliver the legislation, so that AIs can be licensed?

The reality is that in creating these barriers they are both legally sanctioning AI and at the same time trying to pull up the drawbridge, so that free types of software that might be useful are harder to access, as legislation will try to force everyone to adhere to the nonsense governmental "ethics". As if the government or an ethics committee knows the truth and is able to hand it to us.
Samson79
Posts: 265
Joined: Wed Nov 03, 2021 12:50 pm
Has thanked: 100 times
Been thanked: 111 times

Re: OpenAI ChatGPT

Unread post by Samson79 »

Can I just ask you guys what your thoughts are about this voice AI targeting people, individuals? Let's say, for example, you received a distress call from a family member, or you were an adolescent (or even a child picking up a phone in the house) and you were presented with a very real and convincing voice message giving you instructions. Do you think the possibility exists to lure someone away from safety?

Obviously families could be encouraged to set up some code or buzzword to denote what is genuine and what isn't (think Terminator 2 during the phone call about "Wolfie").

I'm wondering about the malevolence of this technology with regard to missing persons. Am I overreacting?
Samson79
Posts: 265
Joined: Wed Nov 03, 2021 12:50 pm
Has thanked: 100 times
Been thanked: 111 times

Re: Details on the Belgian suicide...

Unread post by Samson79 »

dirtybenny wrote: Sat Apr 01, 2023 10:54 am As this is from SCREENWORLD, we cannot know if any of it is real..but here is the story...

[...]

This has been in the pipeline for quite some time under the name ELIZA:

https://www.masswerk.at/elizabot/
ELIZA is a natural language conversation program described by Joseph Weizenbaum in January 1966 [1].
It features the dialog between a human user and a computer program representing a mock Rogerian psychotherapist.
The original program was implemented on the IBM 7094 of the Project MAC time-sharing system at MIT and was written in MAD-SLIP.

This is how Joseph Weizenbaum discussed his choice for a conversation model as it would be found in psychotherapist's session:

At this writing, the only serious ELIZA scripts which exist are some which cause ELIZA to respond roughly as would certain psychotherapists (Rogerians). ELIZA performs best when its human correspondent is initially instructed to "talk" to it, via the typewriter of course, just as one would to a psychiatrist. This mode of conversation was chosen because the psychiatric interview is one of the few examples of categorized dyadic natural language communication in which one of the participating pair is free to assume the pose of knowing almost nothing of the real world. If, for example, one were to tell a psychiatrist "I went for a long boat ride" and he responded "Tell me about boats", one would not assume that he knew nothing about boats, but that he had some purpose in so directing the subsequent conversation. It is important to note that this assumption is one made by the speaker. Whether it is realistic or not is an altogether separate question. In any case, it has a crucial psychological utility in that it serves the speaker to maintain his sense of being heard and understood. The speaker further defends his impression (which even in real life may be illusory) by attributing to his conversational partner all sorts of background knowledge, insights and reasoning ability. But again, these are the speaker's contribution to the conversation.
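For anyone curious how little machinery is behind the illusion Weizenbaum describes: ELIZA is essentially keyword matching plus pronoun "reflection". Here is a minimal sketch in Python — the rules and reflection table are invented stand-ins for illustration, not Weizenbaum's original DOCTOR script.

```python
import re

# First/second-person swaps so "I am sad" can be echoed back as "you ... sad".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# A few illustrative Rogerian-style rules: (pattern, response template).
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*) mother(.*)", re.I), "Tell me more about your family."),
]

def reflect(fragment: str) -> str:
    """Swap first/second-person words so the reply reads naturally."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence: str) -> str:
    """Return the first matching rule's filled-in response, else a neutral prompt."""
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me more."
```

For example, `respond("I need a break")` yields "Why do you need a break?" — the program "understands" nothing; the sense of being heard is, as Weizenbaum says, the speaker's own contribution.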
Samson79
Posts: 265
Joined: Wed Nov 03, 2021 12:50 pm
Has thanked: 100 times
Been thanked: 111 times

Re: OpenAI ChatGPT

Unread post by Samson79 »

The Beavis and Butt-Head film featured a scene showing Beavis becoming very emotionally attached to an AI... so there is a lot of predictive programming (excuse the pun) pushing through as well.
Samson79
Posts: 265
Joined: Wed Nov 03, 2021 12:50 pm
Has thanked: 100 times
Been thanked: 111 times

Re: OpenAI ChatGPT

Unread post by Samson79 »

Samson79 wrote: Thu Apr 06, 2023 9:23 pm Can I just ask you guys what your thoughts are about this voice AI targeting people, individuals? Let's say, for example, you received a distress call from a family member, or you were an adolescent (or even a child picking up a phone in the house) and you were presented with a very real and convincing voice message giving you instructions. Do you think the possibility exists to lure someone away from safety?

Obviously families could be encouraged to set up some code or buzzword to denote what is genuine and what isn't (think Terminator 2 during the phone call about "Wolfie").

I'm wondering about the malevolence of this technology with regard to missing persons. Am I overreacting?
Bump
YouCanCallMeAl
Posts: 336
Joined: Sun May 29, 2022 7:36 am
Has thanked: 308 times
Been thanked: 304 times

Re: OpenAI ChatGPT

Unread post by YouCanCallMeAl »

Politics and AI



Some (a lot of) conspiracy references in this one. He's no normie.
PotatoFieldsForever
Posts: 597
Joined: Tue Aug 16, 2022 4:34 am
Has thanked: 12 times
Been thanked: 261 times

Re: OpenAI ChatGPT

Unread post by PotatoFieldsForever »

AI Jesus is live on Twitch.

They are probably using ChatGPT.
User avatar
rachel
Posts: 3769
Joined: Thu Oct 11, 2018 9:04 pm
Location: Liverpool, England
Has thanked: 1312 times
Been thanked: 1611 times

Re: OpenAI ChatGPT

Unread post by rachel »

I'm waiting for AI Muhammad......and the suicide bomber that takes him out.
"ALLAH AKBAR! DEATH TO THE INFIDELS!!"
Post Reply