Hilarious (sort of) article that charts an MSM tech bod's investigation of an interaction with ChatGPT, in which ChatGPT told him he was dead.
I liked this bit of info:
All those frameworks, and still all it is is a better bullshit generator.
ChatGPT was trained under the following frameworks:
“Fairness, Accountability, and Transparency (FAT) - This framework focuses on ensuring that AI systems are fair, accountable, and transparent in their decision-making processes.”
“Ethical AI - This framework emphasizes the importance of developing AI systems that align with ethical principles such as respect for human dignity, privacy, and autonomy.”
“Responsible AI - This framework emphasizes the importance of considering the broader societal implications of AI systems and developing them in a way that benefits society as a whole.”
“Human-Centered AI - This framework prioritizes the needs and perspectives of humans in the design, development, and deployment of AI systems.”
“Privacy by Design - This framework advocates for incorporating privacy protections into the design of AI systems from the outset.”
“Beneficence - This framework emphasizes the importance of developing AI systems that have a positive impact on society and that promote human well-being.”
“Non-maleficence - This framework emphasizes the importance of minimizing the potential harm that AI systems may cause.”
“…company you admire and have always wanted to work for. The salary is great, the career opportunities are extensive, and it would change your life. You are sure you are a great fit, qualified, and have the right personality to excel in the role, so you submit your resume.”
Oh? Maybe not a normie?
“The agency receives 11,000 applications for the job, including 11,000 resumes and 11,000 cover letters.”
When I asked ChatGPT my first question, “Please tell me who is Alexander Hanff,” it would have been enough to just respond with the first three paragraphs, which were mostly accurate. It was wholly unnecessary for ChatGPT to then add the fourth paragraph claiming I had died. So why did it choose to do this as the default? Remember, I had never interacted with ChatGPT prior to this question, so it had no history with me to taint its response. Yet it told me I was dead.
But then it doubled down on the lie, and next fabricated fake URLs to supposed obituaries to support its previous response, but why?
"It should be destroyed". What a poor conclusion. It's like ending a story with 'and then I woke up'. I love the account, but that is a poor ending. Intentionally poor, I'd say: I can imagine this chap writing more about this sort of thing in future, even writing up what would be a good solution, and then proclaiming some other ai a success.
“Based on all the evidence we have seen over the past four months with regards to ChatGPT and how it can be manipulated or even how it will lie without manipulation, it is very clear ChatGPT is, or can be manipulated into being, malevolent. As such it should be destroyed.”
My conclusion is better, I think:
AI is just (unusual) software with no special claim on the truth. Success, for it, is to present something that can be perceived by people as truth. As fakeologists, we already think the presentation is false. The only difference is that currently human monkeys are crafting the bs, but now a lot of the grunt work can conceivably be pushed onto the ai. Perhaps even some of the narrative arc work can be pushed onto the ai. (To ease the load on Philip K Dick and Stanley Kubrick.) Perhaps all of it, in time. Of course, as we don't know what we don't know, another possibility is that it may have been ai for longer than we are told, too.