ChatGPT is making up fake Guardian articles
November 2024 › Forums › General discussion
- This topic has 6 replies, 4 voices, and was last updated 1 year, 7 months ago by robbo203.
April 6, 2023 at 11:22 am #242220
Lew (Participant)
A warning for us all, I think.
Last month one of our journalists received an interesting email. A researcher had come across mention of a Guardian article, written by the journalist on a specific subject from a few years before. But the piece was proving elusive on our website and in search. Had the headline perhaps been changed since it was launched? Had it been removed intentionally from the website because of a problem we’d identified? Or had we been forced to take it down by the subject of the piece through legal means?
The reporter couldn’t remember writing the specific piece, but the headline certainly sounded like something they would have written. It was a subject they were identified with and had a record of covering. Worried that there may have been some mistake at our end, they asked colleagues to go back through our systems to track it down. Despite the detailed records we keep of all our content, and especially around deletions or legal issues, they could find no trace of its existence.
Why? Because it had never been written.
Luckily the correspondent had told us that they had carried out their research using ChatGPT. In response to being asked about articles on this subject, the AI had simply made some up. Its fluency, and the vast training data it is built on, meant that the existence of the invented piece even seemed believable to the person who absolutely hadn’t written it…
Two days ago our archives team was contacted by a student asking about another missing article from a named journalist. There was again no trace of the article in our systems. The source? ChatGPT.
It’s easy to get sucked into the detail on generative AI, because it is inherently opaque. The ideas and implications, already explored by academics across multiple disciplines, are hugely complex, the technology is developing rapidly, and companies with huge existing market shares are integrating it as fast as they can to gain competitive advantages, disrupt each other and above all satisfy shareholders.
Chris Moran is the Guardian’s head of editorial innovation.

April 10, 2023 at 2:55 pm #242310
robbo203 (Participant)
Interesting comparison…
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the scientists wrote. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Musk and others fear the destructive potential of AI: that an all-powerful “artificial general intelligence” could pose profound dangers to humanity.
But their demand for a six-month ban on developing more advanced AI models has been met with scepticism. Yann LeCun, the top AI expert at Facebook-parent company Meta, compared the attempt to the Catholic church trying to ban the printing press.
“Imagine what could happen if the commoners get access to books,” he said on Twitter.
April 10, 2023 at 9:55 pm #242313
ALB (Keymaster)
Of course research on AI should not be stopped, not even for six months. In any case, given capitalism, it can’t be and won’t be. And of course, given capitalism, it will be misused, but that can’t be stopped as long as capitalism continues.
It’s not clear what those who signed that call have in mind. They say they want a pause to consider these matters:
“Should we let machines flood our information channels with propaganda and untruth?”
“Should we automate away all the jobs, including the fulfilling ones?”
“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?”
“Should we risk loss of control of our civilization?”

And come up with answers in six months!
The questions seem rather loaded, as the answer to them all will presumably be “no”.
In any event, the first two could only be dealt with in a socialist world of common ownership and democratic control of the Earth. The third begs the question by assuming that this is likely to happen, or even technologically possible. As to the fourth, humanity has already lost control (in fact, it never had it); market forces control what happens, and that, too, can only be ended by socialism.
April 11, 2023 at 1:47 pm #242326
chelmsford (Participant)
You have to hand it to capitalism, it’s managed to negotiate (in a less than smooth way) technological innovation from the first primitive steam engines right up to these AI gizmos (and beyond?). Far from capitalist relations of production being fetters, these relations display a dispiriting flexibility.
April 11, 2023 at 2:41 pm #242329
robbo203 (Participant)
“You have to hand it to capitalism, it’s managed to negotiate (in a less than smooth way) technological innovation from the first primitive steam engines right up to these AI gizmos (and beyond?).”
That’s an anthropomorphism, though. “Capitalism” doesn’t do anything; it’s basically just a set of rules. It’s flesh-and-blood people who innovate and produce things. The question is whether they would be better off with this particular set of rules or without it.
April 18, 2023 at 10:17 am #242590
robbo203 (Participant)
Paddy might find this useful for his forthcoming talk:
“Google and Alphabet CEO Sundar Pichai said “every product of every company” will be impacted by the quick development of AI, warning that society needs to prepare for technologies like the ones it’s already launched.”
April 19, 2023 at 8:54 am #242612
robbo203 (Participant)
Musk on the risk AI poses to civilization