20/3 – 26/3/23 – ChatGPT-4

news on the march

Welcome to Monday’s News on the March – The week that was in my digital world.

Ordinarily I highlight three or more references illuminating the week past, but I wish to devote this entire News on the March to one topic, AGI, because of the order of magnitude by which it may affect our lives, even in the immediate to short-term future.

Bret and Heather 167th DarkHorse Podcast Livestream: AGI: Where Will it End?
Video podcast at Bret Weinstein

GPT-4 is more creative and collaborative than ever before. It can generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user’s writing style.

I remember watching the documentary Game Over (2003), about World Chess Champion Garry Kasparov's 1997 match against IBM's computer Deep Blue. He lost the match. Now it's a given that even the best chess players in the world, including Magnus Carlsen, can't hold a candle to Artificial Intelligence game-play.

AGI, that is, Artificial General Intelligence, has now attained the capacity to understand or learn any intellectual task as well as or better than human beings or other animals. Most strikingly, it now appears that even the experts behind this AI cannot ascertain how it arrives at the deductive reasoning that produces such extraordinary output.

To put this into perspective: according to Bret's comments and reflections on the findings of the paper Bubeck et al. 2023, Sparks of Artificial General Intelligence: Early experiments with GPT-4, this is a first in Artificial Intelligence advancement; specifically, even the experts, and the AI's own creators, don't know how this intelligence derived its reasoning to outperform humans.

Our findings suggest that GPT-4 has a very advanced level of theory of mind…We have argued that the ability to explain oneself is a key aspect of intelligence, and that GPT-4 exhibits remarkable skills in generating explanations that are output-consistent, i.e. consistent with the prediction given the input and context.

AI has become so advanced that it is impossible for humans to discern whether videos and pictures are real, as seen already in many TikTok and Facebook snippets. I recently wrote an article called Joe Biden AI Voice Speech, in which Bruce, my New Zealand amigo, chimed 'I don't believe it was edited at all' (a bit tongue in cheek). From here on in it will only get worse. We might never again be able to trust our own senses about what is real, as alluded to by Joe Rogan in that article.
Also, I point y'all to my article on the movie Ex Machina for more stimuli on this topic.

You see, even today the press and the Government will claim a lie as truth. By the time it can be demonstrated to be a lie, the damage is, for the most part, already done: people believe the lie because that's what they heard, and they haven't seen it retracted.

It gets even worse than this. On an individual level, meritocracy could be dead in little time. The average C-plus student with a bent for such technology could subtly acquire the skills to let the technology write their answers for them, not in a way that makes them a plagiarist, but rather a student advancing, and in their subsequent correspondence they would appear a struggling but agreeable student worthy of support.

Another example: a composer of classical music who seeks fame in their field could use such AI to compose in the style of the best of their preferred artists. Or a talentless songwriter who adores Nick Cave could start writing like Nick, or at least at such an advanced level that he could be deemed as 'great' as Nick, with no one the wiser except perhaps Nick. Nick Cave recently remarked that ChatGPT's attempt to write Nick Cave lyrics 'sucks'.
The list of ways this AI could be exploited is endless.

Even on a collective level, a regime or Government could learn to harness such technology to implement the policies the program advocates (the part of the Overton window within policy range) to win more votes in the subsequent election. On a local level, instead of holding 'focus groups', this AI could do the focusing for them and arrive at better outcomes than they intended, because, to put it frankly, the AI knows what is assured to succeed.

Artificial Intelligence and ChatGPT—should we be worried? If so, how worried? Bret proposes three categories of AI: malevolent, misaligned, and deranging. (Watch the entire video segment on GPT-4 here.)

news on the march the end

“The more I live, the more I learn. The more I learn, the more I realize, the less I know.” – Michel Legrand

Posted in News, Reflections, Science
