Regarding "Artificial Intelligence" (AI) ... #1

Regarding Artificial Intelligence developments and their possible relevance to TW

A few thoughts …

  1. It is now obvious that modern AI, in many situations, can pass the Turing Test (i.e. it can appear human). I leave the wider issues of all that aside for those interested in “ontology”.

  2. Right now I can’t actually use or test the ChatGPT Large Language Model (LLM) as, in Italy, where I live, the data-protection authority (Garante) has just banned it …

That is annoying, as I want to understand it better. But I do think it important that there be a proper public discussion of the real implications of the kinds of AI that are getting so much traction right now.

So the Italian reaction is in some ways very understandable and justifiable.

It is a major concern that all the very big AI initiatives are driven by very powerful Internet Corporations who have mastered marketing already.

  3. So, to TiddlyWiki. I found it very interesting that modern AI can help users better understand it. That was a bit of a shock! :slight_smile: But it is not a segue.

I wonder if we can better promote TiddlyWiki as NATURAL INTELLIGENCE???

Thoughts
TT

Maybe OpenAssistant Blog | Open Assistant is an option.

Explore at: https://open-assistant.io … I only found out about it today, so I could not give it a good test yet. But it does know “What is TiddlyWiki? Maximum 100 words” and “How to create a list using the TiddlyWiki wikitext syntax”.
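For anyone who wants to compare the assistant’s answer with the real thing, list syntax in TiddlyWiki wikitext is built from `*` and `#` characters at the start of a line, repeated for nesting:

```
* An unordered list item
* Another item
** A nested sub-item

# A numbered list item
# A second numbered item
```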

have fun!
mario

2 Likes

I think “that train” has left the station. Authorities can only “hurt” their economy if they block access to AI technology.

Yes … there need to be a lot of “philosophical” and “ethical” discussions and “alignments”, but IMO those can happen in parallel. …

Especially since “early adopters” have a big advantage now. I never thought that I would actually want to use the Edge browser and “Bing”. … BUT Bing-chat is kind of interesting now. Most of the standard search results still “suck”, but the chat option is fun :wink: and useful.

The thing is, I’m sure there’s lots of people that can’t.

The bureaucratic version of a towel –

The Ravenous Bugblatter Beast is so mind-bogglingly stupid that it thinks that if you can’t see it, it can’t see you. Therefore, the best defense against a Bugblatter Beast is to wrap a towel around your head.
Hitchhiker’s Guide to the Galaxy

2 Likes

Right. At 4am under the extended torch rays of RoboCop I’d likely fail it too.

TT

LOL,

It is true Italian political decisions often prove to be a “shot in the foot”. But is it not good that at least one state recognises possible AI downsides?

Until later,
TT

Sorry for the groaning.

As you have said, ChatGPT is a Large Language Model. It is primarily trained to reproduce human languages. All the scientifically rigorous metrics used to measure ChatGPT only measure how well it produces sentences and passages. And it has gotten to a stage where it can be a very eloquent speaker or writer.

This is why ChatGPT is great at writing a 300-word fictional story. However, it has never been trained to be “intelligent”. All the claims that have been made about how intelligent ChatGPT is are not scientifically rigorous ones. They are just marketing claims made to sound authentic.

The Turing Test was proposed by Alan Turing, a well-known mathematician. Yet, the Turing Test is neither a mathematical nor a statistical test. There is no statistical or mathematical rigor built into Turing Tests. And it was meant to be more of a thought experiment, an analogy for the sort of tests that could be conducted. It was never meant to be an actual, working test for whether machines can think.

That’s why I feel it is important to state that ChatGPT is an amazing Large Language Model. It performs really well when given a host of linguistic tests. However, ChatGPT is not designed to be, is not trained to be, and is not an Artificial Intelligence.

4 Likes

Well no, and there never was meant to be, not in the form proposed. I’ve just reread the paper that introduces this test, and I think you’re misdescribing it.

I would explain it as a way to make more rigorous the terms of the question, “Can machines ever think?” The point here is that although Turing doesn’t attempt to do so, he could easily make it more rigorous. I still think this is the best single test we have for whether a machine can be considered intelligent.

It’s not entirely clear if Turing thought of this as mostly a thought-experiment; it seems more likely that he was thinking that in fifty years (which would have been around the year 2000) we might be hitting a threshold where such tests might be carried out and have a significant chance of bearing fruit.

I assume that — since this is the café — we might treat it as an open mic and see where the conversation goes…

What so many debates leave unrecognized, conceptually, is the possibility that intelligence is not really a count noun at all.

We get much more fresh air if we see intelligence as a phenomenon that can only be recognized as a communicative and living process. Intelligence — like storms, gravitational fields, and traditions — unfolds only in relational interaction, rather than being a magical ingredient or secret sauce packed “into” some thing such as a skull or a circuitboard.

Let us grant that ChatGPT is a manifestation of intelligence, and it involves artifice. (TiddlyWiki, and my iPhone, and the books on my shelf all fit these descriptions!) None of that implies that it is “an” artificial intelligence. Its intelligence is continuous with the intelligence of the beings upstream and downstream of its patterns. It’s a development of collective intelligence, extended and reconfigured in some novel ways. Hence it doesn’t make sense to put it on “the other side” of a divide between “the real intelligent units” and “counterfeit” or “illusory” beings.

To be clear, the argument cuts both ways. My “own” intelligence is not a single unit inside my skull (or do I have one in each hemisphere, plus one in my gut??). Intelligence in “my own” case is a unique and complex dance that reanimates meanings cultivated and communicatively interwoven over countless generations (not limited to humanity or primates, though symbolic language is a crucial threshold!).

To take a process-oriented approach to intelligence as a dynamic phenomenon is to admit that I am not “a” natural intelligence, and ChatGPT is not “an” artificial intelligence. Both of us are — in very different ways — local manifestations of how intelligence pulses through the world.

All this is not to say we shouldn’t attend to what’s new, disorienting, or dangerous here! It’s just that drawing emphatic lines around “genuine cases” of intelligence is going to distract us from what’s amazing about the intelligence of organisms, as much as about what’s provocative about AI.

What think you all?

2 Likes

Glad the conversation is fostering a deeper debate. Not before time… :expressionless:

There is only one thing I know for sure, and it took me a long time to learn it. Despite having studied AI very briefly (as an extra-curricular offshoot of my college studies) back in the late '80s, I know next to nothing about AI (or AGI – Artificial General Intelligence).

These guys, on the other hand…

and of course,

https://www.youtube.com/results?search_query=wolfram+chatgpt

Great post. Truly. However, to answer you would place me at a loss for words – I’ll defer, in hope, to my future self :wink:

1 Like

I’ve often considered whether storms might be intelligent, carrying out in a few hours or a few days significant intellectual activity equivalent to a human lifetime of experience, interacting with peers, and leaving behind their traces on the land and ocean, but mostly on other weather systems that are constantly forming and reforming.

Or continents? Could mountain ranges just be the detritus of philosophical debates between slow-moving continents?

I recognize that there’s something of a sophomore gab session in these questions, but I mean them quite seriously. Trying to determine what sorts of patterns represent intelligence is an extremely difficult, possibly insoluble, question. The Turing Test is a wonderful idea, but only between two systems that can establish communications. It’s hard to imagine a cyclone trying to determine if its interlocutor is a continent or a hurricane. But for humans and computers over a text interface, it seems a reasonable endeavor.

2 Likes

(useful video links)

… and I’d add … Hi, Robot: Intercom interviews ChatGPT - YouTube

In a later post I will return to AI and TW more specifically.

TT

Right. The first Turing Test (there were some other variants, not much applied) is resolutely oriented to assessing behaviour / performance only: i.e. whether the linguistic responses of the machine fully resemble those of a human interlocutor.

So, contra @intrinsical :slightly_smiling_face:, it is not necessary to wonder about the “backroom workings” in order to test the output responses. In short, the Turing Test is a test of behavioural simulacra.

Regarding the LLM behind ChatGPT, several AI boffins have expressed the view that it looks more human the LARGER the model and the data set become. Certainly the effects of scaling seem to have thrown up the most surprises in replicating human inter-texting.

beep, TT

Right. Get to it! If you, “Storm Whisperer”, could hold conversations with hurricanes in the Gulf of Mexico, requesting moderation, that would be invaluable!

beep, TT

Since I work with this stuff all day at my day job and hear misconceptions about what AI is in general and specifically ChatGPT I want to say very clearly:

ChatGPT isn’t designed to be intelligent in any way, and it isn’t designed or intended to have outputs that are correct in any ‘what it says is factually correct’ sense.
ChatGPT is designed to sound human. That is the only concern when designing it.
Any intelligence or basis in fact has to come from some other system. An expert system could supply what is true and pass that to ChatGPT to get the information presented in a way that is both comprehensible to humans and sounds like it was produced by a human.
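The split described above (facts from some other system, human-sounding phrasing from the model) can be sketched in a toy way. Everything here is invented for illustration: `expert_answer` stands in for a real expert system, and `rephrase_like_a_human` stands in for an actual LLM call.

```python
# A toy sketch of the "expert system supplies facts, LLM supplies phrasing"
# pipeline. All names are hypothetical; no real LLM is involved.

def expert_answer(question: str) -> str:
    """Stand-in expert system: a lookup table of vetted facts."""
    facts = {
        "how do i turn on my computer": "Press the power button.",
    }
    key = question.lower().rstrip("?").strip()
    return facts.get(key, "I don't know.")

def rephrase_like_a_human(fact: str) -> str:
    """Stand-in for the LLM step: wrap the vetted fact in polite prose.
    A real system would call a language model here."""
    return f"Thank you for your question. {fact}"

def answer(question: str) -> str:
    # The fact comes from the expert system; the model only reformulates it.
    return rephrase_like_a_human(expert_answer(question))

print(answer("How do I turn on my computer?"))
```

The design point is that correctness lives entirely in `expert_answer`; the language-model layer could be swapped out without changing what the system actually knows.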

Because a large amount of the training data comes from sources where people are saying things that are factually correct, many of the responses it gives are correct, but that is completely incidental and irrelevant to what it is designed to do.

One huge thing people need to understand is that ChatGPT cannot lie to you, because it isn’t trying to tell the truth. It is like a parrot that repeats words and can sometimes put them together into a coherent statement. If the statement isn’t factually correct, that doesn’t mean the parrot is lying, because the truthfulness of the statement has nothing to do with what the parrot is doing. The difference is that ChatGPT is very, very good at making coherent statements, so people interpret the substance as the important part instead of seeing it as a performance.

2 Likes

@inmysocks, brilliant reply. Spot on. ChatGPT “is designed to sound human”. Tx.

It’s an “interactive linguistic simulation device.”

I think, pragmatically, out in worldly use, there is now the bigger issue of, to put it crudely: “Okay. So we made this smart talker. And … what to do with it?”

Best, TT

I am happy that the use isn’t immediately obvious. We could pretend that the main use is to pair it with an expert system and have it make the responses nicer (someone asks ‘how do I turn on my computer’, the expert system responds ‘press the power button’, and ChatGPT turns that into ‘Thank you for your question, in order to turn on your computer you press the power button’). But in a real, practical, right-now way: imagine you wanted 10,000 fake Twitter accounts to try to change public opinion about the dangers of an invading squirrel army. Instead of writing 10,000 tweets yourself, you ask ChatGPT to write them, each one using the local vernacular of one of the 10,000 towns where you want to change opinions, and then post those tweets.
Then do the same thing tomorrow.

Even if you have to post the tweets manually, if it is just copying and pasting, you pay 100 mechanical turks 10 cents a tweet and have your very own propaganda army to push your ideas about squirrels.

And if you don't like sleeping at night

imagine someone with state-level resources and a specific agenda.

But note that intelligence is not particularly associated with having vast stores of knowledge.

Intelligence is the ability to gather information and convert it into knowledge that can then be applied in other circumstances. The one who can perceive complex patterns among facts is generally more intelligent than the one who knows more facts.

I agree that ChatGPT and its ilk are not designed to be technological oracles. But the better they become at natural conversation, the closer they come to passing the best test we’ve been able to devise for human intelligence.

“Human” is important there. It’s easy enough to imagine an advanced alien race looking at the latest version of their programming and saying, “Well, it’s only a small way beyond human intellect. In eight or ten iterations, it might start approaching actual understanding.” Or we could imagine squirrels asking whether humans were intelligent, and concluding that, “People simply can’t remember well enough where they stashed the nuts; clearly they have a ways to go.”

… and what did this have to do with TiddlyWiki again? :wink:

1 Like

Knowledge transfer.

Um… storage and retrieval? Sounds good enough/apt to me :wink:

Great post.

2 Likes

:lurks-fiercely:
Good topic.

2 Likes