With an LLM it is all about how good your Question is!

I decided to ask ChatGPT to look at all the JavaScript on Tiddlywiki.com and summarise all the functions, and identify the ones I could execute from the Browser Console.

Wow, I tested half a dozen and they worked. For example, deleting an offending tiddler from the console.

I even have a function that can not only change a tiddler field, but also add an entry to the tags or a list field.
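For anyone curious, calls of this kind can be sketched as follows. `$tw.wiki.deleteTiddler` and `$tw.wiki.setText` are real TiddlyWiki core methods, but the tiddler titles and field values here are invented for illustration, and a tiny stub stands in for `$tw` so the snippet can be run outside a live wiki page:

```javascript
// Sketch of TiddlyWiki console calls. $tw.wiki.deleteTiddler and
// $tw.wiki.setText exist in the real core; the titles/values are examples.
// This minimal stub is NOT the real API - it only mimics the call shapes
// so the snippet runs outside a browser. In a live wiki, omit the stub.
const $tw = globalThis.$tw || {
  wiki: {
    tiddlers: { OffendingTiddler: { title: "OffendingTiddler" } },
    getTiddler(title) { return this.tiddlers[title]; },
    deleteTiddler(title) { delete this.tiddlers[title]; },
    // Real core signature: setText(title, field, index, value, options)
    setText(title, field, index, value) {
      const t = this.tiddlers[title] || (this.tiddlers[title] = { title });
      t[field] = value;
    },
  },
};

// Delete an offending tiddler straight from the console:
$tw.wiki.deleteTiddler("OffendingTiddler");

// Set or change a single field on a tiddler:
$tw.wiki.setText("MyNote", "caption", undefined, "A new caption");
```

In a live wiki, appending to the tags field is a little more involved because the core stores tags as an array on `tiddler.fields.tags`; one approach is to read that array, concatenate the new tag, and write the tiddler back via `$tw.wiki.addTiddler(new $tw.Tiddler(...))`.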

The mind boggles what good questions I have not yet asked.

I’m generally quite uninterested in the output of LLMs. But I must admit to curiosity over how ChatGPT derived its information. I wonder if it looks through the HTML tiddler store to find all tiddlers – inside or outside plugins – then selects the JS ones and parses their content? Or does it scan the GitHub repo for JS files? Or does it do what I would do and load the main site, open the developer’s console and look at the keys of objects such as $tw, $tw.wiki, $tw.wiki.__proto__, and $tw.utils?

That latter tends to give me a good view of what sorts of functions are available to the console, and there are a great number of them. I’ve never compiled a list. But here’s one way to get details of a particular function: in the developers’ console (often F12), type, say, $tw, hit enter, and expand the resulting object. It shows you a number of properties, most of them functions. If you expand the prototype property at the bottom it shows you additional ones. Depending on the browser – here is one of the places where I find Blink/WebKit-based browsers (Chrome/Brave/Safari, many others) to be more convenient than Gecko/Firefox – you may be able to click on the function name and find a link to open its source code. That will give you any comments, the parameter names, and the implementation, if you can read JS.
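The same exploration can be done programmatically. This generic helper (my own sketch, not a TiddlyWiki API) walks an object and its prototype chain collecting function names, which is roughly what expanding `$tw.wiki` and its `__proto__` in the console shows you; in a live wiki you could paste it into the console and call `listFunctions($tw.wiki)`:

```javascript
// List the function names an object exposes, including inherited ones,
// by walking its prototype chain (stopping at Object.prototype).
function listFunctions(obj) {
  const names = new Set();
  for (let o = obj; o && o !== Object.prototype; o = Object.getPrototypeOf(o)) {
    for (const name of Object.getOwnPropertyNames(o)) {
      if (typeof obj[name] === "function") names.add(name);
    }
  }
  return [...names].sort();
}

// Demonstration on a plain object; in a wiki console: listFunctions($tw.wiki)
console.log(listFunctions({ version: "5.x", getTiddler() {}, addTiddler() {} }));
// → ["addTiddler", "getTiddler"]
```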

There is a great deal of TW you can run from the console.

personally i don’t use such tools

and i think i asked that question in another thread

because i wanted to understand the thinking of the statement maker

****
for me the answer is not the point
the journey to get to how the conclusion was reached
is likely much more informative than

just: use this method-x

wrt to the title
the premise which
is imho false

With an LLM it is all about how good ~~your~~ the Question is

i struck through & replaced the personalized affectations
( because i think they shift the frame in an unhelpful way )

see also : wrt: “exaggerated claims about artificial intelligence”
(products)

 The input 
does not cause the output 
in an authorial sense,
much like input to a library search engine 
does not cause
relevant articles and books 
to be written.
The respective authors wrote those,
not the search query!

.OSF

The mind boggles what good questions I have not yet asked.

here is one : can you do that without an LLM?

and another
**** why not invent&test :roll_eyes: *your own* methods to answer this question?

next is way outside the scope of the training data
or
even the abilities
of any one other than @EricShulman to answer!

why did Eric make this statement ?

what informs that opinion ?

is it correct ?

(even @TW_Tones had to do the test for the last one… why not cut out the middle man :grinning_face_with_smiling_eyes: )

… not that i’m here for grading
or even not that i haven’t ( in the distant past )
received an assessment of “needs more effort” myself :grimacing:

and perhaps even
it is the case that we lack the appropriate tools for collective
collaboration

some part of that is arguably why ppl join this very forum !

… not that i’m here for grading
nonetheless …

2 / for effort
-1 no mention of methods / can’t reproduce from given data

If I ask you a question, a very good one, but you don’t know the answer, how relevant is it that the question was very good? And how is it different with an LLM?


Please share your prompt(s) !

One of my complaints about TW is that nobody documents the available functions. You either magically understand it, or you’re not in the club.

Not so much for command-line stuff, but for rolling filters for things like better date handling.

I think it’s extremely different, because I don’t at all buy the notion that an LLM knows anything. LLMs are bullshitters with no model of the world. So their “answers” can only be correct if they’re lucky enough to have been trained on much more correct than incorrect data. There’s little to guarantee that.

I’m a strong believer in eventual AI. I don’t think LLMs are much of a path toward that.

I think most people are typically asking the best question they know how to formulate, given their current level of understanding — whether that’s in a classroom, in a forum like TalkTW, or even of ChatGPT. But while a human expert can often identify mistaken assumptions or gaps in understanding and help the learner reach a functional solution (whether the initial question was precise enough or not), an LLM isn’t interested in finding and filling those gaps in your current knowledge — because it isn’t interested at all. It’s going to spit out a coherent-sounding response even if the question was nonsensical… and the less familiar you are with the subject, as a prompter, the less likely you are to identify issues with the response. So paradoxically, the people who are best equipped to ask a “good” question or identify a useful response are the ones least likely to be asking an LLM.

Frankly, this seems more self-congratulatory than useful. As @wiki_user (edit: and @Mark_S) pointed out, your findings would be more helpful to others if you shared the prompt you used (and how you refined it, if you had to do so), the results you got, and the method you used to verify them.

what if you had an environment (a wall if you like)
that was just (a place to throw) filters + data
and see where the pieces land
or the trajectory they bounce back on

like a ping pong table folded in half
good practice environment
to train reflexes / technique

not the real game but better practice than
switch pingpong or the like

*blinking-cursor*

just to clarify
i have no interest at all in

the “prompt” used

these are the

findings would be more helpful

tests
i was looking for !!

not to say its operation does not serve someone’s interests
:zipper_mouth_face:

But I’ve met so many people like that … and so many people online that mainly repeat what other people have said. Maybe some sort of LLM is our own internal, fall-back thinking mechanism.

That’s valid, and I’m sorry for misrepresenting your position. I agree that specific reproducible findings would be broadly useful, while LLM-prompting seems to be of interest to a more limited subset of the community. But Tony’s thesis seems to be that it is the quality of the question that determines the quality of the results, and that’s hard to discuss without any examples of what he considers “good” prompts (and what sort of results those “good” prompts produce).

Otherwise, we risk implying that a question is “bad” because it produced incorrect results… and I don’t think that’s very fair to the question-askers, when ChatGPT is still hallucinating new TW widgets.

I don’t mean to repeat Searle’s Chinese Room argument. I don’t buy that argument at all. I have no problem with the notion that a model of the world can be purely emergent. Models don’t have to be explicit.

But we know how LLMs are created and trained. There’s no mechanism in that to prefer factual information over counterfactual. There’s just the hope that a high enough proportion of the material is correct that most responses are also correct. Sure, I know people who seem the same, but that doesn’t imply that LLM output should be trusted more, only that some people should be trusted less. :wink:

I’m the one that wanted to see the prompt. So, minor attribution error.

I’ve tried on a handful of occasions to use LLMs to resolve some issues with my TW code, but 90% of them just do not work, usually with made-up widgets and nonexistent JavaScript to supplement built-in widgets and macros.

I’ve long since suspected that it was pulling its data from the static webpages, which could be the reason why I’ve had such bad luck. Either that, or I haven’t the slightest clue how to ask the right way, which is also probably why, lol.

I would love to use an LLM to learn the more advanced parts of TW5, such as obscure functionality that lacks documentation or modifying the .js tiddlers for making new features.

Now, I just use the standard ChatGPT search, or duck.ai. Is there some pre-use setup required in order to get better results? Maybe a pre-chat prompt that says to use the GitHub repo as reference, or something along those lines?

It might be beneficial to write up a guide on setting up a standard design for everyone to use, so we can then reference each other’s prompts to make progress.
Imagine if we had an LLM that is easily sharable and can provide working code to learn from, akin to something like Smartsheet’s LLM.


But which people, and when? Consider Linus Pauling, the two-time Nobel Prize winner who practically wrote everything in your organic chemistry book. Yet he was wrong about the role of vitamin C in health. Multiple generations downed massive amounts of vitamin C in hopes of stemming their cold or flu symptoms, with little evidence it actually worked.

An LLM wouldn’t have made that creative, but incorrect, connection between health and the role of vitamin C in the Krebs cycle (unless it was force-fed a lot of irrelevant material).

I don’t know about the Chinese room argument, but I definitely don’t believe in the “Ghost in the machine” argument. The same person on two different but identical days, faced with identical choices, will make different decisions. If you were one person, wouldn’t you always make the same choice? Or try meditation, and you’ll wonder who is really in charge up there in your head.

If it’s drawing on the internet at large, then it could be referencing 3rd party widgets or code from TiddlyWiki classic. Actually, some of the internal code in TW5 has changed over time.

Basically, the pool of information that AI has to draw from for reliable TW is much more limited: pretty much this forum, the old Google forum, and tiddlywiki.com. But both forums are filled with counter-examples (e.g. “Why doesn’t my code work?”) and there is no ranking like with the “stack” pages for Python and other technologies.

Hm, so then, specifics would be needed, i.e. “Using information that is not over a year old from XYZ, can you …” :thinking: I wonder if a precompiled prompt can be used to get predictable outcomes. I’ll need to experiment with that.

My guess is that, given the way LLM models are trained, they cannot do that. All the material is compressed in a lossy manner, and the models don’t go back out to the internet to recollect and attempt to rehydrate the sites it came from. They “know” the “facts” they’ve collected, but not their provenance.

I do find it fascinating how an LLM is trained to answer questions: you give it a question, and then words are added like 'the question is ' <question> 'and the answer is ' <answer> (the answer starts as a blank). This is then fed to the next-word-predicting algorithm a number of times until some “terminating” word is encountered, and the <answer> is returned to the user.

Other patterns are possible to produce ‘thinking’; others use RAG to get focused answers.
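As a toy illustration of that loop (my own sketch: the `nextToken` function here is a hard-coded stand-in for a trained predictor, purely to show the control flow, not real inference):

```javascript
// Toy version of the answer-generation loop described above: wrap the
// question in a template, then repeatedly ask a next-token predictor
// for one more word until a terminating token appears.
function nextToken(context) {
  // Stand-in for the trained model: a trivial rule instead of inference.
  if (context.endsWith("and the answer is")) return "42";
  return "<end>"; // stand-in terminating token
}

function answer(question) {
  let text = `the question is ${question} and the answer is`;
  const tokens = [];
  for (let i = 0; i < 100; i++) {   // safety cap on generation length
    const tok = nextToken(text);
    if (tok === "<end>") break;     // terminating token: stop generating
    tokens.push(tok);
    text += " " + tok;
  }
  return tokens.join(" ");          // only the <answer> part goes back
}

console.log(answer("what is six times seven?")); // → "42"
```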

Interesting set of responses to my Casual post in the Cafe Topic.

  • Intentionally posted here so I was not obliged to give full, evidence-based, organised documentation of what I did.
  • It does surprise me the amount of “emotion in responses” and “taking of positions”.

That is not an easy question to answer, because it’s made up of many parts, and although I am no expert I know enough that I should charge consultancy fees. In summary, off the top of my head:

  • I asked it to look at tiddlywiki.com, and the end of that address is an HTML file, or a representation containing JavaScript code, which it would have ingested.
  • Prior to my question the LLM had ingested this and other content to build the model.
  • Other people may have asked similar questions and got answers previously that were ingested.
  • My question contains in it the beginning of a matching exercise; if I had included or excluded some words I may have got a very different answer.
  • Once I got the result I scanned it, based on decades of experience, for “salience and sensibility”. I may add that to my book :laughing:
  • My own history with TiddlyWiki and ChatGPT is going to influence the answers somewhat, given prior questions and my responses to previous answers.
  • My point is I asked a good question and as a result (after confirmation) I got good answers. LLMs are very similar to asking a friend or expert. Always be sceptical.

In my view, there are three key areas important to getting real value from an LLM:

  • Well-framed questions are more likely to result in useful and reliable answers.
  • Any response from an LLM still needs to be regarded sceptically, just as you would a response from a friend, a book, or an expert.
  • Expose answers to practical application and test them before assuming they have any value.

No really, my answer was a simple NO, I don’t know Readline, and thus have no idea what it is. Perhaps you could ask better questions? :grinning_face_with_smiling_eyes: Otherwise your reply is very, if not too, “informal”.

See part of this answered above: “Must I always take the Yellow Brick Road all the way to the end just to find the Wizard is a fraud?” Sure, I value the journey, but I only have time for so many.

Well, in this case yes, but with a much bigger investment of time, and I may have needed to learn things on the way to finding the answer that I had no short- or long-term “need to know” for.

Perhaps another time. It was a conversation, so there was no single question, but for your reference this is the first question, which I would argue was well phrased:

TiddlyWiki.com is a site containing javascript and within that functions. Functions can be called from the browser console and bookmarklets. Can you scan the Javascript and generate a list of functions I can use this way?

  • Note: I used “Browser console and bookmarklets” specifically.
  • Hidden in this question is prior work of my own researching functions that can be executed in a browser against websites/tiddlywikis. That includes successes and failures that ChatGPT and I share.
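For reference, the bookmarklet form mentioned in the prompt looks like this. The call inside, `$tw.wiki.filterTiddlers`, is a real core function, but the filter string and the alert are just an illustrative guess at a useful one-liner:

```javascript
// A bookmarklet is a javascript: URL; saved as a bookmark, its body runs
// in the context of the current page, so on a TiddlyWiki page it can call
// $tw directly. This one alerts the titles matching an example filter.
const bookmarklet =
  "javascript:(function(){" +
  "alert($tw.wiki.filterTiddlers('[tag[HelloThere]]').join('\\n'));" +
  "})();";

console.log(bookmarklet);
```

Pasting the resulting string into a bookmark’s URL field, then clicking it while a wiki is open, runs the body against that wiki.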

I understand why it feels like this. I think the problem is not that this does not happen on talk.tiddlywiki, but that it does not make it into the formal documentation.

  • However, this exploration is answering a subset of questions; for me it finds “the available functions” and may result in documentation in a less informal category.

And my point is: how good was the question? By the way, LLMs are more correct than incorrect.

I am not confident A.I. is near at all; I think LLMs are just another step, like OCR, speech synthesis/recognition, image analysis… I don’t buy much of the hype.

It depends on the following, in my view:

  • Why is the LLM being used?
  • We can learn how to ask good questions.
  • We can learn to be sceptical of answers.

Since the last two have been a big part of my personal and professional life, I am in a very good position to share and support these skills.

  • It would be the same as reading a news report, the output of an enquiry, a graph etc…

No, it’s not “self-congratulatory”, and I do find that suggestion “unkind if not impolite”. I was sharing that I had a lot of success, and I believe it is dependent on both the questions you ask and testing the result. I did not think the Cafe discussion demanded such rigor.

I would have hoped others had similar experiences and shared their successes without having to give a “forensic account”. Sometimes I tire of extensive, carefully researched accounts, which you will see in much of my responses here.

ChatGPT is almost that when informed about TiddlyWiki

They may come from me eventually, but you can find them yourself. The point is they are findable.

For me to publish my specific results demands of me a rigor I was trying to avoid on this occasion.

Totally agree; this is why I am a committed Modern Skeptic, and have been for decades.

Yes, that’s mostly my point. I think, given the responses in this thread, “the quality of the question determines the quality of the results” is not properly understood, and people would be “wise to take this on”.

Examples are given above, but I hoped others might have responded “me too”: an informal discussion, not a formal treatise.

It is perhaps you taking this too far; I made no such assertions. I only opened a discussion.

I have done this, as discussed elsewhere. But yes, let’s do it. I have shared a Custom GPT informed by TiddlyWiki specifically, but it still needs more work. In fact this touches on the point of this topic: I used a question in an area ChatGPT was better informed in, JavaScript.

  • Similarly, I have also given ChatGPT specific TiddlyWiki code, such as a filter operator, and asked it to give me a variation, with success.

All

I will endeavour to share a couple of validated answers this question gave me, but I will not dump answers until and unless I validate them.
