Expanded ChatGPT Interface

I’m not totally sure what you’re asking, @JanJo, so I will just give some context and you can ask for further clarity if I don’t address your question:

The API key that you get from platform.openai.com is a secret key that is shown to you only once, when you create it. From the platform you can create more keys, but if you lose a key you cannot go back and retrieve it.

If you have this plugin installed in your wiki there is a place to input the API key – that gets saved to the wiki as the text field of a tiddler.

The key is accessible only to those who have access to that tiddler. If you had a publicly available wiki, you would need to find some way to obfuscate and encrypt that tiddler… although you would probably not want this plugin to be available in a way that lets others use your key, since each query sent by a user would charge your account (usually less than a penny for a simple query, but still).

It could pretty easily be modified for public use so that a user would have to input their own API key in order to make queries, which would not carry over through saves.

If you have some specific use case in mind, I’d be interested to help work through it with you.

I am a teacher and I would like to build an interface that gives pupils feedback on a handwritten input - so it would have to be my API-Key.

In the first step it should recognize the text and give hints on orthography.
In the second it should evaluate the content.

That’s an interesting use case (and the first time I’m coming across the word orthography, thank you for that! :smiley: )

I believe that it would be capable of recognizing misspellings and making suggestions – I haven’t tested it on elementary handwriting, but I imagine it would do a fairly decent job, though further tests would need to be done.

As far as your API key being used, possible misuse could be minimized pretty easily: the OpenAI platform allows you to associate any API key with a specific project and then set limits on how much can be charged to that project per month. I have not had to reload my credits since putting in $20 at the beginning of the year, so it could be fairly affordable. If you set the limit at, say, $5, it would be safe to let students use the interface with your API key without running the risk of going destitute.

This would protect you even if you had the API key publicly available and were just acting on the basis of trust – if you ever had reason to suspect the API key had been compromised, the damage would be minimal and you could just swap it out.

Another simple safeguard would be creating a new API key for each assignment; that would again provide guardrails against severe misuse.

As far as hiding the API key, though, I’m afraid I don’t have much experience sharing my wikis with others, and from what I’ve seen around here, hiding or encrypting a tiddler well enough to prevent malicious access seems challenging… though perhaps it will become less challenging with the new release? (no citation on that, it’s just the general sense I have)

There may be someone who knows more about encryption and hiding tiddler content, however, who could give you more info.

1 Like

An easy way to make the API key less visible would be to hardcode it within the widget javascript – I don’t know the skill level of the average student, but it’s possible that the key would be sufficiently hidden from most people within the context of a very large codeblock.

But the API key would need to be accessible by the widget somehow… Would be very interested to hear if anyone has a method for extracting information from a source outside the wiki to use within it.
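To make the hardcoding idea concrete, here is a hedged sketch (the function and fallback names are hypothetical, not the plugin's actual internals) of a widget preferring a config tiddler and falling back to a constant buried in the source. The tiddler title is the one mentioned later in this thread:

```javascript
// Hypothetical sketch: resolve the API key at widget render time,
// preferring the config tiddler over a hardcoded fallback constant.
const HARDCODED_KEY = "sk-fallback-000"; // buried deep in a large codeblock

function resolveApiKey(wiki) {
  // Mirrors TiddlyWiki's wiki.getTiddlerText(title) lookup
  const fromTiddler = wiki.getTiddlerText(
    "$:/plugins/NoteStreams/expanded-chat-gpt/openai-api-key"
  );
  return fromTiddler && fromTiddler.trim() !== ""
    ? fromTiddler.trim()
    : HARDCODED_KEY;
}

// Minimal stand-in for the wiki store, for demonstration only:
const fakeWiki = {
  tiddlers: {
    "$:/plugins/NoteStreams/expanded-chat-gpt/openai-api-key": "sk-live-123"
  },
  getTiddlerText(title) { return this.tiddlers[title]; }
};

console.log(resolveApiKey(fakeWiki)); // "sk-live-123"
```

Hiding the constant this way is only obfuscation, of course; anyone who can read the widget source can still find it.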

1 Like

There is an encrypttiddler plugin by danielo that does a decent job on a single tiddler or a list of them.

1 Like

What about entering a hashed key via a QR code?

1 Like

Great! Then all you would need to do is to encrypt the single api tiddler ( $:/plugins/NoteStreams/expanded-chat-gpt/openai-api-key) and you would be good to go.

Although you should still set up the guardrails in the OpenAI platform, since a user could still query the agent far more than necessary (unless maybe you were to also modify the system message to forbid this kind of behavior… that would be interesting to attempt).

Would this be a key the student would input at time of use? If so, the key would have to be decryptable by the widget but not by the student.

My cryptography skills are extremely minimal.

I guess the student would have to authenticate.
I also would love to have an LDAP or OAuth plugin for TW.

Keep me up to date if you decide to attempt something like this, or would like to brainstorm it further. I often find that my use cases are niche, and find a lot of excitement and intrigue in hearing about and discussing those of others!

1 Like

Thanks a lot.
I guess this week I won’t have the time. Maybe in two weeks.

An example (in German) of how AI is used this way can be found here: https://www.fiete.ai/

1 Like

Hey all, it’s been another 2 weeks, so I’m obviously going back on what I’ve said and releasing another version of the plugin:

$__plugins_NoteStreams_Expanded-Chat-GPT (0.9.8).json (199.7 KB)

This is a major update with significantly improved functionality and several new features, which should not interfere with its previous features, but please read the documentation to see everything it’s capable of.


Great! So, what’s in this update?

To begin, you will see several new options and improvements in the user interface.

First, you will notice the button container will now wrap underneath the text box when the screen size is reduced – this is especially noticeable on mobile or in the sidebar, but also works in the story river if you reduce the screen size.

(screenshot: responsive button layout)

As you can also see, this update implements a flexible text field, which expands to fit the size of your message.

And, of course, you’ll notice several new buttons:

  • Microphone button: Begins voice recording in the browser, which automatically stops and transcribes when you stop speaking (this uses OpenAI’s Whisper system, which will charge your OpenAI account)
  • Musical notes: Allows the user to upload an audio file, which will be transcribed as above

Additionally, you will see that the undo button now has a dropdown next to it:

Building off the previous update, this update expands the undo functionality so the user can either undo individual actions or undo entire queries.

(animated demo)
Here you see a multistep query which can be undone in one step rather than clicking the undo button multiple times.

I believe that this is a superior solution to giving the agent undo capabilities, giving the user the opportunity to resolve a mistaken response manually rather than engaging in continued dialogue.

The number of incorrect actions the agent might take is greatly reduced by an extensively improved transaction-based validation system, which checks the intended action against the results, rolls back unintentional changes, and retries. As a side effect, this should greatly reduce the possibility of the agent reporting actions being done which have not succeeded.
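The check-rollback-retry cycle described above can be sketched roughly like this (all names here are illustrative, not the plugin's actual code):

```javascript
// Hedged sketch of a transaction-style validation loop: apply the
// intended action, verify the result against the intention, and roll
// back and retry on mismatch.
function applyWithValidation(action, ops, maxRetries = 2) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const result = ops.apply(action);      // perform the change
    if (ops.verify(action, result)) {      // did it do what was intended?
      return { ok: true, result, attempts: attempt + 1 };
    }
    ops.rollback(result);                  // undo the unintended change
  }
  return { ok: false };                    // give up and report failure
}

// Demo: an operation that fails once, then succeeds on retry.
let calls = 0;
const flakyOps = {
  apply: (a) => ({ value: ++calls >= 2 ? a.expected : "garbage" }),
  verify: (a, r) => r.value === a.expected,
  rollback: () => {}                       // nothing persistent to undo here
};

const outcome = applyWithValidation({ expected: "set-field" }, flakyOps);
console.log(outcome.ok, outcome.attempts); // true 2
```

The final `{ ok: false }` branch is what lets the agent report honestly when an action never succeeded, instead of claiming success.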

If you are going to perform more complex, multi-step operations with ambiguous steps, I highly recommend toggling on one of the newest options, the adversarial agent, with the following invocation:

<$chat-gpt adversarial="yes"/>

The adversarial agent acts as a gatekeeper at the very first step when the agent tries to perform an action – a small model (gpt-4o-mini in this case) compares the first agent’s proposed action array against the user’s original query, and returns (to the 1st agent) a boolean (allowing or disallowing an action) as well as an explanation of why it has done so and suggestions for how it should proceed.

Not only does this act to prevent unnecessary or incorrect actions from taking place, it also helps to keep the 1st agent on track, when performing several actions. For example, the adversary might say “I’ll allow this, but remember you still need to do these other things in order to fulfill the request.”
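A rough sketch of that gatekeeping flow (the real plugin consults gpt-4o-mini; here the `judge` is any async function returning a verdict, and all names are assumptions for illustration):

```javascript
// Illustrative adversarial gatekeeper: a second model reviews the
// first agent's proposed actions against the user's original query
// and returns a boolean verdict plus feedback.
async function gatekeep(proposedActions, userQuery, judge) {
  const verdict = await judge({ proposedActions, userQuery });
  // verdict: { allowed: boolean, explanation: string, suggestions: string[] }
  return verdict.allowed
    ? { proceed: true, hints: verdict.suggestions }
    : { proceed: false, feedback: verdict.explanation, hints: verdict.suggestions };
}

// Demo with a stubbed judge that rejects deletions nobody asked for:
const stubJudge = async ({ proposedActions, userQuery }) => {
  const wantsDelete = proposedActions.some(a => a.type === "delete");
  const askedForDelete = /delete/i.test(userQuery);
  return wantsDelete && !askedForDelete
    ? { allowed: false, explanation: "User never asked for a deletion.",
        suggestions: ["Re-read the query."] }
    : { allowed: true, explanation: "Matches the request.", suggestions: [] };
};

gatekeep([{ type: "delete", title: "HomePage" }], "Rename my tiddler", stubJudge)
  .then(r => console.log(r.proceed)); // false
```

The `feedback` and `hints` fields are what carry the "I'll allow this, but remember…" style of guidance back to the first agent.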

This exchange between the two agents, as observed in the browser console.


Behind the scenes, there are quite a lot of changes:

  • Each of the major groups of function types has been split off into its own module and class, and each has been improved in its own way.
  • Connection pooling has been implemented, which vastly improves response speeds from the server.
  • A caching service temporarily stores the results of recent searches, vastly improving information retrieval by the agent.
  • Managing the interactions between all these modules is the service coordinator, a module that is invoked when the agent attempts an operation involving both actions and validation.
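The caching idea can be sketched as a small TTL memoizer (class name and TTL are assumptions for illustration, not the plugin's actual code):

```javascript
// Rough sketch: memoize recent search results with a short time-to-live
// so repeated lookups by the agent skip hitting the store again.
class SearchCache {
  constructor(ttlMs = 30000) {
    this.ttlMs = ttlMs;
    this.entries = new Map();
  }
  get(query) {
    const hit = this.entries.get(query);
    if (hit && Date.now() - hit.at < this.ttlMs) return hit.value;
    this.entries.delete(query);   // expired or missing
    return undefined;
  }
  set(query, value) {
    this.entries.set(query, { value, at: Date.now() });
  }
}

const cache = new SearchCache();
cache.set("[tag[Stub]]", ["Idea one", "Idea two"]);
console.log(cache.get("[tag[Stub]]")); // [ 'Idea one', 'Idea two' ]
```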

Finally, (I think), this update introduces Text-to-Speech capabilities for the agent, which can be enabled with <$chat-gpt tts="yes"/>

In this mode, the button container is expanded to include a dropdown that allows the user to select a voice. When a response is posted by the agent, it will be accompanied by a voice response (an OpenAI service that will be charged to the user’s account).


This is a significant update and I wouldn’t be surprised if there were fixes to make – please report issues and I will address them promptly :slight_smile: I’m extremely happy to answer any questions and engage in discussions about how this plugin can be incorporated into one’s workflow, as well as about future features and problems one might encounter.

As I’ve made most of the significant updates and the system is working fairly well, this is essentially a Version 1.0.0.

Going forward, I will be looking at incorporating Anthropic and Ollama models as options… and I will likely post more about my specific thoughts on that at a later point… but, that said, the “Expanded ChatGPT Interface” plugin name will then have to change, making this possibly the “last” major update.

Cheers!

2 Likes

@well-noted I have been somewhat absent from Discourse for a while and have only read this thread so far. I will play with it ASAP, but must say I am impressed generally, and especially with the idea that it’s an active assistant. Next is something that may help with API keys, then after that some ideas to explore.

Thanks so much for your excellent and substantial contribution.

API key

If the user takes responsibility for the API key, the solution could just store the API key in local memory and not save it back to the wiki.

  • One could also store the API key inside a bookmarklet that installs it on the wiki as a temp tiddler that is not saved; thus one can install the API key on demand with a click. I have all the tools to do this easily: https://bookmarklets.tiddlyhost.com/
  • A clear and simple message when no API key is available would help handle the case where it is missing.
  • With this method the API key is never in the wiki, just in the authorized user’s browser, for all of file:// or the site domain.
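The temp-tiddler idea above could look roughly like this (the tiddler title is hypothetical; a real bookmarklet would call `$tw.wiki.addTiddler`, and `$:/temp/` tiddlers are excluded from saves by convention):

```javascript
// Sketch of installing the API key as a $:/temp/ tiddler so it lives
// only in the current browser session and is never saved to the wiki.
function installTempKey(wiki, rawKey) {
  const title = "$:/temp/NoteStreams/openai-api-key"; // hypothetical title
  wiki.addTiddler({ title, text: rawKey.trim() });
  return title;
}

// Minimal stand-in for the wiki store, for demonstration only:
const store = {
  tiddlers: {},
  addTiddler(fields) { this.tiddlers[fields.title] = fields; }
};

installTempKey(store, "  sk-demo-456  ");
console.log(store.tiddlers["$:/temp/NoteStreams/openai-api-key"].text); // "sk-demo-456"
```

Wrapped in `javascript:(function(){ ... })();` with a `prompt()` for the key, this becomes a clickable bookmarklet.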

Ideas to explore

Naming tiddlers

People very commonly have trouble naming tiddlers, and it seems an LLM could help with this: first use a generic title, then press a button that digests the content and comes up with a meaningful title suggestion, and click to accept. The rename plugin, if installed, can take responsibility for all wiki-wide renames.

Creating test data - Background

Some time ago I created https://test-data.tiddlyhost.com/ and specifically Periodic-table-of-the-elements; this was then taken by @Scott_Sauyet and used to build a sophisticated interface.

  • This demonstrated the value of presenting data sets to inspire development on top of TiddlyWiki.
  • It would also allow shared standard datasets to be published, helping users quickly develop solutions and seek support with a reference dataset without sharing their private wiki. For example, a dataset of human-resource records of people, geological eons, example products, etc.

So it is with interest that I can see your solution could be used to acquire such test data sets with real or dummy content: basically a set of tiddlers with common fields that can then be used to query, or to build a UI to manage.

  • It seems your solution may already have what it takes to do this, but since I have not yet had hands-on experience I am not sure, so I thought I would raise it.
  • I tend to define a tiddler using the field object-type=person to avoid polluting the tag space.

In the long run

  • What if we could generate JSON directly, which can ultimately become a plugin, and thus be easily queried (shadows), deleted, or edited/replaced?
    • This would allow results of queries to be exported (drag and drop) to the destination wiki.
  • Perhaps eventually we may be able to drive queries that generate the equivalent of multiple related tables (by keys in field names).

Why?
This kind of database development, based on structured test data or a model, is the key to building bespoke solutions in all manner of subject areas, sometimes based on an existing CSV file and other times on prior art.

  • Examples may include community groups, sweepstakes, event management, recipes, libraries, bookmarking sites…
  • This, however, can be very time consuming and needs a high level of database-management expertise.

It seems to me an LLM may be able to bypass this complexity by building appropriate data sets; the user then builds their solution against the data rather than through tedious and error-prone development from scratch.

  • Once the data set is there, we can build tools to further support solution development, largely driven by the content of the data set(s).
  • Ultimately, new records become available and the user removes or hides the test data.

Once again, thanks so much for your contribution @well-noted

1 Like

It’s a real honor getting such congratulations and feedback from you @TW_Tones; I have learned much from your posts over the years.

The current setup could definitely use a temp tiddler instead; it would currently require editing the javascript, but I think adding a config tiddler that lets the user select would be fairly straightforward.

Naming of tiddlers works quite well with this plugin - even without the rename plugin, the agent is generally capable of doing this and I’ve had no problem using it for that.

You are correct that it is very helpful for naming new tiddlers – I have begun to speak to the agent and say “Create an appropriately titled tiddler tagged Stub which summarizes this idea,” and then I spew off the top of my head – the results are consistently quite good, both the titling and the summary.


Per your datasets question, yes, the plugin is quite capable of handling real data, and also of generating dummy data. If you turn the adversary on, the tag and field requirements you add will be applied quite consistently; I use this to generate stream-list tiddlers. I mean to experiment with adding my personal conditions (when tagging a tiddler Idea, also set its description field to {{!!text}}), but most of my wiki time recently has been devoted to toying with this.


I’m interested in your idea of generating JSON directly. I will need some time to wrap my head around its applications personally, but I’m sure I could just use whatever function the “Export as JSON” button uses.

The question I don’t have an answer to is: would it be possible to mass export these? Is it possible to generate a button that can export all tiddlers that match a filter, for example? I’ve never tried. The current implementation could definitely handle the sorting of imports, using the same idea that Commander uses for bulk editing imports.


Finally, I intend to repackage tomorrow, but if you decide to experiment before I get around to it, I’d recommend updating the validation module

$__plugins_NoteStreams_expanded-chat-gpt_validation-service.js.json (24.9 KB)

I realized today that the 0.9.8 version is too strict about verifying that unrequested changes are not made, and fails too many valid requests. I’ve rolled back to a more lenient version, which seems to be working better, though I’d like to hear outside opinions on how validation could be more successful and consistent.

Thanks, but part of my remit is to help people become part of the community and eventually contribute substantial features, as you are here with LLM integration. TiddlyWiki’s features and possibilities already exceed what any one person can imagine or perhaps even understand, but let’s just keep pushing it.

My very simple integration: TiddlyWiki ChatGPT

This may help. I have a references wiki with the markdown plugin, so new tiddlers are markdown. I name them with the ChatGPT questions (or abbreviations of them) and paste in a copy from ChatGPT, which returns markdown natively. I may place related questions and answers in the text of the same tiddler.

  • Because I have not yet had hands-on experience: do you, or can you, make use of the direct markdown import?

This is a reasonable design method: externalise config info to separate tiddlers that the javascript retrieves. It has been done many times.

When renaming in a sophisticated wiki, the rename plugin updates content in the text and other fields that references the renamed tiddlers, including links, transclusions, and even parameters to macros.

I am thinking here of task1 task2 task3 and then “please rename based on content”.

  • But using your method, where your “spew” becomes the body text, would be great.
  • Alternatively, you enter a question (it becomes the title) and the result becomes the text.

You do not need to worry here: edit an existing plugin and you will see the JSON tiddlers – even the import does this with tiddler or JSON imports. We now have a somewhat complete set of macros/widgets for JSON (and presumably JS functions).

  • I would suggest doing your best to limit what your plugin does and leveraging the TiddlyWiki methods directly. Then other solutions can use the results in novel ways and your plugin is less likely to need modifications.

All you need to do is create the plugin and it will contain multiple tiddlers; exporting it exports all the tiddlers within. But export is often not necessary, because you use the result in the wiki.

You can do this in advanced search out of the box. But I have built icons to drag and drop all tiddlers in a filter as a JSON file that the destination imports.
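The filter-to-JSON idea can be sketched as follows (the predicate is a stand-in for evaluating a real filter string via `$tw.wiki.filterTiddlers`; all names are illustrative):

```javascript
// Hedged sketch of mass export: collect tiddlers matching a predicate
// and serialize them in the array-of-field-objects shape that
// TiddlyWiki's JSON export uses.
function exportByFilter(allTiddlers, predicate) {
  return JSON.stringify(allTiddlers.filter(predicate), null, 2);
}

// Stand-in tiddler store, for demonstration only:
const allTiddlers = [
  { title: "Alpha", tags: "Project", text: "mentions XYZ here" },
  { title: "Beta",  tags: "",        text: "unrelated note" }
];

// "Export all tiddlers that mention XYZ in any field":
const matches = t => Object.values(t).some(v => String(v).includes("XYZ"));
const json = exportByFilter(allTiddlers, matches);
console.log(JSON.parse(json).map(t => t.title)); // [ 'Alpha' ]
```

The resulting JSON string is exactly the kind of payload a drag-and-drop icon can hand to a destination wiki's importer.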

Compound tiddlers

There is a newer, simpler multi-tiddler format called compound tiddlers. It can’t handle all tiddlers, but it can handle most straightforward ones. This is somewhat new and is used by the Innerwiki and testcase plugins.

Thanks, but when I stop procrastinating and get off my computer I am moving home (house sitting), so I will not get to it right away.

1 Like

Thanks for your reflections – I’d forgotten there was a way to export a selection from advanced search, it’s not a feature I use often.

I hadn’t actually thought about giving the agent export abilities, but I think it could definitely be useful – “find and export all tiddlers that mention XYZ in any of their fields” could be a useful command to issue in natural language. I’ll look more into how it currently works and mull over what the function would look like.

I was not familiar with this format yet, I’ll have to stew on this to wrap my head around the possibilities.

Minor update:

$__plugins_NoteStreams_Expanded-Chat-GPT (0.9.9).json (199.5 KB)

Fixes the overly strict validation system, which might flag requested field modifications (especially to the text field) as erroneous.

1 Like

Actually, it was pretty straightforward using the built-in mechanisms, once I looked into them:

$__plugins_NoteStreams_Expanded-Chat-GPT (0.9.10).json (206.8 KB)
Allows for export commands

Seems to work fairly well

Also works with CSV and .tid – just tried “export story river” and it exported the entire wiki, so I’ll take a look at improving that next time I sit down with it.

Still working on its integration, but for anyone following, a little preview of what’s to come:



EDIT:

Cat’s out of the bag – anyone following this conversation should reference WikiSage for future updates

1 Like