JSON Tiddlers -- Ease of Use

Given that JSON is a fundamental base structure in TW, I’d guess the more one understands JSON, the more one can do?

Of course there is a self-protection in TW against auto-cannibalism.
Meaning: its data store is itself IN JSON, so activity IS limited to in-tiddler?

That said, it is interesting to consider one TW eating another.

Meaning: parse and manipulate the JSON data store of one TW from inside a tiddler in another TW?

Just a comment
TT

This can already be done. Although I have not done it recently, you can either import one TiddlyWiki into another, or treat it as a file containing JSON and extract tiddlers from it, and this is even without coming at it via the Node file system.

It is an interesting observation to consider along with its Non-trivial Quine nature.

If you constructed a tiddler containing another tiddler inside it, in the same form as it is stored in the HTML file, I doubt it would be a problem, as there is some encoding that happens. Also keep in mind that the tiddlers are loaded from the HTML into browser memory, then saved back when requested. So as a rule we manipulate tiddlers in memory.

This JSON form of tiddler encoding can be used to construct PLUGINS, which are read into memory as shadow tiddlers. Eric recently extended his SaveAs solution to allow filtered tiddlers in Advanced Search to be pumped into a plugin, an import tiddler, or a JSON tiddler. We can import JSON files while avoiding the Import process using a custom $browse widget, or by interrupting the Import mechanism. Add the recently discussed InnerWiki, the ability to store and export tiddlers/files in a zip file, and my own work on no-touch flags, and we have a plethora of interesting possibilities.

  • TiddlyWiki is the only tool I know that can build its own SDK (Software Development Kit)/platform.

As I noted recently in a discussion with @EricShulman, JSON containing tiddlers allows you to take tiddlers out of the tiddler store and save a version, among other tricks. Imagine: you could save a day’s work in one tiddler.

I’m afraid that this is a misconception.

TW’s basic internal data structure is a collection of tiddler objects. Those tiddler objects themselves are hash maps/dictionaries, that is, mappings between string names and string values.

TW’s basic interface is either a single string or a list of strings (titles or otherwise). That is what we use to display content and what we use to query and manipulate that internal data store.
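
For illustration, a single (hypothetical) tiddler in its exported JSON form is exactly such a mapping; note that even the age is stored as a string:

{"title": "Jill", "age": "19", "eyes": "hazel", "tags": "Person"}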

That is true even for operations that appear to involve other data types: the multiply operator accepts a list of strings (perhaps a list of only one) that might represent numbers and a separate string that might represent a number, and returns a new list of strings that represent numbers. 1 2 3 4 5 +[multiply[5]] is a list of the strings “1”, “2”, “3”, “4”, and “5” along with the multiply operator and the parameter “5”. It returns the list of strings “5”, “10”, “15”, “20”, and “25”, usually stored as a single space-separated string.
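
Dropped into a tiddler as a filtered transclusion, that run displays the five result strings (still strings, just ones that happen to look like numbers):

{{{ 1 2 3 4 5 +[multiply[5]] }}}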

JSON is not a data structure. It is the transport mechanism for a data structure, a single string format for passing simple or complex data structures between systems. On your file system, or on the other end of a web request, we do not find TW’s collection of dictionaries. That would make no sense; such a structure is an in-memory construct. Instead we find a string to represent that structure, one that is passed to your browser and parsed into the actual structure when TW starts.

This is not just picking nits. There is a very important word in that last sentence: “parsed”. To load JSON into TW, we first need to parse it. This is a much more time-intensive process than the manipulations we perform on the in-memory object store. We want to do that as infrequently as possible. Similarly we want to serialize the data back to a string only when we save. It’s less expensive than parsing, but still far more expensive than working with our in-memory model. JSON was built to be as inexpensive as possible to parse and serialize from JS, but it is still tremendously faster to work with the in-memory structures.

The same is true for any in-wiki JSON. Part of the reason that it is hard to work with is that when we store data in this transport format, every query and every manipulation involves parsing, then the work we want to do against an in-memory structure, then, often, another serialization.
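
As a minimal sketch, assuming a hypothetical data tiddler called my-data whose text field is a JSON string:

{{{ [{my-data}jsonget[someKey]] }}} <!-- reading one value still parses the whole string -->
{{{ [{my-data}jsonset[someKey],[new value]] }}} <!-- parses again, changes one value, then serializes the whole structure back to a string -->

And even then the result of jsonset is just a new string; writing it back into the tiddler is yet another step.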

We might want to suggest that this could be cleaned up by adding a system that somehow stored such JSON data in memory, only parsing it on startup and only serializing it on save. We could. But then we’d need an interface for manipulating it in memory; we wouldn’t want to require users to be JS experts.

We already have such a system for working with in-memory data. That’s what our filter language is for! Building a second, parallel system offers very little bang for its buck.

This is why some of us are recommending using JSON tiddlers very sparingly. They can’t become easy to use without a lot of work. But for the most part, we can convert our JSON constructs into collections of tiddlers, which we can work with easily with existing tools.
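
For example (hypothetical titles and field names), if each record becomes an ordinary tiddler tagged Person with its own age and eyes fields, the existing machinery applies directly, with no parsing step:

{{{ [tag[Person]] :filter[get[age]compare:number:gteq[21]] }}} <!-- everyone at least 21 -->
{{{ [tag[Person]eyes[hazel]] }}} <!-- everyone with hazel eyes, via a plain field operator -->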

All the above also answers the following:

Right.

For mortals, an example might be useful.

Footnote: “Eating” a TW inside another in one tiddler is likely easier when the “eaten” TW is one that externalizes its core. Less weighty.

It’s Robinson Crusoe all over again.

TT

I thought the TW data store was structured in JSON?

I guess you’re trying to go further.

Right now I’m confused.

TT

No. Again, that is what I called the transport mechanism. There’s probably a better term, since it’s also what’s stored in the TW file. But it’s not what’s in memory as the wiki operates. Think of this string:

{"name": "Jill", "age": 19, "eyes": "hazel"}

What sorts of things can you do to it as a string?

You could take the first ten characters:

{"name": "

You could upper-case all the characters:

{"NAME": "JILL", "AGE": 19, "EYES": "HAZEL"}

You could reverse it:

}"lezah" :"seye" ,91 :"ega" ,"lliJ" :"eman"{

You could apply some regex, and do a number of other things.

But in order to answer the question “Is this person over 21 years old?”, you will need to parse it into an actual in-memory object. In order to say, “No, her eyes are actually green,” you would need to parse it, make the change, then serialize it back into a string to save in the text field of the tiddler in question.
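
In TW terms, here is a sketch you could paste into an otherwise empty tiddler; the macro just holds the string above, and every use of it re-parses that string:

\define jill() {"name": "Jill", "age": 19, "eyes": "hazel"}

Over 21? {{{ [<jill>jsonget[age]compare:number:gteq[21]then[yes]else[no]] }}} <!-- parses the whole string just to read one value -->
Corrected string: {{{ [<jill>jsonset[eyes],[green]] }}} <!-- parses, changes one value, serializes everything back -->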

The text you quoted is like this. It’s what TW can parse to create its in-memory store. And TW can serialize its store to save the wiki. But it’s not what is used to run the wiki. If every TW interaction involved reparsing and/or reserializing the data, it would be extremely, paralyzingly, sssssssslllllllloooooooowwwwwwww.

At a smaller scale, that is what we’re doing whenever we use JSON tiddlers. We first parse, then manipulate the JSON data, and then, if we’re updating, serialize it back to a string to save. If the equivalent data were in tiddlers, we would have only done this parsing once, and only done the serializing on save.
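
With Jill as an ordinary tiddler carrying age and eyes fields (hypothetical, of course), the same two operations need no JSON handling at all:

{{{ [[Jill]get[age]compare:number:gteq[21]then[yes]else[no]] }}} <!-- over 21? -->
<$action-setfield $tiddler="Jill" eyes="green"/> <!-- correct her eye colour in place (inside a button or other action trigger) -->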

It’s not that JSON is never appropriate. For deeply nested structures, it might well be your best bet. I’m all in favor of adding the jsondelete operator: it’s necessary to be able to use JSON well. Although I’ve tried to demonstrate that no such tool can ever cover all potential JSON, I have no objection to adding some operators to allow you to use keys such as mother.address.city instead of the three separate parameters. But I don’t think it’s worth spending a great deal of time on optimizing the usage of JSON. Most uses of JSON tiddlers, in my opinion, would be better written as a collection of standard tiddlers.
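
For the record, the nested addressing that exists today uses one parameter per level; the dotted form would just be sugar for it. A sketch, with made-up data, pasted into an otherwise empty tiddler:

\define family() {"name": "Jill", "mother": {"address": {"city": "Springfield"}}}

{{{ [<family>jsonget[mother],[address],[city]] }}} <!-- one parse, three indexes down -->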

That’s right, but it is used to store our data as serialized text, which is machine readable and somewhat human readable too.

As Scott wrote. At the wiki startup we need to take that string and parse it. Since JSON is (now) used all over the web, all browsers support parsing and serializing JSON strings natively. So it has good performance, but is slow compared to “in memory” object handling.

I can 100% agree with that. Using data tiddlers that contain a structure that looks like a tiddler is like saying: “Why make it simple, if it can be complicated?”


IMO the JSON operators have been primarily designed to be used with the tm-http-request message.

They should allow us to read data from 3rd party sites as JSON and convert them into tiddlers, so we can easily work with them.
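
A rough sketch of that flow. The URL is made up, and the parameter and variable names here (oncompletion, data) are taken from my reading of the tm-http-request docs, so verify them there:

\procedure fetch-completed()
<$action-setfield $tiddler="fetched/users" type="application/json" text=<<data>>/>
\end

<$button>
<$action-sendmessage $message="tm-http-request" url="https://example.org/api/users.json" method="GET" oncompletion=<<fetch-completed>>/>
Fetch JSON
</$button>

Once the response is in a tiddler, you can pull pieces out with the json* operators, or better, convert it into ordinary tiddlers and drop the JSON handling from then on.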

It is true that working with JSON operators should be easier, especially if the 3rd-party structure is completely different from what we need.

But IMO working with “tiddlers” inside a data tiddler is the wrong way to go, except to use them as a transport medium for export/import.

This is a good point. I’m sure there are other appropriate reasons too, but this is the best reason I’ve seen for using JSON operations in TW.

Okay. From my user point of view, I get confused. I am now hearing two stories.

  1. TW data is stored in JSON

  2. AFTER that there can be complications of the @Scott_Sauyet kind?

How, practically, should I respond?

Seems a bit complex?
TT

p.s. My complication got complicated …

That’s right. It is parsed once at startup and is serialised once on save. All the time in between we work with JavaScript objects that are handled efficiently in the JS engine.

In contrast, all JSON operators have one function in common (see line 29).

Every time we use e.g. jsonextract, the input data needs to be parsed, and the result needs to be serialised for the output (see line 33).

On the input side of every other json* operator there is a new parse step. … and so on.
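
A sketch of what that means in a filter, assuming a hypothetical data tiddler my-data with nested JSON in its text field:

{{{ [{my-data}jsonextract[mother]jsonextract[address]jsonget[city]] }}}
<!-- step 1 parses the whole text and serialises the "mother" object back to a string,
     step 2 parses that string and serialises "address" back to a string,
     step 3 parses again just to read one value -->

A single jsonget with several indexes, [{my-data}jsonget[mother],[address],[city]], at least keeps it to one parse, but it is still a parse on every single use.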

For small data structures that’s OK, but it seems users want to create their own tiddler data store. Doing it that way is complex and completely inefficient. A data tiddler is no database.

Converting JSON data into tiddlers once and then using tiddlers is, IMO, much more efficient.
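
A minimal sketch of that one-off conversion, assuming a flat JSON object in a hypothetical tiddler my-json-data (real data will usually need more mapping than this):

<$button>
<$let data={{my-json-data}}>
<$list filter="[<data>jsonindexes[]]" variable="key">
<!-- one action per top-level key: creates or overwrites a tiddler imported/<key> whose text is that key's value -->
<$action-setfield $tiddler={{{ [<key>addprefix[imported/]] }}} text={{{ [<data>jsonget<key>] }}}/>
</$list>
</$let>
Convert to tiddlers
</$button>

After that, everything is ordinary tiddler and field filtering; no further json* steps are needed.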

I did create a write-up in GitHub Discussions 4 years ago: How the TW internal data structure looks like and why data-tiddlers are not optimal · TiddlyWiki/TiddlyWiki5 · Discussion #6116 · GitHub, which goes a bit deeper and describes a data-dictionary tiddler. But the problem with the text-index parsing is the same as with JSON data in the text field.

With a career in IT behind me, I can see what is happening here yet again.

It takes time to understand JSON, and more so when there are gaps in the tools and documentation. Even then, once you understand it, it needs to be understood in the perspective of the core and how it is used in TW and code environments.

So if you are aware of TW in general but not JSON, you will look around to fill a few gaps in the tools you have.
If JSON is not the answer, we need to first understand the questions.

The premise we need to start with is that there are gaps we need to fill and we need to find out how to fill them, rather than trying to resist the demand and block progress.

:thinking:

I think you’re misunderstanding the fundamental nature of the problem here. Sure, there are some gaps that should be filled. jsondelete, for instance, is an essential companion to the other json* operators! But what I am trying to point out (and I would guess that @pmario agrees) is that the very basics of how JSON works are so at odds with the core of TiddlyWiki that an attempt to fully integrate JSON tiddlers with the rest of TW would be a matter of a full core rewrite, and the resulting tool would likely not be entirely compatible with current TW. It would overturn our “everything is a tiddler” mantra. It would require totally new data storage mechanisms. Its deeply nested nature would most likely mean a hugely expanded collection of filter operators. It’s a gigantic undertaking. And for little benefit, as most anything we can do with JSON tiddlers we can also do with normal tiddlers.

I’m not merely suggesting that such tools would be too hard to write. I believe that such a tool would need a brand new name because it’s too different from current TW to be called the same thing.

computer says no?

I am not even saying the solution needs to be JSON. The solution may be adjacent, but we need a solution, as is evidenced by related questions popping up all over the place.

Let us find the root causes without dismissing people’s needs, perceived or otherwise.

This is a really interesting discussion!

To me it kinda illustrates a difference between folk (you & Scott?) deeply aware of how TW’s fundamentals work (“basement thinking”) and end users neither interested in, nor capable of, working in the basement.

Your data-dictionary comment really illustrates the polarity.

For practical purposes it is entirely healthy and functional to see data-tiddlers as databases.

Why not?

Just a comment
TT

So? I’m trying to understand, practically, where this thread comes out.
Are there any changes you are suggesting?

Just a comment
TT

Mantra: OM MANI PADME HUM (Tibetan style)

I’m sorry, but I thought this thread was about JSON. It started with @Eskha’s list of very specific “how do I do X with JSON?” questions and has proceeded to discuss JSON in every post.

If you want to talk about some abstraction from there, might I suggest you start a new topic?

I’ve put in a lot of effort to thoroughly explain deep objections to expanding TW’s core JSON capabilities. To hear it suggested that I’m merely being dismissive is disheartening.

@Scott_Sauyet, it looks like you inadvertently edited @TiddlyTitch’s post instead of replying to him; would you mind reverting to his original post please?

Fred

Oh damn. I won’t try to fix it from my phone; that was probably the cause of the problem in the first place. But I’ll try to straighten it out ASAP.

Thanks for the heads-up.

Edit: fixed. Sorry about the mess up. And again, @tw-FRed, thanks for catching it.

Except for the minor addition of jsondelete, I, for one, am not suggesting any changes.

Obviously, people can add their own tools to work with their particular JSON formats, but I am trying to demonstrate that there is no reasonable approach to integrating more general JSON-related tools into the core. A superficial integration would be very slow because of the parse-manipulate-serialize cycle on every transaction. A deeper integration would require a radical restructuring of the core and a huge expansion of the operator space, with very little benefit to end users.

@TW_Tones seems to find my responses dismissive. I don’t accept that. I’m trying very hard to explain why I see the suggested expansions as a problem. I am certainly not trying to appeal to my TW expertise.

And I reject being counted among the folk deeply aware of the basement. Mario clearly has a deep knowledge of the core, alongside @saqimtiaz and @jeremyruston, and, I think, in a slightly different way, @EricShulman. I’m not a novice there, but I barely make it to intermediate.

I am, though, an advanced JS developer. I understand the capabilities of the language TW is written in, and I have nearly three decades of experience with it. I’ve been trying to explain (and not simply assert) the fundamental difference between TW’s tiddler store, which is a functional database, and JSON, which is primarily a storage/transport format.

When @pmario says that a data tiddler is not a database, it’s like saying that sheet music doesn’t groove or swing or jive. It takes musicians “parsing” it to bring it to life. Yes, those who read JSON/music notation can get some sense of what the parsed information would be like, but it’s nothing like attending a concert.

Damn it, Titch, now you’ve got me humming everything from Hair!

If you are saying JSON is not going to work or should not be used, don’t be surprised if the conversation strays.

@Scott_Sauyet, I have valued your contribution, but your recent responses suggest you are not hearing what I said, nor being generous in your interpretation of what I have said. It is irrelevant whether you think you have not been dismissive if another feels you have. Let us just try to listen a little more deeply.