I’m afraid that this is a misconception.
TW’s basic internal data structure is a collection of tiddler objects. Those tiddler objects themselves are hash maps/dictionaries, that is, mappings between string names and string values.
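A minimal sketch of that shape (the titles and field values here are invented for illustration, not TW’s actual internals): the wiki is a collection of tiddlers keyed by title, and each tiddler maps string field names to string values.

```javascript
// Hypothetical wiki contents; every field value is a string,
// even the ones that "look like" dates or lists.
const wiki = {
  "HelloThere": {
    title: "HelloThere",
    text: "Welcome to my wiki",
    tags: "Intro Welcome",
    created: "20240101120000000"
  },
  "TableOfContents": {
    title: "TableOfContents",
    text: "A list of pages",
    list: "HelloThere About"
  }
};

console.log(typeof wiki["HelloThere"].tags);    // "string"
console.log(typeof wiki["HelloThere"].created); // "string"
```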
TW’s basic interface is either a single string or a list of strings (titles or otherwise). That is what we use to display content and what we use to query and manipulate that internal data store.
That is true even for operations that appear to involve other data types: the multiply operator accepts a list of strings (perhaps a list of only one) that might represent numbers and a separate parameter string that might represent a number, and returns a new list of strings that represent numbers. 1 2 3 4 5 +[multiply[5]] is a list of the strings “1”, “2”, “3”, “4”, and “5” along with the multiply operator and the parameter “5”. It returns the list of strings “5”, “10”, “15”, “20”, and “25”, usually stored as a single space-separated string.
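The contract of such an operator can be sketched like this (this is the shape of the behavior, not TW’s actual implementation): parse each input string as a number, do the arithmetic, and hand back strings again.

```javascript
// Sketch of a multiply-style operator: strings in, strings out.
// Only the arithmetic in the middle touches actual numbers.
function multiply(titles, param) {
  const factor = parseFloat(param);
  return titles.map(title => String(parseFloat(title) * factor));
}

const result = multiply(["1", "2", "3", "4", "5"], "5");
console.log(result);           // ["5", "10", "15", "20", "25"]
console.log(result.join(" ")); // "5 10 15 20 25"
```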
JSON is not a data structure. It is the transport mechanism for a data structure, a single string format for passing simple or complex data structures between systems. On your file system, or on the other end of a web request, we do not find TW’s collection of dictionaries. That would make no sense; such a structure is an in-memory construct. Instead we find a string to represent that structure, one that is passed to your browser and parsed into the actual structure when TW starts.
This is not just picking nits. There is a very important word in that last sentence: “parsed”. To load JSON into TW, we first need to parse it. This is a much more time-intensive process than the manipulations we perform on the in-memory object store. We want to do that as infrequently as possible. Similarly we want to serialize the data back to a string only when we save. It’s less expensive than parsing, but still far more expensive than working with our in-memory model. JSON was built to be as inexpensive as possible to parse and serialize from JS, but it is still tremendously faster to work with the in-memory structures.
The same is true for any in-wiki JSON. Part of the reason that it is hard to work with is that when we store data in this transport format, every query and every manipulation involves parsing, then the work we want to do against an in-memory structure, then, often, another serialization.
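Here is a sketch of that round trip for a single small update (the tiddler shape and the incrementCount helper are hypothetical; the point is that steps 1 and 3 are pure overhead):

```javascript
// A data tiddler whose text field holds a JSON string.
const dataTiddler = {
  title: "MyData",
  type: "application/json",
  text: '{"count": "1"}'
};

function incrementCount(tiddler) {
  const data = JSON.parse(tiddler.text);             // 1. parse the string
  data.count = String(Number(data.count) + 1);       // 2. the actual work
  return { ...tiddler, text: JSON.stringify(data) }; // 3. serialize again
}

console.log(incrementCount(dataTiddler).text); // '{"count":"2"}'
```

If the data were kept as an ordinary in-memory object, step 2 would be the whole job.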
We might want to suggest that this could be cleaned up by adding a system that somehow stored such JSON data in memory, only parsing it on startup and only serializing it on save. We could. But then we’d need an interface for manipulating it in memory; we wouldn’t want to require users to be JS experts.
We already have such a system for working with in-memory data. That’s what our filter language is for! Building a second, parallel system offers very little bang for its buck.
This is why some of us are recommending using JSON tiddlers very sparingly. They can’t become easy to use without a lot of work. But for the most part, we can convert our JSON constructs into collections of tiddlers, which we can work with easily with existing tools.
All the above also answers the following: