I haven’t had time to follow the whole thread.
I’ll just chime in that there are lots of different kinds of questions, and they may require different kinds of tiddler-structures (of course).
Here’s a set of what I call “Multi-choices” questions, from a fully functioning higher-ed wiki that has been under active development for years.
They probe for deep comprehension of readings, and are superficially structured like “multiple choice” questions, except that there’s no process of elimination! Any subset of responses (any number, potentially including none or all) can be correct. So you could also think of each question as a cluster of true/false questions, independently scored, but with five at a time (on my model) sharing the same thematic framing.
(The pedagogical situation is that these are initially discussed in groups, under “open book” classroom conditions, though it’s impossible to do well in real time without advance preparation. Often the juxtaposition of alternative responses to the same basic prompt helps to foster discussion: one student recognizes a claim that was explicit in the text, another notices that a certain option exaggerates, or smuggles in a concept that’s really accurate only for a prior reading, etc.)
Right now I’ve set up the site only for self-quizzing, and for display (and printing) in a classroom setting, so that answers are available for discussion immediately after papers are submitted. I’ve also printed customized exams using list-conditions based on this format.
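For context, that classroom display (and the starting point for a printed exam) is really just a filtered list that transcludes question tiddlers. A minimal sketch, assuming the questions carry a `Question` tag plus a per-reading tag like `Reading-03` (both tag names are invented here):

```
<!-- Display tiddler (hypothetical): pull in every question for one reading -->
<$list filter="[tag[Question]tag[Reading-03]sort[title]]">
  <$transclude mode="block"/>
</$list>
```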
I’ve not tried to set up anything like a procedure that would allow this to function as an online exam or at-home quiz (say, for a student who has prepared for class but can’t attend due to illness or emergency).
These questions (with answers) are currently each packed into a single tiddler (as @Charlie_Veniot prefers). Their order and the associated details widgets are hard-coded in the body of each tiddler, though the fine-grained contents are actually articulated within an old FileMaker database, which churns out the TiddlyWiki body text.
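To make that concrete (the wording and numbering below are invented; only the shape matters), the body the database churns out runs roughly along these lines, with the option order fixed and a disclosure element hard-coded after each option. I’m showing plain HTML5 `<details>`, which TiddlyWiki renders as-is:

```
Q 3.07: Which of the following claims does the author explicitly defend? (Any number of options, from none to all, may be correct.)

(a) Perception is theory-laden.

<details><summary>verdict</summary>TRUE: asserted directly in the opening section.</details>

(b) Observation reports are infallible.

<details><summary>verdict</summary>FALSE: this smuggles in a claim that fits only the previous reading.</details>

<!-- options (c) through (e) continue in the same hard-coded pattern -->
```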
I’ve considered refactoring into a more fine-grained structure in TiddlyWiki because of the structural complexity of the five multi-choice options. It would clearly be neater to articulate the data explicitly (either into carefully named fields, or into separate tiddlers), and to be able to shuffle the order of answers, or tweak their presentation details, through template-level changes. Further, if I ever want an automated process for selecting and scoring responses, the current structure would be insufficient, since self-evaluation on the current model is simply a matter of verbal/visual recognition, available to the student through each tiddler’s detail-disclosure GUI.
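If I do refactor, one direction would be to give each question tiddler hypothetical fields like `option-1` … `option-5` and `verdict-1` … `verdict-5`, and let a template do the presenting. A rough sketch (the field and template names are mine, not anything that exists yet):

```
<!-- QuestionTemplate (hypothetical): renders the five options of the current question tiddler -->
<!-- To randomize the order, the "1 2 3 4 5" run could be piped through the core shuffle operator, if your version has it -->
<$list filter="1 2 3 4 5" variable="n">
  <p>(<<n>>) <$transclude field={{{ [<n>addprefix[option-]] }}} mode="inline"/></p>
  <details><summary>verdict</summary>
    <$transclude field={{{ [<n>addprefix[verdict-]] }}} mode="inline"/>
  </details>
</$list>
```

With the verdicts in their own fields, an automated scoring step (say, checkboxes writing student responses to a temp tiddler, then compared against the verdict fields) becomes at least imaginable, though I haven’t built any of that.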