Work On A Rate Tiddlers Plugin

Hi, with a very large TW knowledge base a few things continually occupy my mind.

  • How to make sure some tiddlers don’t rot at the bottom of the pond, by having a method of signalling that a review is required at intervals of N days - this resulted in my review plugin, which I have been using for a while now.

  • Recognising that the value of a tiddler (an item of knowledge, in my case) is not always accurately indicated by a short review interval - I may, for instance, remember the content of a very valuable tiddler reliably and so not need to review it periodically.

  • Always mindful of what I call value inflation - it’s easy to get excited about a new or recently reviewed item and assign it a high value; then the next exciting thing comes along and is given an even higher value, and before you know it there is too much clustering around the ‘high score’ values. An analogy: sports and dance judges need to be careful awarding tens, because they have nowhere else to go when someone even better than the best so far comes along. On this note, my review tiddlers plugin now has an anti-inflation device: there are options to increase or decrease the review interval for all tiddlers by one day. Typically I increase by one day (within a range of 1 to 100 days), and so this works in cycles against the inevitable inflation, which tends towards shorter review intervals - it is in effect a counter-inflation device.

So anyway, I have started (well, almost completed) work on a new ratings plugin. I wanted fine granularity, so the rating lies in the range [1, 100], where 100 represents the most valuable tiddlers.

Here are some screenshots; they all show the rating plugin working alongside the existing review tiddlers plugin.

The first slider controls how many rated tiddlers will appear in the story river - call this value ‘N’.
The second slider controls what I am calling the breakpoint rating value - call this value ‘B’.

The upwards-pointing double chevron will fill the story river with N tiddlers whose rating is greater than or equal to B, ordered down the story river by decreasing rating value - BUT if there are more than N tiddlers satisfying this criterion, then the list is truncated at the high-value end. Put simply, if you ask for tiddlers with a rating of 85 or more and some actually have the value 85, then those will appear at the end of the story river, and the highest-value tiddlers will be omitted if necessary.

The downwards-pointing double chevron fills the story river with tiddlers whose ratings are closest to, but not more than, B, in decreasing order.

The overall pattern is therefore to think of all rated tiddlers ordered by rating value with the highest at the top, to select a breakpoint value, and then to take N tiddlers going up or down from there.
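
For anyone curious about the mechanics, this kind of selection can be expressed with TiddlyWiki’s core filter operators. A minimal sketch, assuming the rating lives in a `rating` field and that `N` and `B` arrive as variables (an illustration of the idea, not necessarily the plugin’s exact filter):

```
<!-- Up chevron: N tiddlers rated >= B, descending, truncated at the high-value end -->
[has[rating]] :filter[get[rating]compare:number:gteq<B>] +[!nsort[rating]last<N>]

<!-- Down chevron: N tiddlers rated <= B, starting closest to B -->
[has[rating]] :filter[get[rating]compare:number:lteq<B>] +[!nsort[rating]first<N>]
```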

It’s designed that way to avoid having two sliders to indicate a range, and also because my other review plugin has a similar slider for N - I wanted consistency, but I didn’t want three sliders in all.

The ‘display a randomly selected list of non-rated tiddlers’ button is simply a way for me to periodically examine which tiddlers should ideally join the ‘rating pool’.
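
In filter terms the candidate pool for that button is just every non-system tiddler without a rating - a core-only sketch (true random sampling needs a little extra machinery, so this stand-in simply takes the first twenty):

```
[all[tiddlers]!is[system]!has[rating]first[20]]
```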

The above image shows a couple of tiddlers with the rating icon displayed. The icon shows the rating value (in this case 100 and 50), and its colour changes with the rating value. The inc and dec buttons are used to increase or decrease the rating value.

If a tiddler has no rating yet, the rating icon appears ghosted grey, without the inc/dec buttons. The tiddler is rated simply by clicking on the rating icon; it then gets a default value of 50 and the inc/dec buttons appear.

The green bell icon is part of the review plugin and has similar controls.

If anyone else has a need to manage a large number of tiddlers (a knowledge base or otherwise), I can post the plugin here. The first version is working; I am just refining the documentation and so on.

I will probably need to add the usual anti-inflationary measure - in this case, functionality to increase or decrease the rating of all rated tiddlers by one, staying within the range [1, 100] of course.
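
For the curious, such a sweep is short in wikitext. A minimal sketch, assuming a numeric `rating` field (the field name and button text are illustrative, not the plugin’s actual code):

```
<$button>
  <$list filter="[has[rating]]">
    <!-- subtract 1 from each rating, clamped to the bottom of the 1-100 range -->
    <$action-setfield $field="rating"
      $value={{{ [<currentTiddler>get[rating]subtract[1]max[1]] }}}/>
  </$list>
  Decrease all ratings by 1
</$button>
```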

I have only been using the new ratings plugin for a few days, but already I can see it works for me slightly differently to the review plugin - they are different aspects of the same fundamental need to somehow rank over 2000 tiddlers, some of book-chapter length (OCR scans). It is somehow more meaningful in some situations to fill the story river with, say, 100 tiddlers (all but the shortest have a ‘reveal more’ mechanism to minimise screen space), then to say “Hmmm, that one appears above this one - do I really think this is the correct importance ranking?” and perform the relevant promotion-demotion swap. Somehow that doesn’t work as well with review intervals, for reasons already stated.


Looks interesting, thanks for sharing. I’m still thinking about how to approach the “rating” of tiddlers in my work-related notes. One downside of your rating system, as you explained, is that the user has to decide on the value, and that brings the problem of ratings inflation and the need to counter it.

An idea I had recently was to implement rating based on actual uses/views of a tiddler.

Tracking views would require no interaction from the user, but it is more complicated to implement and is biased by openings of tiddlers that turn out not to be the ones needed.

So instead, I was thinking of having a “this is useful” button on tiddlers. Clicking it would increment the value of a uses field, and set a last-used field to now. A minimal time interval between allowed consecutive uses would be defined, e.g. 1 hour.
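
A rough sketch of the button I have in mind, with hypothetical `uses` and `last-used` fields (the one-hour cooldown check is omitted for brevity - it would compare `last-used` against the current time before allowing the increment):

```
<$button>
  <!-- bump the use counter, treating a missing field as 0 -->
  <$action-setfield $field="uses"
    $value={{{ [<currentTiddler>get[uses]else[0]add[1]] }}}/>
  <!-- record when the tiddler was last marked useful -->
  <$action-setfield $field="last-used" $value=<<now "YYYY0MM0DD0hh0mm0ss">>/>
  This is useful
</$button>
```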

Advantages:

  • Value of a tiddler based on actual uses, not (potentially biased) decision on assigning a value.
  • Easy decision to make: “this tiddler is useful to me now”, compared to “on a scale 1–100, how much value is this tiddler worth”.

Disadvantages:

  • Requires frequent interaction from the user.
  • The values are accumulated with time, so there’s no immediate information from such a system.
  • Not possible to discern between new valuable tiddlers and very old mediocre ones.
    • Could be countered by registering all uses and then calculating the value as the number of uses in a given recent time interval. To reduce the amount of data, use entries could be aggregated by day or week.

@jonnie45 this type of approach would solve some of the issues you mentioned (and introduce others). Have you found similar approaches discussed or implemented?

I know this is a big digression from the OP, but I don’t think my idea is developed enough to make a separate thread out of it. Please let me know if it belongs in a separate thread.

Hi Vilc,

Interesting to discuss the idea of a value assessment vs an ‘aggregated this was useful in the moment’ value.

I had considered some kind of scheme like this in the past, as I have been toying with this whole area for a year or so; I went through various ratings and favourites plugins which did not do quite what I wanted.

Returning to the ideas you floated…

  • I imagine some sophistication would be required; for instance, a new tiddler would be at a disadvantage compared with one that was 3 years old - indeed you point this out in your disadvantages.

  • It would still suffer from inflation; if the numbers are not to get arbitrarily high you might need some kind of normalisation, and I suspect that might have a bearing on the first point. I find myself thinking of using floating-point numbers and dividing through by elapsed time to get a final value, but it is probably more complicated than that.

As far as the review plugin goes, I am finding that the anti-inflationary measure of adding one day to the review interval for all tiddlers in the review pool works very well. The review plugin creates work because, unlike a value score, it actually invites me to do the review. I am often behind and have a list of 200 or more tiddlers to review. Over time I click on the button that increases the review interval for all tiddlers, until I find I reach some kind of equilibrium. Each time I do this I review all tiddlers with a short review interval and decide whether I should reverse the action just taken for that particular tiddler - so after anti-inflation I review specific ‘prices’ that need to increase again, but this time only for the short-interval tiddlers that have some perceived extra value or that I keep forgetting.

So it’s a kind of multiphase sifting process which starts with a software time-saving step and then flows into a manual adjustment phase, to try to ensure that the really deserving cases end up where they belong whilst keeping a manageable daily workload.

The anti-inflationary measures for the rating plugin will likely follow a similar pattern, but this time the driver will be the accuracy of the higher ratings rather than trying to manage a self-imposed, or rather software-imposed, workload. The ratings plugin does not directly suggest I do some work, but the review plugin does.

I don’t see any of this as “fire and forget” - our understanding and memory are fickle beasts, and ratings from today will not necessarily serve as ratings for tomorrow, so I see the anti-inflationary measures only as a time-saving device to re-baseline everything, after which the fine tweaking starts. The tools designed to fill the story river with a range are therefore just as important for finding tiddlers recently affected by an anti-inflationary sweep that need re-adjustment.

Maybe other people would not suffer from inflationary tendencies, but I doubt it - we do tend to get excited by the latest thing and sometimes need to revisit an older thing to realise just how good it really was.

I do find long term use of these tools affects the mind.

Initially the review plugin meant my brain was overloaded and I entered a phase of brain fog - the software was being too demanding. Then, as I added anti-inflation, it helped; finally I started to look at the list of tiddlers to review as something to chip away at rather than something to get done today, and that phase started to ease off a bit. Changes in software + changes in me.

I read the earlier thread with interest but I had no need for any sort of rating tool at that point. I’m thinking something I’ve got coming up could eventually use this, even if not right away, and I’m intrigued again. Here I’m not really commenting on your new version. I’ll look more at that when you publish it. I’m just thinking through how I might like one of these to look.

To me there are two very distinct ideas. One is rating: tracking the relative importance/usefulness of various tiddlers according to the users. The other is periodic review: making sure that the content does not wither away from neglect.

I’m thinking that, mathematically, they call for different tools.

Rating is subject to inflation and/or to polarization. It’s well known that most rating systems have a lot more 1’s and 5’s than they do 2’s, 3’s, and 4’s. I haven’t thought through the details, but I think one solution to this problem would be to treat the scores supplied as raw data, and to normalize them on some sort of bell curve to give display values. (These scores would be relative to the other tiddlers rather than hard numbers; I consider that a feature, not a bug.) We could do this in a way that minimizes the number of dreaded 1’s and overdoes the number of 5’s. We could even add a reliability metric that allows the scores of newly-edited tiddlers to vary more quickly than those of older ones.
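
To sketch the math only (nothing TiddlyWiki-specific worked out here): one standard way is to rank the $n$ raw scores and push the ranks through the inverse normal CDF, so the displayed value for the tiddler at rank $r_i$ is

$$d_i = \mu + \sigma\,\Phi^{-1}\!\left(\frac{r_i}{n+1}\right)$$

which spreads the display values over a bell curve centred on $\mu$ no matter how clustered the raw scores are.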

But there is a big flaw for me in any of this. Even when I’m planning on a public wiki intended for use by a number of users, I am quite happy with the one-file wiki experience (even if under the covers, I’m serving it over Node.) I like having my server do nothing more than serve up the initial content. A ratings system would need to be interactive. So for now this is just dreaming for me.

Periodic review is a different story; it’s designed for the creators, not the end users. I would like to see this happen in a more automated fashion, with no need for the editor to establish a review period. There is a strong tension between two competing factors here, and I’m not sure how to reconcile them. Recently changed content is often content that needs still more work. If you’ve made changes to a tiddler in each of the last ten days, there’s a good chance that you’ll still need more changes today. But on the other hand, content that hasn’t been touched in many months or years, also should get an occasional look-see. But we need some discounting factor as we get further back. There’s only very slightly more reason to revisit a tiddler not touched in 366 days over one not touched in 365 days.
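
To make the discounting concrete, any weight with diminishing returns would do - for example (my own suggestion, nothing implemented)

$$w(d) = \log(1 + d)$$

for a tiddler untouched for $d$ days: the difference between 366 and 365 days is $\log(367) - \log(366) \approx 0.003$, essentially nothing, while the difference between 30 and 5 days is substantial.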

Perhaps these are better handled by two separate review queues. But I will try to think of an elegant way to combine them into one. Perhaps the queue would look like this:

  • Tiddler 1 (changed 14 times this month)
  • Tiddler 2 (not reviewed for 137 days)
  • Tiddler 3 (not reviewed for 118 days)
  • Tiddler 4 (changed 2 times this week)
  • Tiddler 5 (not reviewed for 94 days)

This would require us to run some activity on edit/save of the tiddler, and I don’t really know how to do that, outside overriding the save button, but I’m sure we could figure that out.

Just some immediate thoughts.

Rating polarization - yes, I experienced that when I created a tag system of 1, 2, 3, 4, 5. I was also very short-sighted in choosing 1 to be the highest value (as in 1st), which left me nowhere to go if I decided I wanted an even higher rating.

The one weapon I have against this is that, with the new plugin, I can now specify a range of rated tiddlers to appear in the story river, and I will doubtless move one up or down relative to its neighbour as a result of seeing them one above the other. If I merely exchange two tiddlers that are separated by a rating difference of 1 then I have not altered the distribution, but if I move only one tiddler up or down then I have changed the distribution - this, combined with the anti-inflation measure, might help avoid or reduce polarization.

Regarding review intervals, my concept of ‘review interval’ is some kind of function of value and of how long my organic memory retains the information in the tiddler. Also, we are prone to think we have remembered the gist of something, but when we read it again we realise we have missed quite a lot of important detail, so I find it’s not a simple matter of thinking “Oh yeah, I remember that one well” - it might be a very selective recall. On top of that we sometimes need a nudge to recall something; sometimes all I need to do is read the title of the tiddler and the rest comes back, but I still needed the refresh to waken the knowledge.

I did start off thinking of frequency of edits or views as somehow an indicator of quality, but I don’t think it is. For instance, a tiddler that is four or five pages of OCR scans from a book might receive a lot of visits as I spot pesky OCR errors or sort out layout, so the visit frequency might have more to do with how many non-obvious OCR errors made it past first checks - spell checkers miss a lot. So I am not usually using the review-interval value as a cue to the likelihood that a tiddler needs more editing.

  • A tiddler of high value which I remember easily => long review interval
  • A tiddler of high value which I forget easily => short review interval
  • A new tiddler where the value or future memory retention may not yet be clear => start off with a short interval and adjust
  • A tiddler of low value => long review interval

I think you are right about polarization; that is one of the reasons I wanted a much finer granularity in the ratings plugin, as I experienced polarization using the 1-to-5 tag system.

The anti-inflation device may help combat polarization at the short interval end of the range as all move down one step periodically but only some move back up. Polarization at the long interval end of the range is less of a concern but I will ultimately need some counter measure.

There is a lot of human psychology involved here and I will probably have to evolve the code as I see what patterns emerge. To take this to the next level and have multiple users assigning value would be a significant step I think, at least I only have to deal with the changing perceptions of “me-yesterday” and “me-today”.

I agree that rating systems are more vulnerable to polarisation than review-period systems. The one major point about the review system is that it creates work for the user, so there is a driver in one direction favouring longer review periods for all tiddlers; against that, of course, the user knows that if they simply aim for long review periods across the board then the system loses its value, so there is a driver in the opposite direction as well - I think this helps balance things.

Certainly I have been running with the review-interval plugin for a while now and so far there is no evidence of clustering, but I expect in the longer term to find clustering towards the long-interval end because of the method used for anti-inflation. It is complex - I can see why you ended up thinking of a bell-curve normalizing step.

Thanks for another perspective - I have been thinking single-user, single file, I can see how more sophistication would be required for a shared system.

Hi Scott,

I created a very crude monitoring tiddler to check on the distributions of both the rating and the review plugin.

First, the new ratings plugin:

14 tiddlers with rating 90-100
10 tiddlers with rating 80-89
13 tiddlers with rating 70-79
18 tiddlers with rating 60-69
23 tiddlers with rating 50-59
12 tiddlers with rating 40-49
2 tiddlers with rating 30-39
0 tiddlers with rating 20-29
0 tiddlers with rating 10-19
0 tiddlers with rating 1-9

And now the review interval plugin (measured in days, max 100):

10 tiddlers with review interval 90-100
19 tiddlers with review interval 80-89
42 tiddlers with review interval 70-79
74 tiddlers with review interval 60-69
162 tiddlers with review interval 50-59
84 tiddlers with review interval 40-49
142 tiddlers with review interval 30-39
171 tiddlers with review interval 20-29
77 tiddlers with review interval 10-19
34 tiddlers with review interval 1-9
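
For reference, each band count in the monitoring tiddler boils down to a `$count` widget over a filtered range - for example the top band (again assuming the rating is held in a `rating` field):

```
<$count filter="[has[rating]] :filter[get[rating]compare:number:gteq[90]compare:number:lteq[100]]"/> tiddlers with rating 90-100
```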

The ratings distribution is not too skewed in the region of 50 to 100 but is sparse in the range 1 to 49. This is not a surprise, because I tend to target the more useful tiddlers when I assign values; in some ways I am not so bothered if low-value tiddlers do not get a value for a long time, maybe never. It would, however, be a mistake to normalise in order to better use the whole range, because I want to leave room for lower-ranking tiddlers as and when I wish to add them.

The review interval plugin has been in use for a number of months now and does have an anti-inflation option. Again it only covers a fraction of the 2000+ tiddlers in my wiki, but I feel it shows a reasonably non-polarized distribution.

Doing this has made me realize that sometimes we don’t really want to rate the lower rankers - perhaps it’s too time consuming, perhaps they require a cull, etc. Maybe the reaction to mediocrity is indecisiveness.

Right now my feeling is that the best course of action is to work on monitoring - perhaps a nice bar graph in each of the sidebars - and to be prepared to focus attention when the distributions seem to warrant intervention, or just to take note and adjust during the natural course of other work.

I think I will proceed in the short term by adding good tools to view the distributions, so that I can manually or semi-manually shape things. I don’t think I am ready to work with heuristic methods to achieve this.

I have found time and time again in this search for better ranking and review tools that we don’t always know exactly what we want; we just sense we need some kind of prioritization of information, and we perhaps underestimate factors in the way we work and think that really should be known and taken into account before trying to work out heuristics.

So I think my immediate course is to model how things work with the existing simple software plus some manual intervention, and to learn from use. It’s sometimes not a bad way of evolving software: manually patch the difficult heuristic parts with human versatility, but still get the software to do most of the time-consuming stuff, and by taking baby steps get a better idea of what it is you are actually aiming for.

Next mission - see if I can get cute little bar graphs into each plugin’s sidebar tab: at-a-glance warning of any clusters or polarization trends.

@jonnie45 thanks for sharing your research and design on this. Some interesting ideas and helpful terminology. I will need to put some time aside to read the thread in full.

There has been a lot of discussion about “spaced repetition” in the community in the past that may interest you.

Personally and professionally I have a lot of experience with information and knowledge management, alongside personal productivity and lifelong learning. So I am very interested in tools to support these in TiddlyWiki, which is an ideal platform for this.

It would be nice to automate this as much as possible, as @vilc raises, but the concepts are the most important thing.

To add to your own, here are a few important concepts to support similar objectives:

  • effective search and linkage to related information
  • analytics that monitors your interaction with the information and feeds it back to you
  • freelinks plugin helps with cross linking, as does backlinks etc…
  • a great deal can be gained from full use of dates and times, such as a touch date stamp, a last-viewed date, the created and modified dates, and comparisons between them.

I appreciate your detailed post but could you share a minimalist user guide to your tool to help wrap it up in a concise statement?

  • Yes please. Hopefully I can return the favor.

Oh you mean my blundering around in the dark work? :rofl:

Yes, the plugin has an about page, so when I have refined that it should be enough as an overview.

Just to clarify.

The review plugin was made available on the forum some time ago, but my personal copy has moved on since then, so perhaps an update to the forum post is in order anyway.

The rating plugin is new, it’s what I am working on now and has not yet been made public on the forum.

Yes, I think this is absolutely true—and a very useful insight—for single-user systems. For community ones, there will always be people interested in venting their negative feelings. And of course there are also those of the persuasion “If you don’t have anything nice to say, don’t say anything at all.” That combination all by itself explains rating polarization.

I am thinking about how I might use ratings, but they would be much later in my process. But I think I want to build in periodic review right from the start, especially once the wiki has multiple editors. It might be grouped together with some sort of work-tracking. “Oh yes, Tiddler 452 really should be reviewed, but I did it last time, so let me assign it to Anne.” This is a new thought for me as I type this, so maybe it’s crazy, but I see it as quite possible.

I look forward to seeing your current progress!

Removed post: premature release of latest version of ratings plugin - something wrong with graph display which only showed in test version (in an empty TW).

Ahhh - Firefox does not allow an SVG rectangle’s height to be set by CSS, as it is an attribute. Apparently this is correct, although the headless Chrome used by TiddlyDesktop does allow CSS to alter rectangle height this way, which is why I didn’t catch it previously. I was using CSS to override default rectangle heights in a bar graph.

Apparently I can achieve what I want by setting bar heights via attributes to some standard height and then using CSS to scale the height.
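
For anyone who hits the same problem, the shape of the workaround looks something like this (illustrative markup, not the plugin’s exact code) - every bar keeps a fixed height attribute of 1, and CSS scales it up from the baseline:

```
<svg width="220" height="110">
  <!-- height attribute fixed at 1; the CSS transform does the real sizing -->
  <rect x="10" y="109" width="20" height="1"
        style="fill: steelblue; transform: scaleY(80); transform-origin: bottom; transform-box: fill-box;"/>
</svg>
```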

Now I think I am ready - here is the latest version of the Ratings Plugin. Remember: try it on an empty TW first!

Changes

Sidebar tab now split into three sub-tabs.

Graph display of the distribution of ratings, to help decide whether “ratings inflation” is happening - down to human psychology, we often want to rate the newest thing higher than the previous thing we were most excited about. There is a feature to decrement all tiddler ratings as an anti-inflationary measure: repeat cycles of anti-inflation, then go back to promoting individual tiddlers where judgement dictates. I have been using this for an extended period now and find it quite natural to go through cycles of inflation and anti-inflation. The hope is that everything eventually finds its place if it has a natural place; the other possibility is that our brains are always in flux, and so any rating system in a knowledge base will be likewise. Either way it is pragmatic and, I feel, natural to work with.

Graph fixed for strict browsers - the bar graph SVG image relied on CSS altering the default height values of each bar (rect) in the graph. TiddlyDesktop’s Chrome allowed me to do this, but testing on Firefox revealed that CSS should not be able to alter rectangle height. The workaround was to set the initial SVG height attributes all to 1 and then use CSS scaling to change the height dynamically.

$__plugins_jonnie45_RateTiddlers.json (46.4 KB)

Please let me know if you discover anything amiss - I am hoping to bring all of my plugins up to a decent standard, with decent documentation and so on. My understanding of my knowledge base and its interaction with these plugins and my mind is maturing, I feel, so I want to get them polished to a level suitable for anyone else out there like me.

My goals with my large knowledge base - 2500 tiddlers varying in length from a paragraph to a book chapter:

  • Ensure that I do not forget important bits of knowledge through lack of review - Review Tiddler Plugin
  • Assign an importance value or rating to each tiddler so that I can prioritize and constantly review what is helping me most - Rating Tiddler Plugin
  • Ability to flag a tiddler when time is short - no specific meaning other than that I want to revisit it soon - Flag Tiddler Plugin.
  • Ability to save tiddlers on story river as a workspace to be revisited at a later date - Workspace Plugin

All plugins display the tiddlers of interest or selection directly in the story river; in other words, I don’t display lists of links to tiddlers in the sidebar but instead fill the story river with the tiddlers the user has selected.


I would like to see the latest version of your Review Plugin.
Would you please post a link?
I am interested in your review process but thinking on a multi-user / reviewer level.

Instead of a link, @jonnie45 posted the plugin itself in the previous message. Just download the file provided and drag it into a wiki you want to try it on - an empty one on tiddlywiki.com, for instance. This is how things are often shared here on talk.

It looks great! And everything seems to work perfectly. However, it’s the kind of tool that really takes some dedicated time to see how well it works, and I don’t have any wikis right now for which it’s appropriate. I will keep it in mind when I do.

I am not a fan of the big empty box for an unrated tiddler, but I don’t have any good alternative.


Here is the updated version of the Review Tiddlers plugin, which I use alongside the Rate Tiddlers plugin mentioned above. Again, please try it out on an empty TiddlyWiki, and please let me know of any issues, as I want to polish these plugins now.

$__plugins_jonnie45_ReviewTiddlers.json (43.1 KB)

Hi Sunny, Scott answered your question - the plugin was in my post. I have also added another post to this thread with the latest version of the Review Tiddlers plugin. The two plugins work well together.

Hi @Scott_Sauyet
I believe the link was for the Rate Tiddlers Plugin, not the Review Tiddlers Plugin I was requesting.

Hi Sunny - both plugins ( Review and Rate ) are now on this thread, let me know if any issues.

Hi @jonnie45
Thank you for providing both Plugins (Rating Tiddlers and Review Tiddlers).
My main interest was in the Review Tiddlers Plugin, which I had downloaded recently, but on seeing that you had revised it further than the version I had, I requested the newer revision.
I will let you know my thoughts on the latest version after giving it a try.
Thanks again

I think the biggest difference you will see in the Review Tiddlers plugin is that the single sidebar tab now has three sub-tabs, so it’s less cluttered. Also, the graph may not have been displaying correctly in some browsers - I was relying on something that is not standard SVG CSS across browsers, but I was unaware of this until today. I now use a different method that relies on standard properties, so it should be OK now.