I’ve been toying with some new environments for interacting with audio in one’s wiki recently, and I’d love to get some perspective on how others are working with sound in their wikis.

I’m not doing this in my wiki so far, but I have a very good recording app on my phone where I record voice memos. I want to import them in bulk as text after transcription, then turn each one into a tiddler and classify it as a note, instructions, todo, or idea, for example.
Hmm, that sounds a bit like my setup: I have a Python script that takes the .txt file my tablet’s annotation system exports, strips out the noise, and creates .json files with all my notes, which I then import.
It reliably applies all the tags and fields I’ve specified, and the Commander plugin is also a lifesaver for a big import like that.
I just run that whenever I finish reading a book.
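For anyone curious what such a script might look like: here is a minimal sketch of the same idea. The input format, noise-stripping rules, file names, and tag scheme are all hypothetical placeholders, but the output is a JSON array of tiddler objects (`title`, `text`, `tags`), which is the shape TiddlyWiki’s import mechanism accepts.

```python
import json
import re
from pathlib import Path

def export_notes(txt_path: str, out_path: str, book_title: str) -> int:
    """Parse a plain-text annotation export into TiddlyWiki import JSON.

    Assumes a hypothetical export format: one highlight per line, with
    noise lines (separators, page markers) that we filter out.
    """
    lines = Path(txt_path).read_text(encoding="utf-8").splitlines()
    # Keep non-empty lines that aren't separator or page-number noise.
    notes = [ln.strip() for ln in lines
             if ln.strip() and not re.match(r"^(---|Page\b)", ln.strip())]
    # TiddlyWiki imports a JSON array of tiddler objects.
    tiddlers = [
        {"title": f"{book_title}/Note {i}",
         "text": note,
         "tags": f"Note [[{book_title}]]"}
        for i, note in enumerate(notes, 1)
    ]
    Path(out_path).write_text(json.dumps(tiddlers, indent=2), encoding="utf-8")
    return len(tiddlers)
```

Dragging the resulting .json file onto the wiki (or using the import button) then creates one tiddler per note, with the tags already applied.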
Where can I find this plugin? It looks exceptional!
Am I misunderstanding what’s happening in the demo? Because I see audio data (which was likely text data before) being converted back into text data. Such a waste of resources; it’s getting worse than with food.