What is a “small” data set for you? How many records?
And – What is “large”?
I think it also depends on what you do with your data while importing. I don’t know the GEDCOM data format, but a short glimpse at the spec indicates that it is “line-based”, with a lot of “back-references” that may require modifying existing records (tiddlers) several times … depending on how your data structure looks. … That takes time.
Is there an easy way to “split” the data-set into several smaller “chunks” and import them that way?
— some browser JS background —
Browser engines work with a single event loop. That means everything is done in 1 “never-ending” loop: the browser’s UI handling and the site’s JavaScript code both run in this 1 loop.
So if a JS programmer creates an endless loop in JS, e.g. while (1) {}, the browser UI will be completely blocked.
Today’s browsers detect such “long running JS code” and show a popup with buttons to “continue” or “stop” that script. … If it is stopped, it can only resume if the page is reloaded!
So if you import “large” data in a for() loop, after some time the browser will show the popup you mentioned.
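To illustrate the anti-pattern: a minimal sketch of a fully synchronous import loop. The names `importAllAtOnce`, `lines`, and `parseLine` are hypothetical stand-ins, not anything from TiddlyWiki or the GEDCOM tooling.

```javascript
// Anti-pattern sketch (hypothetical names): the whole import runs in one
// synchronous for() loop. Until this function returns, the event loop is
// busy, so the browser cannot repaint, handle clicks, or run timers.
function importAllAtOnce(lines, parseLine) {
  for (let i = 0; i < lines.length; i++) {
    parseLine(lines[i]); // per-record work; blocks the UI the entire time
  }
}
```

With a large enough data set, this is exactly the loop that triggers the “slow script” popup.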
— end background —
In my opinion, this can only be avoided by splitting the import data into a queue and then processing that queue in smaller chunks until everything is imported: 1 “chunk” per browser loop iteration.
At the moment TW doesn’t provide any queueing and task-execution API to developers.
Such a mechanism would also let us give the user progress feedback, since the UI stays responsive while the tasks are executed.
As I wrote — how many lines is a “large” data set?