SEO - Any expertise on getting better search ranking

@Christian_Byron Search is a dark art, and it is now challenged by LLMs, so content is not what it was. However, if we are going to help, you need to give us more information about your online site. Is it single file, Node.js, etc.?

It is possible to export your site’s content as static HTML, which then becomes quite searchable; however, it (in my opinion) needs to be modified to open the interactive wiki rather than other static pages if you want real interaction.

You may want to look at the following core plugins:

  • Google Analytics: Website visitor statistics from Google
  • Consent Banner: Consent banner for GDPR etc

You may also want to look at other SEO tools and configurations that allow the search engines to index your content, such as robots.txt and other methods. Again, this depends on how you are hosting your wiki.
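For example, a minimal robots.txt placed at the root of the host (the sitemap URL is a placeholder for your own) might look like:

```
User-agent: *
Allow: /
Sitemap: https://example.com/sitemap.xml
```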

  • TiddlyWiki can just look like a website, and as a result most if not all SEO work can be done on it.

Also, adding a splash screen is wise so people don’t abandon your site on first visit if it is large and interactive.

Otherwise TiddlyWiki can generate all kinds of content and files to tempt the search gods.

  • Luckily for us, “tiddlywiki” is a great search keyword for those of us in the know.

Thanks, Tony, for some insight… I’m hoping more people can add thoughts like yours…

Actually, as I’m learning more about the subject of SEO, I’m now concerned that TW is actually hindering search optimisation, which may be why TW is not well known outside this lucky bubble.

This video gives a good introduction to the topic and the available tools. The author suggests that keywords for search are found in three key areas (the title, the meta description tags, and the first paragraph)… So, using some vibe coding again to scan for these, I get this result for TW:

Keyword Results for https://tiddlywiki.com/

| Keyword | Count | Found In |
| --- | --- | --- |
| non-linear | 2 | Title, Meta Description |
| personal | 2 | Title, Meta Description |
| notebook | 2 | Title, Meta Description |
| tiddlywiki | 1 | Title |

Whereas the same code used again on a “competitor” shows what may be missing above…

Keyword Results for https://obsidian.md/

| Keyword | Count | Found In |
| --- | --- | --- |
| free | 2 | Meta Description, First Paragraph |
| flexible | 2 | Meta Description, First Paragraph |
| private | 2 | Meta Description, First Paragraph |
| thoughts | 2 | Meta Description, First Paragraph |
| obsidian | 1 | Title |
| sharpen | 1 | Title |
| thinking | 1 | Title |
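For the curious, the scan boils down to something like the sketch below (regex-based for brevity rather than a real HTML parser, and the sample page is invented for illustration):

```javascript
// Pull the three key areas out of a page's HTML. Regex-based HTML
// parsing is fragile; a real tool should use a proper parser.
function extractAreas(html) {
  const get = (re) => {
    const m = html.match(re);
    return m ? m[1] : "";
  };
  return {
    "Title": get(/<title(?:\s[^>]*)?>([\s\S]*?)<\/title>/i),
    "Meta Description": get(/<meta\s+name=["']description["']\s+content=["']([^"']*)["']/i),
    "First Paragraph": get(/<p(?:\s[^>]*)?>([\s\S]*?)<\/p>/i),
  };
}

// For each keyword, report how many of the areas contain it and which ones.
function keywordReport(html, keywords) {
  const areas = extractAreas(html);
  return keywords.map((kw) => {
    const foundIn = Object.keys(areas).filter((name) =>
      areas[name].toLowerCase().includes(kw.toLowerCase())
    );
    return { keyword: kw, count: foundIn.length, foundIn };
  });
}

// Minimal stand-in for a fetched page:
const sample = `<html><head>
<title>TiddlyWiki, a non-linear personal web notebook</title>
<meta name="description" content="a non-linear personal web notebook">
</head><body><p>Welcome</p></body></html>`;

console.log(keywordReport(sample, ["non-linear", "personal", "tiddlywiki"]));
```

To run it against a live site you would fetch the HTML first; the point is simply that anything inside the wiki store `<script>` tag never shows up in these three areas.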

Because TW stores what would be the first paragraph as a tiddler inside the wiki store <script> tag, it is unsearchable by Google and other search engines, so it adds no weight to the indexing of the page.

I don’t know much about SEO, but that seems likely. I was not paying attention in the days when the static generation was created, but I assume that it was at least in part an attempt to address SEO concerns. I do wonder if another pass at static generation would be warranted.

My idea would be to do static generation mostly like it is now, but include in the output a script that waits for any human interaction (mouse enter, scroll, etc.) and then scans the DOM for appropriate links, and replaces something like this:

<a class="tc-tiddlylink tc-tiddlylink-resolves" href="Title%2520Selection.html">set of tiddlers</a>

with one that points to the main site with an expanded link, like this:

<a class="tc-tiddlylink tc-tiddlylink-resolves" href="https://tiddlywiki.com/#Title%20Selection:%5B%5BTitle%20Selection%5D%5D%20HelloThere%20%5B%5BQuick%20Start%5D%5D%20%5B%5BFind%20Out%20More%5D%5D%20%5B%5BTiddlyWiki%20on%20the%20Web%5D%5D%20%5B%5BTestimonials%20and%20Reviews%5D%5D%20GettingStarted%20Community">set of tiddlers</a>

That is, instead of linking to the Title Selection page, we would link to the main site with that tiddler focused.

We should be able to automate this for any site, using $:/DefaultTiddlers. The point here is that the static site is really only meant for SEO, but it’s usable for a quick glance at the material, with all of its links dynamically updating to point back to the much more useful dynamic wiki site.
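To make the idea concrete, here is a sketch of the href-rewriting step as a pure string function (the names are my own invention; a real version would also need the DOM scan and the interaction listener):

```javascript
// Turn a static-page href such as "Title%2520Selection.html" into a link
// to the live wiki, with the target tiddler first in the story followed
// by the site's default tiddlers (e.g. read from $:/DefaultTiddlers at
// static-generation time).
function staticHrefToDynamic(href, baseUrl, defaultTiddlers) {
  // Static pages double-encode the title: "Title%2520Selection" decodes
  // once to "Title%20Selection" and again to "Title Selection".
  const title = decodeURIComponent(decodeURIComponent(href.replace(/\.html$/, "")));
  // TiddlyWiki permalink fragments look like "#Target:[[story list]]".
  const story = ["[[" + title + "]]"].concat(defaultTiddlers).join(" ");
  return baseUrl + "/#" + encodeURIComponent(title) + ":" + encodeURIComponent(story);
}

// Example:
// staticHrefToDynamic("Title%2520Selection.html", "https://tiddlywiki.com",
//                     ["HelloThere", "[[Quick Start]]"])
// → "https://tiddlywiki.com/#Title%20Selection:%5B%5BTitle%20Selection%5D%5D%20HelloThere%20%5B%5BQuick%20Start%5D%5D"
```

In the browser this would be wrapped in a one-shot listener, something like `document.addEventListener("mousemove", rewriteAllLinks, { once: true })`, so spiders see the static hrefs and humans get the dynamic ones.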

This may be incredibly stupid. I can’t imagine I’m the first to conceive of this idea, and it’s quite possible that various page-ranking algorithms would actually punish this as some sort of abuse. In this case, I don’t think it actually would be abuse. It’s simply a way to make it possible for the crawlers to crawl TiddlyWiki material while the links actual users see point to normal TW views.

I don’t think this would be difficult to do, but I have never cared much about SEO, and so I’m not very interested in trying this myself. But if someone does attempt this and wants to discuss technical details, I’m more than willing to help.

I have insufficient time to answer right now, but you must tell us how you are serving TiddlyWiki, as it may make all the difference. Providing a robots.txt and some other files, adding meta records via the appropriate raw tag(s), and other steps can make TiddlyWiki as strong in SEO as any other site. You just need to know what to do, and ask here for help if there is something you can’t see how to achieve with TiddlyWiki.

Like most websites, especially those with logins (in TiddlyWiki’s case, loading into memory), search engines tend to simply look at files; if they need to load or log in, the search engine is less likely to do so. The result is that we need to provide the information that they will use to index your site.

  • I too tend not to bother with SEO; in any case, LLMs are going to answer questions derived from your content without even directing people to your website any more.
  • I don’t remember the files etc. to generate and store with your wiki, but guidance is easy to find online. TiddlyWiki may not go out of its way to support SEO, but it does not stand in its way either.
  • Obsidian, being commercial, is possibly paying a lot of money to rank highly in search results.

My understanding of the problem is that the first step of SEO is having the content in something that can be scraped and understood by the relevant scraping back-end, and the TW format is not useful for that.

If it can’t be scraped and understood for SEO, then it can’t be scraped and understood for LLM.

I was thinking along similar lines, Scott - although maybe a plugin could add the needed content to the splash screen that @TW_Tones mentioned, plus other meta tags etc. It would have to persist in whatever save mechanism is used.

The static site mechanism might give a better search ranking, but then, as pointed out, you lose the functionality of the wiki, which is not ideal.

Please, take my word for it: you can SEO TiddlyWiki, even if you externalise what a search engine can’t see. Keep in mind most HTML pages can also present data that can only be seen interactively, so there are ways to make at least a subset available for the search engine to use.

For example, if you include static pages alongside the interactive ones, not instead of them, you can get the best of both worlds.

TiddlyWiki is not the issue; the issue is catering to what search engines want.

See sitemaps, Google Search Console, etc.
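For example, a bare-bones sitemap.xml pointing crawlers at the static pages (the URLs are placeholders) looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/index.html</loc>
  </url>
  <url>
    <loc>https://example.com/static/HelloThere.html</loc>
  </url>
</urlset>
```

You then submit the sitemap URL through Google Search Console or reference it from robots.txt.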

Try this in Google search to see how deeply it can see into TiddlyWiki: `site:tiddlywiki.com analytics`

All the search results are to the static HTML pages within tiddlywiki.com, and not to a Tiddlywiki page itself.

So I think that proves the problem here: TW in its native form is not SEO friendly. Sure, if additional effort is spent to make a static version of a TW site, then that is SEO friendly and the search engines index it, but this takes additional configuration effort, and it ends with people visiting static HTML pages rather than experiencing the actual TiddlyWiki site.

It is… but those search results always end up on a static page (usually relevant I grant you) and then I mutter under my breath, click the “This page is part of a static HTML representation of the TiddlyWiki at https://tiddlywiki.com/” link, and then end up at the front page of tiddlywiki.com, mutter annoyance again, and then search again to get the TW version of the static page I was just at.

This is not SEO friendly. This is not even particularly user friendly.


In TiddlyWiki everything is configurable. It’s a relatively straightforward fix.

I did create a PR at GitHub: [DOCS] Add deep links to the static banner by pmario · Pull Request #9339 · TiddlyWiki/TiddlyWiki5 · GitHub but a bit more testing will be needed.

If you want something to change, you need to create an issue at GitHub. If we only see such discussions by “accident”, fixable problems will be forgotten.


That is certainly an improvement. My suggestion was to go a lot further and, if possible, find a way to make all internal links from a static site, when clicked by a person and not a spider, lead to the dynamic page. It wouldn’t be terribly difficult to do, but it’s possible that this could get labelled abuse by the crawling engines.

I think this statement is misleading. Is your bank’s website not SEO friendly? No, but banks do have to address content that is mostly not automatically scraped by search engines.

I asked a question early in this thread:

If published via Node.js it is trivial to also have all tiddlers appear as static pages from the interactive server. @nemo, you can call this native or not, but you miss my point: it’s not about TiddlyWiki, it’s all about SEO.

  • TiddlyWiki is not hindering SEO
    • It’s SEO that is not very good at looking into interactive site content
  • There are solutions to all of this
  • As I have suggested, most solutions come from the perspective of SEO knowledge, not TiddlyWiki

The current static tiddler templates can be modified such that any link loads the interactive wiki. I have done so before and will look to see if I can find it.

  • It is true I had to do this myself, so that those links into the interactive wiki included the tiddler in the URL.

Once upon a time most websites were just a series of static pages. Now, between sites like TiddlyWiki that are highly interactive, lifting themselves up by their bootstraps, and other sites with database back ends and sophisticated interactive environments (Web 2.0, content management, API integrations, maps, etc.), the simple static site has all but disappeared.

  • I think you will find that, in the worst case, a search points at the site but not the tiddler, assuming you comply with the other rules of good SEO.
  • Both site owners and search engines have been addressing this for decades. It’s not new.

You are not stupid, you are not the first to conceive of this, and it’s not abuse. But you can start with a sitemap, meta tags, etc. Google reads the HTML file even if it does not know how to iterate over and separate tiddlers, as long as you have given permission to robots.

  • Another area is complying with GDPR, the European privacy standard. If you are saving cookies without an opt-out or without seeking permission, much of Europe won’t see your site either.

Finally, much more important to the success of SEO is the freshness or currency of the website. If the single file TiddlyWiki is never updated and given new content and a fresh timestamp (similarly for any static representation of tiddlers), your site will quite quickly lose its SEO power.

  • It does not matter how much you “improve TiddlyWiki” to support SEO; your SEO can easily fail based on another aspect of it.

With all due respect, I am not going to repeat these points over and over again. You as a reader can choose to ignore or dispute what I say, but please don’t do it without reading my comments properly. I am sure you all know by now I rarely assert something without good evidence for it.

Having another coffee, maybe I won’t be so grumpy after that :nerd_face:

Hey Tony, no need to be grumpy… just admit when you’re wrong :smiley: … You have put forward some great points and valuable advice regarding how search is evolving with AI and how SEO is difficult no matter how well the technology supports it.

Regarding “information about your online site”: I wasn’t seeking direct help for a site, just expertise on SEO generally (e.g. where do search engines look for keywords)… And I agree a static file approach plus @pmario’s changes will help if you want your entire site indexed… However, I checked to see if TiddlyHost supports static sites; to the best of my knowledge it doesn’t… Nor do I know if GitHub or other hosting options do… So a single file solution is maybe still needed for the flexibility of hosting options we all love TW for.

Regarding:

As I’m just starting to learn, a majority of websites are built with WordPress (about 40%, especially small business sites like mine), which renders on the server, weaving individual content into a static page as needed. It has SEO plugins that help weave in the needed meta tags etc. I’m not suggesting this is where TW should go, just observing that there are many ways to deliver a site.

So I did more reading on TW and discovered there is already a mechanism, the one that builds the splash screen, that could help. Tagging a tiddler with $:/tags/RawMarkupWikified/TopBody will save it into the raw HTML in the same way as $:/tags/RawMarkup… What might be good is to use this in a plugin that focuses on SEO tags and concepts… It wouldn’t get the entire site indexed or take you to a specific tiddler, but it could solve a real-world problem for something like “Is there a plugin for…” (e.g. this recent discussion).
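As a sketch of that idea (the tiddler title is invented, and the markup would carry whatever keywords matter to your site), a tiddler like this gets baked into the saved HTML, visible to crawlers before any JavaScript runs:

```
title: $:/plugins/mysite/seo-splash
tags: $:/tags/RawMarkupWikified/TopBody

<div class="tc-seo-splash">
  <h1>My Wiki</h1>
  <p>A crawlable first paragraph containing the keywords that matter.</p>
</div>
```

If I read the docs right, there is also $:/tags/RawMarkupWikified/TopHead for injecting <meta> tags into the head.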

Cheers
CB

Hosting static sites without any JavaScript is the most basic form of hosting. So as long as every page is linked in some way, crawlers will find them.

Additionally, the standard TW static file creation configuration creates a single page that contains all the content. See: https://tiddlywiki.com/alltiddlers.html#HelloThere

This page has the advantage that you can use the browser’s search function and basically find everything on one page.
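For a Node.js wiki folder, the per-tiddler static pages are produced with a --render run along these lines (adapted from memory of the static-site docs, so check the exact filters there):

```
tiddlywiki mywiki --render '[!is[system]]' \
  '[encodeuricomponent[]addprefix[static/]addsuffix[.html]]' \
  text/plain $:/core/templates/static.tiddler.html
```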

tiddlywiki.com and the static version are hosted as a GitHub Pages site.

GitHub offers a CI/CD mechanism that lets us build everything when a new version or changes to “the docs” are pushed. So the whole system can be automated.


The biggest disadvantage of our static export process is that e.g. tabs and reveals are not expanded, so tabbed content in particular cannot be exported very well.

So it is important that tiddlers listed in tabs or reveals are also available as single tiddlers.

As I wrote, everything in TiddlyWiki is configurable.

You can put whatever you need into a <noscript> area of your wiki.
But it will increase the SPA size.

\define tv-wikilink-template() https://tiddlywiki.com/static/$uri_doubleencoded$.html

<%if [<savingEmpty>match[yes]] %>

<$transclude tiddler="$:/core" subtiddler="$:/core/templates/static.content"/>

<%else%>

<!-- Mastodon verification -->

<a rel="me" href="https://fosstodon.org/@TiddlyWiki">~TiddlyWiki on Mastodon</a>

<!-- For Google, and people without JavaScript-->

It looks like this browser doesn't run JavaScript. You can use one of these static HTML versions to browse the same content:

* https://tiddlywiki.com/static.html - browse individual tiddlers as separate pages
* https://tiddlywiki.com/alltiddlers.html#HelloThere - single file containing all tiddlers

---

{{TiddlyWiki}}

<%endif%>
  • See screenshot below


This info is there to get “scraping” of the static site going, without the need for a robots.txt file.

Hey @pmario - I’m using Chrome and I don’t see a Debugger tab on the Dev Console… Are you using a special extension?

I did find the Lighthouse reporting tool, which has some SEO reports… I’ll see what this tells me.

I am using Firefox. Here are some Edge screenshots, which should be similar. I have no Chrome.

F12 → Sources → … → Settings

Settings → Debugger → Check Disable JavaScript → Reload browser tab

I checked $:/core/templates/static.content on the empty editions I’ve used … here’s what it has:

<!-- For Google, and people without JavaScript-->
This [[TiddlyWiki|https://tiddlywiki.com]] contains the following tiddlers:

<ul>
<$list filter=<<saveTiddlerFilter>>>
<li><$view field="title" format="text"></$view></li>
</$list>
</ul>

So it’s workable if you have named tiddlers with the keywords you want search engines to index you by… But to be like the tiddlywiki.com version, I think a user-friendly way to set these might be warranted.

I agree it’s native in the sense that TW has back-end mechanisms that can build those static pages for SEO purposes. I was, however, meaning it’s not native in the sense that a delivered TW page (i.e. empty.html + content) does not have them.

Then I think we’re both missing each other’s point.

If you mean “Tiddlywiki” as a set of tools/commandline options/node server/etc etc etc, that can build static pages? Then sure. It’s not hindering SEO.

I was meaning “Tiddlywiki” as “the content the browser (or more relevantly: the search engine) gets when it requests a page” - which is a big blob of JS + content, in a highly interactive format which I love from a user point of view, but which is poor for SEO.

To ensure I’m following this right: the wikitext in this static.content is interpreted by the core and used to populate the <noscript> area? (That interpretation being done at save time for a single file, and at HTTP request time when it’s Node.js?)

I stopped being grumpy after my third coffee. I was trying to get this info (that you were using TiddlyHost) and you seemed to forget to answer. So yes, TiddlyHost does not give you a folder for an HTML site; if it did, you could quite simply place the static pages/tiddlers alongside your wiki, with links into the interactive wiki, and add other files.

There is an option on Node.js wikis to simply publish static tiddlers alongside the interactive wiki, such that every tiddler ends up as a “crawlable” page with its own URL, thus permitting search engines to provide links into the TiddlyWiki, rather than only to the index.html equivalent of the site.

  • Basically, as I understand it, the crawlable content can just be made available automatically.
  • Sorry, I don’t recall how to do it. Perhaps that is how tiddlywiki.com works?

I also want to simply remind you that the SEO world already has solutions for such cases, and you can publish things alongside a website (TiddlyWikis included), such as sitemaps etc., that will help SEO results. Of course there may be limitations when publishing on top of TiddlyHost, but we can ask @simon if he has a view on this matter, especially for those who pay for TiddlyHost (as I do).

Background
For some years I did a lot of SEO across sites including TiddlyWiki, online shops, WordPress and other websites. This included retaining SEO links when bulk renaming posts/pages, and on top of PHP-hosted sites (I used tw-receiver to edit online) there are various things you can do to make links into your site visible to the internet and search engines.

  • One advantage TiddlyWiki has is the ability to build automated ways to generate custom files to place on an internet site to support this configuration and advanced SEO. On other content systems this is achieved using plugins such as Yoast on WordPress, because, as with TiddlyWiki, most of WordPress’s content is only available interactively, as the content is stored in a database.

My main point is that, with very few exceptions, success at SEO is all about the dark art of SEO and has very little to do with TiddlyWiki’s architecture, though I am sure it could be used to build wizards to cast spells in this dark art.

The main thing about the splash screen, especially on larger sites, is indicating that the site is loading into memory for fast and interactive use, unlike other sites that may load quicker because they only deliver a small part of the whole site. It is death for a website to offer the user a blank screen, because people will quickly abandon your site.

  • External images, except perhaps for a landing page, are also a good move, so they do not need to load into memory before interaction becomes possible; i.e. they load when you go to the content that needs them.

Another thing I learned: the load time of the website will be recorded by search engines. The main way to help with single file TiddlyWiki implementations is to use a CDN such as Cloudflare, which will improve load times all over the world, even with the free account (when I last checked).

I hope this helps


Yes. This template is rendered when a single file wiki is saved. But you need to be careful: search engines will hit you with a big penalty if you create spamdexing content.

So you will need to read the search engine guidelines on how to create proper SEO optimisations for SPAs (single page apps).

If you search for SEO and SPAs, there is one topic that always comes up: server-side rendering, which is exactly what our /static/ page representation does.


If we use a Node.js client/server installation, the server creates a TW SPA when it starts and keeps it in memory. When the URL is requested, it sends the full SPA to the browser.

So for the browser there is no difference between a single file wiki served with e.g. nginx, and a client/server Node.js served wiki.

If you change $:/core/templates/static.content and save it with the client/server configuration, the noscript area is updated with the next browser reload.
