Help understanding macro calls with transcluded parameters

Here it is for the record:

<!-- Text substitution patterns -->
\define Any-a() [ÀÁÂÃÄÅàáâãäå]
\define Any-e() [ÈÉÊËèéêë]
\define Any-i() [ÌÍÎÏìíîï]
\define Any-o() [ÒÓÔÕÖØòóôõöø]
\define Any-u() [ÙÚÛÜùúûü]
\define Any-y() [ÝŶŸýŷÿ]
\define Any-c() [Çç]
\define Any-n() [Ññ]
\define Any-ae() [æÆ]
\define Any-oe() [Œœ]

<!-- Text sanitizer for sorting/searching -->
\function .toascii(src)
  [<src>]
  :map[<currentTiddler>search-replace:gi:regexp<Any-a>,[a]]
  :map[<currentTiddler>search-replace:gi:regexp<Any-e>,[e]]
  :map[<currentTiddler>search-replace:gi:regexp<Any-i>,[i]]
  :map[<currentTiddler>search-replace:gi:regexp<Any-o>,[o]]
  :map[<currentTiddler>search-replace:gi:regexp<Any-u>,[u]]
  :map[<currentTiddler>search-replace:gi:regexp<Any-y>,[y]]
  :map[<currentTiddler>search-replace:gi:regexp<Any-c>,[c]]
  :map[<currentTiddler>search-replace:gi:regexp<Any-n>,[n]]
  :map[<currentTiddler>search-replace:gi:regexp<Any-ae>,[ae]]
  :map[<currentTiddler>search-replace:gi:regexp<Any-oe>,[oe]]
\end
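For anyone reading along who doesn't use filter syntax daily: the function above applies one regex character class per base letter, replacing any accented variant with its plain equivalent. A rough JavaScript equivalent (the names `classes` and `toAscii` are mine, not part of the filter above) would be:

```javascript
// Each pair mirrors one of the Any-* patterns above: a character class
// of accented variants, and the plain letter(s) substituted for them.
const classes = [
  [/[ÀÁÂÃÄÅàáâãäå]/g, 'a'],
  [/[ÈÉÊËèéêë]/g, 'e'],
  [/[ÌÍÎÏìíîï]/g, 'i'],
  [/[ÒÓÔÕÖØòóôõöø]/g, 'o'],
  [/[ÙÚÛÜùúûü]/g, 'u'],
  [/[ÝŶŸýŷÿ]/g, 'y'],
  [/[Çç]/g, 'c'],
  [/[Ññ]/g, 'n'],
  [/[æÆ]/g, 'ae'],
  [/[Œœ]/g, 'oe'],
];

// Apply every substitution in turn, like the chained :map filter runs.
function toAscii(src) {
  return classes.reduce((s, [re, ch]) => s.replace(re, ch), src);
}

console.log(toAscii('Çœur brûlé')); // "coeur brule"
```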

Fred

Yes, it is very efficient (and provided by AI :joy:). It's not actually what I'm currently using, though.

This is what I am currently using. Converting to lowercase avoids the need to specify both É and é, for example. And even though some letters are redundantly repeated in the German section, it is visually clear which substitutions are being applied, and it is easy to check that you have everything you want. (There won't actually be duplicate entries in the accentMap dictionary, but the repetition is helpful for human readability and sanity checking.)

/*\
title: mymacros/deaccent.js
type: application/javascript
module-type: macro
\*/

/*
Macro to deaccent a string
*/

"use strict";
exports.name = "deaccent";
exports.params = [{name: "input"}];


exports.run = function(input) {
    // Define the mapping of accented characters to their simplest equivalents
    const accentMap = {
        // French
        'à': 'a',
        'â': 'a',
        'ä': 'a',
        'ç': 'c',
        'é': 'e',
        'è': 'e',
        'ê': 'e',
        'ë': 'e',
        'î': 'i',
        'ï': 'i',
        'ô': 'o',
        'ö': 'o',
        'ù': 'u',
        'û': 'u',
        'œ': 'oe',

        // German
        'ä': 'a',
        'ö': 'o',
        'ü': 'u',
        'ß': 'ss',

    };

    // Convert input string to lowercase and split into individual characters
    const characters = input.toLowerCase().split('');

    // Replace each character with its simplest equivalent if it's in the accentMap
    const replacedCharacters = characters.map(char => accentMap[char] || char);

    // Join the characters back into a single string
    return replacedCharacters.join('');
};
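On the duplicate-entries point: repeating 'ä', 'ö', and 'ü' in both language sections really is harmless, because in a JavaScript object literal the later key simply overwrites the earlier one (ES2015 lifted the old strict-mode ban on duplicate property names). A quick sanity check:

```javascript
// Duplicate keys in an object literal don't error in modern engines,
// even under "use strict"; the last occurrence wins.
const accentMap = { 'ä': 'a', 'ö': 'o', 'ä': 'a' };
console.log(Object.keys(accentMap).length); // 2 — just 'ä' and 'ö'
console.log(accentMap['ä']);                // "a"
```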

If I used the newer script in my previous post, however, I could probably just add two lines to get the same effect:

        .replace(/œ/g, 'oe')
        .replace(/ß/g, 'ss');

@TW_Tones has mentioned possible technical approaches for diacritic-insensitive searching in the thread I linked a few posts up (title: diacritic-insensitive-searching-of-fields-for-non-english-languages).

Sadly I am not technically knowledgeable enough to understand how to make diacritic insensitivity a core TW sort/search feature, but I do hope the developers add it to their new feature plans!

I haven’t been following closely and don’t have the time for a proper investigation, but is String.prototype.toLocaleLowerCase an appropriate solution? Clearly it would have to be folded into search, since, I assume, it’s not already there. But I suspect it would beat the best custom implementations we’re likely to develop.
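One caveat worth checking: toLocaleLowerCase only handles case, not diacritics, so on its own it would make search case-insensitive but still accent-sensitive. A sketch comparing it with Unicode NFD normalization, which does split off the combining marks (this is just a standalone experiment, not a tested TW patch — and the `fold` name is mine):

```javascript
// toLocaleLowerCase lowercases but keeps the accents:
console.log('Élève'.toLocaleLowerCase('fr')); // "élève"

// NFD normalization decomposes accented letters into a base letter plus
// combining marks, which the \p{M} (mark) property escape can then strip.
function fold(str) {
  return str.toLocaleLowerCase().normalize('NFD').replace(/\p{M}/gu, '');
}
console.log(fold('Élève')); // "eleve"

// Note: ligatures and ß are not diacritics, so NFD leaves them alone —
// œ → "oe" and ß → "ss" would still need an explicit mapping.
console.log(fold('œuf')); // "œuf"
```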