Ah, I didn’t notice that the code wasn’t formatted as such. I do apologize for jumping to the conclusion that the code was untested.
[Slightly edited:]
In some respects, it doesn’t ultimately matter what the origin is, so long as you’ve tested the code and made sure it works (ideally, in an elegant/flexible way).
However, our community norm is to avoid posting LLM/AI-generated code without acknowledgment. And in this case, even before seeing the bits that were stripped out within angle brackets, it’s clear that the approach in that code is far from a straightforward and elegant solution, compared with the neat solution posted earlier in the thread by @saqimtiaz …
I don’t think it’s a coincidence that such verbose and roundabout code was generated by an LLM, though I myself have also sometimes built very complex Rube Goldberg contraptions when I couldn’t find an elegant solution!
For these reasons, I do think our norm should be not only to make sure code works, but also to include a disclaimer if/when LLM-generated code is posted by someone who is not prepared to explain how it works and how to learn from it.