I think it would be better if this plugin supported fetching an LLM model and running it locally, e.g. via https://webllm.mlc.ai/. Since inference would happen entirely in the browser, there would be no API key to leak, and it would be free to use.
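
For illustration, here is a rough sketch of what the integration could look like using WebLLM's OpenAI-compatible API (the `@mlc-ai/web-llm` npm package). The model ID and the helper function names are my assumptions, not part of the plugin; the point is that the request/response shape mirrors the OpenAI chat completions API, so the plugin's existing prompt-building code could likely stay unchanged:

```ts
// Sketch only: model ID and helper names are assumptions, not plugin code.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function createLocalEngine() {
  // Downloads the weights once, caches them in the browser, and runs
  // inference locally via WebGPU -- no API key involved.
  return CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
    initProgressCallback: (p) => console.log(`Loading model: ${p.text}`),
  });
}

async function complete(prompt: string): Promise<string> {
  const engine = await createLocalEngine();
  // Same request shape as the OpenAI chat completions endpoint.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: prompt }],
  });
  return reply.choices[0].message.content ?? "";
}
```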