I have 2 issues:
1. As seen in the screenshot, the Pickaxe screen takes about 12 seconds to load.
2. Inside my Pickaxe preview, after four short back-and-forth chat messages, the last message sometimes gets truncated/cut off. It happened with different LLMs. I even tried Gemini 2.0 Flash, which has a 1M-token context window, so it shouldn't be the LLM's problem.
The truncated/cut-off response was 1024 characters long.
When I said “you didn't finish” or “continue”, it then responded with the full answer, which is 2800 characters.
My system prompt is 10,815 characters and my knowledge base is 353,000 characters.
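
For reference, a cutoff at a fixed length like this looks less like a context-window limit (which governs how much input the model can read) and more like an output-token cap on the response. A minimal sketch of what I mean, using the Gemini SDK directly; the max_output_tokens value here is illustrative only, I don't know what Pickaxe actually sets:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash")

# If the platform passes a small max_output_tokens, the reply is cut off
# at roughly a fixed length even though the 1M context window is nowhere near full.
response = model.generate_content(
    "Explain my knowledge base topic in detail.",
    generation_config=genai.GenerationConfig(max_output_tokens=256),
)
print(response.text)  # truncated mid-answer when the cap is hit
```

That would also explain why saying “continue” works: the follow-up turn starts a fresh response with a fresh output budget.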
