It would be incredibly valuable if the data could be semantically chunked. In its current form, information is sent to the LLM in disconnected fragments, which increases the risk of hallucinations. I was genuinely impressed to see how NotebookLM performs real-time source retrieval and displays references directly in its answers (the hallucination rate is nearly zero). I would love to see a framework as powerful as Pickaxe adopt this approach. Introducing semantic chunking, along with improvements in how chunks are classified and retrieved, could be a game-changer for Pickaxe.
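To illustrate the idea, here is a minimal sketch of embedding-based semantic chunking: instead of splitting at fixed character counts, the text is split wherever adjacent sentences drift apart semantically. This is Python with the sentence-transformers library; the model name, similarity threshold, and naive sentence splitter are my own illustrative assumptions, not anything Pickaxe actually uses internally.

```python
# Sketch only: splits text where consecutive sentences become dissimilar.
# Assumes `pip install sentence-transformers numpy`; the model and the
# 0.55 threshold are placeholder choices, not Pickaxe internals.
from sentence_transformers import SentenceTransformer
import numpy as np

def semantic_chunks(text: str, threshold: float = 0.55) -> list[str]:
    # Naive sentence split for demonstration; a real pipeline would
    # use a proper sentence tokenizer.
    sentences = [s.strip() for s in text.split(". ") if s.strip()]
    if not sentences:
        return []

    model = SentenceTransformer("all-MiniLM-L6-v2")
    # Normalized embeddings so a dot product equals cosine similarity.
    embeddings = model.encode(sentences, normalize_embeddings=True)

    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        similarity = float(np.dot(embeddings[i - 1], embeddings[i]))
        if similarity < threshold:
            # Topic shift detected: close the current chunk.
            chunks.append(". ".join(current) + ".")
            current = []
        current.append(sentences[i])
    chunks.append(". ".join(current) + ".")
    return chunks
```

Chunks produced this way stay topically coherent, so the retriever pulls in complete thoughts rather than disconnected fragments, which is exactly what seems to keep NotebookLM's answers grounded.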