How do we know if documents are passing cleanly to the LLM?

When my users upload a document to Pickaxe, sometimes the LLM only gets a portion of the document. This results in strange responses and hallucinations. The issue is intermittent.

The issue shouldn’t be token size; I’m using Gemini with a 500k+ token context window for the file upload.

On a somewhat related note, is it possible to see exactly what Pickaxe is passing to the model, so we can troubleshoot in case the issue is with our prompt? If I could see how the document is being presented to Gemini, it might help me track this down on my end.

Hi Thomas! Which of your Pickaxes is experiencing this issue? You should be able to see a breakdown of which chunks are being pulled during RAG in Message Insights - are you not able to access that?
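For context on why a document can reach the model only partially even with a huge context window: if file uploads are handled through RAG, as the reply above suggests, only the top-scoring chunks of the document are inserted into the prompt for each question. Here's a minimal, purely illustrative sketch of that behavior - none of these names, chunk sizes, or scoring functions are Pickaxe's actual internals:

```python
# Illustrative sketch: RAG-style file handling passes only a few retrieved
# chunks to the model, regardless of the model's context window size.
# All names and numbers here are hypothetical, not Pickaxe's real code.

from difflib import SequenceMatcher

CHUNK_SIZE = 1000   # characters per chunk (hypothetical)
TOP_K = 4           # only this many chunks ever reach the prompt (hypothetical)

def chunk(text: str) -> list[str]:
    """Split the uploaded document into fixed-size chunks."""
    return [text[i:i + CHUNK_SIZE] for i in range(0, len(text), CHUNK_SIZE)]

def score(query: str, passage: str) -> float:
    """Stand-in for embedding similarity; real systems use vector search."""
    return SequenceMatcher(None, query.lower(), passage.lower()).ratio()

def build_context(document: str, user_question: str) -> str:
    """Return only the TOP_K most relevant chunks, not the whole document."""
    chunks = chunk(document)
    ranked = sorted(chunks, key=lambda c: score(user_question, c), reverse=True)
    return "\n---\n".join(ranked[:TOP_K])

# Even with a 500k+ token window, the model only ever sees
# build_context(doc, question). A question whose wording doesn't match the
# relevant section can pull the "wrong" chunks, which looks intermittent.
```

That's exactly the kind of thing the Message Insights chunk breakdown mentioned above would let you confirm: you can check which chunks were actually retrieved for a given response.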

I have the privacy settings set to maximum. Does this block Message Insights when I’m testing?

Hi Thomas, no, this shouldn’t impact it! You can find Message Insights in the Pickaxe Builder after a response is generated (the magnifying glass icon just underneath each response). As of this week, you can also find them in the Users tab of your Studio Dashboard: when you look through your interactions in the Activity section, you should see a little information icon underneath each chatbot answer, which will also show these insights!

Ah, I think the problem may be that all my Pickaxes use iframes. I’m not using the user feature, so this is what I see:

Hi @thomasumstattd! It doesn’t have anything to do with your embeds - it’s because “Collect user history” is toggled off in your Studio! If you want to gather this information, you’ll have to turn it on 🙂