About Deep Research

Hey everyone.

I set up Perplexity’s native model on the left and a model recreated with Pickaxe on the right. Even when given the same prompt, they return completely different responses. In this video example, I used the instruction “Based on recent AI news, suggest 10 tech-related posts with a touch of humor.” For some reason, the Pickaxe model consistently generates massive amounts of text. Is there a way to improve this?


Hi @avakero, thank you for sharing this. That’s a really good test and observation.

The difference in response length usually comes from Pickaxe layering your own prompt setup, reasoning level, and model configuration on top of the base model. Even with the same model name, the system can interpret your instructions a bit differently depending on how your Pickaxe is built.

The best way to get results closer to what you want is to test and fine-tune your prompt. Try making the instruction more specific, e.g. “Give exactly 10 short, funny post ideas,” or adjust the response length under the Configure tab.
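If it helps to see what those settings correspond to under the hood, here is a minimal sketch of the same idea using the OpenAI Python client directly. This is only an illustration, not Pickaxe’s actual internals: the model name, token cap, and prompt wording are placeholder assumptions you would swap for your own configuration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# An explicit count and format in the system prompt does most of the work
# of keeping the output short and predictable.
system_prompt = (
    "You write tech-related social media posts with a touch of humor. "
    "Always return exactly 10 ideas, one sentence each, as a numbered list. "
    "Do not add commentary before or after the list."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model your Pickaxe is configured with
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Based on recent AI news, suggest 10 tech-related posts."},
    ],
    max_tokens=400,   # hard cap on response length, like the Configure tab setting
    temperature=0.7,
)

print(response.choices[0].message.content)
```

The combination of an explicit count in the prompt and a hard token cap is usually what keeps the output short and consistent, regardless of which tool sits in front of the model.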

Small prompt changes often make a big difference. It might take a few rounds of testing, but you’ll quickly find the sweet spot for your ideal output. 🙂


Thank you. It seems to be working now after adjusting the prompt.