I’ve been A/B testing a very open-ended Pickaxe powered by GPT with parameters as open as I can muster - actually telling it that it IS ChatGPT, to act like ChatGPT, etc. (and I’ve run Pickaxe vs. Claude and Gemini tests as well).
Why isn’t this as good as ChatGPT itself? That’s not really my question, but users keep asking me and I have no answers for them. Truthfully, what are the Pickaxe limitations? Why does it dumb down the ChatGPT product even with an empty knowledge base and a very open persona?
Last compounding point - it’s not just a little off, it’s miles off.
Hi @simonh, when you instruct your Pickaxe to “act like ChatGPT,” you are essentially instructing a custom-wrapped GPT (which is what a Pickaxe is) to act like a universal chatbot trained on all of the world’s data. I would argue that that is the culprit. A custom GPT is intended to be trained on your specific dataset to transform it into an expert on your expertise/topics/niche/…
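To make that concrete, here’s a rough sketch (in Python) of what a wrapper like a Pickaxe typically sends to the chat API on each turn. The persona text, model name, and parameter values here are illustrative assumptions - Pickaxe’s actual internals aren’t public - but the shape is the point: the configured persona rides along as a system message, and the API model never sees ChatGPT’s own hidden system prompt, tools, memory, or routing, so the outputs will diverge no matter what the persona says.

```python
# Hypothetical sketch of a wrapper-style request payload.
# Persona text, model name, and parameters are assumptions for illustration.

def build_request(persona: str, history: list[dict], user_msg: str) -> dict:
    """Assemble a chat-completions-style payload the way a typical
    wrapper does: the configured persona is injected as a system
    message on every single call."""
    return {
        "model": "gpt-4o",      # wrapper pins one API model (assumed here)
        "temperature": 0.7,     # fixed parameters, not ChatGPT's own tuning
        "messages": [
            # Replaces ChatGPT's hidden system prompt entirely:
            {"role": "system", "content": persona},
            *history,
            {"role": "user", "content": user_msg},
        ],
    }

req = build_request(
    persona="You are ChatGPT. Act exactly like ChatGPT.",
    history=[],
    user_msg="Explain quantum tunneling simply.",
)
# Even with an "act like ChatGPT" persona, this payload carries none of
# ChatGPT's real system prompt, tool access, or user memory.
```

So “act like ChatGPT” can only change the wording of the system message, not restore the surrounding product features that make ChatGPT feel smarter.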
There is now FREE training available for ALL Pickaxe community members on the Pickaxe Prospectors Academy. Check it out here:
In the Beginner Track, you will learn how to:
Properly set up a Pickaxe studio
Configure a Pickaxe
Use advanced prompting techniques to get exactly what you want out of your Pickaxe
That prompting video is a little outdated at this point. Check out the free courses in the academy for training on how to get your Pickaxe to do whatever you want it to!
Additional context: there are multiple similar posts on OpenAI’s own community forum where people note that the GPT-5 API is slower than ChatGPT itself. I don’t think this is Pickaxe-specific.