Trying to use the GPT 1.5 image generator, but it fails every time. Gemini 3 Flash / Pro simply refuses to activate it. What could be causing this? Is it the model or the instructions, and am I missing something? Note that I am explicitly telling it to always use the tool.
Hi @hurmuli,
I spent some time reviewing your Pickaxe bot, and this looks like a prompting issue, not a model or platform bug.
I noticed you’re using both a role prompt and a model reminder, and they overlap.
The model reminder includes things like “make sure the user is happy”. Instructions like that sound harmless, but they’re vague and subjective. When phrasing like that sits next to strict tool instructions, models often get confused and default to the safest behavior, which is not triggering the action.
Models can give inconsistent results when they receive overlapping or conflicting guidance across the role prompt, model reminder, and action trigger. Even if you say “always use the tool,” the model still tries to balance everything it’s told.
In most cases, very simple instructions work best. Something like:
“Generate an image if the user requests one.”
That’s usually enough. There’s no need to over-engineer it or repeat the same instruction in the role prompt, the model reminder, and every other place.
What I’d suggest:
- Remove the model reminder if you don’t really need it
- Simplify the role prompt to the basics
- Keep instructions clear, direct, and non-overlapping
Clean prompts almost always lead to more reliable action triggering across models.
I simplified the prompt on the bot’s end and used the default “trigger prompt”, but after uploading a photo and asking the model to “Make it better”, it simply told me what it edited and didn’t return an image or trigger the action. Is there still something I am doing wrong?
Note I tried different main models and nothing changed.
Thanks for the update, that helps!
Could you share the Pickaxe link you’re testing with? I just want to make sure I’m looking at the exact same setup you’re seeing on your end.
If you’d rather not post it publicly here, you can also email the link to the Pickaxe support address (info@pickaxe.co) and mention this thread. Either option works.
Once I can see the exact Pickaxe, I can check whether the model is actually being given a clear signal to regenerate an image versus just describing edits.
Thank you for taking a look. I sent you an email.
For anyone else struggling with this, here’s a quick gif of GPT 1.5 working with this setup (Gemini 3 Pro) from start to finish!

Thanks for the reply, Nathaniel. Here is a completely new + empty Pickaxe bot with only GPT 1.5 activated, on Gemini 3 Flash.
Note - I am trying to edit an image I uploaded (2.3 MB, so relatively small in file size, though it’s a full-size picture). The action was not triggered, and the action’s default prompt was not edited in any way.
I’d also like to note that this (the old prompt + action) was working just fine for users, but now nothing works. One user even edited a bigger image with a person in it.
Editing images can be a more complex action, and 2.0 Flash might not be smart enough to accomplish it. We know it to be unstable when calling anything beyond very basic actions.
Here’s a gif of the process working right out of the box with Gemini 3 Pro! You should try a smarter model.

Heads up, I’m struggling with this as well, as are many of our users.
I think the problem is still related to reference images - I could be wrong.
It works with a simple prompt, but if you add a reference image, it doesn’t pass the image to the image model the right way, and it returns either an empty response or an image that is well below the quality you’d get directly on one of those platforms.
Hi @stephenbdiaz ,
I use a reference image in the above example and it seems to work ok. Can you help me understand what you’re doing differently?
Oh you do! I see that now. Dang this has me stumped!
Hmm, I’m stumped; it’s got to be the system prompt then, I guess. In your example you put nothing in the prompt area, right?
I took my studio usage JSON and analyzed it with Gemini, and this is what it said:
Based on a detailed analysis of the studio-activity.json log, the tool is currently experiencing a critical failure rate. While users are engaging with the tool enthusiastically, the system is failing to deliver results in the vast majority of recorded interactions.
Here is the breakdown of the usage patterns, specifically regarding the “Working vs. Not Working” dynamic.
1. The “Ghost Image” Phenomenon (High Frequency)
The most common and damaging pattern is a disconnect between what the Assistant thinks it did and what the User sees.
- The Pattern: The Assistant runs the tool, says “Here is your lifestyle image,” and often provides a link.
- The Reality: The user immediately responds with “nothing is shown,” “where is the image?”, “I don’t see it,” or “link is broken.”
- Impact: This causes high user frustration because the AI creates a false positive, acting as if the task is complete when it failed.
2. The “Upload Loop” (Medium Frequency)
There is a persistent issue where the Assistant fails to recognize that an image has been uploaded.
- The Pattern: Assistant asks for an image → User uploads it → Assistant says “I haven’t received it, please upload” → User says “I just did” → Cycle repeats.
- Root Cause: The system likely isn’t passing the file attachment correctly to the vision model or the tool function.
- Example: User mckatiemac@gmail.com uploaded their photo 4 times, but the assistant kept claiming it wasn’t there.
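As a side note for anyone doing a similar log review by hand: the two patterns above can be counted with a short script. This is only a sketch; the turn-based structure below (a list of `role`/`text` entries) is an assumption for illustration, not the actual `studio-activity.json` schema, and the phrase lists come from the complaints quoted above.

```python
# Hypothetical sketch: assumes the log is a list of {"role", "text"} turns.
# The real studio-activity.json schema may differ.
sample_log = [
    {"role": "assistant", "text": "Here is your lifestyle image: [link]"},
    {"role": "user", "text": "Nothing is shown"},
    {"role": "assistant", "text": "I haven't received it, please upload the image."},
    {"role": "user", "text": "I just did"},
    {"role": "assistant", "text": "I haven't received it, please upload the image."},
]

# User complaints that suggest a "ghost image" (AI claims success, user sees nothing).
GHOST_IMAGE_PHRASES = ("nothing is shown", "where is the image",
                       "i don't see it", "link is broken")
# Assistant replies that suggest an "upload loop" (attachment never registered).
UPLOAD_LOOP_PHRASES = ("haven't received", "please upload")

def count_patterns(log):
    """Tally user missing-image complaints and assistant re-upload requests."""
    ghost = sum(
        1 for turn in log
        if turn["role"] == "user"
        and any(p in turn["text"].lower() for p in GHOST_IMAGE_PHRASES)
    )
    loops = sum(
        1 for turn in log
        if turn["role"] == "assistant"
        and any(p in turn["text"].lower() for p in UPLOAD_LOOP_PHRASES)
    )
    return {"ghost_image": ghost, "upload_loop": loops}

print(count_patterns(sample_log))  # → {'ghost_image': 1, 'upload_loop': 2}
```

Rough keyword matching like this won’t catch every failure, but it’s enough to tell whether the “vast majority” claim from the AI summary holds up against the raw log.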
–
I’m going to try to delete my system prompt and see if that gets it to work, basically copying your gif
Okay, the system now forces you to set a “role”, so I can’t copy your gif exactly, but here’s a Zoom clip of me trying everything like yours, @nathaniel
Abhi! Thank you for this response. I often struggle to get mine to work consistently too, and I’m excited to try simplifying the prompts. Thanks again for always providing such great feedback!
This was tested on Gemini 3 Pro and Gemini 3 Flash. Both refused to trigger the action. Other models also failed to trigger it and didn’t return the new image.
Hi @hurmuli, did you catch the gifs I added above? They show one-shot image editing working with the setup you mention. Are you doing anything different than is shown there that could have caused different results?
Yes I saw them and tested with exact same setup (new, empty bot), as I explained in my earlier message.
EDIT. Feel free to send me an email and we can hop on a Google Meet if you want me to demo it to you live.
Hi @hurmuli
While I can’t jump on at the moment, please record a video and upload it here! It’d be very helpful in getting to the bottom of the issue!
stephenbdiaz pretty much showed what happens in his video, so I’m not sure how much another video from me would bring to the table, since the same thing is happening to me.
Hey folks! We’ve pushed some changes in the last few days - when you get a chance, please retry and let me know if you’re still not getting an image to appear!
Just tried it, and no image appeared when I asked it to generate a photo of a man on a PC.
After recreating the bot it seemed to work perfectly now. No idea why the earlier bot didn’t work.
EDIT. After editing the new bot and adding an intro + icebreakers, the bot at first didn’t print the AI image, but after I told it to try again, it did print it. Note that the GPT 1.5 action did trigger both times.
