I’m brand new to all of this (I am a social work professor), and I’m creating a Pickaxe to help women with children who are trying to leave an abusive spouse. It is educational only (lots of legal language included), but women can use it to learn what to document as things happen, find local resources (I use 211.org), understand domestic violence dynamics, etc. The prompt gets a little long and, I think, complicated because of the safety parameters: it detects, as best it can, any safety concerns and tells users to contact 911 or 988 (mental health). I use Manus to help me develop the prompts.
What is happening is that it dumps information when users ask. It also sometimes gets stuck in a loop and becomes a little pushy when they want to change subjects and/or have already answered the question.
Any suggestions would be greatly appreciated!
Scottye
I’m really glad you’re building something so meaningful. I’ve built a few healthcare tools myself, and during my master’s program in health communication I spent a lot of time thinking about how people understand safety messages. I can already see how much thought you’re putting into this, and it truly matters.
One thing that helps a lot with safety prompts is being very clear about where your users are located. AI is trained on global data, so if someone from India or the UK happens to find your Pickaxe, “call 911” may not make sense to them. If your audience is US-based, you can define that right at the top of your prompt. It helps the model give the right crisis guidance without confusing anyone.
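For example, a location line near the top of the prompt might look like this (just a sketch; adapt the wording to your own tool):

```
Audience: All users are in the United States. For crisis guidance, refer
only to US resources: 911 for immediate danger, 988 for mental health
crises, and 211.org for local support services.
```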
A few other things that usually help with the issues you’re seeing:
1. Break the prompt into smaller sections
Long prompts with many safety rules can cause the chatbot to feel rigid or get stuck repeating itself. Breaking the prompt into clean blocks like “Tone,” “Safety rules,” “When the user changes topics,” and “How to respond” makes the model behave more consistently.
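A sectioned prompt skeleton might look something like this. This is only a rough illustration, not Pickaxe-specific syntax; the headings and wording are examples to adapt, and your actual safety rules should stay as strong as you wrote them:

```
## Tone
Warm, calm, and trauma-informed. Keep answers short and ask before
going into more depth.

## Safety rules
If the user describes immediate danger, tell them to call 911.
If the user describes a mental health crisis, point them to 988.

## When the user changes topics
Follow their lead. Do not repeat earlier questions.

## How to respond
Answer only what was asked. Offer one next step at a time rather than
a full information dump.
```

Keeping each rule in its own labeled block makes it easier to spot which instruction is causing looping or pushy behavior, since you can adjust one section at a time.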
2. Add a gentle rule about not being pushy
You can add something simple like: “If the user wants to change subjects, allow it calmly and respond without repeating earlier questions.” This usually stops the looping behavior.
3. Reduce max output tokens
If it’s dumping too much information, lowering the max output in the Configure tab can encourage shorter, more supportive replies.
4. Try a different model
Some models handle sensitive topics more naturally. GPT-5 Chat or Claude Haiku often feel softer and avoid repeating themselves, while reasoning models can overthink things and get stuck.
5. Use AI Builder to simplify the safety instructions
You can ask AI Builder to clean up the safety section, rewrite it in clearer steps, or make it more conversational while still keeping your crisis-response rules strong.
I’ve found it really helpful to chat with the AI Builder in plain, toddler-level language. When I keep things simple, it builds exactly what I need. You can learn more about AI Builder here:
You’re doing something incredibly important here, and it’s inspiring to see someone from outside the tech world stepping in to build tools that can truly help people.