Chat Pickaxe skipping Question 1 and starting from later steps

Hi everyone 👋

I’m working on a client project where we’re building a Chat Pickaxe that should guide users through a set of 9 questions in strict order. The goal is to validate each answer, move step by step, and finally produce a report plus a recommendation driven by the validated answers.

Here’s the challenge I keep running into:

  • Sometimes the bot skips Question 1 and jumps straight to Question 5.

  • Other times it repeats a question that’s already been answered.

  • This breaks the flow, and the experience stops feeling professional or human to the user.

What I’ve already tried:

  • Removed all Knowledge Base docs containing the questions (they now live only in the Instructions).

  • Cleaned Training Dialogue down to a minimal “start” example.

  • Added explicit rules in Instructions:

Always start with Question 1.
Move to Question 2 only after Question 1 is answered and validated.
Continue strictly 1→9.
Only repeat if the answer is unclear.
Generate the final report + next step recommendation only after all 9.

  • Raised the relevance cutoff and left only style/voice docs in the KB (so the bot speaks in the client’s voice).

Has anyone here managed to set up this kind of validated, step-by-step flow in Chat mode?

Really appreciate any insights 🙏


Hey @katobm, I get the pain. You want a strict 1-to-9 flow with validation, but the chat sometimes jumps ahead or repeats. Based on what you shared, you already pulled the questions out of the KB and tightened training. Here is what consistently fixes it.

Why this happens in plain English

  • LLMs try to be helpful and anticipate later steps if they can see the whole plan.
  • Any hint of the full checklist, examples that include later questions, or multi-goal instructions can cause it to skip.
  • Without external state, the model can lose track of where it is and re-ask.

Minimal fix that works inside Chat mode

Use a single-step gate and a simple internal counter. Tell it to ask exactly one question per message and wait. Advance only after validation. If the user answers something else, park it and return to the current question.

Template you can copy, paste, and update (XML):

<SystemPrompt>
  <Identity>
    <Name>Step-by-Step Interviewer</Name>
    <Purpose>Ask one question at a time in strict order and collect valid answers for all steps before generating any summary or report.</Purpose>
  </Identity>

  <Config>
    <TotalSteps>9</TotalSteps> <!-- Change to your total -->
    <StartStep>1</StartStep>
    <AllowCreativity>false</AllowCreativity>
  </Config>

  <Behavior>
    <Rule>Maintain an internal counter named current_step. Initialize it to the value of &lt;StartStep&gt;.</Rule>
    <Rule>Ask only the question for current_step. Do not reveal later questions. Do not generate any final output until all steps are valid.</Rule>
    <Rule>After the user replies, validate briefly. If valid, say "Noted." then increment current_step by 1 and ask the next question.</Rule>
    <Rule>If the answer is unclear or invalid, re-ask the same question with a short hint. Do not advance current_step.</Rule>
    <Rule>If the user asks for the final report early, say you will do it after the last step is complete, then re-ask the current question.</Rule>
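    <Rule>If the user answers something other than the current question, acknowledge it in one short sentence, keep it for later, and re-ask the current question without changing current_step.</Rule> <!-- Covers the "park it and return" behavior; adjust the wording to your flow -->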
  </Behavior>

  <Questions>
    <!-- Provide only the current step's question at runtime. Store the others outside visible context if possible. -->
    <!-- Example definitions for your own tracking. Do not show these to the user -->
    <Question step="1">
      <Prompt>Q1 text goes here</Prompt>
      <Validation>Describe what makes an answer valid for step 1</Validation>
    </Question>
    <Question step="2">
      <Prompt>Q2 text goes here</Prompt>
      <Validation>Describe what makes an answer valid for step 2</Validation>
    </Question>
    <!-- Continue up to TotalSteps -->
  </Questions>

  <OutputPolicy>
    <OnAllStepsValid>Only then generate the final deliverable.</OnAllStepsValid>
  </OutputPolicy>
</SystemPrompt>
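
If it helps to see the logic outside prompt form, here is a minimal Python sketch of the state machine those rules describe. It is illustrative only: Pickaxe does not run code for you, the question texts and validators are placeholders, and the "park it" behavior is left out for brevity.

# Illustrative gate logic: one question at a time, advance only on a valid answer.
QUESTIONS = {
    1: ("Q1 text goes here", lambda a: bool(a.strip())),
    2: ("Q2 text goes here", lambda a: bool(a.strip())),
    # ... continue up to step 9, each with its own validator
}

def interview():
    current_step = 1                        # the internal counter
    answers = {}
    while current_step <= len(QUESTIONS):
        prompt, is_valid = QUESTIONS[current_step]
        answer = input(f"Q{current_step}: {prompt}\n> ")
        if is_valid(answer):
            answers[current_step] = answer
            print("Noted.")
            current_step += 1               # advance only after validation
        else:
            print("Could you clarify?")     # re-ask; the counter does not move
    return answers                          # only now is the report generated

if __name__ == "__main__":
    print("All steps valid. Report inputs:", interview())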

Small tuning that helps

  • Keep the Training Dialogue to a single tiny example that covers only Q1 (see the sketch after this list).
  • Avoid pasting the full list of questions where the model can read it in one go. That is what triggers jumping.
  • If your platform exposes temperature, lower it for more deterministic phrasing.
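
For reference, that single Q1-only training example can be as small as this (the labels and wording are placeholders, not Pickaxe-specific syntax):

User: Hi
Bot: Welcome. Question 1 of 9: [Q1 text goes here]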

Common gotchas to double-check

  • Examples or KB snippets that accidentally reveal later questions.
  • Instructions that say collect answers and generate the report in the same breath. Split those into two phases (see the example after this list).
  • Overly long system text that mentions several steps together. Keep it terse.
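
On the second point, the split can be as simple as turning one merged instruction into two (wording is illustrative):

Instead of: "Ask the 9 questions and write the final report."
Write: "Phase 1: ask Questions 1 to 9 one at a time and validate each answer. Phase 2: only after Phase 1 is fully complete, generate the final report."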

If you share your current Instruction block and the exact wording for Q1 with its validation rule, I can tune the XML for you so it stops skipping and repeating.

-Ned


Wow Ned. Thank you so much for taking the time to provide this response. I really appreciate it. I will test and try 🙂
