Pickaxe cannot properly call another Pickaxe via action

Let’s say I have two Pickaxes: one is the Orchestration Layer, and the other is the main Pickaxe.

Let’s say the main Pickaxe is a bot responsible for introducing the company. This Pickaxe has an attached .txt file, and within that file is all the information about the company.

The orchestration layer pickaxe does not have any attached knowledge base.

Now, I am planning on adding more Pickaxes and attaching them to the Orchestration Layer, but for now, one main Pickaxe is enough for the sake of simplicity.

What I want is to have a conversation with the Orchestration Layer Pickaxe where, depending on the user input, the proper action is taken:

  • If the user query is related to the company, return information about the company (by interacting with the main Pickaxe that has the knowledge base).
  • If the user query is a greeting, greet back.
  • If the user query is neither related to the company nor a greeting, simply return “INVALID”.
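As a conceptual sketch (this is not Pickaxe’s actual API — the function names and the keyword checks are hypothetical stand-ins for decisions the LLM would make), the intended routing could look like this:

```python
# Conceptual sketch of the intended orchestration routing.
# is_greeting / is_company_question are toy heuristics standing in for
# the LLM's judgment; call_main_pickaxe stands in for the "Your Pickaxe" action.

def is_greeting(query: str) -> bool:
    # Hypothetical greeting check, for illustration only.
    return query.strip().lower() in {"hi", "hello", "hey"}

def is_company_question(query: str) -> bool:
    # Hypothetical relevance check; in Pickaxe this decision is made by the model.
    return "company" in query.lower()

def call_main_pickaxe(query: str) -> str:
    # Stand-in for forwarding the query to the main Pickaxe,
    # which answers only from its attached knowledge-base file.
    return f"<answer from main Pickaxe for: {query}>"

def route(query: str) -> str:
    # Company questions are delegated; greetings are answered directly;
    # everything else returns "INVALID".
    if is_company_question(query):
        return call_main_pickaxe(query)
    if is_greeting(query):
        return "Hello! How can I help you today?"
    return "INVALID"
```

The point of the sketch is that the orchestration layer itself never holds company knowledge; it only decides when to delegate.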

The issue I am facing is that this interaction is not working. I tried to connect the main Pickaxe to the Orchestration Layer Pickaxe via “Your Pickaxe” in an action, but no matter how I prompt it, it does not respond properly: it does not return company information, and it often even makes up information. This is despite my explicitly telling the Orchestration Layer Pickaxe that it needs to refer to the main Pickaxe for any questions regarding the company, and clearly telling the main Pickaxe that it is only allowed to use information from the attached file in its knowledge base.

Furthermore, even though I enabled thinking, and I can see that the bot is thinking, the thoughts are not shown.

I am open to providing more information about my current build if needed.

Thanks!

Hi @lumora_support, this should work as expected. One Pickaxe can trigger another and pull responses from it.

If it’s not behaving that way, it’s usually tied to the prompt setup or whether the Action is properly enabled. Trigger prompts are different from main Pickaxe prompts. I’m happy to take a closer look and help troubleshoot.

Please share the links to both Pickaxes and the Studio here, or feel free to email them to us at info@pickaxe.co.

Hi @abhi

Here is the studio URL: Pickaxe

Here is the Orchestration Agent: TESTING GROUND (OLD)

Here is the “Main” Agent: TESTING GROUND (OLD)

This is the proof of concept I wanted to create to ensure that a multi-agent flow works in Pickaxe. The Orchestration Agent’s only job is to determine when to do what (call the main agent or not). In this case, my Orchestration Agent’s task is to call the “Blue” agent if the word “Blue”, in any capitalization, is input by the user. If the user types “Red” in any capitalization, return “RED”; otherwise, return “INVALID”.

The “Blue” Agent’s only job is to return “1”, no matter what the input is.

When I was using the orchestration agent, the “Red” and “otherwise” cases worked, but the “Blue” case did not work as I intended. The desired result is: the user types “blue” → the Orchestration agent calls the “Blue” agent → the “Blue” agent returns “1” → “1” is returned to the user.

Thanks!

Hi, I spent some time reviewing the full setup and I see what’s going on.

The main issue is the prompt on the Blue bot. Right now it says “Return ONLY ‘1’ and nothing else,” but it doesn’t clearly define when the bot should do this. Because of that, when the orchestration agent sends a request, the action does trigger, but the Blue bot doesn’t generate a response since it isn’t sure what behavior is expected.

You can see this in Message Insights. The action is firing correctly, but no output is produced because the prompt is unclear.

I’d recommend simplifying the rule and making it explicit. For example:

“When the user types blue, reply with five blue objects, such as sky, ocean, blueberries, jeans, and sapphires.”

State this rule once in a clear, direct way. Avoid repeating the same instruction in different active or passive forms, as that can create conflicts.

After updating the prompt, try again. The flow should work much more predictably. If you’d like, I’m happy to review it again once you’ve made the change.

Hi @abhi

Thank you for your response.

I have not changed anything, but it just fixed itself. I am not sure whether you changed anything in the backend on your side, but either way, the proper output is now returned.

Unfortunately, I have encountered another issue with the multi-agent system, but this time it is related to the knowledge base.

Here is the flow:

  • There are 2 agents, “Name” and “Bob”
  • Agent “Name” is an orchestration layer, and agent “Bob” is the main agent
  • Agent “Name” does not have a knowledge base, and agent “Bob” has a knowledge base which includes information about Bob
  • The intended flow is: the user asks for information about Bob in Name → Name calls Bob → Bob searches its own knowledge base → Bob returns information about itself from the knowledge base → the information about Bob is returned to Name → Name returns the output to the user
  • The issue I am currently facing is that although Name successfully calls Bob, Bob is not able to access its own knowledge base to get the information about itself. However, if I call Bob directly and ask the same question about Bob, it does retrieve information from the knowledge base and gives me the correct result.

Here is the studio: Pickaxe

Here is the “Name” Pickaxe: CMPUS TEAM TESTING GROUND MAIN)

Here is the “Bob” Pickaxe: CMPUS TEAM TESTING GROUND MAIN)

If you need any more information, please let me know. Thanks!

Hi,

I tried replicating this setup on my end and was able to get it working by slightly adjusting both the prompts and the Knowledge Base setup.

Engineering is aware of the minor issue in which Bob’s KB isn’t showing in the Message Insights modal when Bob is called from Name in a chain! They are working on a fix :construction_worker_man:


But based on a bit of experimentation, here are a couple of things that may help:

  • Knowledge Base files live at the Studio level, so they can be reused across Pickaxes in the same Studio. If it’s critical that the orchestration flow always pulls accurate information from Bob’s KB, I recommend uploading that file directly into the Studio and attaching it to the Pickaxe that needs it most. In this case, you can attach a file to “Name” Pickaxe.

  • As a quick validation, I also copied a short but important section of the KB text directly into the Prompt box of the second Pickaxe, and that worked consistently. This is useful when certain facts must always be available during an action call.

  • Model behavior can vary. Hallucination is a known issue across all LLMs, especially in multi-agent flows where KB signals may weaken. To reduce this, it helps to either:

    • Add the most important KB content directly into Bob’s prompt, or

    • Move that content into the main Pickaxe Knowledge Base if multiple agents depend on it.

Also, if you ever want to confirm whether the Action is actually firing, you can always check this by clicking Message Analytics and reviewing the Action logs. That’s usually the fastest way to see whether the orchestration step triggered correctly or not.


I really like the way you’re testing this. Multi-agent setups do work well in Pickaxe, but they usually need a bit of fine-tuning since every use case is different.

Hi @abhi

Thank you for your response.

However, I believe your solution to this issue is only temporary, as you just put the information from the knowledge base into the system prompt, which defeats the entire purpose of having a knowledge base.

The reasons why I did not want to connect the knowledge base to the orchestration layer agent, attached the knowledge base only to the sub-agent, and did not include any information from the knowledge base in the sub-agent’s prompt, are:

  1. I want to test whether the sub-agent with the knowledge base is able to query its own knowledge base and retrieve the information requested by the user via the orchestration layer agent.
  2. If I have multiple documents and multiple knowledge bases, each with long texts, I do not think putting all of them in the system prompt is a good way to design a smart and efficient multi-agent system.
  3. Often, the information the user needs is only a small subset of the knowledge base. Since the user only has “access” to the orchestration layer agent (the user interacts with the orchestration layer chatbot), it is preferable to query only that small subset of information from the knowledge base and return it to the user; and if memory is enabled, only that small subset is saved in the orchestration layer’s memory, instead of the entire knowledge base.
  4. The job of the orchestration layer is to know When to call the sub-agent, not What the sub-agent can do or How it will do it. If I connect the sub-agent’s knowledge base to the orchestration layer, that defeats the purpose of having the sub-agent.

I hope this issue is fixed soon, since a sub-agent not being able to access its own knowledge base is a very serious issue.

Thanks for explaining and adding more context. I’ve shared this with the engineering team, and they’re working on it.
