Overview
This workflow uses the LangChain Code node to implement a fully customizable conversational agent. It is ideal for users who need granular control over their agent's prompts and who want to avoid the extra token consumption that n8n's built-in Conversation Agent reserves for tool-calling functionality.
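Conceptually, the chain assembled inside the LangChain Code node comes down to a prompt template, a chat model, and a sliding-window memory wired into a conversation chain. The sketch below shows a standalone equivalent in LangChain JS (written as TypeScript); the package versions, the gemini-1.5-flash model name, and the exact wiring inside the workflow's nodes are assumptions, not the workflow's literal code.

```typescript
// Hypothetical standalone equivalent of the chain built in the LangChain Code node.
// Assumes @langchain/core, langchain, and @langchain/google-genai are installed.
import { PromptTemplate } from "@langchain/core/prompts";
import { ConversationChain } from "langchain/chains";
import { BufferWindowMemory } from "langchain/memory";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

async function main() {
  // Gemini chat model; any other LangChain chat model could be swapped in here.
  const llm = new ChatGoogleGenerativeAI({
    model: "gemini-1.5-flash",          // assumed model name
    apiKey: process.env.GOOGLE_API_KEY, // Gemini credential
  });

  // Fully custom prompt -- no reserved tool-calling scaffolding, so fewer wasted tokens.
  const prompt = PromptTemplate.fromTemplate(
    `You are a concise, friendly assistant.

Conversation so far:
{chat_history}

Human: {input}
AI:`
  );

  // Keep only the last few exchanges to bound token usage per request.
  const memory = new BufferWindowMemory({ k: 5, memoryKey: "chat_history" });

  const chain = new ConversationChain({ llm, prompt, memory });
  const result = await chain.invoke({ input: "Hello! What can you do?" });
  console.log(result.response);
}

main();
```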
Setup Instructions
Configure Gemini Credentials: Set up your Google Gemini API key (get an API key here if needed). Alternatively, you may use other AI provider nodes.
Interaction Methods:
– Test directly in the workflow editor using the “Chat” button.
– Activate the workflow and access the chat interface via the URL provided by the When Chat Message Received node.
Customization Options
Interface Settings: Configure chat UI elements (e.g., title) in the When Chat Message Received node.
Prompt Engineering:
Define the agent's personality and conversation structure in the Construct & Execute LLM Prompt node's template variable (an example template follows this list).
⚠️ The template must preserve the {chat_history} and {input} placeholders for LangChain to operate correctly.
Model Selection: Swap language models via the language model input of the Construct & Execute LLM Prompt node.
Memory Control: Adjust the conversation history length in the Store Conversation History node (see the memory sketch after this list).
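For reference, a template for the Construct & Execute LLM Prompt node's template variable might look like the following. The support-agent persona is purely illustrative; only the {chat_history} and {input} placeholders are mandatory.

```typescript
import { PromptTemplate } from "@langchain/core/prompts";

// Illustrative template text -- persona and formatting are free-form,
// but {chat_history} and {input} must remain in place.
const template = `You are a support agent for a small online shop. Answer briefly and politely.

Previous conversation:
{chat_history}

Customer: {input}
Agent:`;

// fromTemplate infers the input variables from the placeholders;
// both chat_history and input should appear in prompt.inputVariables.
const prompt = PromptTemplate.fromTemplate(template);
console.log(prompt.inputVariables);
```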
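Likewise, conversation history length is essentially a window size on the memory object. Assuming the Store Conversation History node wraps something like LangChain's window buffer memory (the actual node configuration may differ), the relevant knob looks like this:

```typescript
import { BufferWindowMemory } from "langchain/memory";

// k controls how many past exchanges are kept and re-sent with every prompt.
// Larger k = more context but more tokens per request; smaller k = cheaper, shorter memory.
const memory = new BufferWindowMemory({
  k: 10,                     // assumed value; tune to your token budget
  memoryKey: "chat_history", // must match the {chat_history} placeholder in the template
});
```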
Requirements:
⚠️ This workflow uses the LangChain Code node, which only works on self-hosted n8n.
(Refer to LangChain Code node docs)