Building agentic AI workflows often requires multiple moving parts: memory management, document retrieval, vector similarity, and orchestration. Until now, these pieces had to be custom-wired.

But with the new native n8n nodes for MongoDB Atlas, that overhead drops dramatically. With just a few clicks you can:

– Store and recall long-term memory from MongoDB
– Query vector embeddings stored in Atlas Vector Search
– Use these results in your LLM chains and automation logic

In this example, we present an ingestion flow and an AI Agent flow focused on travel planning. The points of interest we want the agent to know about are ingested into the vector store, and the AI Agent uses the vector store tool to retrieve relevant context about them when needed.

Prerequisites:
– A MongoDB Atlas project and cluster
– A valid OpenAI API key for embeddings (another supported provider can be used)
– A Gemini API key for the LLM (another supported provider can be used)

How it works:
There are two main flows.

The first is the ingestion flow:
– It receives a document from a webhook (see the sample payload below) and uses the MongoDB Atlas Vector Store node to embed the document's title and description and insert it into the points_of_interest collection.
– Embeddings are stored in a field named embedding.
– The example uses OpenAI embeddings, but any supported embedding provider can be used.
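
For reference, here is a minimal sketch of a payload you might POST to the ingestion webhook. The title and description fields match what this example embeds; the values and any extra fields are illustrative, so adjust the names to whatever your webhook node actually maps:

```json
// Hypothetical point of interest; field names must match your webhook mapping.
{
  "title": "Louvre Museum",
  "description": "World-famous art museum on the Right Bank of the Seine in Paris, home of the Mona Lisa."
}
```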

The second flow is an AI Agent node with chat memory stored in MongoDB Atlas and a Vector Search node as a tool:
– Chat Message Trigger: each chat message triggers the agent, and the conversation history is stored and recalled through the MongoDB Chat Memory node.
– When the agent needs data, such as a location search or details about a point of interest, it calls the Vector Search tool (see the sketch after this list).
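
Conceptually, the Vector Search tool's retrieval boils down to an Atlas $vectorSearch aggregation stage like the sketch below. The queryVector is the embedding of the user's question; numCandidates and limit are illustrative values, not settings taken from the workflow:

```json
{
  "$vectorSearch": {
    "index": "vector_index",
    "path": "embedding",
    "queryVector": [/* 1536-dimensional embedding of the user's question */],
    "numCandidates": 100,
    "limit": 4
  }
}
```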

The Vector Search tool uses an Atlas Vector Search index created on the points_of_interest collection:
```json
// index name: "vector_index"
// If you change the embedding provider, make sure numDimensions
// matches the dimensionality of the model's output.
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```
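
If you prefer creating the index from the shell instead of the Atlas UI, a sketch using mongosh's createSearchIndex helper could look like this. The collection and index names match this example; everything else is the standard vector index definition, and you need a recent mongosh connected to an Atlas cluster:

```javascript
// Create the Atlas Vector Search index on the points_of_interest collection.
// Requires a mongosh version that supports createSearchIndex with the
// "vectorSearch" index type, run against your Atlas cluster.
db.points_of_interest.createSearchIndex(
  "vector_index",   // index name referenced by the n8n Vector Search node
  "vectorSearch",   // index type
  {
    fields: [
      {
        type: "vector",
        path: "embedding",
        numDimensions: 1536, // must match the embedding model's output size
        similarity: "cosine"
      }
    ]
  }
);
```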