Vector Database as a Big Data Analysis Tool for AI Agents
This series of workflows shows how to build big data analysis tools for production-ready AI agents using vector databases. The pipelines are adaptable to any image dataset, and therefore to many production use cases.
Uploading (image) datasets to Qdrant
1. The first pipeline uploads an image dataset to Qdrant.
2. The second pipeline computes cluster (class) centers and per-class threshold scores needed for anomaly detection (a sketch of both steps follows this list).
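As a rough illustration of what these two pipelines do outside of n8n, the sketch below uploads pre-computed image embeddings to a Qdrant collection and derives a center and threshold score for each class. The collection name, vector size, payload field, and the threshold rule (the minimum similarity of a class's own images to its center) are assumptions, not the exact workflow configuration.

```python
# A minimal sketch, not the exact workflow logic. Assumes embeddings were
# already generated for each image; names and the threshold rule are
# illustrative assumptions.
import os

import numpy as np
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

qdrant = QdrantClient(url=os.environ["QDRANT_URL"], api_key=os.environ["QDRANT_API_KEY"])
COLLECTION = "agricultural-crops"  # assumed collection name

# Pipeline 1: create the collection and upload image embeddings with their
# class (crop type) stored in the payload.
qdrant.create_collection(
    collection_name=COLLECTION,
    vectors_config=VectorParams(size=1024, distance=Distance.COSINE),  # size must match the embedding model
)

def upload_images(points: list[tuple[int, list[float], str]]) -> None:
    qdrant.upsert(
        collection_name=COLLECTION,
        points=[
            PointStruct(id=point_id, vector=vector, payload={"crop_name": label})
            for point_id, vector, label in points
        ],
    )

# Pipeline 2: per-class center = normalised mean vector; threshold = lowest
# cosine similarity of the class's own images to that center (assumed rule).
def class_center_and_threshold(vectors: list[list[float]]) -> tuple[np.ndarray, float]:
    mat = np.asarray(vectors, dtype=float)
    center = mat.mean(axis=0)
    center /= np.linalg.norm(center)
    sims = (mat / np.linalg.norm(mat, axis=1, keepdims=True)) @ center
    return center, float(sims.min())
```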
Anomaly Detection Tool
This is the third pipeline. It takes any image as input and uses the preparatory work done in Qdrant to detect whether the image is an anomaly relative to the uploaded dataset.
KNN (k nearest neighbours) Classification
1. The first pipeline uploads an image dataset to Qdrant.
2. The second is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant (see the sketch after this list).
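For orientation, here is a minimal sketch of the classification step outside of n8n: it queries the uploaded Qdrant collection for the k nearest neighbours of an image embedding and takes a majority vote over their labels. The collection name ("lands") and payload field ("label") are assumptions, and `vector` is an image embedding produced the same way as during upload.

```python
# A minimal KNN-classification sketch against the uploaded Qdrant collection;
# collection and payload field names are assumptions.
import os
from collections import Counter

from qdrant_client import QdrantClient

qdrant = QdrantClient(url=os.environ["QDRANT_URL"], api_key=os.environ["QDRANT_API_KEY"])

def knn_classify(vector: list[float], k: int = 10) -> str:
    hits = qdrant.query_points(
        collection_name="lands",   # assumed collection name
        query=vector,
        limit=k,
        with_payload=True,
    ).points
    # Majority vote over the labels of the k nearest neighbours.
    votes = Counter(hit.payload["label"] for hit in hits)
    return votes.most_common(1)[0][0]
```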
To recreate both tools
You’ll need to upload the crops and lands datasets from Kaggle to your own Google Cloud Storage bucket and set up the API credentials/connections for Qdrant Cloud (a Free Tier cluster is sufficient), the Voyage AI API, and Google Cloud Storage. A small connectivity check follows.
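If you want to verify the three connections outside of n8n first, a minimal check might look like the sketch below. The environment variable names and the bucket name are placeholders; the credentials are whatever you configured in Qdrant Cloud, Voyage AI, and Google Cloud.

```python
# A minimal connectivity check for the three external services, assuming
# credentials are supplied via environment variables (names are illustrative).
import os

import voyageai
from google.cloud import storage
from qdrant_client import QdrantClient

# Qdrant Cloud (Free Tier cluster works fine).
qdrant = QdrantClient(url=os.environ["QDRANT_URL"], api_key=os.environ["QDRANT_API_KEY"])
print(qdrant.get_collections())

# Voyage AI client for generating embeddings.
voyage = voyageai.Client(api_key=os.environ["VOYAGE_API_KEY"])

# Google Cloud Storage bucket holding the Kaggle images
# (uses GOOGLE_APPLICATION_CREDENTIALS for authentication).
gcs = storage.Client()
bucket = gcs.bucket("your-bucket-name")  # placeholder bucket name
print(sum(1 for _ in bucket.list_blobs()))
```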
Anomaly Detection Tool
This tool can be used directly to detect anomalous images (crops). It takes any image URL as input and returns a text message indicating whether the image is anomalous relative to the crop dataset stored in Qdrant.
An image URL is received via the Execute Workflow Trigger and converted into an embedding vector with the Voyage AI Embeddings API. The returned vector is used to query the Qdrant collection and determine whether the crop is known by comparing its similarity against the threshold score of each image class (crop type). If the image scores lower than every class threshold, it is considered an anomaly for the dataset.
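A minimal sketch of this flow outside of n8n is shown below. It assumes the class centers were stored in the collection as points flagged with an `is_center` payload field that also carries the per-class `threshold` score, and that a Voyage multimodal model embeds the image; the storage layout, field names, and model name are assumptions rather than the exact workflow configuration.

```python
# A minimal anomaly-check sketch; storage layout, payload fields, and the
# embedding model are assumptions.
import os
from io import BytesIO

import requests
import voyageai
from PIL import Image
from qdrant_client import QdrantClient
from qdrant_client.models import FieldCondition, Filter, MatchValue

qdrant = QdrantClient(url=os.environ["QDRANT_URL"], api_key=os.environ["QDRANT_API_KEY"])
voyage = voyageai.Client(api_key=os.environ["VOYAGE_API_KEY"])

def embed_image(image_url: str) -> list[float]:
    # Download the image and embed it with Voyage's multimodal embeddings
    # client (model name assumed; check the current Voyage AI docs).
    image = Image.open(BytesIO(requests.get(image_url, timeout=30).content))
    result = voyage.multimodal_embed(inputs=[[image]], model="voyage-multimodal-3")
    return result.embeddings[0]

def is_anomaly(image_url: str, collection: str = "agricultural-crops") -> str:
    vector = embed_image(image_url)
    # Score the input image against every stored class (crop type) center.
    centers = qdrant.query_points(
        collection_name=collection,
        query=vector,
        query_filter=Filter(
            must=[FieldCondition(key="is_center", match=MatchValue(value=True))]
        ),
        limit=100,
        with_payload=True,
    ).points
    # Lower than every class threshold => anomaly for this dataset.
    if all(hit.score < hit.payload["threshold"] for hit in centers):
        return "Anomalous: the image does not match any known crop class."
    return "Not anomalous: the image matches a known crop class."
```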