This workflow implements a message-batching buffer using Redis for temporary storage and GPT-4 for consolidated response generation. Incoming user messages are collected in a Redis list; once a configurable "inactivity" window elapses or a batch size threshold is reached, all buffered messages are sent to GPT-4 in a single prompt. The system then clears the buffer and returns the consolidated reply.
Key Features
– Redis-backed buffer to queue incoming messages per user session
– Dynamic wait time (shorter for long messages, longer for short messages)
– Batch trigger on inactivity timeout or minimum message count
– GPT-4 consolidation: merges all buffered messages into one coherent response
Setup Instructions
Map Input
– Rename node to "Extract Session & Message"
– Assign context_id and message from webhook or manual trigger
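A minimal sketch of what the "Extract Session & Message" Code node could return, assuming the webhook delivers the session id and text in its body (the sessionId/text field names are illustrative, not part of the original workflow):

```javascript
// Hypothetical mapping for the "Extract Session & Message" node.
// Field names under $json.body are assumptions about the incoming webhook payload.
return [{
  json: {
    context_id: $json.body?.sessionId ?? 'manual-test',
    message: $json.body?.text ?? $json.message,
  },
}];
```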
Compute Wait Time
– Rename node to "Determine Inactivity Timeout"
– JS Code:

```javascript
// Shorter messages (< 5 words) get a longer inactivity window; longer messages a shorter one.
const wordCount = $json.message.split(' ').filter(w => w).length;
return [{
  json: {
    context_id: $json.context_id,
    message: $json.message,
    waitSeconds: wordCount < 5 ? 45 : 30,
  },
}];
```

Buffer Message in Redis
- Push into list buffer_in:{{$json.context_id}}
- INCR key buffer_count:{{$json.context_id}}, then set its TTL to {{$json.waitSeconds + 60}} seconds
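For reference, the two Redis operations above roughly correspond to this sketch with ioredis (the client setup and helper name are illustrative; the workflow itself uses n8n Redis nodes):

```javascript
const Redis = require('ioredis');
const redis = new Redis(); // connection details are illustrative

// Queue the message and bump the per-session counter, mirroring the Redis nodes above.
async function bufferMessage(contextId, message, waitSeconds) {
  await redis.rpush(`buffer_in:${contextId}`, message);              // RPUSH onto the session buffer
  await redis.incr(`buffer_count:${contextId}`);                     // INCR the batch counter
  await redis.expire(`buffer_count:${contextId}`, waitSeconds + 60); // TTL slightly longer than the wait
}
```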
Mark Waiting State
- GET waiting_reply:{{$json.context_id}} → if null, SET it to true with TTL {{$json.waitSeconds}}
- Rename nodes to "Check Waiting Flag" / "Set Waiting Flag"
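The GET-then-SET pair can also be expressed as a single atomic SET with the NX and EX options, which is what this sketch assumes (again using an illustrative ioredis client):

```javascript
const Redis = require('ioredis');
const redis = new Redis(); // connection details are illustrative

// Set the waiting flag only if no other execution holds it; TTL equals the inactivity window.
async function markWaiting(contextId, waitSeconds) {
  const acquired = await redis.set(`waiting_reply:${contextId}`, 'true', 'EX', waitSeconds, 'NX');
  return acquired === 'OK'; // true means this execution should proceed to the Wait node
}
```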
Wait for Inactivity
- Wait node: pause for {{$json.waitSeconds}} seconds
Check Batch Trigger
- GET keys:
  - last_seen:{{$json.context_id}}
  - buffer_count:{{$json.context_id}}
- IF both:
  - buffer_count >= 1
  - (now - last_seen) >= waitSeconds * 1000
- Rename node to "Trigger Batch on Inactivity or Count"
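A sketch of the trigger condition, assuming last_seen:{{$json.context_id}} holds a millisecond timestamp that is refreshed whenever a message is buffered (how that key is written is not shown in the steps above):

```javascript
const Redis = require('ioredis');
const redis = new Redis(); // connection details are illustrative

// Fire the batch only when at least one message is buffered and the session
// has been idle for the full inactivity window.
async function shouldTriggerBatch(contextId, waitSeconds) {
  const lastSeen = Number(await redis.get(`last_seen:${contextId}`));
  const count = Number(await redis.get(`buffer_count:${contextId}`));
  return count >= 1 && Date.now() - lastSeen >= waitSeconds * 1000;
}
```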
Fetch & Consolidate
– GET entire list buffer_in:{{$json.context_id}}
– Information Extractor → rename to "Consolidate Messages"
– System prompt: "You are an expert at merging multiple messages into one clear paragraph without duplicates."
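A rough sketch of what this step produces: read the entire buffered list and wrap it in the consolidation prompt (the helper name is illustrative; the system prompt text matches the node configuration above):

```javascript
const Redis = require('ioredis');
const redis = new Redis(); // connection details are illustrative

// Build the chat messages that the consolidation step hands to GPT-4.
async function buildConsolidationPrompt(contextId) {
  const buffered = await redis.lrange(`buffer_in:${contextId}`, 0, -1); // entire list
  return [
    { role: 'system', content: 'You are an expert at merging multiple messages into one clear paragraph without duplicates.' },
    { role: 'user', content: buffered.join('\n') },
  ];
}
```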
GPT-4 Chat
– OpenAI Chat Model (GPT-4)
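Outside of n8n, the same call could look roughly like this with the official openai package (the model name mirrors the node setting; error handling is omitted):

```javascript
const OpenAI = require('openai');
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Send the consolidated prompt to GPT-4 and return the single merged reply.
async function consolidate(promptMessages) {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: promptMessages,
  });
  return completion.choices[0].message.content;
}
```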
Cleanup & Respond
– Delete Redis keys:
  – buffer_in:{{$json.context_id}}
  – waiting_reply:{{$json.context_id}}
  – buffer_count:{{$json.context_id}}
– Return the consolidated reply to the user
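The cleanup step amounts to deleting the three per-session keys in one call, sketched here with the same illustrative ioredis client:

```javascript
const Redis = require('ioredis');
const redis = new Redis(); // connection details are illustrative

// Remove all per-session state so the next message starts a fresh batch.
async function clearBuffer(contextId) {
  await redis.del(
    `buffer_in:${contextId}`,
    `waiting_reply:${contextId}`,
    `buffer_count:${contextId}`,
  );
}
```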
Customization Guidance
– Batch Size Trigger: Add an additional IF to fire when buffer_count reaches your desired batch size (see the sketch after this list).
– Timeout Policy: Adjust the word-count thresholds or replace with character-count logic.
– Multi-Channel Support: Change the trigger from a manual test node to any webhook (e.g., chat, SMS, email).
– Error Handling: Insert a fallback branch to catch Redis timeouts or OpenAI API errors and notify users.
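Sketch for the batch-size trigger mentioned above, assuming an illustrative BATCH_SIZE constant that is not part of the original workflow:

```javascript
const Redis = require('ioredis');
const redis = new Redis(); // connection details are illustrative

const BATCH_SIZE = 5; // assumed threshold; tune to your traffic

// Fire the batch as soon as the counter reaches the threshold,
// even if the inactivity window has not elapsed yet.
async function shouldTriggerOnCount(contextId) {
  const count = Number(await redis.get(`buffer_count:${contextId}`));
  return count >= BATCH_SIZE;
}
```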