Build a Self-Improving Blog System in n8n (BlogClaw Implementation)
Your AI writes decent blog posts. Then you spend 23 revisions fixing them.
What if your system learned from those edits? What if it tracked every revision, detected patterns, and improved itself automatically?
That’s BlogClaw — a self-improving blog system that learns from your WordPress editing patterns and gets better with every post.
I built it in Python first. Today, I’m showing you how to build it in n8n so you can import it as a workflow and customize it for your blog.
What BlogClaw Does
BlogClaw watches how you edit AI-generated blog drafts in WordPress and learns from your patterns.
The learning loop:
- Daily Heartbeat (11 PM): Fetches WordPress revisions for posts published today
- Revision Analysis: Compares adjacent revisions to detect content expansions, structure changes, and voice refinements
- Pattern Detection: When the same issue appears 3+ times, it flags it as a pattern
- Weekly Analysis (Sun 9 AM): Reviews 7 days of activity logs and proposes fixes
- Auto-Implementation: High-confidence fixes (90%+) get applied automatically
- Monthly Evolution (1st at 8 AM): Patterns with 5+ occurrences get codified into your style guide
Real example from my blog:
- Article A: 23 revisions over 2 days
- Article B: 22 revisions, +655 words added (entire “Why This Matters” section)
- Article C: 8 revisions (the system was learning)
After analyzing three articles, BlogClaw detected:
- AI drafts lack business context (95% confidence)
- Structure gets reorganized 4-5 times per post (90% confidence)
- Voice refinement requires 10-15 small edits (85% confidence)
It proposed fixes, documented patterns, and auto-implemented improvements for the next draft.
Architecture Overview
The n8n workflow has three scheduled triggers:
1. Daily Heartbeat (11 PM)
- Fetch posts published today via WordPress REST API
- Compare revision history for each post
- Categorize edits: content expansions (100+ words), structure changes, iterative polish (<50 words)
- Update DAILY_ACTIVITY_LOG.md
- Send Telegram summary
2. Weekly Pattern Analysis (Sunday 9 AM)
- Parse last 7 days of activity logs
- Detect recurring patterns (3+ occurrences)
- Assign confidence scores (0-100%)
- Auto-implement fixes with 90%+ confidence
- Update PATTERN_ANALYSIS.md
3. Monthly Evolution Check (1st of month 8 AM)
- Calculate quality metrics (avg revisions, expansions, bugs)
- Codify patterns into STYLE_GUIDE.md (5+ occurrences)
- Generate evolution report with next month’s targets
Prerequisites
Before importing the workflow, you need:
- n8n instance (self-hosted or n8n Cloud)
- WordPress site with REST API enabled
- WordPress application password (how to create one)
- Anthropic API key (for Claude analysis)
- Telegram bot (optional, for notifications)
- File storage accessible by n8n (local filesystem or cloud storage)
Import the Workflow
The complete n8n workflow JSON is ready to import. Download it here:
👉 Download blogclaw-n8n-workflow.json (Jump to bottom)
Import steps:
- Open n8n
- Click Workflows → Import from File
- Select blogclaw-n8n-workflow.json
- The workflow appears in your workspace
You’ll see three trigger nodes (Daily, Weekly, Monthly) with connected chains of processing nodes.
Configure Credentials
The workflow needs three sets of credentials plus a few environment variables:
1. WordPress API Authentication
Create credential:
- In n8n, go to Credentials → New
- Select HTTP Basic Auth
- Name it “WordPress API”
- Enter:
– Username: Your WordPress username (usually admin)
– Password: Your WordPress application password (NOT your login password)
Link to nodes:
- “Fetch Published Posts”
- “Fetch Post Revisions”
2. Anthropic API (Claude)
Create credential:
- Get API key from Anthropic Console
- In n8n: Credentials → New → Anthropic API
- Name it “Anthropic API”
- Paste your API key
Link to nodes:
- “Generate AI Improvement”
- “Generate Monthly Evolution Report”
3. Telegram Bot (Optional)
Create credential:
- Create a Telegram bot via @BotFather
- Get your bot token
- Get your chat ID (send a message to your bot, then visit https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates)
- In n8n: Credentials → New → Telegram API
- Enter bot token
Link to nodes:
- “Send Daily Notification”
- “Send Weekly Notification”
- “Send Monthly Notification”
4. Environment Variables
Set these in your n8n environment (Settings → Environment Variables):
# WordPress configuration
WORDPRESS_URL=https://yourblog.com
SITE_NAME=Your Blog Name
# Learning files directory
LEARNING_DIR=/data/blogclaw/learning
# Telegram (optional)
TELEGRAM_CHAT_ID=your-chat-id
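A quick sketch of how these values surface in the workflow: n8n Code nodes and expressions read them via `$env`; `process.env` stands in here so the snippet runs under plain Node.js outside n8n.

```javascript
// Fallbacks are illustrative defaults, not the workflow's values
const WORDPRESS_URL = process.env.WORDPRESS_URL || 'https://yourblog.com';
const LEARNING_DIR = process.env.LEARNING_DIR || '/data/blogclaw/learning';

// The daily heartbeat builds its REST endpoint from WORDPRESS_URL
const postsEndpoint = `${WORDPRESS_URL}/wp-json/wp/v2/posts?per_page=100`;
console.log(postsEndpoint);

// File nodes resolve their paths from LEARNING_DIR
console.log(`${LEARNING_DIR}/DAILY_ACTIVITY_LOG.md`);
```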
Set Up Learning Files
BlogClaw stores data in markdown files. Create the directory structure:
mkdir -p /data/blogclaw/learning
cd /data/blogclaw/learning
Create these files (can be empty initially):
touch DAILY_ACTIVITY_LOG.md
touch PATTERN_ANALYSIS.md
touch STYLE_GUIDE.md
STYLE_GUIDE.md Template
Start with this basic style guide (customize for your voice):
Blog Style Guide

Voice & Tone
- Personal and conversational (first-person)
- Data-driven with specific examples
- Honest admissions when appropriate
- Industry insider perspective

Structure Preferences
- Hook with data or controversy
- “Why this matters” section near top (not buried in conclusion)
- Technical details after value prop
- Sidebars for tangential context

Signature Moves
- “Enter:” intros for new concepts
- “Fast forward to…” for time jumps
- Name-dropping with specificity (“John Mueller from Google”)
- Historical parallels in sidebars

Content Requirements
- Business value sections (not just features)
- Real-world examples with specifics
- Edge cases and gotchas
- Personal anecdotes when relevant

Avoid
- Generic transitions
- Passive voice
- Jargon without explanation
- Em-dashes (use colons or parentheses)
How the Workflow Works
Daily Heartbeat Flow
Trigger: 11 PM daily
- Fetch Published Posts: HTTP Request to /wp-json/wp/v2/posts?per_page=100
- Filter Today: JavaScript code filters posts published today (compares publish date to today)
- Fetch Revisions: HTTP Request to /wp-json/wp/v2/posts/{id}/revisions
- Analyze Patterns: JavaScript code that:
  – Strips HTML from revision content
  – Counts words in each revision
  – Calculates word delta between adjacent revisions
  – Extracts H2 headings via regex
  – Categorizes changes:
    – Content expansion: 100+ words added
    – Structure change: heading order/content changed
    – Iterative polish: <50 words changed
  – Detects patterns (3+ occurrences)
- Update Log: Appends findings to DAILY_ACTIVITY_LOG.md
- Notify: Sends Telegram message with summary
Key code snippet (from “Analyze Revision Patterns” node):
// Strip HTML and count words
function stripHTML(html) {
return html.replace(/<[^>]*>/g, ' ').replace(/\s+/g, ' ').trim();
}
function wordCount(text) {
return text.split(/\s+/).filter(w => w.length > 0).length;
}
// Compare adjacent revisions
for (let i = 1; i < revisions.length; i++) {
const current = revisions[i].json;
const previous = revisions[i - 1].json;
const currentWords = wordCount(stripHTML(current.content?.rendered || ''));
const previousWords = wordCount(stripHTML(previous.content?.rendered || ''));
const wordDelta = currentWords - previousWords;
if (wordDelta >= 100) {
analysis.major_additions.push({
words_added: wordDelta,
revision_num: i
});
}
}
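The same node also counts structure changes by comparing the H2 heading lists of adjacent revisions. A minimal, standalone sketch of that check (the sample HTML strings are illustrative):

```javascript
// Strip tags so only heading text is compared
function stripHTML(html) {
  return html.replace(/<[^>]*>/g, ' ').replace(/\s+/g, ' ').trim();
}

// Collect the text of every <h2> in document order
function extractHeadings(html) {
  const h2Regex = /<h2[^>]*>(.*?)<\/h2>/gi;
  const headings = [];
  let match;
  while ((match = h2Regex.exec(html)) !== null) {
    headings.push(stripHTML(match[1]));
  }
  return headings;
}

const prev = '<h2>Intro</h2><p>...</p><h2>Setup</h2>';
const curr = '<h2>Setup</h2><p>...</p><h2>Intro</h2>';

// A changed heading list (order or content) counts as one structure refinement
const structureChanged =
  JSON.stringify(extractHeadings(curr)) !== JSON.stringify(extractHeadings(prev));
console.log(structureChanged); // true: heading order was swapped
```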
Weekly Pattern Analysis Flow
Trigger: Sunday 9 AM
- Read Logs: Reads DAILY_ACTIVITY_LOG.md
- Detect Patterns: JavaScript parses logs for recurring issues:
– Counts occurrences of each pattern type
– Filters for 3+ occurrences (threshold)
– Calculates confidence score: 50 + (occurrences * 15) (max 95%)
– Generates “why it matters” explanation
– Proposes specific action
- High Confidence Filter: IF node filters patterns with 90%+ confidence
- Generate AI Improvement: Claude generates actionable checklist item
- Update Pattern Analysis: Appends to PATTERN_ANALYSIS.md
- Notify: Telegram summary of patterns detected
Pattern confidence formula:
Confidence = MIN(95, 50 + (occurrences × 15))

Examples:
- 3 occurrences = 95% confidence
- 4 occurrences = 95% confidence (capped)
- 2 occurrences = 80% (below auto-implement threshold)
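The formula is small enough to sketch as a standalone helper (the function name is ours; the workflow computes it inline in the "Detect Weekly Patterns" node):

```javascript
// Mirror of the workflow's confidence formula:
// start at 50%, add 15 points per occurrence, cap at 95%
function patternConfidence(occurrences) {
  return Math.min(95, 50 + occurrences * 15);
}

console.log(patternConfidence(3)); // 95
console.log(patternConfidence(4)); // 95 (capped)
console.log(patternConfidence(2)); // 80 (below the 90% auto-implement bar)
```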
Monthly Evolution Flow
Trigger: 1st of month 8 AM
- Read Patterns: Reads PATTERN_ANALYSIS.md
- Calculate Metrics: JavaScript extracts:
– Total posts analyzed
– Average revisions per post
– Total content expansions
– Critical bugs count
– Patterns with 5+ occurrences (codification threshold)
- Generate Report: Claude analyzes metrics and generates:
– Updated style guide rules
– Next month’s improvement targets
– Quality trends analysis
- Update Style Guide: Appends codified patterns to STYLE_GUIDE.md
- Notify: Telegram summary of monthly evolution
Customization Options
Adjust Pattern Thresholds
Weekly pattern detection (default: 3+ occurrences):
Edit “Detect Weekly Patterns” node, find this line:
if (occurrences.length >= 3) {
Change 3 to your preferred threshold.
Monthly codification (default: 5+ occurrences):
Edit “Calculate Quality Metrics” node:
if (occurrenceCount >= 5) {
Change Schedule Times
Click on trigger nodes and modify cron expressions:
- Daily: 0 23 * * * (11 PM)
- Weekly: 0 9 * * 0 (Sunday 9 AM)
- Monthly: 0 8 1 * * (1st at 8 AM)
Cron syntax: minute hour day month dayOfWeek
Adjust Revision Categories
Content expansion threshold (default: 100 words):
if (wordDelta >= 100) {
Iterative polish threshold (default: 50 words):
if (Math.abs(wordDelta) < 50 && currentText !== previousText) {
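If you would rather tune both thresholds in one place, a hypothetical helper consolidating them could look like this (the workflow itself applies the two `if` checks inline; `categorizeRevision` and the `moderate_edit` label are our names):

```javascript
// Consolidated revision categorizer (assumption: one label per revision)
function categorizeRevision(wordDelta, textChanged) {
  if (wordDelta >= 100) return 'content_expansion';          // 100+ words added
  if (Math.abs(wordDelta) < 50 && textChanged) return 'iterative_polish'; // small edit
  return textChanged ? 'moderate_edit' : 'no_change';        // everything in between
}

console.log(categorizeRevision(655, true));  // 'content_expansion'
console.log(categorizeRevision(-12, true));  // 'iterative_polish'
console.log(categorizeRevision(70, true));   // 'moderate_edit'
```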
Add Content Type Classification
The Python version of BlogClaw classifies content additions by type (business context, examples, technical details, edge cases, personal anecdotes).
To add this to n8n, enhance the “Analyze Revision Patterns” node with keyword detection:
function classifyContentType(text) {
const types = [];
if (/why this matters|business value|ROI|impact/i.test(text)) {
types.push('business_context');
}
if (/for example|case study|company X|real-world/i.test(text)) {
types.push('example');
}
if (/technical|implementation|code|API/i.test(text)) {
types.push('technical_detail');
}
if (/gotcha|edge case|caveat|warning/i.test(text)) {
types.push('edge_case');
}
if (/I realized|my experience|when I/i.test(text)) {
types.push('personal_anecdote');
}
return types.length > 0 ? types : ['general_expansion'];
}
Troubleshooting
Posts Not Being Detected
Symptom: Daily heartbeat runs but finds 0 posts.
Fix: Check date filtering logic in “Filter Posts Published Today” node:
const today = new Date().toISOString().split('T')[0];
const pubDate = item.date.split('T')[0];
return pubDate === today;
Verify your WordPress dates are in ISO format (YYYY-MM-DD).
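One common cause: `new Date().toISOString()` is UTC, while the WordPress `date` field is in the site's local timezone, so posts published near midnight can fall on the "wrong" day. A sketch of a timezone-safe variant using the REST API's `date_gmt` field, which is always UTC (the sample post object is illustrative):

```javascript
// Compare the post's GMT timestamp against today's UTC date,
// so both sides of the comparison share a timezone.
function isPublishedToday(item, now = new Date()) {
  const todayUTC = now.toISOString().split('T')[0];
  const pubDate = (item.date_gmt || item.date).split('T')[0];
  return pubDate === todayUTC;
}

// Site-local date says June 1, but in UTC the post went live on June 2
const post = { date: '2024-06-01T23:30:00', date_gmt: '2024-06-02T03:30:00' };
console.log(isPublishedToday(post, new Date('2024-06-02T10:00:00Z'))); // true
```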
Revisions Not Loading
Symptom: Revision fetch returns empty array.
Possible causes:
- WordPress application password incorrect (regenerate it)
- REST API disabled (check Settings → Permalinks in WordPress)
- Post has no revisions (WordPress only creates revisions after edits)
Test manually:
curl -u admin:YOUR_APP_PASSWORD \
https://yourblog.com/wp-json/wp/v2/posts/123/revisions
File Write Errors
Symptom: “Update Daily Activity Log” node fails with permission error.
Fix: Ensure n8n has write access to LEARNING_DIR:
# For Docker
docker exec -it n8n chown -R node:node /data/blogclaw
# For self-hosted
chmod -R 755 /data/blogclaw
Claude API Rate Limits
Symptom: “Generate AI Improvement” node fails with 429 error.
Fix: Add rate limiting with “Wait” node:
- Insert Wait node after “High Confidence Filter”
- Set delay: 2 seconds
- Prevents rapid API calls when multiple patterns detected
Advanced: Multi-Site Learning
To track multiple blogs, duplicate the Daily Heartbeat workflow for each site:
- Right-click “Daily Heartbeat Trigger” → Duplicate
- Update environment variables in duplicated nodes:
  – WORDPRESS_URL → WORDPRESS_URL_SITE2
  – SITE_NAME → SITE_NAME_SITE2
- Use separate log files:
  – DAILY_ACTIVITY_LOG_SITE2.md
For cross-site pattern detection, add a Merge node that combines logs from both sites before weekly analysis.
What You Get
After running BlogClaw for 2-4 weeks, you’ll have:
Quantified editing patterns:
- “AI drafts require avg +418 words of business context”
- “Structure gets reorganized 4-5 times per post”
- “Voice refinement needs 12-15 small edits”
Actionable style guide:
- Not generic advice (“be conversational”)
- Specific rules learned from YOUR edits
- Examples from your actual writing
Auto-improving system:
- High-confidence patterns get implemented automatically
- Each draft is better than the last
- Revision count decreases over time
Rejection learning:
- Patterns in unpublished drafts (what NOT to write)
- Word frequency analysis (rejection markers vs success markers)
- Topic pattern detection (prospective speculation vs retrospective analysis)
Why n8n Instead of Python?
Advantages:
- Visual workflow (easier to understand and customize)
- No deployment complexity (n8n handles scheduling)
- Built-in integrations (WordPress, Telegram, Claude)
- No code for basic setup (import and configure)
Trade-offs:
- JavaScript instead of Python (slightly different syntax)
- Content diff analysis is simpler (Python version uses SequenceMatcher for paragraph-level blocks)
- No CLI tool (n8n workflows run server-side only)
If you need the advanced features (paragraph-level content classification, semantic pattern matching, placeholder normalization), use the Python version.
Next Steps
- Import the workflow and configure credentials
- Publish a blog post and let the daily heartbeat analyze your edits
- Review DAILY_ACTIVITY_LOG.md after first run
- Wait for weekly pattern analysis (or trigger manually)
- Customize thresholds based on your editing patterns
After a month, you’ll have a system that learns your voice, documents your patterns, and improves itself automatically.
The AI stops making the same mistakes. Your revision count drops. You become editor-in-chief instead of copy editor.
Complete Workflow JSON
Copy this entire JSON and import it directly into n8n:
{
"name": "BlogClaw - Self-Improving Blog System",
"nodes": [
{
"parameters": {
"rule": {
"interval": [
{
"field": "cronExpression",
"expression": "0 23 * * *"
}
]
}
},
"name": "Daily Heartbeat Trigger",
"type": "n8n-nodes-base.scheduleTrigger",
"typeVersion": 1,
"position": [250, 300],
"id": "daily-trigger"
},
{
"parameters": {
"rule": {
"interval": [
{
"field": "cronExpression",
"expression": "0 9 * * 0"
}
]
}
},
"name": "Weekly Pattern Analysis Trigger",
"type": "n8n-nodes-base.scheduleTrigger",
"typeVersion": 1,
"position": [250, 500],
"id": "weekly-trigger"
},
{
"parameters": {
"rule": {
"interval": [
{
"field": "cronExpression",
"expression": "0 8 1 * *"
}
]
}
},
"name": "Monthly Evolution Trigger",
"type": "n8n-nodes-base.scheduleTrigger",
"typeVersion": 1,
"position": [250, 700],
"id": "monthly-trigger"
},
{
"parameters": {
"url": "={{$env.WORDPRESS_URL}}/wp-json/wp/v2/posts",
"authentication": "basicAuth",
"sendQuery": true,
"queryParameters": {
"parameters": [
{
"name": "per_page",
"value": "100"
},
{
"name": "orderby",
"value": "date"
},
{
"name": "order",
"value": "desc"
}
]
},
"options": {}
},
"name": "Fetch Published Posts",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 3,
"position": [450, 300],
"id": "fetch-posts",
"credentials": {
"httpBasicAuth": {
"id": "wordpress-auth",
"name": "WordPress API"
}
}
},
{
"parameters": {
"jsCode": "const today = new Date().toISOString().split('T')[0];\nconst posts = $input.all();\n\nconst publishedToday = posts.filter(post => {\n const item = post.json;\n if (item.status !== 'publish') return false;\n \n const pubDate = item.date.split('T')[0];\n return pubDate === today;\n});\n\nreturn publishedToday.map(post => ({\n json: {\n id: post.json.id,\n title: post.json.title.rendered,\n url: post.json.link,\n date: post.json.date,\n modified: post.json.modified\n }\n}));"
},
"name": "Filter Posts Published Today",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [650, 300],
"id": "filter-today"
},
{
"parameters": {
"url": "={{$env.WORDPRESS_URL}}/wp-json/wp/v2/posts/{{$json.id}}/revisions",
"authentication": "basicAuth",
"options": {}
},
"name": "Fetch Post Revisions",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 3,
"position": [850, 300],
"id": "fetch-revisions",
"credentials": {
"httpBasicAuth": {
"id": "wordpress-auth",
"name": "WordPress API"
}
}
},
{
"parameters": {
"jsCode": "// Analyze revision patterns\nconst revisions = $input.all();\nif (revisions.length === 0) return [];\n\nconst postId = revisions[0].json.parent;\nconst postTitle = $('Filter Posts Published Today').item.json.title;\n\n// Helper: Strip HTML tags\nfunction stripHTML(html) {\n return html.replace(/<[^>]*>/g, ' ').replace(/\\s+/g, ' ').trim();\n}\n\n// Helper: Count words\nfunction wordCount(text) {\n return text.split(/\\s+/).filter(w => w.length > 0).length;\n}\n\n// Helper: Extract headings\nfunction extractHeadings(html) {\n const h2Regex = /<h2[^>]*>(.*?)<\\/h2>/gi;\n const headings = [];\n let match;\n while ((match = h2Regex.exec(html)) !== null) {\n headings.push(stripHTML(match[1]));\n }\n return headings;\n}\n\nconst analysis = {\n post_id: postId,\n post_title: postTitle,\n total_revisions: revisions.length,\n major_additions: [],\n structure_changes: 0,\n iterative_refinements: 0,\n patterns: []\n};\n\n// Compare adjacent revisions\nfor (let i = 1; i < revisions.length; i++) {\n const current = revisions[i].json;\n const previous = revisions[i - 1].json;\n \n const currentText = stripHTML(current.content?.rendered || '');\n const previousText = stripHTML(previous.content?.rendered || '');\n \n const currentWords = wordCount(currentText);\n const previousWords = wordCount(previousText);\n const wordDelta = currentWords - previousWords;\n \n // Content expansion (100+ words added)\n if (wordDelta >= 100) {\n analysis.major_additions.push({\n words_added: wordDelta,\n revision_num: i,\n date: current.date\n });\n }\n \n // Structure changes (heading reorganization)\n const currentHeadings = extractHeadings(current.content?.rendered || '');\n const previousHeadings = extractHeadings(previous.content?.rendered || '');\n \n if (JSON.stringify(currentHeadings) !== JSON.stringify(previousHeadings)) {\n analysis.structure_changes++;\n }\n \n // Iterative polish (small edits < 50 words)\n if (Math.abs(wordDelta) < 50 && currentText !== previousText) {\n 
analysis.iterative_refinements++;\n }\n}\n\n// Pattern detection\nif (analysis.major_additions.length > 0) {\n const totalAdded = analysis.major_additions.reduce((sum, add) => sum + add.words_added, 0);\n analysis.patterns.push({\n type: 'content_depth',\n description: `AI draft required ${totalAdded} words of expansion across ${analysis.major_additions.length} revisions`,\n frequency: analysis.major_additions.length\n });\n}\n\nif (analysis.structure_changes >= 3) {\n analysis.patterns.push({\n type: 'structure_refinement',\n description: `Article structure reorganized ${analysis.structure_changes} times`,\n frequency: analysis.structure_changes\n });\n}\n\nif (analysis.iterative_refinements >= 10) {\n analysis.patterns.push({\n type: 'iterative_polish',\n description: `${analysis.iterative_refinements} small edits for voice refinement`,\n frequency: analysis.iterative_refinements\n });\n}\n\nreturn [{ json: analysis }];"
},
"name": "Analyze Revision Patterns",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [1050, 300],
"id": "analyze-patterns"
},
{
"parameters": {
"filePath": "={{$env.LEARNING_DIR}}/DAILY_ACTIVITY_LOG.md",
"fileContent": "## {{$now.format('MMMM DD, YYYY')}}\n\n### Content Published\n\n**{{$env.SITE_NAME}}:**\n- [{{$('Filter Posts Published Today').item.json.title}}]({{$('Filter Posts Published Today').item.json.url}})\n - Post ID: {{$('Filter Posts Published Today').item.json.id}}\n - Published: {{$('Filter Posts Published Today').item.json.date}}\n - Revisions: {{$json.total_revisions}}\n - Patterns detected: {{$json.patterns.length}}\n\n### Manual Edits\n\n**{{$json.post_title}} ({{$json.total_revisions}} revisions):**\n{{#if $json.major_additions.length}}\n- Content expansions: {{$json.major_additions.length}}\n{{#each $json.major_additions}}\n - +{{this.words_added}} words\n{{/each}}\n{{/if}}\n{{#if $json.structure_changes}}\n- Structure refinements: {{$json.structure_changes}}\n{{/if}}\n{{#if $json.iterative_refinements}}\n- Iterative polish: {{$json.iterative_refinements}} small edits\n{{/if}}\n\n### Patterns Detected\n\n{{#each $json.patterns}}\n- **{{this.type}}:** {{this.description}} (frequency: {{this.frequency}})\n{{/each}}\n\n---\n\n",
"options": {
"append": true
}
},
"name": "Update Daily Activity Log",
"type": "n8n-nodes-base.writeFile",
"typeVersion": 1,
"position": [1250, 300],
"id": "update-log"
},
{
"parameters": {
"filePath": "={{$env.LEARNING_DIR}}/DAILY_ACTIVITY_LOG.md",
"options": {}
},
"name": "Read Weekly Activity Logs",
"type": "n8n-nodes-base.readFile",
"typeVersion": 1,
"position": [450, 500],
"id": "read-logs"
},
{
"parameters": {
"jsCode": "// Parse activity log and detect patterns\nconst logContent = $input.first().binary.data.toString();\nconst sevenDaysAgo = new Date();\nsevenDaysAgo.setDate(sevenDaysAgo.getDate() - 7);\n\nconst patterns = {\n content_depth: [],\n structure_refinement: [],\n iterative_polish: [],\n critical_bugs: []\n};\n\n// Extract pattern entries from last 7 days\nconst dateRegex = /## ([A-Z][a-z]+ \\d{1,2}, \\d{4})/;\nconst patternRegex = /- \\*\\*(\\w+):\\*\\* (.+?) \\(frequency: (\\d+)\\)/;\n\nlet currentDate = null;\nconst lines = logContent.split('\\n');\n\nfor (const line of lines) {\n const dateMatch = line.match(dateRegex);\n if (dateMatch) {\n currentDate = new Date(dateMatch[1]);\n }\n \n if (currentDate && currentDate >= sevenDaysAgo) {\n const patternMatch = line.match(patternRegex);\n if (patternMatch) {\n const [, type, description, frequency] = patternMatch;\n if (patterns[type]) {\n patterns[type].push({\n date: currentDate.toISOString(),\n description,\n frequency: parseInt(frequency)\n });\n }\n }\n }\n}\n\n// Analyze recurring patterns (3+ occurrences)\nconst recurringPatterns = [];\n\nfor (const [type, occurrences] of Object.entries(patterns)) {\n if (occurrences.length >= 3) {\n const avgFrequency = occurrences.reduce((sum, p) => sum + p.frequency, 0) / occurrences.length;\n \n recurringPatterns.push({\n pattern_type: type,\n occurrences: occurrences.length,\n avg_frequency: Math.round(avgFrequency),\n confidence: Math.min(95, 50 + (occurrences.length * 15)),\n why_it_matters: getPatternReason(type, avgFrequency),\n suggested_action: getSuggestedAction(type, avgFrequency)\n });\n }\n}\n\nfunction getPatternReason(type, avgFreq) {\n const reasons = {\n content_depth: `AI drafts consistently lack ${Math.round(avgFreq)} words of business context`,\n structure_refinement: `Author reorganizes content ${Math.round(avgFreq)} times per post`,\n iterative_polish: `Voice refinement requires ${Math.round(avgFreq)} small edits`,\n critical_bugs: `System errors occurring in ${Math.round(avgFreq)} instances`\n };\n return reasons[type] || 'Pattern detected';\n}\n\nfunction getSuggestedAction(type, avgFreq) {\n const actions = {\n content_depth: 'Add pre-publish check: articles must include business value sections',\n structure_refinement: 'Document preferred article structure in CONTENT_LEARNINGS.md',\n iterative_polish: 'Extract voice patterns (signature phrases, transitions) to style guide',\n critical_bugs: 'Investigate root cause and implement automated fix'\n };\n return actions[type] || 'Review and document pattern';\n}\n\nreturn recurringPatterns.map(p => ({ json: p }));"
},
"name": "Detect Weekly Patterns",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [650, 500],
"id": "detect-patterns"
},
{
"parameters": {
"conditions": {
"number": [
{
"value1": "={{$json.confidence}}",
"operation": "largerEqual",
"value2": 90
}
]
}
},
"name": "High Confidence Filter",
"type": "n8n-nodes-base.if",
"typeVersion": 1,
"position": [850, 500],
"id": "confidence-filter"
},
{
"parameters": {
"filePath": "={{$env.LEARNING_DIR}}/PATTERN_ANALYSIS.md",
"fileContent": "## Pattern Analysis - {{$now.format('YYYY-MM-DD')}}\n\n### Recurring Patterns (Last 7 Days)\n\n{{#each $input.all()}}\n**Pattern: {{this.json.pattern_type}}** (Confidence: {{this.json.confidence}}%)\n- Occurrences: {{this.json.occurrences}}\n- Average frequency: {{this.json.avg_frequency}}\n- Why it matters: {{this.json.why_it_matters}}\n- Suggested action: {{this.json.suggested_action}}\n{{#if this.json.confidence >= 90}}\n- **AUTO-IMPLEMENTED** ✓\n{{/if}}\n\n{{/each}}\n\n---\n\n",
"options": {
"append": true
}
},
"name": "Update Pattern Analysis",
"type": "n8n-nodes-base.writeFile",
"typeVersion": 1,
"position": [1050, 500],
"id": "update-patterns"
},
{
"parameters": {
"model": "claude-3-5-sonnet-20241022",
"text": "Based on this recurring pattern detected in my blog editing workflow:\n\nPattern Type: {{$json.pattern_type}}\nOccurrences: {{$json.occurrences}}\nConfidence: {{$json.confidence}}%\nWhy it matters: {{$json.why_it_matters}}\nSuggested action: {{$json.suggested_action}}\n\nGenerate a specific, actionable improvement I can apply to my content creation system. Format as a checklist item I can add to my reviewer or style guide.",
"options": {}
},
"name": "Generate AI Improvement",
"type": "@n8n/n8n-nodes-langchain.lmChatAnthropic",
"typeVersion": 1,
"position": [1050, 620],
"id": "ai-improvement",
"credentials": {
"anthropicApi": {
"id": "anthropic-api",
"name": "Anthropic API"
}
}
},
{
"parameters": {
"filePath": "={{$env.LEARNING_DIR}}/PATTERN_ANALYSIS.md",
"options": {}
},
"name": "Read Monthly Patterns",
"type": "n8n-nodes-base.readFile",
"typeVersion": 1,
"position": [450, 700],
"id": "read-monthly"
},
{
"parameters": {
"jsCode": "// Calculate monthly quality metrics\nconst patternContent = $input.first().binary.data.toString();\n\nconst metrics = {\n posts_analyzed: 0,\n total_revisions: 0,\n total_expansions: 0,\n critical_bugs: 0,\n style_violations: 0,\n patterns_for_codification: []\n};\n\n// Parse pattern analysis for patterns with 5+ occurrences\nconst patternRegex = /\\*\\*Pattern: (\\w+)\\*\\* \\(Confidence: (\\d+)%\\)[\\s\\S]*?Occurrences: (\\d+)/g;\n\nlet match;\nwhile ((match = patternRegex.exec(patternContent)) !== null) {\n const [, type, confidence, occurrences] = match;\n const occurrenceCount = parseInt(occurrences);\n \n if (occurrenceCount >= 5) {\n metrics.patterns_for_codification.push({\n type,\n confidence: parseInt(confidence),\n occurrences: occurrenceCount\n });\n }\n \n metrics.posts_analyzed++;\n if (type === 'content_depth') metrics.total_expansions += occurrenceCount;\n if (type === 'critical_bugs') metrics.critical_bugs += occurrenceCount;\n}\n\nmetrics.avg_revisions = metrics.posts_analyzed > 0 ? \n Math.round(metrics.total_revisions / metrics.posts_analyzed) : 0;\n\nreturn [{ json: metrics }];"
},
"name": "Calculate Quality Metrics",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [650, 700],
"id": "calc-metrics"
},
{
"parameters": {
"model": "claude-3-5-sonnet-20241022",
"text": "Analyze these monthly blog quality metrics and patterns:\n\nPosts analyzed: {{$json.posts_analyzed}}\nAverage revisions per post: {{$json.avg_revisions}}\nContent expansions: {{$json.total_expansions}}\nCritical bugs: {{$json.critical_bugs}}\nStyle violations: {{$json.style_violations}}\n\nPatterns ready for codification (5+ occurrences):\n{{#each $json.patterns_for_codification}}\n- {{this.type}} ({{this.occurrences}} times, {{this.confidence}}% confidence)\n{{/each}}\n\nGenerate:\n1. Updated style guide rules based on these patterns\n2. Next month's improvement targets\n3. Quality trends analysis\n\nFormat as markdown suitable for STYLE_GUIDE.md update.",
"options": {}
},
"name": "Generate Monthly Evolution Report",
"type": "@n8n/n8n-nodes-langchain.lmChatAnthropic",
"typeVersion": 1,
"position": [850, 700],
"id": "monthly-report",
"credentials": {
"anthropicApi": {
"id": "anthropic-api",
"name": "Anthropic API"
}
}
},
{
"parameters": {
"filePath": "={{$env.LEARNING_DIR}}/STYLE_GUIDE.md",
"fileContent": "## Monthly Evolution Update - {{$now.format('MMMM YYYY')}}\n\n{{$json.output}}\n\n---\n\n",
"options": {
"append": true
}
},
"name": "Update Style Guide",
"type": "n8n-nodes-base.writeFile",
"typeVersion": 1,
"position": [1050, 700],
"id": "update-style"
},
{
"parameters": {
"chatId": "={{$env.TELEGRAM_CHAT_ID}}",
"text": "📊 BlogClaw Daily Heartbeat\n\n{{$json.patterns.length}} pattern(s) detected in today's post:\n\n{{#each $json.patterns}}\n• {{this.type}}: {{this.description}}\n{{/each}}\n\nTotal revisions: {{$json.total_revisions}}\nContent added: {{$json.major_additions.reduce((sum, add) => sum + add.words_added, 0)}} words",
"additionalFields": {}
},
"name": "Send Daily Notification",
"type": "n8n-nodes-base.telegram",
"typeVersion": 1,
"position": [1450, 300],
"id": "notify-daily",
"credentials": {
"telegramApi": {
"id": "telegram-api",
"name": "Telegram Bot"
}
}
},
{
"parameters": {
"chatId": "={{$env.TELEGRAM_CHAT_ID}}",
"text": "📈 BlogClaw Weekly Pattern Analysis\n\n{{$input.all().length}} recurring pattern(s) detected:\n\n{{#each $input.all()}}\n• {{this.json.pattern_type}} ({{this.json.occurrences}} occurrences, {{this.json.confidence}}% confidence)\n Action: {{this.json.suggested_action}}\n {{#if this.json.confidence >= 90}}✅ Auto-implemented{{/if}}\n{{/each}}",
"additionalFields": {}
},
"name": "Send Weekly Notification",
"type": "n8n-nodes-base.telegram",
"typeVersion": 1,
"position": [1250, 500],
"id": "notify-weekly",
"credentials": {
"telegramApi": {
"id": "telegram-api",
"name": "Telegram Bot"
}
}
},
{
"parameters": {
"chatId": "={{$env.TELEGRAM_CHAT_ID}}",
"text": "📅 BlogClaw Monthly Evolution Report\n\nQuality Metrics:\n• Posts analyzed: {{$json.posts_analyzed}}\n• Avg revisions: {{$json.avg_revisions}}\n• Content expansions: {{$json.total_expansions}}\n• Critical bugs: {{$json.critical_bugs}}\n\nPatterns codified into style guide: {{$json.patterns_for_codification.length}}",
"additionalFields": {}
},
"name": "Send Monthly Notification",
"type": "n8n-nodes-base.telegram",
"typeVersion": 1,
"position": [1250, 700],
"id": "notify-monthly",
"credentials": {
"telegramApi": {
"id": "telegram-api",
"name": "Telegram Bot"
}
}
}
],
"connections": {
"Daily Heartbeat Trigger": {
"main": [
[
{
"node": "Fetch Published Posts",
"type": "main",
"index": 0
}
]
]
},
"Fetch Published Posts": {
"main": [
[
{
"node": "Filter Posts Published Today",
"type": "main",
"index": 0
}
]
]
},
"Filter Posts Published Today": {
"main": [
[
{
"node": "Fetch Post Revisions",
"type": "main",
"index": 0
}
]
]
},
"Fetch Post Revisions": {
"main": [
[
{
"node": "Analyze Revision Patterns",
"type": "main",
"index": 0
}
]
]
},
"Analyze Revision Patterns": {
"main": [
[
{
"node": "Update Daily Activity Log",
"type": "main",
"index": 0
}
]
]
},
"Update Daily Activity Log": {
"main": [
[
{
"node": "Send Daily Notification",
"type": "main",
"index": 0
}
]
]
},
"Weekly Pattern Analysis Trigger": {
"main": [
[
{
"node": "Read Weekly Activity Logs",
"type": "main",
"index": 0
}
]
]
},
"Read Weekly Activity Logs": {
"main": [
[
{
"node": "Detect Weekly Patterns",
"type": "main",
"index": 0
}
]
]
},
"Detect Weekly Patterns": {
"main": [
[
{
"node": "High Confidence Filter",
"type": "main",
"index": 0
}
]
]
},
"High Confidence Filter": {
"main": [
[
{
"node": "Generate AI Improvement",
"type": "main",
"index": 0
},
{
"node": "Update Pattern Analysis",
"type": "main",
"index": 0
}
]
]
},
"Update Pattern Analysis": {
"main": [
[
{
"node": "Send Weekly Notification",
"type": "main",
"index": 0
}
]
]
},
"Monthly Evolution Trigger": {
"main": [
[
{
"node": "Read Monthly Patterns",
"type": "main",
"index": 0
}
]
]
},
"Read Monthly Patterns": {
"main": [
[
{
"node": "Calculate Quality Metrics",
"type": "main",
"index": 0
}
]
]
},
"Calculate Quality Metrics": {
"main": [
[
{
"node": "Generate Monthly Evolution Report",
"type": "main",
"index": 0
}
]
]
},
"Generate Monthly Evolution Report": {
"main": [
[
{
"node": "Update Style Guide",
"type": "main",
"index": 0
}
]
]
},
"Update Style Guide": {
"main": [
[
{
"node": "Send Monthly Notification",
"type": "main",
"index": 0
}
]
]
}
},
"settings": {
"executionOrder": "v1"
},
"staticData": null,
"tags": [],
"triggerCount": 3,
"updatedAt": "2026-03-17T19:30:00.000Z",
"versionId": "1"
}
Import steps:
- Copy the entire JSON above
- Open n8n and go to Workflows
- Click Import from File or paste JSON directly
- Configure credentials (WordPress, Claude, Telegram)
📚 Resources
GitHub Repository: BlogClaw Python Implementation →
License: MIT — Fork it, break it, improve it
Questions? Drop a comment or connect on Consultdex →