TL;DR:
Dev/tech news moves fast — TechCrunch, HN, Dev.to, The Verge, and other platforms probably publish dozens of stories daily. Only a handful would actually matter to you, but manually checking them all every morning is a time sink. Paid aggregators like Feedly Pro cost $60–100/year and still drown you in noise with algorithm-driven feeds that prioritize engagement over relevance. So I built a $0 alternative using n8n + Bright Data MCP + a local LLM that scrapes 60–100+ articles every morning, uses AI to surface the 5 most important stories for me based on my needs, and delivers a clean digest straight to my Discord DMs. Total build time: 60 minutes.
I have a FAQ section at the end if you have questions 👇
The Problem
You’re stuck with three options, all of them terrible:
- Doomscroll across platforms. Open 10+ browser tabs every morning and try to mentally filter signal from noise while your coffee grows cold 😅
- Use aggregators or paid services. Subscribe to free tools that flood your inbox with everything (minor package updates you don’t use, opinion pieces disguised as tutorials, vendor blog spam, stories hitting 8 sites with identical headlines) OR drop $60–100/year on Feedly Pro/Inoreader only to discover they still surface fluff and you’re still manually deciding what’s important. Oh, and good luck if you need API access.
- Suffer FOMO paralysis. Skip news entirely and miss critical updates that could’ve saved you hours (breaking API changes, security vulnerabilities in your dependencies, or that perfect open source tool you needed last week)
Ultimately, the real cost isn’t the subscription fees or even the time — it’s the context switching. Every morning you’re making hundreds of micro-decisions about what deserves your attention, and by the time you’re done “catching up,” your flow state is already destroyed.
What if your news feed just… knew what mattered?
The Solution: What You’ll Learn
In this post, I’ll show you how to build a free n8n workflow that:
- Uses Bright Data MCP to scrape the latest stories from multiple dev news sites every morning
- Extracts article titles, URLs, and text from each source
- Feeds that into an AI Agent that analyzes + ranks them by my personal criteria of what is important/critical
- Generates a clean markdown digest with the TOP 5 must-reads first up + a section for the rest after
- Delivers this digest to me as a Discord DM every morning at 8 AM
Cost: $0. Build time: 60 minutes. No subscriptions, no API limits, no vendor lock-in.
It’s a simple, set-it-and-forget-it way to stay informed without the hassle/FOMO, and a great weekend project that can be adapted to your tastes no matter which industry you’re in.
Let’s get to it!
Technologies Used:
The system combines four key components:
- n8n for workflow automation.
- Bright Data Web MCP for structured web scraping. For a primer on what Model Context Protocol (MCP) is — see here. For an advanced MCP use-case — see here.
- Local LLMs via Ollama — I’m using Qwen3-4B-Instruct-2507 to drive the MCP scraping, and gpt-oss:20b to analyze the stories and write the digest (use OpenAI/Anthropic for both if you’d prefer that, or aren’t running locally — the setup isn’t much different)
- Discord bots for delivering the digest as a DM. If you want to get notified via Slack or email instead, n8n has support for them, too.
Here’s the full n8n workflow:
Prerequisites
Before running this workflow, make sure you have the following in place:
- n8n installed: Install n8n globally via npm (npm install n8n -g). You must have a Node.js version between 20.19 and 24.x, inclusive.
- Bright Data Account: Sign up here. The free tier gives you 5,000 requests/month for the MCP. Then grab your API token. New users get a test API key via welcome email, but you can always generate one from Dashboard -> Settings -> User management.
- A Discord bot authenticated and set up: This is really outside the scope of my tutorial, but essentially: create a new application at the Discord Developer Portal, add a Bot, copy your Bot Token, and invite it to your server via the OAuth2 URL Generator (select bot scope and Send Messages/Direct Messages permissions). You’ll use that Bot Token later in n8n as a Discord Bot credential. You’ll also need your Discord User ID (right-click your profile in Discord with Developer Mode enabled → Copy User ID) for DM routing.
- Optionally, some sort of database (I’m using Postgres in a Docker) to keep track of articles you’ve already seen, so your digests stay compact and readable. Here’s the schema I’ll use for that:
CREATE TABLE seen_articles (
url TEXT PRIMARY KEY,
seen_at TIMESTAMP DEFAULT NOW()
);
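If you don’t already have a Postgres instance handy, a throwaway Docker container is plenty for this (the container name and password below are just placeholders, so pick your own); then connect with any client and run the CREATE TABLE statement above:
docker run -d --name news-digest-db -e POSTGRES_PASSWORD=changeme -p 5432:5432 postgres:16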
- Run n8n: When ready, start the n8n server locally at http://localhost:5678 with n8n start (or just n8n, then hit o for “open” when it tells you to).
Step 1: Scraping News Sources with AI Agents + Bright Data MCP
TL;DR: Configure the news sources you want in a Set node, then add parallel n8n AI Agent nodes that use MCP-provided tools to scrape those pages.
What we’re doing in this stage:
First, we’ll set up an n8n Set node (basically creating a JSON that has all our news sources in it), then fan it out to n8n’s AI Agent nodes in parallel. Each agent uses an LLM and tools — from Bright Data’s MCP — and extracts content in markdown, according to our prompt. And of course, we’ll use a Code node (lets us write + execute arbitrary JS/Python code) to clean the LLM output each time. The whole thing will be hooked up to an n8n Schedule Trigger node.
Step-by-Step Setup
1. Add your news sources
Right click the canvas (or press the Plus button in the UI) and add a Set node. Double click to edit it, and use the JSON mode to define your news source URLs like so:
{
"source_1": "https://techcrunch.com/latest/",
"source_2": "https://dev.to/top/week",
"source_3": "https://news.ycombinator.com/best?h=168"
}
I’m just using three sources here, with no pagination, and using the URLs to fetch top items for the week. Adjust accordingly.
2. Add AI agents for each source, in parallel.
After that, set up an AI Agent node, then pick the Ollama Chat Model for its Model subnode (add a new credential using only the default Ollama URL — http://localhost:11434 or http://127.0.0.1:11434, whichever works — as the Base URL), and select your model of choice from the list.
As you can see, you could just as easily use OpenAI/Anthropic/Gemini etc. (you can find a full list of langchain models n8n supports in their docs) chat models here instead of the Ollama one. The setup is just as easy.
3. Give the Agent access to Bright Data MCP’s Scraping Tools
Next, for the AI Agent’s Tools subnode, choose the MCP Client Tool from the list. This will let us use tools from an external MCP server, and make them available to our AI Agent. Then, we’ll have to add the remote Bright Data MCP server, and select which tools we want.
We’ll connect using Server Sent Events (SSE), so use this endpoint: https://mcp.brightdata.com/sse?token=MY_API_TOKEN
Substitute MY_API_TOKEN with your own Bright Data API token. This MCP server doesn’t need extra auth beyond the token being included in the token param.
The Bright Data MCP has 60+ tools available with an opt-in paid ‘Pro’ mode, but all we need is this tool in the free tier:
scrape_as_markdown: Extracts any page as markdown.
If you’re sure you need the Pro mode tools, append &pro=1 to your endpoint URL.
This MCP client gives our agent access to industrial-grade scraping capabilities that automatically handle proxy use/rotation, JavaScript rendering, bot detection, and CAPTCHAs — crucial for scraping that bypasses the anti-bot measures or geoblocks these news sites might use.
4. Prompt the Agent for Scraping
Finally, let’s set up parameters for this AI agent (double click on the main AI agent node).
Use Bright Data MCP (the 'scrape_as_markdown' tool) to extract only the latest news articles from this page: {{ $json.source_1 }}
Put them in a JSON array as these objects:
{
title: "Some title",
link: "https://hyperlink-to-story"
}
The prompt outlines these tasks for our AI Agent:
- Use the Bright Data MCP’s scrape_as_markdown tool to grab the fully rendered page from the first source in our JSON Set (important: scrape_as_markdown handles JavaScript rendering, so that’s great for us — lazy-loaded content loads properly)
- Structure the output as JSON, with the title of each story + its link. That’s all we’ll need for our digest.
5. Cleaning the JSON
Just in case, we’ll chain a code node to our LLM output to make sure we do have proper JSON.
let raw = $input.first().json.output;
// If it's an object already (rare case), skip string parsing
if (typeof raw === "object") {
raw = JSON.stringify(raw);
}
// Clean up Markdown fences, language tags, and stray whitespace
raw = raw
.replace(/```(?:json)?/gi, "") // remove ```json or ```
.replace(/```/g, "") // remove leftover ```
.trim();
// Try parsing as JSON
let data;
try {
data = JSON.parse(raw);
} catch (err) {
throw new Error(
"Failed to parse MCP output JSON (after cleaning): " +
err.message +
"\nRaw text:\n" +
raw,
);
}
return data;
Pretty self-explanatory: we parse the output into a JSON array of story objects, each with the title and link format above.
This will output scraped stories in this format.
[
{
title: "Silicon Valley spooks the AI safety advocates",
link: "https://techcrunch.com/2025/10/17/silicon-valley-spooks-the-ai-safety-advocates/",
},
{
title: "Your AI tools run on fracked gas and bulldozed Texas land",
link: "https://techcrunch.com/2025/10/17/your-ai-tools-run-on-fracked-gas-and-bulldozed-texas-land/",
},
{
title: "Should AI do everything? OpenAI thinks so",
link: "https://techcrunch.com/video/should-ai-do-everything-openai-thinks-so/",
},
// more
];
Looking good!
6. Duplicate the AI Agent for other sources.
Select the whole AI Agent node (with its Model and Tools subnodes) and hit Ctrl + D to duplicate it once for each remaining source, wire the copies up to the original Set node like you saw in the picture, and adjust their prompts to use different URLs from the Set (so, {{ $json.source_2 }}, {{ $json.source_3 }} and so on).
7. Merge and Flatten all 3 cleaned LLM Outputs
Finally, wire the outputs of all those Code nodes into an n8n Merge node (pick Append, with the number of inputs set to however many sources/AI agents you now have), followed by another Code node to flatten them.
And here’s the ‘Flatten’ code node:
// Get all items from all inputs
const allItems = $input.all();
// Extract all articles into a single flat array
const stories = allItems.map((item) => item.json);
return [{ json: { stories } }];
Why did we need another Code node to flatten everything? That’s because after the Merge node, our output would look like this:
[
{
title: "How I bypassed Amazon's Kindle web DRM",
link: "https://blog.pixelmelt.dev/kindle-web-drm/",
},
{
title: "FSF announces Librephone project",
link: "https://www.fsf.org/news/librephone-project",
},
{
title: "NanoChat - The best ChatGPT that $100 can buy",
link: "https://github.com/karpathy/nanochat",
},
// more
];
Which is fine, but it would make things difficult for us in the next step. We don’t want 100+ or however many loose items — we want it all in one neat, named JSON array to make it easy for subsequent n8n nodes to see them. Adding the flattening logic turns the above into this:
[
{
stories: [
{
title: "How I bypassed Amazon's Kindle web DRM",
link: "https://blog.pixelmelt.dev/kindle-web-drm/",
},
{
title: "FSF announces Librephone project",
link: "https://www.fsf.org/news/librephone-project",
},
{
title: "NanoChat - The best ChatGPT that $100 can buy",
link: "https://github.com/karpathy/nanochat",
},
// more
],
},
];
This is much better — we can immediately select the ‘stories’ array as input in the next step. This seems tedious at first, but will help you understand the flow right away in the next stage.
Let’s move on.
Step 2: Filter out articles we’ve seen before
TL;DR: Use an n8n Postgres node to quickly query a database for articles we’ve already seen, and use that data to filter our Step 1 output with an n8n Code node. You’ll need a database for this.
What we’re doing in this stage:
FYI, this step is technically optional. If you don’t want to set up a database and don’t mind stories you already saw yesterday in your morning digest, skip ahead to Step 3.
We’ll store articles we’ve already seen in a database (I’m using n8n’s PostgreSQL node here, but n8n has built-in nodes for pretty much every popular database — it doesn’t even have to be relational), and before creating the digest, filter out those stories so our final digest has only new, fresh stories.
The storing in database part happens later — we do it just before sending out the digest in the last step — don’t worry about that. Right now, all we have to do is search the DB and filter out the articles we’ve put in the digest yesterday, before proceeding any further.
Step-by-Step Setup
1. Add a Postgres Node
See your AI agent fleet in the last stage? Add a Postgres node above, still connected to your trigger, in parallel with them. Pick the “Execute a SQL Query” action.
2. Add a Database Connection
Double click the node you just added, and click the Edit button beside Credential to add a Postgres credential for n8n. Replace with your own values:
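For reference, with the local Docker Postgres from the prerequisites the values would look roughly like this (yours will differ, and field labels may vary slightly between n8n versions):
Host: localhost
Database: postgres
User: postgres
Password: whatever you set via POSTGRES_PASSWORD
Port: 5432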
3. Select already-seen articles from the DB
Here’s the SQL Query we’re going to run.
SELECT url FROM seen_articles;
That will give us an output like this: all the URLs we’ve already seen and stored in the DB (of course, this means you’ll get an empty output on your first run).
[
{
"url": "https://github.com/karpathy/nanochat"
},
{
"url": "https://www.fsf.org/news/librephone-project"
},
{
"url": "https://www.apple.com/newsroom/2025/10/apple-unleashes-m5-the-next-big-leap-in-ai-performance-for-apple-silicon/"
}
]
4. Cleanup
Again, we can clean this up further by attaching another code node.
const seen_urls = items.map(item => item.json.url);
return [{ json: { seen_urls } }];
Much better. Now you can directly feed the seen_urls array to the next node.
[
{
seen_urls: [
"https://github.com/karpathy/nanochat",
"https://www.fsf.org/news/librephone-project",
"https://www.apple.com/newsroom/2025/10/apple-unleashes-m5-the-next-big-leap-in-ai-performance-for-apple-silicon/",
//more
],
},
];
5. Merging Seen Articles + LLM output into one array
Next, we’ll use an n8n Merge node to merge the two arrays — this one (the list of seen URLs) and the flattened list of scraped articles from before. We need this because, in the next node, we’ll use both datasets to keep only the articles we haven’t seen yet, and n8n Code nodes only accept a single input connection.
For reference, this is where we’re at:
No setup needed here, just the default Append with 2 inputs will do.
6. Filter out articles we’ve already seen (Finally!)
Now we add the code node for filtering.
// Input 1 (index 0): the list of already-seen URLs from Postgres
// Input 2 (index 1): the flattened list of freshly scraped stories
const allInputs = $input.all();
const seenUrls = (allInputs[0].json.seen_urls || []).filter(url => url !== null);
const articles = allInputs[1].json.stories;
// Keep only the stories whose links we haven't stored before
const newArticles = articles.filter(article =>
  article && article.link && !seenUrls.includes(article.link)
);
return [{ json: { stories: newArticles } }];
Output at the end of this stage will be the same as that of Step 1, minus the articles we’ve already seen in past runs (obviously, we’ll get everything on first run as there’s nothing in the ‘seen URLs’ database yet.)
Step 3: Digest generation + Sending Discord DMs
TL;DR: Use an n8n If node to define two paths: a) If we actually have fresh articles we haven’t seen yet, use an AI Agent node to generate the digest from them based on our criteria, then DM us on Discord with it. b) Else, just DM us to say so.
What we’re doing in this stage:
n8n has a built-in If node that lets us branch logic based on conditions. So here, we use it to ask a simple question: is the length of the stories array in the last step > 0 ?
- If yes: use those story titles + links to generate our digest and send it to ourselves on Discord (but first chunk it, because Discord caps messages at 2,000 characters for free accounts and 4,000 with Nitro)
- If no: send a “nothing new” message on Discord
This keeps our workflow clean and prevents unnecessary API calls to the LLM when there’s nothing to process.
Step-by-Step Setup
1. Add an IF Node
After “Filter out seen articles” from the last step, add an If node:
With this {{ $json.stories.length }} is not equal to 0 condition, we check if the filtered articles/stories array has at least one item.
- True: Continue to digest generation via AI Agent (default path)
- False: Send “no new articles” message via Discord node
Let’s tackle the TRUE path first.
2. TRUE Path: Digest Generation
On the TRUE output of the If node, connect an AI Agent node.
We’ll prompt this with the following (remember: this is optimized to my personal tastes, so of course you should make changes to fit your own needs):
You have been given {{ $json.stories.length }} tech news articles scraped from multiple sources today.

Your task:
1) FILTER OUT any articles that are announcements, promotional content, discount/sale alerts, event invitations, webinar promotions, course advertisements, or marketing fluff
2) Analyze the remaining articles and identify the TOP 5 MUST-READ stories that would be most valuable to developers
3) Sort any remaining quality articles into an "Other Stories" section

## Selection Criteria for Top 5:
- Impact: Will this affect how developers work or build products?
- Relevance: Does this matter to full-stack/web developers specifically?
- Novelty: Is this genuinely new/breaking, or just rehashed content?
- Actionability: Can developers do something with this info right now?

Personal Priorities (HIGHLY favor these):
- AI/LLM news: New model releases (especially small/local models like Gemma, DeepSeek, Qwen), API changes, benchmark results, novel use cases
- JavaScript/TypeScript ecosystem: Next.js updates, React changes, new frameworks/libraries, performance improvements
- Developer tools: Automation tools, n8n/workflow automation, MCP servers, productivity hacks
- Open source projects: New launches, interesting experiments, developer tools going open source
- Infrastructure/Backend: PostgreSQL updates, database innovations, API design patterns, email/transactional services
- Indie hacker/startup news: Acquisitions, shutdowns, pivots, revenue milestones, bootstrapping stories
- Security: Vulnerabilities in popular packages, breach disclosures, supply chain attacks

Generally, prioritize: New frameworks/libraries, major company announcements (acquisitions, shutdowns, pivots), security vulnerabilities, AI model releases, API changes, open source launches, developer tool updates, and breaking news.

EXPLICITLY EXCLUDE:
- Product launch announcements from companies promoting their own tools
- Discount codes, Black Friday deals, subscription offers
- "Join our webinar" or event registration posts
- Generic tutorials ("How to build X in React")
- Opinion pieces and hot takes
- Minor version bumps or patch releases
- Listicles ("10 best tools for...")
- Promotional blog posts disguised as news
- Job postings or hiring announcements
- Mobile app news (iOS/Android) unless it's architectural/framework related
- Gaming industry news
- Hardware reviews (unless it's dev-related like M-series chips for local LLMs)

De-prioritize: Opinion pieces, minor updates to existing tools, consumer tech news, and any content that feels like content marketing rather than news.

## Output Format
Return a properly formatted Markdown digest following this EXACT structure:

```
# 🔥 Today's Dev Digest
{{ $now.toFormat('MMMM dd, yyyy') }}

## 📌 TOP 5 MUST-READS

### 1. [Article Title Here]
🔗 Read more

### 2. [Article Title Here]
🔗 Read more

### 3. [Article Title Here]
🔗 Read more

### 4. [Article Title Here]
🔗 Read more

### 5. [Article Title Here]
🔗 Read more

## OTHER STORIES
- [Title] - Link
- [Title] - Link
- [Title] - Link
[... continue for all remaining articles ...]

***
Powered by n8n + Bright Data
```

**NOTE**: If after filtering there are fewer than 5 quality articles, only include what's genuinely worth reading. Do not pad the list with mediocre content just to hit 5 items.

## Input Data
Here are the {{ $json.stories.length }} articles to analyze:

```json
{{ JSON.stringify($json.stories, null, 2) }}
```

**CRITICAL**: Return ONLY the markdown digest. Do not include any preamble, explanation, or meta-commentary. Start directly with the markdown heading.
Once again, we’ll chain a code node to clean up the LLM output. Only this time, we also want it to chunk the output so we don’t hit Discord’s 2,000-character limit and error out.
// Extract and clean the LLM output
const raw = $input.first().json.output;
// Step 1: Remove Markdown-style code fences and trim
let cleaned = raw
.replace(/```json|```markdown|```/g, '') // remove code block markers
.trim();
// Step 1.5: Wrap bare URLs in < > to prevent Discord preview embeds
// This matches http:// or https:// URLs not already inside <>
cleaned = cleaned.replace(
/(?<!<)(https?:\/\/[^\s>]+)(?!>)/g,
'<$1>'
);
// Step 2: Split into chunks that fit Discord's 2000 char limit
const DISCORD_LIMIT = 2000;
const chunks = [];
if (cleaned.length <= DISCORD_LIMIT) {
chunks.push(cleaned);
} else {
const lines = cleaned.split('\n');
let currentChunk = '';
for (const line of lines) {
if ((currentChunk + '\n' + line).length > DISCORD_LIMIT) {
if (currentChunk.trim()) chunks.push(currentChunk.trim());
currentChunk = line;
} else {
currentChunk += (currentChunk ? '\n' : '') + line;
}
}
if (currentChunk.trim()) chunks.push(currentChunk.trim());
}
// Step 3: Return each chunk as a separate item for the Discord node
return chunks.map((chunk, index) => ({
json: {
output: chunk,
chunkNumber: index + 1,
totalChunks: chunks.length,
isLastChunk: index === chunks.length - 1
}
}));
💡 This is a personal preference thing, but note that I’m wrapping links in angle brackets (“<” and “>”) — I do this because I don’t want my digest to have link previews in Discord. Wrapping a URL this way tells Discord not to render its embed.
Almost done! We only need two things now.
- Add the articles we’ve just processed to our seen URLs database, so we can filter using them on the next run
- Send the digests we generated as DMs on Discord
For the first part, we need to extract just the URLs (because remember our schema!), so add a Code node:
// Extract *just* the article links from the stories array
const stories = $input.first().json.stories || [];
return stories.map(article => ({
json: { link: article.link }
}));
And then another Postgres node for query execution will wrap this up:
INSERT INTO seen_articles (url)
VALUES ('{{ $json.link }}')
ON CONFLICT (url) DO NOTHING;
And for the second part, n8n’s built-in Discord node will serve us perfectly. Make sure you add a Discord credential first (I’m using a simple Bot Token rather than OAuth, but this is up to you really. Building Discord bots is outside the scope of this tutorial).
The message this bot will send you on Discord can use templating, so if you want your digest to be fancier, customize away. For now, I’m just going to put the LLM-generated digest directly in the message as {{ $json.output }}.
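Optionally, since the chunking Code node also emits chunkNumber and totalChunks, you can reference them in the message template to label multi-part digests. A possible message field, just as an example:
📰 Dev Digest (part {{ $json.chunkNumber }} of {{ $json.totalChunks }})
{{ $json.output }}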
3. FALSE Path: “No new articles” DM
Another Discord node, exact same settings, except the message will now simply read “No new articles” or similar.
Output of this stage:
Here’s the DM I received this morning:
# 🔥 Today's Dev Digest
*October 18, 2025*
---
## 📌 TOP 5 MUST-READS
### 1. Ruby core team takes ownership of RubyGems and Bundler
🔗 <https://www.ruby-lang.org/en/news/2025/10/17/rubygems-repository-transition/>
### 2. Claude Skills are awesome, maybe a bigger deal than MCP
🔗 <https://simonwillison.net/2025/Oct/16/claude-skills/>
### 3. Your AI tools run on fracked gas and bulldozed Texas land
🔗 <https://techcrunch.com/2025/10/17/your-ai-tools-run-on-fracked-gas-and-bulldozed-texas-land/>
### 4. Facebook’s AI can now suggest edits to the photos still on your phone
🔗 <https://techcrunch.com/2025/10/17/facebooks-ai-can-now-suggest-edits-to-the-photos-still-on-your-phone/>
### 5. Senate Republicans deepfaked Chuck Schumer, and X hasn’t taken it down
🔗 <https://techcrunch.com/2025/10/17/senate-republicans-deepfaked-chuck-schumer-and-x-hasnt-taken-it-down/>
---
## OTHER STORIES
- **Amazon’s Ring to partner with Flock** - <https://techcrunch.com/2025/10/16/amazons-ring-to-partner-with-flock-a-network-of-ai-cameras-used-by-ice-feds-and-police/>
- **Silicon Valley spooks the AI safety advocates** - <https://techcrunch.com/2025/10/17/silicon-valley-spooks-the-ai-safety-advocates/>
- **Stellantis teams up with Pony AI to develop robotaxis in Europe** - <https://techcrunch.com/2025/10/17/stellantis-teams-up-with-pony-ai-to-develop-robotaxis-in-europe/>
… more
And we’re all done!
Frequently Asked Questions
Q: Can I scrape paywalled news sites with this? A: Nope. For paywalled sources (Bloomberg, WSJ, etc.), you’d need an account + then pass those authentication cookies. For that, I would try Bright Data’s Pro mode tools/Chrome Devtools MCP for browser automation.
Q: My local LLM keeps picking terrible articles for the Top 5. What’s wrong?
A: I’ve been there. 😅 Some small models will struggle with nuanced ranking tasks. Either upgrade to a bigger model if you have the VRAM (Qwen3-8B/gpt-oss:20b one-shots all stages for me), or switch to OpenAI/Anthropic etc. Cloud models will obviously be way better at reasoning based on your personal “developer impact” criteria.
Q: The AI is including too many opinion pieces and fluff. How do I fix this? A: Tighten your prompt. You can also add a keyword blocklist in a Code node before the AI agent. Or even better: provide few-shot examples in the prompt showing what you think counts as fluff vs. real news.
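Here’s a minimal sketch of that blocklist idea: a Code node placed right before the digest AI Agent. The keyword list is purely an example; tune it to whatever you consider noise.
// Drop stories whose titles contain obvious fluff keywords (example list, adjust freely)
const BLOCKED = ["webinar", "discount", "black friday", "giveaway", "sponsored", "% off"];
const stories = $input.first().json.stories || [];
const filtered = stories.filter(story => {
  const title = (story.title || "").toLowerCase();
  return !BLOCKED.some(keyword => title.includes(keyword));
});
return [{ json: { stories: filtered } }];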
Q: Can I customize the digest format? A: Sure. Edit the AI Agent prompt to output JSON, HTML, or plain text instead of Markdown (and then clean it up). You can always add a Code node to manually transform the output for different channels too.
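For example, here’s a rough Code node that strips the digest down to plain text for an email channel. It assumes a digest shaped like the one above, and the regexes are deliberately simplistic:
const md = $input.first().json.output;
const plain = md
  .replace(/^#{1,6}\s*/gm, "")     // drop heading markers
  .replace(/\*\*(.+?)\*\*/g, "$1") // remove bold markers
  .replace(/^---$/gm, "")          // drop horizontal rules
  .replace(/[<>]/g, "")            // unwrap <link> angle brackets
  .trim();
return [{ json: { output: plain } }];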
Q: How much does this cost to run? A: Let’s break this down:
- If you use a local LLM via Ollama, free. If you use OpenAI/Anthropic etc. check their pricing.
- Postgres in Docker is completely free, and a hosted database (Supabase, MongoDB Atlas, etc.) would probably be free at this usage level, too.
- Bright Data MCP is free for 5,000 requests per month (you don’t pay for attempts that error out).
Total cost, if you do as I’ve shown: $0.
The Bigger Picture
This workflow is a template for solving information overload in any domain, really, not just dev/tech news.
n8n is what makes this practical. Without it, you’d be writing boilerplate orchestration code. n8n gives you that infrastructure out of the box, so you can focus on the logic that matters: what to scrape, how to filter, where to send it.
And MCP is what makes it composable. Instead of hardcoding scraper logic or wrestling with Puppeteer, you’re calling a standardized protocol that any AI assistant or automation tool can speak. Bright Data’s MCP servers handle the scraping complexity, n8n orchestrates the flow, and you just wire them together. That’s the power of MCP: turning proprietary tools into interoperable building blocks.
And here’s the kicker: you built this in an hour for $0. No subscriptions, no vendor lock-in, no API rate limits. Just a workflow that runs every morning and saves you 30–60 minutes of your day.