If you have unique health/fitness needs, obtaining nutrition facts from recipes should be straightforward. Instead, it’s a 20-30 minute slog through databases, ingredient labels, and spreadsheet math — cross-referencing portion sizes, tallying macros, and praying you didn’t fat-finger a decimal. One digit off and your entire nutritional profile is worthless.
So people turn to AI. “Hey ChatGPT, what’s the nutrition for this chicken pasta?” And it sounds right… until you check the numbers. They’re completely made up. The model can’t access real-time USDA databases and doesn’t show its work, so you can’t verify anything.
I tested this with a simple recipe for cocoa fudge brownies. ChatGPT confidently broke down the recipe and told me it totaled ~4800 calories. The actual USDA data? 3857 calories. That’s a ~25% error — completely unusable for anyone who actually needs accurate nutrition facts.
The “professional” solution, then, is to pay for tools like MyFitnessPal Premium ($300/year), Cronometer Gold ($130/year), or if you’re tech-savvy enough, wire up the Spoonacular API (~$400/year). And even then, you’re stuck with their interface, their limitations, their rules.
What if you could just… ask, for free?
So I built this because, basically, I was exasperated and just found myself saying out loud “There HAS to be a better way to do this.” You just type:
> “What’s the nutritional breakdown of this Thai basil chicken recipe? [LINK]”
> “How many calories per serving if I make it with 6 servings instead of 4?”
> “Which ingredient contributes the most protein?”
…and have your answer. Real USDA data + natural language queries + zero subscription fees + you own the data.
You can build it too, using Claude (or local MCP Client + local LLM) + Model Context Protocol (MCP) — and it costs $0, requires almost no coding, and gives you complete control over your data.
The Core 90% of MyFitnessPal/Spoonacular
Let’s break down the requirements. After talking to a half dozen food bloggers and nutrition coaches, here’s what people like me actually need:
- Instant nutritional data — Get complete nutrition for any ingredient (powered by USDA data)
- Ingredient-level breakdown — See exactly which ingredients contribute what nutrients
- Per-serving calculations — Automatically scale nutrition by servings
- Natural language queries — Just describe your recipe in plain English
The 10% most WON’T need:
- Pre-built recipe databases (you have your own recipes)
- Wine pairing AI (seriously? 🙄)
- Restaurant menu nutrition (irrelevant for home cooking)
- Packaged food barcodes (you’re cooking from scratch)
Replacing a premium app sounds daunting, but if you think about it, we only have to build this core 90%. This setup gives you exactly that — for $0 instead of ~$300-$400/year. And you can cache anything behind the scenes however you like.
Let’s get started.
The Solution: Claude + Two MCP Servers + One Prompt
Here’s what we’re going to use:
1. Bright Data MCP — For extracting recipe content from any URL (bypassing anti-bot measures, geoblocks, and handling dynamic JavaScript)
2. OpenNutrition MCP — For querying the USDA FoodData Central database (478,000+ foods with complete nutritional information)
3. Claude (or any MCP client + LLM combo) — to orchestrate everything, perform the calculations, and generate a report. You can get Claude Desktop here.
The entire setup takes ~5 minutes, and the result is a flexible, cost-effective alternative to expensive nutrition apps.
Architecture Overview: How It Works Under the Hood
At a high level, this system is a two-MCP orchestration pipeline. Each component plays a very specific role, and when chained together through an LLM like Claude (or any other LLM with MCP support), they act as a self-contained Spoonacular replacement.
What is MCP? MCP (Model Context Protocol) is an open protocol that allows AI models like Claude to connect to external data sources and tools through standardized server interfaces, stopping hallucinations cold by providing real-time access to authoritative data instead of relying on the LLM’s training data.
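Under the hood, an MCP tool call is just a JSON-RPC 2.0 message exchanged between the client (Claude) and a server. Roughly, a call looks like the object below; the tool name matches one we’ll use later, but the argument name is my assumption for illustration, not the server’s exact schema.
// Rough shape of an MCP "tools/call" request (JSON-RPC 2.0).
// The "url" argument is illustrative; check each server's tool schema for real parameter names.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "scrape_as_markdown",                            // a tool exposed by the Bright Data MCP server
    arguments: { url: "https://example.com/some-recipe" }, // tool-specific arguments
  },
};
// The server runs the tool and replies with a result payload,
// which the MCP client feeds back to the model as fresh context.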
Here’s how it all fits together:
The Core MCP Servers
1. Bright Data MCP
Repository: https://github.com/brightdata/brightdata-mcp
Documentation: https://docs.brightdata.com/mcp-server/overview
License: MIT
Free Tier: 5,000 requests/month
This is an MCP wrapper around all of Bright Data’s offerings — exposing search, crawling, site navigation, and even browser automation through one server. Point it at a URL and it’ll extract the page as clean Markdown — even dynamic (JavaScript-heavy) or geo-blocked ones.
It also handles anti-bot mechanisms, automatically uses residential proxies, and bypasses CAPTCHAs. Think of it as a “headless browser + HTML-to-Markdown sanitizer” — except you don’t have to host the Chromium instance.
2. MCP OpenNutrition
Repository: https://github.com/deadletterq/mcp-opennutrition
License: GPL-3.0 for the MCP, Open Data Commons Open Database License (ODbL) for the dataset
Free Tier: Unlimited; it’s entirely local.
A locally hosted MCP server that ships with the OpenNutrition dataset and queries it for detailed, per-ingredient nutrition data.
It returns full nutrient profiles: macros, fiber, sugars, sodium, iron, vitamins, etc.
Since it’s local:
- No rate limits.
- Fully auditable — the dataset is just a 275 MB .tsv file that sits on your filesystem. You can open that up and see where every number comes from.
- Extensible — add your own ingredients or corrections. Just be mindful of the dataset’s license:
# Licensing & Attribution
This dataset is made available under the Open Database License (ODbL). Any rights in individual contents are licensed under a modified Database Contents License (DbCL).
Attribution Requirements: If you display or use any data from this dataset, you must provide clear attribution to "OpenNutrition" with a link to https://www.opennutrition.app in:
* Every interface where data is displayed
* Application store listings
* Your website
* Legal/about sections
Any derivative database must be shared under the same license terms (Share-Alike).
3. The LLM as Orchestrator
We’re relying on the LLM itself — Claude Sonnet 4.5 if you’re following this guide, or any local/API-based LLM of your choice — to act as the orchestrator and reasoning layer. Concretely, it:
- Initiates the Bright Data MCP tool call and receives the Markdown recipe it extracts,
- then parses that Markdown for data (ingredient list, units, servings),
- sends individual ingredient lookups to the OpenNutrition MCP,
- receives data per ingredient and aggregates it all,
- and synthesizes a clean final report (human-readable Markdown, machine-friendly JSON, or whatever the user needs).
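If you wrote that same loop as plain code instead of letting the model drive it, the skeleton would look roughly like this. It’s a conceptual sketch: the mcp* functions and the parse/scale helpers are stand-ins I declared for illustration, not real library APIs.
// Conceptual sketch of the orchestration Claude performs via tool calls.
// Everything declared below is a stand-in, not a real API.
type Nutrients = { kcal: number; protein: number; fat: number; carbs: number };
type Ingredient = { name: string; grams: number };

declare function mcpScrapeAsMarkdown(url: string): Promise<string>;      // Bright Data MCP
declare function mcpSearchFoodByName(name: string): Promise<Nutrients>;  // OpenNutrition MCP (per 100 g)
declare function parseRecipe(md: string): { ingredients: Ingredient[]; servings: number };
declare function scale(per100g: Nutrients, grams: number): Nutrients;    // per-100 g -> actual quantity
declare function sum(parts: Nutrients[]): Nutrients;
declare function divide(total: Nutrients, servings: number): Nutrients;

async function nutritionReport(recipeUrl: string) {
  const markdown = await mcpScrapeAsMarkdown(recipeUrl);          // 1. extract the recipe page
  const { ingredients, servings } = parseRecipe(markdown);        // 2. parse ingredients + servings
  const perIngredient = await Promise.all(                        // 3. one nutrition lookup per ingredient
    ingredients.map(async (i) => scale(await mcpSearchFoodByName(i.name), i.grams))
  );
  const total = sum(perIngredient);                               // 4. whole-recipe totals
  const perServing = divide(total, servings);                     // 5. per-serving numbers
  return { perIngredient, total, perServing };
}
The point isn’t to run this; it’s that the LLM does every one of those steps for you, with the two MCP servers supplying the ground truth.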
Naturally, if you’re using a different MCP client (LM Studio, Cherry, etc.), you can easily swap the Claude model out for another (Gemini, GPT-5, any local model at all via Ollama) — as long as it supports MCP tool calls.
Let’s get to it.
Setting Up the Bright Data MCP Server
Before anything, sign up for a Bright Data account here, then grab your API token.
New users get a test API key via welcome email, but you can always generate one from Dashboard -> Settings -> User management.
Option 1: Remote Server (Easiest)
Add this to your Claude/MCP client config. It connects directly to Bright Data’s remotely hosted MCP server.
If you’re using Claude, this goes in the claude_desktop_config.json file. You can find that option in Claude Desktop → Settings → Developer Options.
{
  "mcpServers": {
    "Bright Data": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.brightdata.com/mcp?token=YOUR_API_TOKEN_HERE"
      ]
    }
  }
}
Add a pro=1 parameter to that URL (before the token param) if you need to turn Pro mode on, i.e. https://mcp.brightdata.com/mcp?pro=1&token=YOUR_API_TOKEN_HERE (more details on Pro mode below).
Option 2: Running Locally + Optional Advanced Config
Of course, you could run it locally, too. Everything except API_TOKEN in env is optional (and if you copy the snippet verbatim, strip the // comments first; JSON doesn’t allow them).
{
  "mcpServers": {
    "Bright Data": {
      "command": "npx",
      "args": ["@brightdata/mcp"],
      "env": {
        "API_TOKEN": "YOUR_API_TOKEN_HERE",
        "PRO_MODE": "true",              // Enable all 60+ tools
        "RATE_LIMIT": "100/1h",          // Custom rate limiting
        "WEB_UNLOCKER_ZONE": "custom",   // Custom unlocker zone
        "BROWSER_ZONE": "custom_browser" // Custom browser zone
      }
    }
  }
}
Pro vs Basic Mode:
By default, you get the basic tools (search and scraping). Enable Pro mode to access nearly 60 tools including browser automation and web data extraction — note that this incurs additional PAYG charges.
Setting Up the OpenNutrition MCP Server
Next, we need the OpenNutrition dataset and its MCP server up and running. This one is local-only, meaning you’ll be running it (and its dataset) on your PC while you use it from Claude.
Start by cloning this repo: https://github.com/deadletterq/mcp-opennutrition.git
Of course, if you have the GitHub CLI you can do this directly:
gh repo clone deadletterq/mcp-opennutrition
…and npm install the thing.
Now, if you open this project in your IDE of choice, you’ll see what it wants to do in package.json > scripts
"scripts": {
"build": "rm -rf build && tsc && npm run convert-data && chmod 755 build/index.js",
"inspector": "tsc && npm run convert-data && npx @modelcontextprotocol/inspector npx tsx src/index.ts",
"convert-data": "tsx scripts/decompress-dataset.ts && tsx scripts/tsv-to-sqlite.ts && rm -rf data_local_temp"
}
After installing dependencies, you’re supposed to build this MCP server with npm run build — that’s it. It uses the built-in scripts (decompress-dataset.ts & tsv-to-sqlite.ts) to decompress the OpenNutrition dataset into a temporary data_local_temp folder, convert it from TSV to a ~350 MB SQLite database, and create an index.js file in the /build directory.
IMPORTANT: If you’re a Windows user, npm run build will fail on this step and you won’t get any of the required files those two scripts generate. Scroll down to see the fix 👇
That index.js is the script you’ll run to get the MCP server up and running. So after the above, you’ll have to add this to your Claude/MCP client config like so:
"mcp-opennutrition": {
"command": "/PATH/TO/your_nodejs_executable",
"args": [
"/ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-opennutrition/build/index.js"
]
}
Of course, if you already have node in your system environment variables, you can leave this as:
"mcp-opennutrition": {
"command": "node",
"args": [
"/ABSOLUTE/PATH/TO/PARENT/FOLDER/mcp-opennutrition/build/index.js"
]
}
If you’re using something like Node Version Manager (NVM), make double sure you’re using the NVM path for the exact version of Node.js you used to build the project and get index.js in the first place.
And once you’ve done that and restarted Claude, this MCP server will run fully locally on your machine, and automatically provide food and nutrition query capabilities to Claude. All data processing and queries happen locally with no external API calls, meaning you get the best of both worlds — privacy AND instant response times.
IMPORTANT: Fixes for Windows users
The build and convert-data scripts use Unix commands (rm, chmod) and make some assumptions about paths that don’t hold natively on Windows. We can work around that with shx, a cross-platform alternative that provides Unix-like commands on every platform: Unix-based systems, macOS, and even Windows.
Step 1: Fix the scripts in package.json
So first, do this:
npm install --save-dev shx
Then, we can replace the scripts section in package.json with this:
"scripts": {
"build": "shx rm -rf build && tsc && npm run convert-data && shx chmod 755 build/index.js",
"inspector": "tsc && npm run convert-data && npx @modelcontextprotocol/inspector npx tsx src/index.ts",
"convert-data": "tsx scripts/decompress-dataset.ts && tsx scripts/tsv-to-sqlite.ts && shx rm -rf data_local_temp"
}
Pretty much the same thing, except we’re prefixing any Unix-only command with shx.
That’s not everything, though.
Step 2: Fix path normalization in decompress-dataset.ts and tsv-to-sqlite.ts
If you look at ./scripts/decompress-dataset.ts, you’ll find this on lines 72–74:
if (import.meta.url === `file://${process.argv[1]}`) {
  decompressDataset().catch(console.error);
}
This is a simple “is this main(), aka the entrypoint module?” check — but it fails on Windows because import.meta.url and process.argv[1] are formatted differently there.
import.meta.url always returns a file URL, not a plain path string. On Windows, that gives you
file:///C:/Users/you/project/src/script.js
Fair enough. But process.argv[1] returns a filesystem path, not a file URL. So on Windows, you get:
C:\Users\you\project\src\script.js
(And `file://${process.argv[1]}` becomes file://C:\Users\you\project\src\script.js.)
Meaning the check never passes on Windows: the two values are never equal, so the scripts’ main logic simply never runs.
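You can see the mismatch for yourself with a two-line ES module (the filename is arbitrary):
// check.mjs -- run with: node check.mjs
console.log(import.meta.url);  // on Windows: file:///C:/Users/you/project/check.mjs
console.log(process.argv[1]);  // on Windows: C:\Users\you\project\check.mjs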
Luckily, the fix is simple. We use Node’s built-in url module to convert import.meta.url into a real file path (via fileURLToPath) before comparing it against path.resolve(process.argv[1]). (If the script doesn’t already import path, add that import as well.)
./scripts/decompress-dataset.ts
Before:
if (import.meta.url === `file://${process.argv[1]}`) {
  decompressDataset().catch(console.error);
}
After:
import { fileURLToPath } from 'url'; // add at top along with other imports
//...
const isMain = fileURLToPath(import.meta.url) === path.resolve(process.argv[1]);
if (isMain) {
  decompressDataset().catch(console.error);
}
Do the same for the other script, too. ./scripts/tsv-to-sqlite.ts
Before:
if (import.meta.url === `file://${process.argv[1]}`) {
  convertTsvToSqlite();
}
After:
import { fileURLToPath } from 'url'; // add at top along with other imports
//...
const isMain = fileURLToPath(import.meta.url) === path.resolve(process.argv[1]);
if (isMain) {
  convertTsvToSqlite();
}
The npm run build command should run without a hitch now.
Putting it all together — Prompting Claude
This is as simple as it gets. I’ll just grab a recipe from the internet and feed it to Claude with this prompt.
Use Bright Data MCP to extract this recipe: https://bakerbynature.com/the-best-cocoa-fudge-brownies/
Then provide COMPLETE nutritional information using the mcp-opennutrition MCP server by following these steps:
1. Extract ALL ingredients from the recipe with their quantities and units
2. For EVERY ingredient, query the opennutrition database to get its nutritional data
3. Calculate the nutritional contribution of each ingredient based on its quantity
4. Sum the total nutrition for the entire recipe
5. Divide by the number of servings to get per-serving nutrition
6. Present results in this format:
* Full ingredient breakdown (show calories, protein, fat, carbs per ingredient)
* Total recipe nutrition
* Per-serving nutrition
* Health-conscious analysis with recommendations
IMPORTANT:
* Query the nutrition DB for EVERY ingredient, even if it seems minor (salt, vanilla, etc.)
* If an ingredient isn’t found in the DB, note it explicitly and estimate if possible
* Show your calculation steps for transparency
* Assume standard serving sizes if the recipe doesn’t specify
* I’m health-conscious, so prioritize: calories, macros (protein/fat/carbs), fiber, sugar, sodium, and key micronutrients (iron, calcium, vitamin D).
💡 Remember, the Bright Data MCP comes with a search_engine tool, which lets you make Google/Yandex/etc. searches and get SERP data back, so you could actually alter this prompt to search for recipes automatically, and *then* use scrape_as_markdown to extract them as I do here.
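For example, the first line of the prompt could become something like:
> “Use the Bright Data search_engine tool to find a well-reviewed Thai basil chicken recipe, scrape the top result with scrape_as_markdown, and then run the same nutritional breakdown.”
Claude will chain the two tools on its own.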
Let’s break down what happens when Claude processes this prompt.
Phase 1: Recipe Extraction
Claude starts by calling the Bright Data MCP:
I'll extract the recipe and provide complete nutritional information for you.
Let me start by getting the recipe details.
> scrape_as_markdown
The scrape_as_markdown tool extracts any page as markdown, getting past anti-bot measures and geoblocks, and handling dynamic JavaScript content. Our detailed prompt helps Claude extract just the relevant parts — the recipe itself — while cutting out the rest of the website content (ads, headers, sidebars, etc.).
What happens behind the scenes:
- Bright Data fetches the page through their proxy network
- The MCP converts HTML to clean markdown
- Claude receives structured recipe data
- Claude parses ingredients, quantities, and instructions
Phase 2: Nutritional Data Retrieval
Next, Claude systematically queries the OpenNutrition database:
Perfect! I've extracted the recipe. Now let me get the nutritional
data for each ingredient. I'll query the nutrition database
systematically for every ingredient.
> search-food-by-name
This is where the OpenNutrition MCP shines. The search-food-by-name tool gets called once per ingredient — Claude handles the orchestration automatically. For this brownie recipe with 11 ingredients, Claude makes exactly 11 tool calls.
What happens behind the scenes:
- Claude identifies each ingredient: “all-purpose flour”, “unsalted butter”, “cocoa powder”, etc.
- For each ingredient, it queries the USDA FoodData Central database
- The database returns nutritional data per 100g
- Claude stores this data for the calculation phase
Phase 3: Calculation & Synthesis
Now Claude does the math — scaling each ingredient’s nutritional data to its actual quantity and summing everything up.
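To make that math concrete, here’s the per-ingredient scaling it applies. The per-100 g values below are made-up placeholders (not real database numbers), but the formula is exactly this:
// Hypothetical values for illustration only. NOT real USDA/OpenNutrition data.
const per100g = { kcal: 360, protein: 10, fat: 1, carbs: 76 }; // some ingredient, per 100 g
const gramsInRecipe = 125;                                      // how much of it the recipe uses

const factor = gramsInRecipe / 100;
const contribution = {
  kcal: per100g.kcal * factor,        // 450 kcal
  protein: per100g.protein * factor,  // 12.5 g
  fat: per100g.fat * factor,          // 1.25 g
  carbs: per100g.carbs * factor,      // 95 g
};
Claude does this for each of the 11 ingredients, sums the contributions into recipe totals, then divides by the serving count for per-serving numbers. Here’s the full report it produced for the brownie recipe: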
https://gist.github.com/sixthextinction/9740ac0333290d59469f772ad9e9a569#file-output-md
I particularly like the healthier alterations it suggested for the recipe. I don’t think that will land well for every recipe, though, so keep an eye on what it recommends for yours.
Note: Claude intelligently skipped the optional “1 Tbsp Espresso Powder” since it was marked as optional in the recipe. If you want optionals included, just update your prompt.
And we’re done! Of course you can play around with the prompt if you want something else, or want it in a different format.
Taking It Even Further
The MCP ecosystem is growing rapidly, and new servers are being released weekly. I recommend MCP registries like this one to keep up.
Here are some ideas to extend this project, just off the top of my head, using some MCP servers I can see here:
- Meal Planning: Combine this with a calendar MCP (like the Cal.com MCP) to generate weekly meal plans, and optimize for nutritional targets (macros, calories, specific vitamins).
- Cost Optimization: Add a grocery price MCP (like this one for Kroger’s), and use it to have the LLM suggest budget-friendly ingredient swaps
- Extract and analyze recipes shared on social media using these BlueSky, X, or Reddit MCPs.
The beauty of this approach is that everything is modular. You can mix and match MCP servers to build exactly what you need, and pick only the tools you want so you don’t pay for features you don’t use.
Final Thoughts
Expensive APIs exist because companies need to monetize their data and infrastructure. Look, I completely get it. But when you break down what they actually do, you often find:
- They’re querying public datasets (like USDA FoodData Central)
- They’re scraping public websites (recipe blogs)
- They’re doing calculations you could do yourself
- They’re wrapping everything in a nice API
With two open-source MCPs — Bright Data and OpenNutrition — plus Claude, you can replicate much of this functionality yourself. This is simply building on public data and open standards.
Is this perfect? No. Spoonacular/MyFitnessPal have years of refinement, edge case handling, 24/7 support backed by 99% uptime SLAs, and a massive pre-computed database. But for indie hackers, side projects, and MVPs, this approach gives you 90% of the value at 0% of the cost.
And that’s the power of the Model Context Protocol and its ecosystem: it democratizes access to data and tools, letting you build sophisticated features without needing much code, or breaking the bank.
Have questions or built something cool with MCP? I’d love to hear about it. Drop a comment below.