The AI industry is experiencing a gold rush, but not the kind you might expect. While developers chase the latest multi-agent architectures promising revolutionary capabilities, cloud providers are quietly counting tokens—and the revenue that comes with them. AWS’s new Strands Agents SDK exemplifies this trend, packaging complexity as innovation while driving up computational costs for enterprises.
The Multi-Agent Business Strategy Unveiled
AWS has positioned its Strands Agents SDK as the future of enterprise AI, built on three core components: foundation models, tools, and prompts. The framework prominently features multi-agent orchestration primitives, encouraging developers to build systems where multiple AI agents collaborate on tasks.
This isn’t just about technological advancement—it’s a calculated business strategy with clear economic incentives:
Token Consumption Economics: Multi-agent systems inherently consume significantly more tokens than single-agent approaches. Each agent interaction requires separate model calls, context management, and coordination overhead. When you multiply this across enterprise-scale deployments, the revenue impact becomes substantial.
Platform Lock-in: By providing pre-built multi-agent collaboration tools through Amazon Bedrock, AWS creates deeper integration touchpoints. The convenience of managed orchestration, session handling, and memory management generates switching costs that keep customers within their ecosystem.
Service Differentiation: Multi-agent capabilities allow AWS to justify premium pricing through enterprise features like supervisor-based coordination and automated task delegation.
The Single Agent Reality Check
Here’s the uncomfortable truth that cloud providers don’t want to highlight: single agents are often fully capable of handling complex tasks without the overhead of multi-agent architectures.
Single-agent systems offer compelling advantages:
Simpler architecture with fewer coordination complexities
Lower computational overhead and reduced token consumption
Faster decision-making without inter-agent communication delays
Easier debugging and maintenance
Research consistently shows that single-agent systems excel in controlled environments where problems can be fully modeled by one entity. The question becomes: when does the added complexity of multiple agents actually justify the increased costs?
The Workflow Alternative: A Better Path Forward
Instead of falling into the multi-agent trap, smart organizations are embracing workflow-based approaches that deliver similar outcomes at a fraction of the cost.
Why Workflows Beat Multi-Agent Systems
Predictable Structure: Workflows provide deterministic execution paths with clear checkpoints, timeouts, and human oversight capabilities. This contrasts sharply with the sometimes unpredictable nature of autonomous agent interactions.
Cost Efficiency: Workflow orchestration avoids the token-burning overhead of agent coordination. A single orchestrator can manage multiple tools and services without requiring separate agent instances, leading to dramatic cost savings.
Better Governance: Workflows enable validation, decision overriding, and human-in-the-loop steps that are challenging to implement in purely autonomous multi-agent systems. This is crucial for enterprise compliance requirements.
Easier Debugging: Workflow systems provide visual diagrams, execution logs, and clear audit trails that make troubleshooting straightforward compared to debugging complex agent interactions.
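To make the contrast concrete, here is a minimal TypeScript sketch of the workflow pattern: one deterministic orchestrator, one model call, and plain function tools. The helper functions are hypothetical stand-ins, not any vendor's API.

```typescript
// Hypothetical stand-ins for a model client and business tools.
async function callModel(prompt: string): Promise<string> {
  return `Drafted reply for: ${prompt}`; // swap in a real model call
}
async function fetchOrder(id: string): Promise<{ id: string; status: string }> {
  return { id, status: "delivered" }; // swap in a real lookup
}
async function sendEmail(to: string, body: string): Promise<void> {
  console.log(`email to ${to}: ${body}`);
}

// A deterministic workflow: fixed steps, a single model call, explicit checkpoints.
async function handleRefundRequest(orderId: string, customerEmail: string) {
  const order = await fetchOrder(orderId); // Step 1: plain data fetch, zero tokens
  if (order.status !== "delivered") {
    throw new Error("Refund requires a delivered order"); // hard checkpoint
  }
  // Step 2: the only model call in the whole flow.
  const draft = await callModel(`Draft a refund confirmation for order ${orderId}.`);
  // Step 3: a human-approval gate could sit here before anything is sent.
  await sendEmail(customerEmail, draft);
}

handleRefundRequest("o-123", "jane@example.com").catch(console.error);
```

Every step except the draft is ordinary code that costs no tokens, and the execution path is identical on every run, which is exactly what makes it auditable.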
The Token Consumption Reality
The numbers don’t lie. Multi-agent approaches can consume substantially more tokens due to:
Context replication across agents
Inter-agent communication overhead
Redundant processing when agents duplicate work
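A deliberately simplified, hypothetical illustration: suppose a supervisor hands the same 2,000-token context to three worker agents, and each returns a 500-token result the supervisor must then read back. One coordination round already costs roughly 3 × (2,000 + 500) + (2,000 + 1,500) = 11,000 tokens before any real work happens, while a single agent holding one context pays for those 2,000 tokens once.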
Organizations implementing multi-agent systems often discover that well-designed workflows with powerful single agents achieve equivalent functionality at significantly lower computational costs.
Making the Right Architecture Decision
The choice between multi-agent and workflow approaches shouldn’t be driven by marketing hype but by practical considerations:
Use Multi-Agent When:
Tasks require genuinely distinct personas with specialized knowledge
Parallel execution by different specialists provides measurable benefits
Dynamic task routing based on content analysis is essential
Use Workflows When:
Tasks can be decomposed into predictable steps
Cost control and token optimization are priorities
Compliance and auditability requirements are strict
The problem requires structured orchestration rather than autonomous collaboration
The Path Forward for Smart Organizations
AWS’s Strands SDK represents sophisticated engineering, but it also exemplifies how cloud providers package complexity as necessity. Before implementing multi-agent architectures, ask these critical questions:
Can a single agent with proper tooling solve this problem?
What are the true token consumption implications?
Do we need agent autonomy or just workflow automation?
Are we solving a technical problem or creating vendor dependency?
The most successful AI implementations often follow the principle of simplicity: start with the least complex solution that meets your requirements, then add complexity only when clearly justified by measurable benefits.
Bottom Line
The multi-agent revolution might be real, but so is the bill that comes with it. While AWS and other cloud providers promote increasingly complex agentic AI architectures, smart organizations are discovering that workflow-based solutions with capable single agents often deliver better results at lower costs.
Don’t let the #GenAI hype drive your architecture decisions. In the world of AI development, sometimes the most innovative choice is choosing simplicity over complexity—and keeping your cloud bills manageable in the process.
The next time someone suggests a multi-agent solution, ask them to justify why a workflow won’t work. Your budget will thank you.
You need to decide something big. Do you want one AI to do all the work? Or do you want many AIs to handle different tasks? This choice matters for building smart AI systems. These systems can study company data, run apps, or manage websites. The answer isn’t a simple “yes” or “no.” Instead, it depends on your goals. It also depends on how tricky the task is. And it relies on how well you control the system. In this blog, we’ll explore the debate between one AI and many AIs. We’ll look at real-world issues too. Finally, we’ll give tips for picking the best option in June 2025.
Breaking Down the Tech Jargon
Let’s make some terms clear and simple:
AI Agent: A clever program, like a chatbot or data checker, that uses AI for specific jobs.
MCP (Model Context Protocol): A method to link AI models to outside tools and data. It helps AI work with things like CRMs or design software.
RAG (Retrieval-Augmented Generation): A trick that lets AI grab fresh info from the web or databases to improve its answers.
Tools: Add-ons for AI, such as APIs for stock prices or text checkers.
Tool Confusion: When AI gets mixed up with too many tools or wrong data, causing mistakes.
Single-Agent Design: One AI handles everything, like a lone cook making a meal.
Multi-Agent Design: Several AIs team up, each with its own role, like a kitchen crew with different skills.
Imagine an AI as an office worker. One AI might manage emails, spreadsheets, and meetings alone. But that can be overwhelming. A multi-agent setup splits the tasks among experts. So, which is better? Let’s find out.
The Single-Agent Approach: Simplicity Wins (Sometimes)
Suppose you build an AI to track Tesla. It needs to share news, stock prices, and social media vibes. A single-agent design puts all this in one AI’s hands. Here’s what it uses:
RAG to fetch news articles.
MCP to tap into tools like a stock API.
Tools to check feelings on social media.
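A rough sketch of that single-agent shape in TypeScript. The helper functions are made-up stand-ins for RAG, an MCP-connected stock API, and a sentiment tool:

```typescript
// Hypothetical stand-ins for the agent's tools.
async function fetchNews(topic: string): Promise<string[]> {
  return [`${topic} headline 1`, `${topic} headline 2`]; // RAG retrieval
}
async function fetchStockPrice(ticker: string): Promise<number> {
  return ticker === "TSLA" ? 250.0 : 100.0; // stock API reached via MCP
}
async function analyseSentiment(posts: string[]): Promise<string> {
  return posts.length > 0 ? "mostly positive" : "neutral"; // social-media tool
}

// One agent owns the whole job and calls every tool itself.
async function singleAgentReport(topic: string, ticker: string): Promise<string> {
  const news = await fetchNews(topic);
  const price = await fetchStockPrice(ticker);
  const mood = await analyseSentiment(news);
  return `News: ${news.join("; ")}. Price: $${price}. Sentiment: ${mood}.`;
}

singleAgentReport("Tesla", "TSLA").then(console.log);
```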
This works well for straightforward jobs. For example, the Cognition AI team, creators of the Devin tool, wrote a blog on June 12, 2025. They said one AI is more dependable. Multi-agent setups often mess up because they don’t share info well or they clash. Instead, they recommend one AI with strong info control, like Claude Code’s long-trace system, for tasks with many steps.
Good Things:
It’s easier to handle and fix.
There’s less chance of disagreements.
It suits tasks that focus and flow in order.
Bad Things:
It struggles with varied or side-by-side tasks.
It can fail when things get too complicated.
The diagram above shows a single-agent system splitting a task into smaller bits. It uses context crunching (like with LLMs) to stay steady on long jobs. This proves a smart single AI can tackle tough stuff if it handles info right.
The Multi-Agent Approach: Teamwork Makes the Dream Work (With Caveats)
Now, picture a multi-agent design for the Tesla project. Each AI takes a role:
Research Agent: Uses RAG and text tools for news.
Financial Agent: Tracks stocks with an API.
Sentiment Agent: Checks social media buzz.
Each AI focuses on one thing and uses MCP to reach tools. Jason Zhou posted on X on June 13, 2025, backing this idea. He said sub-agents can manage small tasks if they share stuff like chat history. However, he warned about “merge conflicts” if tasks overlap without good planning. Cognition AI mentions this problem too.
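Here is the matching multi-agent sketch, again with made-up pieces. The key detail is the shared context object, the sync point that Jason Zhou and Cognition AI both worry about:

```typescript
// Shared context keeps the specialists in sync and avoids duplicated work.
interface SharedContext {
  topic: string;
  findings: Record<string, string>;
}

// Each "agent" is a narrow specialist writing into the shared context.
async function researchAgent(ctx: SharedContext): Promise<void> {
  ctx.findings.news = `latest ${ctx.topic} headlines`; // RAG + text tools
}
async function financialAgent(ctx: SharedContext): Promise<void> {
  ctx.findings.stock = "$250.00"; // stock API via MCP
}
async function sentimentAgent(ctx: SharedContext): Promise<void> {
  ctx.findings.mood = "mostly positive"; // social-media buzz
}

async function multiAgentReport(topic: string): Promise<Record<string, string>> {
  const ctx: SharedContext = { topic, findings: {} };
  // Specialists run in parallel; each writes a different key, so no clashes.
  await Promise.all([researchAgent(ctx), financialAgent(ctx), sentimentAgent(ctx)]);
  return ctx.findings;
}

multiAgentReport("Tesla").then(console.log);
```

If two agents ever wrote the same key, you would hit exactly the "merge conflict" problem described above.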
Good Things:
Each AI is a pro, so answers are sharper.
You can grow it for big, messy tasks.
It’s like people teaming up for huge projects.
Bad Things:
Without good info sharing, it gets messy.
Today’s tech makes it tough to sync up.
It breaks easily if not built just right.
Why It’s Case by Case
No single way wins every time—it depends on the situation:
Simple Tasks: For stuff like daily news summaries, one AI with RAG and MCP is enough. This fits Cognition AI’s love for easy solutions.
Complex Tasks: For a full Tesla breakdown (news, stocks, vibes), many AIs shine if they share info via MCP and stay in sync, as Jason Zhou points out.
Tech Today: In June 2025, one AI is safer since multi-AI setups can be shaky. But tools like MCP might change that soon.
Think of making an app. A calculator app needs one piece. But an online shop needs parts for listings, payments, and shipping—each with its own crew. AI design works the same way.
Can One Agent Handle Multiple Tools?
Yes, one AI can use tools like RAG and MCP if they match its purpose. For instance, a Research Agent might use RAG for news and MCP for extra tools. But toss in a stock API, and it might stumble—like asking a writer to do math homework. Many AIs keep tools clear and focused.
The Verdict: Choose Wisely
This debate isn’t about crowning a winner. It’s about picking what fits the job. One AI rocks for simple, steady tasks, as Cognition AI pushes. Many AIs flex for big, team-based jobs, but only with solid info sharing, as Jason Zhou and Anthropic’s studies suggest. MCP boosts both by making tools standard.
What to Do:
Check your needs: Pick one AI for step-by-step tasks, or many for tasks at once.
Use MCP: Keep tools and info easy to reach and consistent.
Watch 2025 tech: As info sharing gets better, multi-AI setups might take off.
In June 2025, AI keeps changing fast. Staying flexible and choosing smart are key. Start small, grow wisely, and adjust as tech improves.
Want to create a Chrome extension but dreading the setup? With create-chrome-ext, you can scaffold a fully functional extension in minutes, using frameworks like React, Vue, or Svelte. This guide walks you through building a simple Chrome extension with a React and TypeScript template. It’s perfect for anyone interested in building browser tools to boost their daily productivity.
create-chrome-ext is a scaffolding tool that generates Chrome extension boilerplates for popular frameworks. Powered by Vite, it offers fast development with hot module replacement (HMR) and supports TypeScript, JavaScript, and multiple languages. Whether you’re building a popup, content script, or options page, this tool simplifies the process.
Why Build a Chrome Extension?
Chrome extensions enhance browsing by adding custom features, like ad blockers, productivity tools, or UI tweaks. With create-chrome-ext, you can prototype ideas quickly, leveraging modern web technologies. You can even use Copilot to quickly take a prototype to a production-level extension.
Prerequisites
Node.js: Version 14.18.0 or higher.
Chrome Browser: For testing your extension.
A basic understanding of JavaScript and your chosen framework (e.g., React).
Step-by-Step Guide
Step 1: Install create-chrome-ext
Run the following command to create a new project with a React and TypeScript template:
npm create chrome-ext@latest my-crx-app --template react-ts
This sets up a project folder named my-crx-app with all necessary files.
Expected Output:
Success! Created my-crx-app
Step 2: Navigate to Your Project
Move into the project directory:
cd my-crx-app
Install dependencies:
npm install
Run the build command to generate the production files:
npm run build
Step 3: Explore the Project Structure
Your project includes several key folders and files. Don’t worry—most of these are optional and you can start simple!
Essential Files (Required):
manifest.json: The “ID card” of your extension. Tells Chrome what your extension does, what permissions it needs, and which files to load.
src/manifest.ts: The source file that generates the final manifest.json during build.
Core Extension Pages (Choose What You Need):
src/popup/: Most Common – The small window that appears when users click your extension icon. Perfect for quick actions like a calculator or note-taking.
src/background/: Optional – Runs behind the scenes even when Chrome is closed. Use for tasks like monitoring websites or sending notifications.
src/contentScript/: Optional – Injects code directly into web pages. Great for modifying how websites look or adding new features to existing sites.
Additional UI Options (All Optional):
src/options/: Settings page accessible via right-clicking your extension icon → Options.
src/sidepanel/: Modern sidebar that stays open while browsing (like bookmarks panel).
src/newtab/: Replaces Chrome’s new tab page with your custom design.
src/devtools/: Adds a tab to Chrome DevTools for developer-focused extensions.
Supporting Files:
public/: Icons and static assets (logos, images).
src/assets/: Additional resources like fonts or images.
The Beauty: You only need to build what you actually use! Start with just a popup for your first extension.
Simple Guide: What Should You Build?
For Beginners – Start Here: Just use src/popup/ for a simple tool like a password generator, unit converter, or quick note-taker.
Common Use Cases:
Popup Only: Calculator, color picker, quick tools (easiest to build)
Content Script: Page modifier (dark mode, ad blocker, text highlighter)
Options Page: Any extension with user settings
Side Panel: Dashboard for productivity apps, bookmark managers
Real Examples:
src/popup/: A simple counter app (already included in the template!)
src/background/: Listens for messages from popup, runs in background
src/contentScript/: Could inject a “Reading Time” indicator on blog posts (see the sketch after this list)
src/options/: User preferences like theme colors or API keys
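To show how small that content-script idea can be, here is a minimal reading-time sketch. The 200-words-per-minute figure and the styling are assumptions; the template wires src/contentScript/ into the manifest for you.

```typescript
// src/contentScript/index.ts: runs inside every matched page.
// Assumes an average reading speed of roughly 200 words per minute.
const words = document.body.innerText.trim().split(/\s+/).length;
const minutes = Math.max(1, Math.round(words / 200));

const badge = document.createElement("div");
badge.textContent = `~${minutes} min read`;
badge.style.cssText =
  "position:fixed;top:12px;right:12px;padding:4px 10px;" +
  "background:#222;color:#fff;border-radius:9999px;z-index:99999;font:12px sans-serif";
document.body.appendChild(badge);
```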
Step 4: Test Your Extension
Load the extension in Chrome:
Open Chrome and navigate to chrome://extensions/.
Enable “Developer mode” (top-right toggle).
Click “Load unpacked” and select the my-crx-app/build folder. You’ll see a default extension popup (from the React template) when clicking the extension icon.
Step 5: Customize and Expand
Now the fun begins! Here’s how to make the extension your own:
Start Simple:
Edit the Popup: Open src/popup/Popup.tsx and change the counter into your own tool
Update the Name: Edit package.json to change "name" and "displayName"
Change the Icon: Replace files in public/img/ with your own 16×16, 32×32, 48×48, and 128×128 pixel PNG files
Add More Features:
Need Settings? Build an options page in src/options/
Want to Modify Websites? Add code to src/contentScript/
Need Background Tasks? Use src/background/ for notifications or data sync
Pro Tip: The template includes a working example where the popup, background, and options pages all communicate with each other using Chrome’s messaging system. Study how the counter value syncs between different parts!
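If you're curious what that messaging looks like, the core pattern is chrome.runtime messaging plus chrome.storage. A stripped-down sketch follows; the GET_COUNT message name is made up for illustration, and the template's actual wiring differs:

```typescript
// In the popup: ask the background service worker for the current count.
chrome.runtime.sendMessage({ type: "GET_COUNT" }, (response) => {
  console.log("count from background:", response.count);
});

// In the background service worker: answer messages from the popup or options page.
chrome.runtime.onMessage.addListener((message, _sender, sendResponse) => {
  if (message.type === "GET_COUNT") {
    chrome.storage.local.get({ count: 0 }, ({ count }) => sendResponse({ count }));
    return true; // keep the channel open for the async response
  }
});
```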
Don’t Overthink It – Chrome Extensions Are Just Web Pages!
Here’s the secret that makes Chrome extensions feel less intimidating: they’re just web pages with special powers.
Popup: A tiny web page (like a modal window)
Options: A normal web page for settings
Content Script: JavaScript that runs on other websites
Background: A web page that runs invisibly
If you can build a React component, you can build a Chrome extension. The create-chrome-ext template handles all the complex Chrome-specific setup, so you can focus on building your actual features.
Quick Confidence Booster: The template you just created is already a fully functional extension! You’ve successfully built and installed a Chrome extension in under 5 minutes. Everything from here is just customisation.
Copy-and-Paste Example
Here’s a full example to create and test a Chrome extension:
```bash
# Create and set up the project
npm create chrome-ext@latest my-crx-app --template react-ts
cd my-crx-app
npm install
npm run build
```
Test the Extension:
Open chrome://extensions/ in Chrome.
Enable “Developer mode.”
Click “Load unpacked” and choose my-crx-app/build.
Expected Result: A popup appears when you click the extension icon, displaying the default React UI.
Why Use create-chrome-ext?
Speed: Generate a project in seconds with pre-configured boilerplates.
Flexibility: Choose from React, Vue, Svelte, or other frameworks.
Modern Tooling: Enjoy Vite’s fast HMR and optimized builds.
Multilingual Docs: Available in English, Chinese, French, and more.
Next Steps
Customize: Add features like content scripts or background services by editing manifest.json and src/.
Deploy: Package your extension for the Chrome Web Store using npm run build and follow Google’s guidelines.
Website migrations—whether to upgrade technology, improve performance, or rebrand—are critical for businesses aiming to stay competitive. However, migrations come with risks like lost traffic, broken functionalities, or SEO damage. A robust migration strategy, paired with comprehensive test cases and tools like Playwright, enhanced by AI capabilities, can ensure a seamless transition. This guide explores how Playwright facilitates successful migrations, best practices for end-to-end (E2E) testing, and the distinction between E2E and unit tests.
Why Playwright for Website Migrations?
Playwright, an open-source automation framework developed by Microsoft, is designed for testing and automating modern web applications. Its features make it an excellent choice for website migrations, ensuring functionality, performance, and user experience remain intact. Key features include:
Cross-Browser Support: Tests applications across Chromium, Firefox, and WebKit, ensuring compatibility.
Auto-Wait: Automatically waits for elements to be actionable, reducing test flakiness.
Parallel Testing: Runs multiple tests simultaneously, speeding up validation for large sites.
Trace Viewer: Provides detailed insights with screenshots, videos, and network logs for debugging.
Performance Monitoring: Tracks load times and optimises user experience.
AI Integration: Can be paired with AI tools for smarter test generation, as seen in cases like migrating 4,000+ Cypress tests to Playwright in three months (Medium: Fast Migration with OpenAI).
While specific case studies of large companies using Playwright for website migrations are limited, its adoption for testing is well-documented. Microsoft leverages Playwright extensively, as seen in the Microsoft Playwright Testing service, which supports scalable E2E testing. Similarly, WordPress has adopted Playwright for improving E2E testing (WordPress: Migrate E2E to Playwright), and companies like Allegro have used it for test automation migration (Allegro: Selenium to Playwright). These examples suggest Playwright’s reliability for complex web projects, likely including migrations.
A Robust Migration Strategy with Playwright
A successful website migration requires a structured approach. Here’s how Playwright fits into each phase:
Define Objectives and Scope: Use Playwright to crawl the existing site and map critical user flows (e.g., login, checkout) to prioritise testing.
Build a Test Plan: Create detailed test cases with Playwright to simulate user actions like form submissions and navigation, leveraging its cross-browser support.
Set Up a Staging Environment: Run Playwright tests in staging to verify functionality and debug issues using Trace Viewer.
Implement Tests: Write E2E tests to validate user flows and unit tests for individual components.
Secure with Backups: Use Playwright to test restored backups in staging before migration.
Execute Migration: Run a subset of Playwright E2E tests during deployment to catch immediate issues.
Validate in Production: Use Playwright to test across browsers and devices, ensuring performance matches pre-migration baselines.
Monitor Post-Launch: Schedule periodic Playwright tests to monitor site health and detect issues like slow-loading elements.
Best Practices for E2E Testing with Playwright
To maximise Playwright’s effectiveness during migrations, follow these best practices:
Test Real User Scenarios: Focus on critical paths like user registration, search, or checkout to ensure core functionalities work.
Use Page Objects: Organise tests into modular page objects for maintainability. For example, a LoginPage object can encapsulate login-related actions (see the sketch after this list).
Leverage Parallel Testing: Run tests across multiple browsers simultaneously to save time, as supported by Playwright’s parallel execution.
Incorporate Visual Testing: Use snapshot testing to ensure UI consistency across migrations.
Test for Accessibility: Use Playwright’s accessibility assertions to ensure compliance with standards like WCAG.
Monitor Performance: Track load times and interaction delays to optimize user experience.
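As an illustration of the page-object practice, here is a minimal hypothetical LoginPage and a test that uses it. Selectors and copy are invented, and a baseURL is assumed to be set in playwright.config.ts:

```typescript
import { test, expect, type Page } from "@playwright/test";

// Page object: all login-related selectors and actions live in one place.
class LoginPage {
  constructor(private page: Page) {}

  async goto() {
    await this.page.goto("/login"); // resolves against the configured baseURL
  }

  async login(email: string, password: string) {
    await this.page.getByLabel("Email").fill(email);
    await this.page.getByLabel("Password").fill(password);
    await this.page.getByRole("button", { name: "Sign in" }).click();
  }
}

test("user can log in after the migration", async ({ page }) => {
  const loginPage = new LoginPage(page);
  await loginPage.goto();
  await loginPage.login("user@example.com", "secret");
  await expect(page.getByText("Welcome back")).toBeVisible();
});
```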
What to Include in E2E Tests
E2E tests should validate the entire application from the user’s perspective. Key areas to cover include:
User Journeys: Test complete flows, such as signing up, logging in, or completing a purchase.
Cross-Browser Compatibility: Verify functionality across Chrome, Firefox, Safari, and mobile browsers.
Third-Party Integrations: Test interactions with APIs, databases, or payment gateways.
Localisation: If applicable, test different languages and regional settings.
Error Handling: Simulate network failures or invalid inputs to ensure graceful degradation.
| E2E Test Category | Examples | Playwright Feature Used |
| --- | --- | --- |
| User Journeys | Login, checkout, search | Auto-wait, page objects |
| Cross-Browser Compatibility | Test on Chrome, Firefox, Safari | Cross-browser support |
| Third-Party Integrations | API calls, payment processing | Network interception |
| Localisation | Language-specific content | Emulation of locales |
| Error Handling | Invalid inputs, network errors | Trace Viewer for debugging |
What Not to Include in E2E Tests
E2E tests should focus on external behaviour, not low-level details. Avoid including:
Unit-Level Testing: Testing individual functions or methods, which belongs in unit tests.
Internal State Verification: Checking internal application states, as E2E tests focus on user-facing outcomes.
Component-Level Testing: Testing isolated UI components, which should be handled by tools like Jest or React Testing Library.
What to Cover in Unit Tests
Unit tests ensure the reliability of individual code components. They should include:
Individual Functions and Methods: Verify each function works as expected (e.g., a function calculating discounts).
Edge Cases: Test with invalid inputs, boundary values, or extreme scenarios.
Mocking Dependencies: Isolate units by mocking external dependencies like APIs or databases.
Performance (Optional): For critical functions, test execution speed to ensure efficiency.
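To ground the first two categories, here is a minimal Jest-style sketch; calculateDiscount is a hypothetical helper, not code from any real project:

```typescript
// Hypothetical unit under test.
function calculateDiscount(total: number, rate: number): number {
  if (rate < 0 || rate > 1) throw new Error("rate must be between 0 and 1");
  return total * (1 - rate);
}

describe("calculateDiscount", () => {
  it("applies the rate to the total", () => {
    expect(calculateDiscount(100, 0.2)).toBe(80);
  });

  it("rejects out-of-range rates (edge case)", () => {
    expect(() => calculateDiscount(100, 1.5)).toThrow();
  });
});
```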
| Unit Test Category | Examples | Tools/Techniques |
| --- | --- | --- |
| Functions/Methods | Calculate discount, format data | Jest, Mocha |
| Edge Cases | Null inputs, large datasets | Assertions, boundary testing |
| Error Handling | Handle API failures, invalid formats | Mocking libraries (e.g., Sinon) |
| Mocking Dependencies | Mock database or API calls | Mocking frameworks |
| Performance | Optimise critical functions | Benchmarking tools |
Wrap up!
A successful website migration hinges on a well-defined strategy, thorough testing, and the right tools. Playwright, with its robust features and AI integration potential, is a powerful ally for ensuring migrations are seamless and reliable. By following best practices for E2E testing—such as testing user journeys, leveraging parallel execution, and monitoring performance—you can minimise risks and deliver a high-quality user experience. Understanding the distinction between E2E and unit tests ensures comprehensive coverage, with E2E tests validating user-facing behaviour and unit tests securing individual components. While specific examples of large companies using Playwright for migrations are limited, its adoption by organisations like Microsoft and WordPress for testing underscores its effectiveness for complex web projects.
In the fast-evolving world of software development, legacy products often present unique challenges. Outdated codebases, redundant features, and inefficient workflows can hinder performance, increase maintenance costs, and limit scalability. As part of our investigation into modernising a legacy product, we explored tools that enable automatic user journey tracking, feature usage analysis, and seamless integration with end-to-end (E2E) testing workflows using Playwright. Our goal? To rebuild or refactor the product without introducing issues, while identifying under-utilised features to optimise costs and enhance high-impact functionalities.
This blog details our findings, focusing on three leading analytics tools—Heap, Userpilot, and Usermaven—and how they can help teams log feature usage, streamline testing, and drive data-informed decisions during a legacy product overhaul.
The Challenge: Modernising a Legacy Product
Legacy products often accumulate technical debt over time, with features that may no longer serve users effectively. Before embarking on a rebuild or refactor, it’s critical to understand how users interact with the product. Key questions include:
Which features are actively used? Identifying high-engagement features helps prioritize enhancements.
Which features are underutilized or obsolete? Removing or deprecating unused features reduces maintenance costs and simplifies the codebase.
How can we ensure a seamless transition? Refactoring without disrupting user experience requires robust testing and validation against real user behavior.
To address these challenges, we evaluated tools that meet the following criteria:
No manual tracking code changes: Developers shouldn’t need to add custom event tracking, as this increases complexity in a legacy codebase.
Automatic user journey generation: Tools must quickly map user interactions to inform refactoring decisions.
Playwright integration: Support for E2E testing to validate refactored features against real-world usage.
Feature usage analysis: Detailed insights into which features are used (or ignored) to guide optimization.
Data export for testing and documentation: Ability to export user interaction data for automated test creation and documentation.
Cost efficiency: Tools should align with budget constraints while delivering value.
After thorough research, we shortlisted Heap, Userpilot, and Usermaven for their no-code tracking capabilities and testing integration potential. Below, we share our insights and recommendations for leveraging these tools to modernize a legacy product.
Why Analytics Tools Matter for Legacy Product Refactoring
Refactoring a legacy product without understanding user behavior is like renovating a house without knowing which rooms are used. Analytics tools provide a clear picture of feature usage, enabling data-driven decisions. By integrating these tools with Playwright E2E testing, teams can:
Log all feature interactions to identify high- and low-usage features.
Generate automated test cases based on real user journeys.
Validate refactored features against actual usage patterns to avoid regressions.
Reduce costs by deprecating unused features and focusing on high-value enhancements.
Here’s how Heap, Userpilot, and Usermaven stack up in addressing these needs.
Tool Evaluation: Heap, Userpilot, and Usermaven
Heap: The Comprehensive Powerhouse
Best for: Teams with budget flexibility seeking maximum tracking and testing integration.
Why it stands out:
Industry-leading autocapture: Heap automatically logs every user interaction—clicks, page views, form submissions—without requiring manual event setup. This is ideal for legacy products where adding tracking code is impractical.
Retroactive analysis: Teams can analyze historical data, even for events not initially defined, making it perfect for uncovering insights from past usage.
Playwright integration: Heap’s robust API and webhook support enable seamless data export for test script creation. Its event visualizer simplifies feature tagging for test scenario generation.
Feature usage insights: Heap’s path analysis and event correlation provide granular visibility into which features drive engagement and which are dormant.
Challenges:
Cost: Heap’s custom pricing model (estimated at $3,600+ annually) may be prohibitive for smaller teams.
Learning curve: Non-technical users may find Heap’s advanced features complex.
Use case fit: Heap is ideal for teams refactoring complex legacy systems with diverse user interactions, especially those requiring mobile app tracking alongside web analytics.
Userpilot: The Balanced Choice
Best for: Mid-market teams seeking transparent pricing and robust documentation.
Why it stands out:
No-code feature tagging: Userpilot’s Chrome extension allows non-technical team members to tag features, reducing developer workload in legacy codebases.
Session replay: Built-in session replays help debug user interactions, making it easier to validate refactored features against real behavior.
Advanced journey analysis: Userpilot’s flow tracking and segmentation capabilities provide clear insights into user paths, helping prioritize high-impact features.
Transparent pricing: Starting at $249/month, Userpilot offers predictable costs and comprehensive documentation for easy onboarding.
Challenges:
Limited webhook support: Automation for Playwright testing requires custom development, as webhook capabilities are basic.
Web-focused: Userpilot lacks mobile app tracking, which may limit its use for multi-platform legacy products.
Data export limitations: Bulk data export requires an Enterprise or Growth add-on.
Use case fit: Userpilot suits teams with web-based legacy products who need a balance of automation, manual control, and strong documentation to guide refactoring efforts.
Usermaven: The Budget-Friendly Option
Best for: Startups or cost-conscious teams needing basic analytics with AI insights.
Why it stands out:
Affordable pricing: Starting at $49/month, Usermaven is the most cost-effective option, with unlimited data retention across all plans.
AI-powered insights: Automated funnel and journey analysis reduce manual effort in identifying feature usage patterns.
Cookieless tracking: Privacy-friendly tracking aligns with modern data regulations, reducing compliance overhead.
Simple setup: Usermaven’s straightforward implementation is ideal for teams with limited resources.
Challenges:
No session replay: Lack of visual debugging tools limits its ability to troubleshoot user interactions.
Basic integrations: Limited webhook and API capabilities hinder advanced Playwright automation.
Web-only: Like Userpilot, Usermaven lacks mobile app tracking.
Use case fit: Usermaven is perfect for smaller teams or startups refactoring simpler legacy products on a tight budget, where basic analytics suffice.
Playwright Integration: Bridging Analytics and Testing
To ensure a smooth refactoring process, we prioritized tools that integrate with Playwright, a powerful E2E testing framework. Playwright enables automated testing of refactored features by simulating real user interactions. Analytics tools enhance this process by:
Generating test scenarios from observed user behavior.
Exporting interaction data (e.g., clicks, navigation paths) for test script creation.
Debugging failed tests using session replays or heatmaps.
Supporting CI/CD pipelines via webhooks or APIs for continuous testing.
Here’s how the tools perform in these areas:
| Capability | Heap | Userpilot | Usermaven |
| --- | --- | --- | --- |
| API Access for Test Data | Excellent (robust API) | Good (REST API) | Basic (limited API) |
| Real-time Event Capture | Yes | Yes | Yes |
| Data Export for Tests | Multiple formats | CSV/JSON (add-on) | CSV/JSON |
| Session Replay Debugging | Built-in | Built-in | None |
| Webhook Integration | Full support | Limited | Limited |
| CI/CD Pipeline Support | Good | Possible | Basic |
Heap leads with its comprehensive API and webhook support, making it the easiest to integrate with Playwright for automated test creation and CI/CD pipelines. Userpilot offers solid capabilities, especially for debugging with session replays, but requires custom work for advanced automation. Usermaven lags in integration depth, suitable only for basic test data export.
Implementation Roadmap: Logging Features and Optimizing the Refactor
To leverage these tools effectively, we propose a three-phase implementation plan tailored to legacy product refactoring:
Phase 1: Quick Setup (Week 1)
Install tracking snippet: Deploy the chosen tool’s code snippet to begin autocapture. Heap and Usermaven require minimal configuration, while Userpilot’s Chrome extension simplifies feature tagging.
Enable autocapture: Activate automatic tracking of clicks, navigation, and form submissions.
Configure user identification: Set up user IDs to track individual journeys and feature usage.
Tag key features: Use no-code interfaces to label critical features for analysis.
Phase 2: Testing Integration (Weeks 2–3)
Set up API access: Authenticate APIs to export user interaction data for Playwright test scripts.
Create export scripts: Develop scripts to pull click paths and navigation data for test automation (see the sketch after this list).
Configure webhooks: For Heap, set up webhooks to stream real-time data to CI/CD pipelines; for Userpilot and Usermaven, explore custom integrations.
Test Playwright integration: Validate that exported data generates accurate test scenarios reflecting real user behavior.
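To illustrate the export-script step mentioned above, here is a hedged sketch that turns a recorded click path into a Playwright test body. The JourneyStep shape is an assumption for illustration; real export formats differ per tool, so you would map their fields accordingly:

```typescript
// Assumed shape of an exported user journey; real exports will differ by tool.
interface JourneyStep {
  action: "click" | "goto";
  selector?: string;
  url?: string;
}

// Generate a Playwright test body from a recorded journey.
function journeyToTest(name: string, steps: JourneyStep[]): string {
  const body = steps
    .map((s) =>
      s.action === "goto"
        ? `  await page.goto("${s.url}");`
        : `  await page.click("${s.selector}");`
    )
    .join("\n");
  return `test("${name}", async ({ page }) => {\n${body}\n});`;
}

const journey: JourneyStep[] = [
  { action: "goto", url: "/reports" },
  { action: "click", selector: "#export-csv" },
];
console.log(journeyToTest("legacy reports export still works", journey));
```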
Phase 3: Documentation and Optimization (Week 4+)
Automate journey exports: Create reports of user journeys to document feature usage and inform refactoring priorities.
Validate features: Use analytics to confirm which features are heavily used and should be preserved or enhanced.
Deprecate unused features: Identify low-usage features for removal, reducing codebase complexity and maintenance costs.
Integrate with CI/CD: Establish continuous testing workflows to validate refactored features against real-world usage.
Cost-Benefit Analysis: Saving Costs and Enhancing Features
By logging feature usage, these tools help identify opportunities to save costs and enhance high-value features:
Eliminate unused features: For example, if analytics reveal that a legacy reporting module is rarely used, it can be deprecated, reducing maintenance overhead.
Focus on high-impact features: If users frequently engage with a specific workflow (e.g., a search function), prioritize its optimization during refactoring.
Streamline testing: Playwright integration ensures refactored features align with real user behavior, minimizing regressions and rework.
Pricing Snapshot
| Tool | Starting Price | Mid-Tier | Enterprise | Free Plan |
| --- | --- | --- | --- | --- |
| Usermaven | $49/month | $199/month | Custom | Limited (basic) |
| Userpilot | $249/month | $749/month | Custom | None |
| Heap | Custom (~$3,600+/yr) | Custom | Custom | 10k sessions/month |
Usermaven offers the lowest cost, ideal for startups. Userpilot provides predictable pricing for mid-market teams. Heap suits enterprises with complex needs but requires budget flexibility.
Final Recommendations
Based on our investigation, the choice of tool depends on your team’s priorities:
Choose Heap if you need:
Comprehensive autocapture and retroactive analysis.
Robust Playwright integration for advanced testing automation.
Mobile and web tracking for multi-platform legacy products.
Best for: Enterprises with complex systems and sufficient budget.
Choose Userpilot if you want:
Transparent pricing and excellent documentation.
Session replays for debugging and strong journey analysis.
A balance of automation and manual control.
Best for: Mid-market teams with web-based legacy products.
Choose Usermaven if you prioritize:
Low cost and quick setup.
AI-powered insights for basic feature analysis.
Privacy-friendly tracking.
Best for: Startups or teams with simple refactoring needs.
Why This Matters for Legacy Modernisation
Modernizing a legacy product is a high-stakes endeavor. Without clear insights into feature usage, teams risk wasting resources on low-value components or introducing regressions that frustrate users. Analytics tools like Heap, Userpilot, and Usermaven provide a data-driven foundation for refactoring by:
Logging all features automatically, eliminating developer overhead in legacy codebases.
Identifying cost-saving opportunities by highlighting unused features for deprecation.
Enhancing user experience by prioritizing high-engagement features.
Ensuring quality through Playwright integration, validating refactored features against real user behavior.
By adopting one of these tools, teams can approach refactoring with confidence, ensuring the modernized product is leaner, more efficient, and aligned with user needs. Whether you’re a startup optimizing on a budget or an enterprise tackling a complex legacy system, these tools offer a path to smarter, data-informed modernization.
Next Steps
Evaluate your budget and requirements: Determine whether cost, feature depth, or testing integration is your top priority.
Trial the tools: Start with free plans (Heap, Usermaven) or request demos (Userpilot) to test autocapture and Playwright integration.
Follow the implementation roadmap: Begin with quick setup, integrate with Playwright, and iterate based on usage insights.
Engage stakeholders: Share feature usage reports to align development priorities with business goals.
By leveraging analytics to guide your refactoring journey, you can transform your legacy product into a modern, cost-efficient, and user-focused solution. Let’s build smarter, together.
Disclaimer: Pricing and feature details are based on information available as of June 5, 2025, and may change. Always verify with vendors for the latest plans. For Playwright documentation, see https://playwright.dev.
On June 4, 2025, at 1:51 PM AEST, I embarked on an exhilarating mission to transform a mundane T3 App landing page into a dazzling Tailwind CSS Feature Demo that would showcase the full might of Tailwind CSS v4. Using Cursor, Context7 MCP, and Claude Sonnet 4, I collaborated with AI to create a repository that’s not just a demo but a testament to modern web development. The result, hosted at github.com/mingfangdev/context7-example, is a stunning collection of 3D transforms, advanced gradients, and interactive animations. This blog post dives deep into every prompt I gave, every response Cursor delivered, and how we turned a simple idea into a tutorial-ready masterpiece. Inspired by mingfang.tech/harnessing-context7-mcp/, this is the story of our journey and how you can replicate it.
The Vision: A Showcase of Tailwind’s Cutting-Edge Features
My goal was ambitious: take a basic T3 App page and rebuild it as a Tailwind CSS feature demo with three complex examples that highlight the framework’s most advanced capabilities. I wanted to push Tailwind CSS v4 to its limits, showcasing 3D effects, modern gradients, and dynamic animations, all while leveraging AI tools to make the process fast, precise, and educational. The end product needed to be a learning resource, documented thoroughly for others to follow, much like a tutorial.
I turned to Cursor, an AI-powered IDE that integrates Context7 MCP for real-time documentation access and Claude Sonnet 4 as my pair-programming partner. This setup promised to blend human creativity with AI efficiency, making the impossible possible in record time.
The Tech Stack: Powering Innovation
Next.js 15: React framework with app router, providing a robust foundation for dynamic pages.
Tailwind CSS v4: Utility-first CSS framework with modern features like conic gradients and 3D transforms.
TypeScript: Type-safe JavaScript for reliable, maintainable code.
React: Dynamic UI components for a seamless user experience.
Context7 MCP: Real-time access to Tailwind CSS documentation, ensuring we used the latest features.
Claude Sonnet 4: AI assistant for generating code, solving problems, and offering insights.
Cursor IDE: AI-powered editor that bridged human input and AI execution flawlessly.
This stack, paired with an AI-human pair programming approach, enabled us to iterate rapidly, tackle challenges, and produce a polished repository now live at github.com/mingfangdev/context7-example.
The Journey: From Prompt to Polished Repository
The steps below show what Cursor did in Agent mode.
Step 1: The Initial Prompt
I opened Cursor and typed my first prompt, filled with excitement:
“help me rebuild the page in a tailwind css feature demo page. and make 3 complex example that show the capability of tailwind css.”
Cursor, powered by Claude Sonnet 4, responded instantly with a comprehensive plan. It proposed transforming the T3 App’s basic landing page into a feature-rich demo with three showcase sections:
3D Transform Gallery: Interactive cards with 3D flips, floating animations, and shape morphing.
Advanced Gradient System: Conic, radial, and angled gradients with color interpolation modes.
Interactive Animation Playground: Complex keyframe animations with staggered effects.
Using Context7 MCP, the AI queried Tailwind’s documentation to ensure we leveraged v4’s latest features.
This gave us access to utilities like transform-3d, bg-conic, and [animation-delay:-0.3s]. Cursor’s response included a complete page.tsx file, restructuring the page with a modern design:
Cursor’s Output (page.tsx Snippet):
```tsx
import Link from "next/link";

export default function HomePage() {
  return (
    <main className="min-h-screen bg-gradient-to-br from-slate-900 via-purple-900 to-slate-900">
      <div className="relative px-6 py-24 mx-auto max-w-7xl lg:px-8">
        <h1 className="text-4xl font-bold tracking-tight text-white sm:text-6xl lg:text-7xl">
          <span className="bg-gradient-to-r from-cyan-400 via-purple-400 to-pink-400 bg-clip-text text-transparent">
            Tailwind CSS
          </span>
          <br />
          <span className="text-white/90">Feature Demo</span>
        </h1>
        <p className="mt-6 text-lg leading-8 text-gray-300 max-w-2xl mx-auto">
          Explore the power of modern CSS with Tailwind's advanced features including
          3D transforms, complex animations, gradient interpolation, and cutting-edge utilities.
        </p>
      </div>
      {/* Example sections to follow */}
    </main>
  );
}
```
This output was a revelation. The AI had taken my broad request and delivered a sleek, gradient-backed hero section with gradient text and a clear structure for the demo. I was ready to build the examples.
Step 2 (Agent): Building the 3D Transform Gallery
The Agent then started coding: a gallery of 3D effects that would captivate users. Cursor responded with a section featuring three interactive cards:
3D Card Flip: A card that rotates 180 degrees on hover to reveal a back side.
Floating Animation: A card that hovers and scales with a custom keyframe animation.
Morphing Shape: A card that transforms from a rectangle to a circle on hover.
The AI used perspective-[1000px] for realistic 3D depth, transform-3d for hardware-accelerated transforms, and backface-hidden for clean flips. However, the initial hover transitions were choppy. Cursor analyzed the issue and adjusted the transition-transform duration to 700ms, resulting in smooth, cinematic animations. This section became a highlight of the repo, showcasing Tailwind’s 3D capabilities.
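For reference, a flip card built from those utilities looks roughly like this. It's an approximation using the class names cited above, not the repo's exact component:

```tsx
// A minimal 3D flip card using Tailwind v4's 3D utilities as named in this post.
export function FlipCard() {
  return (
    <div className="group h-64 w-48 perspective-[1000px]">
      {/* Inner wrapper rotates 180° on hover; children keep their 3D placement. */}
      <div className="relative h-full w-full transform-3d transition-transform duration-700 group-hover:rotate-y-180">
        <div className="absolute inset-0 backface-hidden rounded-xl bg-purple-600 p-4 text-white">
          Front
        </div>
        {/* Pre-rotated back face, hidden until the wrapper flips. */}
        <div className="absolute inset-0 rotate-y-180 backface-hidden rounded-xl bg-cyan-600 p-4 text-white">
          Back
        </div>
      </div>
    </div>
  );
}
```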
Step 3 (Agent): Crafting the Advanced Gradient System
Next, the AI responded with a grid comparing different gradient types and interpolation modes:
Conic Gradients: Circular gradients with bg-conic and bg-conic-180.
Radial Gradients: Positioned gradients with bg-radial-[at_25%_25%].
Color Interpolation: Visual comparisons of srgb, hsl, oklab, and oklch modes.
Angled Gradients: Diagonal gradients with bg-linear-45 and bg-linear-135.
Context7 MCP was invaluable here, providing precise v4 syntax for gradients. The AI cross-referenced Tailwind’s documentation to ensure accuracy, making this section a visual and technical triumph. The interpolation mode comparison was particularly eye-opening, showing how oklch produces smoother color transitions than srgb.
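A condensed sketch of that gradient grid, using the v4 utilities named above; treat it as illustrative rather than the repo's exact markup:

```tsx
export function GradientSwatches() {
  return (
    <div className="grid grid-cols-2 gap-4">
      {/* Conic gradient sweeping through three colors */}
      <div className="h-24 rounded-lg bg-conic from-cyan-400 via-purple-500 to-pink-500" />
      {/* Radial gradient anchored at the top-left quarter */}
      <div className="h-24 rounded-lg bg-radial-[at_25%_25%] from-white to-slate-900" />
      {/* 45° linear gradient interpolated in OKLCH for smoother hues */}
      <div className="h-24 rounded-lg bg-linear-45/oklch from-indigo-500 to-teal-300" />
      {/* Same stops interpolated in sRGB, for comparison */}
      <div className="h-24 rounded-lg bg-linear-45/srgb from-indigo-500 to-teal-300" />
    </div>
  );
}
```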
Step 4 (Agent): Creating the Interactive Animation Playground
The Agent built an animation playground to showcase Tailwind’s dynamic capabilities. Cursor delivered a grid with four components:
Bouncing Loader: Three dots with staggered animate-bounce effects.
Multi-Speed Rotation: Spinners with varying animation-duration.
Pulsing Hearts: Heart emojis with timed animate-pulse effects.
Interactive Showcase: A card combining multiple hover animations.
The Agent found the issue: one card needed a custom floating animation, but the initial attempt using styled-jsx caused parsing errors. Cursor suggested moving the keyframes into a global globals.css file.
This fixed the issue, and the animations ran smoothly, adding a playful yet professional touch to the repo.
Step 5 (Agent): Overcoming Challenges
Our journey had its share of challenges, but Cursor’s real-time problem-solving kept us on track as the Agent debugged:
JSX Syntax Error: An apostrophe in “Tailwind’s” broke the linter. Cursor flagged the issue: Line 20: `'` can be escaped with `&apos;`. We replaced it with `&apos;`, resolving the error instantly.
Tailwind v4 Migration: The AI initially used v3 syntax (`@tailwind base; @tailwind components; @tailwind utilities;`). Context7 MCP revealed v4’s new syntax (`@import "tailwindcss";`), so we updated globals.css, ensuring compatibility.
Styled-JSX Incompatibility: The styled-jsx approach for custom animations failed. Cursor proposed a global CSS file, which worked flawlessly.
Each challenge was documented, turning obstacles into learning opportunities for the tutorial.
Step 6: Documenting for the Community
With the demo complete, I wanted to share our process as a tutorial. I prompted Cursor:
“help me summary about our conversation and put in a md file.”
Cursor generated a markdown file summarizing our collaboration, code snippets, and lessons learned. I realized I needed more detail for a tutorial, so I refined my prompt:
“Is that include how we made likle the logic how we chat I want to include all of those. Later I will use it for tutorial show people how to do it. The key tech stack have to be mentioned on thje top mcp that we use is also”
Cursor responded with an expanded tutorial, including our exact prompts, responses, MCP workflows, and the tech stack. This became the foundation for the repo’s documentation and this blog post.
Cursor’s Output (Markdown Snippet):
```markdown
# Tailwind CSS Feature Demo - Complete Tutorial & Project Summary

## 🛠 Tech Stack & Tools Used

- **Context7 MCP**: Real-time documentation access
- **Claude Sonnet 4**: AI pair programming assistant
- **Cursor IDE**: AI-powered code editor

## 🤝 Collaborative Development Process

**Human Request**: "help me rebuild the page in a tailwind css feature demo page. and make 3 complex example that show the capability of tailwind css."

**AI Response**: Proposed a full-featured demo with 3D transforms, gradients, and animations, using Context7 MCP to fetch Tailwind v4 docs.
```
3D Transform Gallery: Interactive cards that flip with rotate-y-180, float with custom animate-float keyframes, and morph from rectangles to circles using hover:rounded-full. The perspective-[1000px] utility creates a cinematic 3D effect.
Advanced Gradient System: A visual feast of conic gradients (bg-conic), radial gradients (bg-radial-[at_25%_25%]), and interpolation modes (/oklch, /hsl). The comparison of color spaces is both educational and visually stunning.
Interactive Animation Playground: Bouncing loaders with staggered [animation-delay:-0.3s], multi-speed spinners with animation-duration:2s, pulsing heart emojis, and a complex interactive card combining multiple hover effects. All animations are CSS-only for optimal performance.
The repo is more than a demo—it’s a learning resource with clean code, comprehensive documentation, and reusable patterns. It’s tutorial-ready, making it perfect for developers looking to master Tailwind CSS v4 and AI-driven development.
Why This Is Amazing
This project redefined how I approach web development. Here’s what made it extraordinary:
Context7 MCP’s Precision: Real-time documentation access ensured we used v4’s latest features, like bg-conic and transform-3d, with perfect syntax.
Cursor’s Speed: Claude Sonnet 4 turned my prompts into production-ready code in minutes, from JSX components to custom keyframes.
Collaborative Problem-Solving: We tackled syntax errors, migrations, and integrations together, learning with every step.
Tutorial-Ready Documentation: The detailed markdown summary and repo make it easy for others to replicate our process, aligning with the educational spirit of mingfang.tech/harnessing-context7-mcp/.
How to Build Your Own Tailwind Masterpiece
Inspired to create your own Tailwind CSS demo? Here’s a step-by-step guide based on our process:
Give Cursor a prompt like the one we used:
“help me rebuild the page in a tailwind css feature demo page. and make 3 complex example that show the capability of tailwind css.”
(P.S. Use Claude Sonnet 4 thinking or any good model.)
Then enjoy and wait to see the result.
Final Thoughts
Building this Tailwind CSS feature demo with Cursor, Context7 MCP, and Claude Sonnet 4 was a transformative experience. It showed me the power of AI-human collaboration, where human vision meets AI precision to create something truly remarkable. The repository at github.com/mingfangdev/context7-example stands as proof that modern tools and workflows can produce professional-grade results in record time.
Built with: Next.js 15, Tailwind CSS v4, TypeScript, Context7 MCP, Claude Sonnet 4
Explore the repo: github.com/mingfangdev/context7-example
Share your creations: Post your Tailwind projects on X and join the conversation!
G’day, web developers! In the fast-paced world of web dev, keeping up with the latest frameworks, libraries, and best practices can be a bit of a slog. That’s where Context7 MCP (Model Context Protocol) paired with Cursor Rules comes in—a ripper approach that makes coding smoother by giving AI assistants clear, project-specific guidelines. This blog dives into how Context7 MCP and Cursor Rules help you churn out consistent, top-notch code using tools like Next.js 15, Tailwind CSS v4, and TypeScript. We’ll explore a real-world example from the “Context7 Example repository”, walk you through setting up Cursor Rules in Visual Studio Code with Cursor, and explain how to make it work with GitHub Copilot—all in Aussie English. If you’re already across Cursor Rules setup, feel free to skip to the juicy bits about the Context7 Example or Copilot integration.
What is Context7 MCP with Cursor Rules?
Context7 MCP is a cracking methodology that supercharges AI-assisted coding by tapping into a specialised knowledge base and real-time docs for fast-moving libraries like Next.js, Tailwind CSS, React Query, and Zod. It’s smart enough to figure out which framework you’re using, so you don’t need to spell it out in your prompts. For example, chuck in “use context7” in a prompt, and the AI digs into trusted sources like /tailwindlabs/tailwindcss.com for Tailwind CSS or /vercel/next.js for Next.js, ensuring bang-on, up-to-date responses.
Cursor Rules, stored in files like global.mdc and components.mdc, are like a playbook for your AI mate. They lay down the law on project-specific conventions—think coding patterns, framework setups, and tooling choices (like using bun over npm). Together, Context7 MCP and Cursor Rules make sure AI-generated code fits your project’s tech stack and quality standards like a glove.
Why Context7 MCP with Cursor Rules is a Big Deal
Modern web dev calls for precision and consistency. Without clear instructions, AI assistants can spit out code that’s a bit off or doesn’t gel with your project’s vibe. Here’s why Context7 MCP with Cursor Rules is a fair dinkum game-changer:
Framework-Specific Precision: Context7’s curated sources keep code in line with the latest framework updates, like Next.js 15’s async patterns or Tailwind CSS v4’s CSS-first approach.
Rock-Solid Consistency: Cursor Rules enforce standards like named exports and TypeScript typing, keeping your codebase tidy as a whistle.
Smoother Workflow: You can focus on the big picture while the AI handles the grunt work, sticking to your rules.
Performance Boost: Rules push best practices like Suspense for loading states, useMemo for heavy lifting, and Bun for lightning-fast builds.
Scalability: The structured setup makes it easy to bring new devs or AI assistants on board, with rules acting as the single source of truth.
A Real-World Example: The Context7 Example Repository
The Context7 Example repository is a bloody brilliant showcase of Context7 MCP and Cursor Rules in action. Built as a demo for Tailwind CSS v4 within a Next.js 15 app, it’s packed with advanced UI effects and shows off the power of AI-human teamwork. Let’s unpack how these tools made it happen.
Project Snapshot
Purpose: A feature-packed demo page showing off Tailwind CSS v4’s tricks, like 3D transforms, advanced gradients, and interactive animations.
Tech Stack: Next.js 15 (App Router), React 19, Tailwind CSS v4, TypeScript, and Bun for package management.
Development Process: Built through a collab between a human dev and an AI assistant, using Context7 for real-time docs and Cursor Rules for consistency. The whole shebang is documented in the project’s README.md.
How Cursor Rules Shaped the Codebase
The project leans on two Cursor Rules files: global.mdc and components.mdc. Here’s how they steered the ship:
global.mdc: Laying Down Project-Wide Rules
This file sets the ground rules for the whole project:
Context7 Integration: Prompts need “use context7” for framework-specific code, ensuring the AI pulls from sources like /vercel/next.js for Next.js 15’s async params or /tailwindlabs/tailwindcss.com for Tailwind CSS v4’s @theme directive.
Tailwind CSS v4 Setup: Calls for a CSS-first approach with @theme for design tokens.
Next.js 15 Best Practices: Enforces async/await for params and Suspense for loading states.
Tooling: Demands bun or bunx for commands, making the most of Bun’s speed.
In the Context7 Example, these rules ensured src/app/globals.css properly set up Tailwind CSS v4 and defined custom animations (e.g., animate-float), while src/app/page.tsx nailed Next.js 15’s async patterns.
components.mdc: Crafting Components with Finesse
This file zeroes in on React components:
Prompt Guidance: Spells out prompts like “Create a responsive card component with Tailwind v4. use context7”.
Component Structure:
Named exports (export function ComponentName() {}) for better modularity.
Logical directories (e.g., components/ui/).
TypeScript interfaces for props to keep things type-safe.
Server vs. Client Components: Defaults to Server Components, with Client Components ('use client') for interactivity, optimised with useMemo and Suspense.
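Pulling those conventions together, a component written under the rules might look like this (a hypothetical example rather than code from the repo):

```tsx
'use client'; // interactive, so it opts into a Client Component

import { useMemo } from "react";

interface PriceCardProps {
  label: string;
  cents: number;
}

// Named export + typed props, per components.mdc.
export function PriceCard({ label, cents }: PriceCardProps) {
  // useMemo for derived values, as the rules prescribe for heavier work.
  const price = useMemo(
    () =>
      new Intl.NumberFormat("en-AU", { style: "currency", currency: "AUD" }).format(
        cents / 100
      ),
    [cents]
  );

  return (
    <div className="rounded-xl border p-4 shadow-sm">
      <p className="text-sm text-gray-500">{label}</p>
      <p className="text-2xl font-semibold">{price}</p>
    </div>
  );
}
```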
In the Context7 Example, these rules shaped src/app/page.tsx to use Server Components for speed, with Tailwind v4 utilities driving 3D card flips and animations.
Showcasing Tailwind CSS v4 with Context7
The demo page (src/app/page.tsx) is a cracking display of Tailwind CSS v4’s capabilities, made possible by Context7 MCP and Cursor Rules:
3D Transform Gallery: Cards with perspective-[1000px], hover:rotate-y-180, and animate-float.
Advanced Gradients: Conic (bg-conic from-...), radial (bg-radial-[at_position]), and angled linear gradients (bg-linear-45).
Interactive Animations: Staggered animate-bounce, variable-speed animate-spin, and group-hover effects.
Context7 kept the Tailwind v4 syntax spot-on, while Cursor Rules ensured the project stayed true to its conventions.
Setting Up Cursor Rules in Visual Studio Code with Cursor
If you’re already clued up on setting up Cursor Rules, skip this bit and jump to the Copilot section or the Context7 Example. For those new to it, Cursor is a top-notch code editor that makes AI-assisted coding a breeze by supporting custom rules. Here’s how to set up Cursor Rules in Visual Studio Code with Cursor to make the most of Context7 MCP:
Step 1: Get Cursor Sorted
Download and install Cursor, a standalone editor built on VS Code (there's no separate Cursor extension for stock VS Code).
Open your project in Cursor.
Step 2: Create Cursor Rules Files
In your project's .cursor/rules directory, create two Markdown files with the .mdc extension:
global.mdc: For project-wide standards.
components.mdc: For component-specific guidelines.
Structure each file with a frontmatter section and content:
---
description: Project-wide coding standards
globs: *.js, *.jsx, *.ts, *.tsx, *.css, *.scss
alwaysApply: false
---
# Global Rules
- Use bun or bunx instead of npm or npx.
- Always add "use context7" to prompts for framework-specific code.
- Use named exports for components.
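components.mdc follows the same shape; here's a minimal sketch based on the conventions described earlier (the exact wording is illustrative):

```markdown
---
description: React component guidelines
globs: *.tsx, *.jsx
alwaysApply: false
---
# Component Rules
- Use named exports: `export function ComponentName() {}`.
- Define a TypeScript interface for props.
- Default to Server Components; add 'use client' only for interactivity.
- Add "use context7" to prompts for framework-specific code.
```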
Step 3: Get Cursor to Recognise Rules
Cursor automatically picks up .mdc files in the .cursor/rules directory.
Make sure the globs field lists the file types the rules apply to (e.g., *.tsx).
If rules aren't being applied, have a squiz at Cursor's docs to confirm your version supports .mdc files.
Step 4: Use Rules in Prompts
In Cursor’s chat panel or inline suggestions, chuck in “use context7” for framework-specific tasks.
Example: “Create a Next.js 15 Server Component with Tailwind v4 styling. use context7.”
Cursor will apply the right .mdc rules based on the file type and context.
Step 5: Test and Tweak
Give the AI-generated code a once-over to make sure it’s following the rules (e.g., named exports, Bun usage).
Tweak .mdc files to fine-tune guidelines or add new conventions.
Tips for a Smooth Ride
Keep rules short and sharp to avoid confusing the AI.
Set alwaysApply: true for rules that need to apply everywhere.
Keep rules updated to match framework changes (e.g., Next.js 16).
How Cursor Rules Work with GitHub Copilot
GitHub Copilot doesn’t natively support .mdc files, but you can adapt Cursor Rules to work with Copilot in VS Code. If you’re familiar with this setup, skip to the benefits or getting started sections.
Approach 1: Manual Prompt Engineering
Slip key rules into code comments or Copilot chat queries:
// Create a Next.js 15 Server Component with Tailwind v4. Use named exports, TypeScript props, and "use context7".
Include “use context7” to tap into Context7’s sources.
Approach 2: Custom Documentation Files
Create a docs/coding-standards.md file summing up your rules:
# Coding Standards
- Use bun instead of npm.
- Named exports: `export function ComponentName() {}`.
- Include "use context7" for Next.js and Tailwind prompts.
Point to it in prompts: “Follow docs/coding-standards.md for a Tailwind v4 component.”
Approach 3: Copilot Workspace
In Copilot Workspace, weave rules into README.md:
## AI Coding Guidelines
- Use `bun` for scripts.
- Include "use context7" in prompts.
- Follow `global.mdc` and `components.mdc` standards.
Copilot Limitations
Copilot needs manual rule enforcement through prompts or docs.
Context7 sources might need explicit mentions.
For .mdc support, run Cursor alongside or switch to it.
Benefits of Context7 MCP with Cursor Rules
The Context7 Example repository shows why this approach is a ripper:
Lightning-Fast Development: Rules make it easy to whip up complex features.
Team Consistency: Uniform code keeps everyone on the same page.
Current Knowledge: Context7 ensures framework accuracy.
Learning Tool: The README.md doubles as a collab tutorial.
Getting Started with Context7 MCP and Cursor Rules
Ready to give it a burl? Here’s how:
Set Up Cursor Rules:
Create global.mdc and components.mdc.
Define standards (e.g., Bun, named exports).
Install Cursor:
Grab Cursor, a standalone editor built on VS Code.
Explore the Context7 Example:
Have a gander at the repository's README.md and tinker with the demo.
Wrapping Up
Context7 MCP with Cursor Rules is a bloody brilliant way to tackle modern web dev. Pairing Context7’s framework-savvy knowledge with Cursor Rules’ strict guidelines, you’ll churn out top-notch code faster than you can say “fair dinkum.” The Context7 Example repository proves it, showing off a Tailwind CSS v4 demo built with Next.js 15 and AI-human teamwork. Whether you’re using Cursor or tweaking for Copilot, this approach will streamline your workflow and keep your code ace. Give it a crack today!
In the high-stakes world of web development and quality assurance, senior developers and testers need tools that match their expertise—tools that deliver precision, efficiency, and scalability. Playwright MCP rises to the challenge, building on the Playwright framework with a server-based, AI-driven approach to browser automation. This post dives into its advanced features, real-world applications, and hands-on examples, making it a go-to resource for testing scenarios.
Why Playwright MCP Resonates with Senior Professionals
Playwright MCP isn’t just another automation tool—it’s a server that uses structured accessibility snapshots for fast, reliable interactions, sidestepping the inefficiencies of vision-based methods. For seasoned devs and testers, this translates to:
Workflow Efficiency: Automate complex tasks without screenshot processing delays.
Interaction Precision: Leverage the accessibility tree for consistent, deterministic results.
Advanced Scalability: Tackle multi-tab workflows, iframes, and authenticated sessions effortlessly.
Whether you’re refining end-to-end tests or embedding automation in CI/CD pipelines, Playwright MCP equips you with the tools to excel.
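If you haven't wired the server up yet, a typical MCP client configuration looks something like this (a hedged sketch; the exact file and key names vary by client):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```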
1. Dynamic Content Handling
Dynamic web apps demand scripts that wait for the right moment to act. The browser_wait_for command ensures stability by pausing until a condition, such as the text "Hello World!" becoming visible, is met.
As you can see from the screenshot, Playwright MCP will ask whether you approve the tool call (cmd+enter). It will then navigate, click, or even wait for "Hello World!" to appear.
Example input parameters:
{
"text": "Hello World!"
}
Similar Use Case: Prevent test flakiness by waiting for dynamic content to settle before asserting.
2. Iframe Mastery
Simply copy and paste the example prompt below and test it yourself.
Go to the page https://mingfang.tech/playwright-mcp-iframe/. Check the iframe content and click the submit button in the iframe.
Iframes are a staple in modern UIs, and browser_frame lets you switch contexts to interact with embedded elements seamlessly.
As you can see from the screenshot, Playwright MCP will ask whether you approve the tool call (cmd+enter). It will then navigate, take a snapshot, and click the button.
Similar Use Case: Automate interactions within iframe-heavy dashboards or widgets.
3. Multi-Tab Management
Simply copy and paste the example prompt below and test it yourself.
Go to the homepage https://mingfang.tech/ and click the link "The simplest way to code with Copilot Instructions" in a new tab. List the browser tabs, select the previous homepage tab, and close the new tab.
Testing multi-tab or multi-window flows is simplified with browser_tab_list, browser_tab_new, browser_tab_select, and browser_tab_close.
Similar Use Case: Validate link behaviour in new tabs, like pop-up logins.
Real-World Applications
Playwright MCP excels in scenarios you face daily:
End-to-End Testing: Automate UI and API interactions across browsers for comprehensive validation.
CI/CD Integration: Embed automation in Jenkins or GitHub Actions to catch regressions early.
Authenticated Workflows: Manage sessions to test post-login features efficiently.
These applications underscore its value in tackling modern web complexities.
Beyond those scenarios, two strengths stand out:
Advanced Features: Handles dynamic content, iframes, and more with ease.
Integration: Fits your existing pipelines like a glove.
While it may stumble on ultra-complex sites (e.g., locator issues), it’s a powerhouse for structured testing.
Wrap-Up
Playwright MCP empowers senior developers and testers to conquer complex automation challenges with precision and speed. From dynamic content to CI/CD workflows, it’s built for your needs. Try it out and elevate your AI automation game.
Frontend development can be a rollercoaster—misaligned layouts, sneaky JavaScript errors, or unwanted scrollbars ruining your design. Debugging these issues often feels like solving a puzzle with missing pieces. But Chrome DevTools’ AI assistance is here to save the day! This AI-powered feature, built into Chrome, acts like a mentor, offering smart fixes for CSS, JavaScript, and more. In this blog, I’ll guide you through 5 simple steps to fix an annoying overflow bug using AI assistance, complete with a screenshot to capture your success. Let’s dive in!
Why Use Chrome’s AI Assistance for Debugging?
As of May 2025, Chrome’s AI assistance in DevTools is a game-changer for frontend developers. It’s a chat-like tool that analyzes your webpage and suggests solutions for styling issues, console errors, and performance hiccups. For this guide, we’ll tackle an overflowing element causing unwanted scrollbars—a classic frontend headache—using the Cinemai DevTools Overflow Demo. With AI assistance, you’ll squash bugs in minutes, no extra tools needed!
Prerequisites
Before we start, ensure you have:
Latest Chrome Browser: Update via Menu > Help > About Google Chrome.
Google Account: Sign in (must be 18+ and in a supported region).
Step 1: Open the Demo Page and DevTools
Open the Cinemai DevTools Overflow Demo in Chrome.
Right-click on the page and select Inspect, or press F12 (or Ctrl+Shift+I on Windows, Cmd+Option+I on Mac).
This launches DevTools, your debugging command center.
Step 2: Enable AI Assistance
Unlock AI assistance to debug smarter:
In DevTools, click the Settings gear icon (top-right) or press F1.
Go to AI Assistance and toggle it on. Sign in with your Google account and agree to the Terms of Service if prompted.
Look for the Ask AI button in the DevTools toolbar. If it’s not there, update Chrome to the latest version.
Note: This feature is experimental, so verify AI suggestions. Avoid sensitive data in prompts, as Google may store inputs for up to 18 months (Data Use Policy).
Step 3: Identify the Issue
On the demo page, the scrollbar is caused by the scuba diver image overflowing its container.
In DevTools, go to the Elements tab.
Select the overflowing element (e.g., the <div> with a class like page_overflow__B5rjQ).
Check the Styles pane to pinpoint the issue, likely a fixed width or sizing problem.
Step 4: Ask AI for a Fix
Let AI assistance work its magic:
Click the Ask AI button to open the chat interface.
Type: How can I make this element visible without scrollbars?
Example AI response that I got:
To make the element visible without scrollbars, you have a few options:
Change the overflow property: Set overflow: hidden on the element. This will hide any content that overflows its box, preventing scrollbars. However, the overflowing content will not be visible.
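For reference, here's a sketch of the kind of fix you might end up applying (the selector and values are hypothetical; match them to what the Styles pane shows):

```css
/* Hypothetical: let the image shrink with its container instead of overflowing */
.page_overflow__B5rjQ img {
  max-width: 100%;
  height: auto;
}
```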
With Chrome’s AI DevTools, frontend debugging is faster and friendlier than ever. In five simple steps, you’ve fixed an overflow bug and captured a screenshot to prove it. Try it on the Cinemai DevTools Overflow Demo or your own project. As an experimental feature (May 2025), AI assistance is evolving, so your feedback can shape its future. Debug smarter and share your wins!
The Model Context Protocol (MCP) is a powerful standard for connecting Large Language Models (LLMs) to external data and tools, acting like a universal bridge for AI interactions. If you’re a Python developer looking to enhance your AI workflows, FastMCP is the go-to library to simplify building MCP servers. In this blog, we’ll walk through setting up a basic MCP server using FastMCP, complete with a practical example to get you started.
Whether you’re new to MCP or a seasoned developer, this guide will help you create a server that enables LLMs to interact with custom tools and data. Let’s dive in!
FastMCP is a Pythonic, high-level framework that abstracts the complexities of the MCP protocol, such as server setup, protocol handlers, and error management. It allows you to focus on building tools and resources for LLMs using simple Python functions and decorators. Think of it as a way to make your Python code AI-ready with minimal effort.
What is MCP? The Model Context Protocol (MCP) is a set of rules that lets AI models, like LLMs, connect to external data and tools. Use MCP when you need your AI to:
Fetch External Data: Query databases or APIs (e.g., weather updates or sales figures).
Run Custom Tools: Execute tasks you’ve programmed, like calculations or file editing.
Enhance AI Capabilities: Combine AI with your systems for practical, real-world results.
For example, MCP can let an AI add two numbers using a tool you've built or pull a personalised greeting from a custom resource. Similarly, you can create your own tools or integrations to give AI unique capabilities tailored to your needs, even a documentation search MCP.
Prerequisites
Before we start, ensure you have:
Python 3.10+ installed on your system (required by the MCP Python SDK).
A terminal (macOS/Linux) or PowerShell/CMD (Windows).
Basic familiarity with Python and virtual environments.
(Optional) An MCP-compatible client like Claude Desktop or Cursor for testing.
Step 1: Install FastMCP
The recommended way to install FastMCP is using uv, a fast Python package manager that simplifies dependency management. If you’re new to Python, uv is a tool that makes it easy to install libraries (pre-written code) and manage project environments. It’s faster and simpler than traditional tools like pip and virtualenv, combining their features into one command. With uv, you can quickly set up projects and ensure consistent results with minimal hassle.
If you don’t have uv installed, you can install it via:
# On macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# On Windows (PowerShell)
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
After installing uv, create a project directory and set up a virtual environment:
mkdir my-mcp-server
cd my-mcp-server
uv init
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
Now, install FastMCP with the optional CLI tools for debugging:
uv pip install "mcp[cli]"
Verify the installation by checking the FastMCP version:
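```bash
# One way to check (assumes the mcp CLI installed via "mcp[cli]")
uv run mcp version
```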
Step 2: Create Your First MCP Server
Let's build a basic MCP server with a tool that adds two numbers. Create a file named server.py in your project directory and add the following code:
from mcp.server.fastmcp import FastMCP
# Initialize the MCP server with a name
mcp = FastMCP("CalculatorServer")
# Define a tool to add two numbers
@mcp.tool()
def add(a: int, b: int) -> int:
"""Add two numbers and return the result."""
return a + b
# Run the server
if __name__ == "__main__":
mcp.run(transport="stdio")
This code:
Imports FastMCP from the MCP library.
Creates a server instance named “CalculatorServer”.
Defines a tool called add using the @mcp.tool() decorator, which allows an LLM to call this function.
Runs the server using the stdio transport, ideal for local testing.
Step 3: Test the Server with MCP Inspector
FastMCP includes a handy debugging tool called MCP Inspector, which provides a UI to test your server without connecting to an LLM client. Start the server in development mode:
uv run mcp dev server.py
This command launches the server and opens the MCP Inspector in your browser (typically at http://127.0.0.1:6274).
If it doesn’t open automatically, navigate to that URL. Once loaded:
Click Connect to link to your server.
In the Tools tab, you’ll see the add tool listed.
Click on it, enter values (e.g., a=4, b=3), and select Run Tool to test. You should see the result: 7.
Step 4: Integrate with an LLM Client (Optional)
To see your server in action with an LLM, integrate it with an MCP-compatible client like Claude Desktop or Cursor. Here’s how to set it up with Cursor:
Open your project directory in Cursor.
Go to File → Preferences → Cursor Settings → MCP → Add New Server.
Configure the server:
Name: “Calculator”
Type: Command
Command: uv run mcp run /path/to/your/server.py
Save the configuration. A green 🟢 indicator should appear, confirming the server is running.
In Cursor’s chat interface, type: “Add 5 and 3.” The LLM should call your add tool and return 8.
If you encounter a 🟠 indicator, double-check the file path and ensure the virtual environment is activated.
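If you prefer configuration files over the settings UI, recent Cursor builds also read an mcp.json file; here's a hedged sketch (the path and key names may differ across versions):

```json
{
  "mcpServers": {
    "calculator": {
      "command": "uv",
      "args": ["run", "mcp", "run", "/path/to/your/server.py"]
    }
  }
}
```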
Step 5: Expand Your Server
Now that you have a working server, you can add more tools or resources. For example, let’s add a resource to greet users:
# Add to server.py
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
"""Return a personalized greeting."""
return f"Hello, {name}!"
Restart the server with uv run mcp dev server.py, and test the resource in MCP Inspector by accessing greeting://John to see “Hello, John!” You can also call this resource from Cursor by asking, “Get a greeting for John.”
Copy-and-Paste Example
Want to get started quickly? Below is a complete, ready-to-run FastMCP server script that includes both the add tool and the greeting resource. Simply copy the code and paste it into a file named server.py.
from mcp.server.fastmcp import FastMCP
# Initialize the MCP server
mcp = FastMCP("MyFirstMCPServer")
# Define a tool to add two numbers
@mcp.tool()
def add(a: int, b: int) -> int:
"""Add two numbers and return the result."""
return a + b
# Define a resource for personalized greetings
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
"""Return a personalized greeting."""
return f"Hello, {name}!"
# Run the server
if __name__ == "__main__":
mcp.run(transport="stdio")
How to Run:
Copy the code above and save it as server.py in your project directory.
Ensure FastMCP is installed (uv pip install "mcp[cli]").
Activate your virtual environment (source .venv/bin/activate or .venv\Scripts\activate on Windows).
Run the server: uv run mcp dev server.py.
Open http://127.0.0.1:6274 in your browser to test with MCP Inspector. Try the add tool (e.g., a=10, b=20) or the greeting resource (e.g., greeting://Alice).
This script is a fully functional starting point you can build upon!
Why Use FastMCP?
FastMCP simplifies MCP server development with:
Minimal Boilerplate: Create tools and resources with just a few lines of code.
Pythonic Design: Uses decorators and type hints for a familiar experience.
Powerful Features: Supports async functions, OpenAPI integration, and client libraries for advanced use cases.
Debugging Tools: MCP Inspector makes testing straightforward.
Next Steps
You’ve now set up a basic MCP server with FastMCP! From here, you can:
Add more complex tools, like querying APIs or databases (e.g., SQLite or yfinance).
Deploy your server to a cloud platform like AWS or Cloudflare.
Explore the FastMCP documentation for advanced features like server proxying or LLM sampling.