Our methodology

How we test tools, what goes into a rating, how we handle conflicts of interest, and how often we refresh published work.

Last updated: 2026-05-12

1. How we test

Every published review on Botapolis begins the same way: we open an account on the tool, install it on a real Shopify store (one of ours or a partner merchant’s — never a sandbox), and run it for a minimum of 30 days through a complete merchant workflow. For email/SMS tools that means a live campaign and at least one automation flow. For ads tools, a real budget against live inventory. For AI content tools, at least 50 generations through the workflow the tool is sold for.

We track three outcomes during the test window:

  • Did the tool deliver what it promised on the landing page? Concretely — not vibes.
  • What did it cost? Including the upsell tier required for the feature that sold us.
  • Where did it break? Every tool breaks somewhere. We document the seam.

2. Rating criteria

Each review carries a 0–10 overall rating plus a four-axis breakdown:

  1. Ease of use (25%): Onboarding, daily UI fluency, learning curve for a non-technical merchant.
  2. Value (25%): Output quality per dollar at the tier most merchants actually use, not the free tier.
  3. Support (20%): Response time and answer quality across two real support tickets.
  4. Features (30%): Depth at the price point versus the closest two competitors.

A 9.0+ rating means we’d switch our own store to it tomorrow. 7–9 is recommended for the use case. Below 7 means there is a better choice for most readers and we say so out loud.
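Taken at face value, those percentages suggest the overall score is a weighted average of the four axis scores. The combining rule is our assumption for illustration (the breakdown above lists weights but not the formula), but under that assumption the arithmetic looks like this:

```python
# Hypothetical sketch: deriving a 0-10 overall rating from the four
# axis scores using the weights listed above. The weighted-average
# rule itself is an assumption, not stated Botapolis policy.

WEIGHTS = {
    "ease_of_use": 0.25,  # onboarding, daily UI fluency
    "value": 0.25,        # output quality per dollar
    "support": 0.20,      # two real support tickets
    "features": 0.30,     # depth vs. the two closest competitors
}

def overall_rating(scores: dict[str, float]) -> float:
    """Weighted average of per-axis scores (each 0-10), one decimal."""
    if set(scores) != set(WEIGHTS):
        raise ValueError("need a score for each of the four axes")
    return round(sum(WEIGHTS[axis] * scores[axis] for axis in WEIGHTS), 1)

# Example: a tool scoring 9, 8, 7, 9 on the four axes lands in the
# "recommended for the use case" band, not the "switch tomorrow" band.
print(overall_rating({"ease_of_use": 9, "value": 8, "support": 7, "features": 9}))
```

Under this assumed rule, no single axis can carry a tool: even a perfect 10 on features (30%) leaves seven points riding on the other three axes.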

3. Conflict of interest

We earn affiliate commission on most tools we recommend. That funds the testing you’re reading. Here’s how we keep it from corrupting the work:

  • Rating is decided before partnership is contacted. The 30-day test happens on a public-tier account paid for out of our own pocket. The rating gets written. Then we apply to the affiliate program.
  • Affiliate status doesn’t move ratings up. Five of the tools in our top-10 email category aren’t affiliate partners at all (Mailchimp, for example — no public partner program for content sites). Their ratings aren’t penalised for it.
  • Sponsored content is marked, isolated, and never affects the independent rating. If a tool pays us for a deep-dive piece, that piece carries a Sponsored chip in the hero, its own URL path (/sponsored/…, not /reviews/…), and a disclosure banner above the fold. It does not get a Botapolis rating.

Full FTC-compliant text lives in our Affiliate disclosure.

4. Refresh cadence

Every review and comparison gets a quarterly check against the tool’s shipping changelog. If pricing changes materially, integration depth shifts, or support quality regresses, we re-test and update the rating with a dated changelog block at the bottom of the article.

For anything pricing-sensitive, the live price on this site is kept current against the tool’s pricing page; the screenshot in the article may lag by up to a quarter. We’re working on automating that gap away.

5. The team

Botapolis is run by a small team of operating partners with first-hand Shopify store experience, not freelance writers. Each operator’s e-commerce credentials are privately attached to the published byline; we keep bylines pseudonymous (“Botapolis editorial”) because the operators still run their own stores and don’t want their brand-test results indexed against their personal names. Email editorial@botapolis.com if you need to verify a specific review on the record.

6. Corrections

Found something wrong? We publish a correction block at the top of any article we edit substantively, with the date and what we changed. Smaller fixes (typos, broken links) are silent. To flag a correction, email editorial@botapolis.com with the article URL and the specific claim — we respond within 5 business days.