QA Review
The QA review is the internal checkpoint between finishing a build and showing it to the client. This is where your team walks through every configured element of the sub-account to verify that workflows fire correctly, forms submit and route properly, automations trigger as expected, and branding is consistent throughout. Nothing goes to the client until it passes this review.
Why This Matters
Presenting a client with a broken system is one of the most damaging things an agency can do during onboarding. The client just paid for a professional solution. If the first thing they see is a form that does not submit, a calendar that books into the wrong time zone, or a workflow that sends duplicate messages, you have lost credibility that is incredibly hard to earn back.
The cost goes beyond perception. Bugs caught after the client sees the system take three to five times longer to resolve than bugs caught internally. Once the client is involved, every fix requires explanation, reassurance, and scheduling. What could have been a quick correction during QA becomes a back-and-forth thread that drags on for days.
QA also protects your team’s time after launch. A system that goes live with untested automations will generate support tickets for weeks. Every ticket is time your team spends firefighting instead of building for the next client. The agencies that scale smoothly are the ones that treat QA as a non-negotiable gate, not an optional step when there is time.
How to Think About It
QA is not “click around and see if things look right.” It is a systematic walkthrough of every functional element in the build. Your team needs a repeatable process that covers the same checkpoints for every client, with additional items based on the specific package and customizations.
Think of QA in layers. First, verify the static elements: business information, branding, logos, colors, custom values, and contact details. These are simple to check and easy to miss. Second, test the interactive elements: form submissions, calendar bookings, chat widgets, and pipeline stage transitions. Third, stress-test the automations: trigger every workflow, verify the conditions route correctly, confirm that emails and SMS messages send with the right content and timing. Fourth, check the integrations: payment processing, third-party connections, and any external tools linked to the sub-account.
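The four layers above can be expressed as a simple checklist runner. This is a minimal sketch, not a GHL feature; the layer names and items mirror the list in this section, and the `run_qa` helper is a hypothetical illustration.

```python
# Minimal sketch of the four-layer QA walkthrough described above.
# Layer names and items come from this section; the runner itself is
# a hypothetical helper, not part of GHL.

QA_LAYERS = {
    "static": [
        "business information", "branding and logos", "colors",
        "custom values", "contact details",
    ],
    "interactive": [
        "form submissions", "calendar bookings", "chat widgets",
        "pipeline stage transitions",
    ],
    "automations": [
        "trigger every workflow", "verify condition routing",
        "confirm email/SMS content and timing",
    ],
    "integrations": [
        "payment processing", "third-party connections", "external tools",
    ],
}

def run_qa(results: dict) -> list:
    """Return every unchecked or failing item, layer by layer."""
    failures = []
    for layer, items in QA_LAYERS.items():
        for item in items:
            if not results.get(layer, {}).get(item, False):
                failures.append(f"{layer}: {item}")
    return failures
```

Running `run_qa` against a partially filled results dict surfaces exactly which checkpoints remain, which is the property a repeatable QA gate needs.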
The person who performs QA should ideally not be the person who built the system. Fresh eyes catch things the builder’s muscle memory skips over. If you are a solo operator, put at least a few hours between the build and the review. Context distance matters.
Common Mistakes
Treating QA as a visual check only. Looking at the funnel and confirming the colors match the brand is not QA. You need to actually submit the forms, book the appointments, and trigger the workflows. Visual checks catch maybe 20% of the issues. Functional testing catches the rest.
Skipping the mobile test. Most client-facing elements, especially funnels and forms, will be accessed on mobile devices. If you only test on desktop, you will miss layout breaks, button sizing issues, and form fields that are unusable on smaller screens.
Not testing edge cases in workflows. Your workflow works when someone submits a form with all fields completed. But what happens when a required field is missing? What happens when someone books and then cancels? What about a repeat submission from the same contact? Edge cases are where automations break, and they are where clients lose trust.
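The edge cases above can be kept as an explicit test matrix so no reviewer has to remember them. This is an illustrative sketch; the scenario wording and the `edge_case_report` helper are assumptions, not a prescribed format.

```python
# Hypothetical edge-case matrix for workflow testing, mirroring the
# cases named in this section. Each entry pairs a scenario with the
# behavior the reviewer should verify.

EDGE_CASES = [
    ("form submitted with all fields completed", "workflow fires exactly once"),
    ("required field missing", "submission is handled or rejected gracefully"),
    ("booking made, then cancelled", "cancellation branch fires, no stale reminders"),
    ("repeat submission from the same contact", "no duplicate messages sent"),
]

def edge_case_report(checked: set) -> list:
    """List scenarios that have not yet been verified during QA."""
    return [scenario for scenario, _ in EDGE_CASES if scenario not in checked]
```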
Rushing QA to meet the build clock. If the build took longer than expected, the temptation is to cut QA short and ship. This is false economy. A 30-minute QA session that catches a broken workflow saves hours of post-delivery support. Never sacrifice QA to reclaim build time.
No documentation of QA findings. When you find issues during QA, log them, even after they are fixed. The log becomes a pattern library that helps your team avoid repeating the same mistakes across future builds.
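A findings log only builds a pattern library if entries survive the fix. One way to structure it is sketched below; the field names and the `recurring_areas` helper are illustrative assumptions, not a prescribed schema.

```python
# Sketch of a QA findings log where fixed issues still leave a record,
# so recurring problem areas become visible across builds.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class QAFinding:
    build: str            # which client build the issue appeared in
    area: str             # e.g. "workflow", "form", "calendar"
    description: str      # what was broken
    fixed: bool = False   # fixing it does not delete the entry
    found_on: date = field(default_factory=date.today)

def recurring_areas(log: list, threshold: int = 2) -> list:
    """Areas that keep producing findings: the start of a pattern library."""
    counts = {}
    for f in log:
        counts[f.area] = counts.get(f.area, 0) + 1
    return [area for area, n in counts.items() if n >= threshold]
```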
Tools Involved
QA happens inside the client’s GHL sub-account. You will be testing Workflows, Forms, Calendars, Funnels, and Pipelines. For verifying email and SMS delivery, check the conversation records in the contact’s profile. If the client’s build includes Conversation AI or Reviews AI, test those flows end to end as well.
Where This Fits
QA review sits at sequence position 20, directly after the Build Phase. Nothing moves forward until QA passes. Once the review is clean, the next step is the Build Complete Notice, which tells the client their system is ready for walkthrough. Skipping or shortcutting QA means the client becomes your QA team, which is a fast path to refund requests.
Common Questions
How long should QA take? For a standard build, allocate 45 to 90 minutes. Complex builds with multiple workflows and integrations may need two hours. If QA consistently takes longer than that, your build process likely needs tighter standards so fewer issues make it to review.
What if QA reveals a major issue? Fix it before proceeding. Do not send the build complete notice with known issues and a plan to “fix it before the call.” Things that seem minor have a way of becoming visible at the worst possible moment. The build is not complete until QA is clean.
Should we have a formal QA checklist? Absolutely. A checklist ensures consistency across team members and prevents the “I forgot to check that” problem. Start with a base checklist that covers every build, then add package-specific items. Update the checklist every time a new category of bug appears in QA or post-launch support.
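The base-plus-package checklist idea can be sketched as a simple merge. The checklist contents and package names below are illustrative assumptions drawn from this guide, not an official GHL structure.

```python
# Sketch of combining a base checklist (run on every build) with
# package-specific items. All names here are illustrative.

BASE_CHECKLIST = [
    "verify branding and custom values",
    "submit every form",
    "book and cancel a test appointment",
    "trigger every workflow",
    "run the full walkthrough on mobile",
]

PACKAGE_ITEMS = {
    "reviews_ai": ["test the review request and response flow end to end"],
    "conversation_ai": ["test bot replies for each configured intent"],
}

def build_checklist(packages: list) -> list:
    """Base checklist plus items for each package in this client's build."""
    items = list(BASE_CHECKLIST)
    for pkg in packages:
        items.extend(PACKAGE_ITEMS.get(pkg, []))
    return items
```

When a new category of bug appears in QA or post-launch support, the fix is one line: append it to `BASE_CHECKLIST` or the relevant package entry, and every future build inherits the check.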