Mobile App Blocker Tracking: Never Ship a Critical Bug Again
It's 4:30 PM on a Friday. Your team has been testing v3.2.0 all week. The build looks solid, the release notes are written, and the App Store submission is queued up. Then a tester drops a message in Slack: "Payment flow crashes on iOS 17 when the user has no saved cards."
Is this a blocker? Who decides? Where does it get tracked? Does the release go out anyway because the deadline is today and someone in management said "we committed to this date"?
If your team has shipped a critical bug because a blocker got lost in a Slack thread or buried in a long Jira backlog, you already know the cost. App store review rejections. One-star reviews. Emergency hotfixes on a Saturday. Trust erosion with your users.
Blocker tracking exists to prevent exactly this. Not as another process to follow, but as a dedicated mechanism that ensures critical bugs can't be ignored, forgotten, or deprioritized into oblivion.
What Exactly Is a Blocker?
Let's define terms clearly, because "blocker" gets used loosely in many teams.
A blocker is an issue with the highest possible priority — one that must be resolved before a release can ship. It's not a "nice to fix." It's not a "we should probably look at this." It's a hard stop.
Common examples of blockers:
- Crash on a critical user path (login, checkout, onboarding)
- Data loss or corruption
- Security vulnerability
- Regulatory compliance failure
- Complete feature breakage that was working in the previous version
Common examples of things that are not blockers (even if they're annoying):
- A button is 2 pixels off on one screen size
- A loading spinner shows for an extra 500ms
- A non-critical feature has a minor edge case bug
The distinction matters because when everything is a blocker, nothing is. Teams that over-use the blocker label create noise. Teams that under-use it ship broken software. The goal is precision.
The Problem: Blockers Get Lost
Most teams don't lack a way to report bugs. They have Jira, Linear, GitHub Issues, Asana, or a dozen other tools. The problem is that blockers don't get special treatment in these systems. They're just another priority level in a list of hundreds of issues.
Here's what typically goes wrong:
- Blockers are indistinguishable from high-priority bugs. In a list of 50 "high priority" issues, the one that will crash the app for 30% of users looks the same as the one about a cosmetic glitch on tablets.
- No release-level visibility. You can see blockers in your issue tracker, but there's no connection to the specific release or build they affect. You have to manually cross-reference.
- No enforcement. Nothing prevents someone from marking a release as "ready to ship" while three unresolved blockers exist. It's a human discipline problem with no system-level guardrail.
- Resolution is untracked. When a blocker gets fixed, who verified it? When? What build contains the fix? This information is scattered across commits, comments, and conversations.
How Blocker Tracking Works in TestApp.io
TestApp.io treats blockers as a first-class concept, not just another priority level. Here's how the system works end to end.
Reporting a Blocker
There are two primary ways to report a blocker:
1. From task creation. When creating a new task in the task management system, set the priority to Blocker. This is the highest priority level available, above Critical, High, Normal, and Low. The task is immediately flagged across the system.
2. From a release. When a tester is working with a specific build and discovers a blocking issue, they can report the blocker directly from the release. This creates a task with Blocker priority that is automatically linked to the specific release where the issue was found. This linkage is important — it answers the question "which build has this problem?" without any manual effort.
Both paths result in the same outcome: a tracked blocker that surfaces everywhere it needs to.
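To make the second path concrete, here is a minimal sketch of the data a release-linked blocker report carries. Every field name here is hypothetical — this is an illustration of the information flow, not TestApp.io's actual API or schema.

```python
import json

def build_blocker_report(title: str, description: str,
                         release_id: str, reporter: str) -> dict:
    """Illustrative shape of a release-linked blocker report.

    All field names are hypothetical; TestApp.io's real API may differ.
    """
    return {
        "title": title,
        "description": description,
        "priority": "blocker",      # highest level, above critical/high/normal/low
        "release_id": release_id,   # filled automatically when reported from a release
        "reported_by": reporter,
    }

report = build_blocker_report(
    "Payment flow crashes on iOS 17",
    "Crash when the user has no saved payment methods.",
    "rel_1234",
    "qa.tester@example.com",
)
print(json.dumps(report, indent=2))
```

The key detail is `release_id`: when the report originates from a release, that linkage comes for free, which is what makes "which build has this problem?" answerable later.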
Where Blockers Surface
This is where dedicated blocker tracking diverges from generic issue tracking. In TestApp.io, blockers don't just exist in a task list — they surface prominently across multiple views:
App Dashboard — Blocker Count Badge. The main dashboard for each app shows a blocker count. You don't have to dig into task lists or run filtered searches. The number is right there, impossible to miss. If it's not zero, you know there's a problem.
Version Overview — Warning Indicators. When viewing a version's overview, any open blockers trigger warning indicators. This is critical during the Testing and Ready phases of the version lifecycle. A version with open blockers is visually flagged as not-ready, regardless of what anyone says in a meeting.
Release List — Flagged Releases. Individual releases (builds) that have blockers reported against them are flagged in the release list. When scrolling through builds, you can immediately see which ones have known blocking issues. This prevents testers from wasting time on builds that are already known to be broken.
The design principle here is simple: blockers should be unavoidable. You shouldn't have to go looking for them. They should be in your face until they're resolved.
The Resolution Workflow
Finding and reporting blockers is only half the battle. The other half is resolving them with a clear, auditable process.
When a blocker is resolved in TestApp.io, the resolution captures several pieces of information:
- Resolution notes. A description of what was done to fix the issue. This isn't optional hand-waving — it's a record that future team members (or future you) can reference.
- Who resolved it. The specific team member who marked the blocker as resolved. Accountability matters.
- When it was resolved. A timestamp for the resolution. Combined with the creation timestamp, this gives you resolution time metrics.
This resolution data feeds into the audit trail for the version, creating a complete record of every blocker's lifecycle: when it was reported, on which build, by whom, how it was resolved, by whom, and when.
Blocker Metrics and SLA Tracking
Over time, blocker data becomes a powerful diagnostic tool for your release process. TestApp.io tracks blocker metrics that help you answer important questions:
- How many blockers per release? If your blocker count is trending upward across releases, something is going wrong upstream — maybe code review isn't catching enough issues, or test coverage has gaps.
- What's the average time to resolution? If blockers take three days to resolve but your release cycle is one week, that's a structural problem. You're spending nearly half your cycle on emergency fixes.
- When in the lifecycle are blockers found? Blockers found during Testing are expected. Blockers found after moving to Ready are a red flag — it means your testing phase isn't thorough enough.
- Who reports the most blockers? Who resolves them? This isn't about blame. It's about understanding workload distribution and identifying your most effective testers.
SLA tracking adds a time dimension to this. You can monitor whether blockers are being resolved within acceptable timeframes and identify when resolution is lagging behind expectations.
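The metrics above reduce to simple arithmetic over (reported, resolved) timestamp pairs. The sketch below computes blocker count, average resolution time, and SLA breaches for one release; the 24-hour SLA and the fixed "now" are assumptions chosen for a reproducible example:

```python
from datetime import datetime, timedelta

def blocker_metrics(blockers, sla=timedelta(hours=24)):
    """Per-release metrics from (reported_at, resolved_at) pairs.

    resolved_at is None for still-open blockers; an open blocker counts
    as an SLA breach once its age exceeds the SLA.
    """
    now = datetime(2024, 6, 7, 17, 0)  # fixed "now" for a reproducible example
    open_count, breaches, durations = 0, 0, []
    for reported_at, resolved_at in blockers:
        if resolved_at is None:
            open_count += 1
            if now - reported_at > sla:
                breaches += 1
        else:
            d = resolved_at - reported_at
            durations.append(d)
            if d > sla:
                breaches += 1
    avg = sum(durations, timedelta()) / len(durations) if durations else None
    return {"total": len(blockers), "open": open_count,
            "avg_resolution": avg, "sla_breaches": breaches}

sample = [
    (datetime(2024, 6, 3, 9, 0), datetime(2024, 6, 3, 15, 0)),   # resolved in 6h
    (datetime(2024, 6, 4, 10, 0), datetime(2024, 6, 6, 10, 0)),  # 48h: SLA breach
    (datetime(2024, 6, 7, 16, 32), None),                        # still open, 28min old
]
m = blocker_metrics(sample)
print(m)
```

An average resolution time of 27 hours against a 24-hour SLA is exactly the kind of structural signal the retrospective questions above are asking about.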
How Blockers Interact with Version Lifecycle
Blocker tracking doesn't exist in isolation — it's deeply connected to the version lifecycle. Here's how they interact at each stage:
Planning and Development: Blockers are less common here since there may not be testable builds yet. But they can exist — for example, a known issue carried over from a previous version that must be addressed before this one ships.
Testing: This is where most blockers are discovered. As testers work through builds, they report blockers that surface prominently on the version's Quality tab. The blocker count becomes the primary metric for release readiness during this phase.
Ready: Moving a version to Ready status is a statement that the version is shippable. Open blockers directly contradict this. The blocker count on the version overview serves as a quality gate — it's a clear, objective signal that the version isn't actually ready if the count is greater than zero.
Released: If a blocker is discovered after release (it happens), it can still be tracked against the version. This feeds into post-release metrics and may trigger a hotfix version.
This integration means blocker tracking isn't a separate process bolted onto your workflow. It's woven into the progression of every release.
Real-World Scenario: The Friday Blocker
Let's walk through the scenario from the introduction with proper blocker tracking in place.
4:30 PM Friday. Your team has v3.2.0 in Testing status. Three builds have been uploaded this week via CI/CD. The latest build, uploaded two hours ago, is the release candidate.
4:32 PM. A tester discovers that the payment flow crashes on iOS 17 when the user has no saved payment methods. They report a blocker directly from the release. The task is created with Blocker priority, linked to the specific build.
4:33 PM. The blocker count on the app dashboard updates to 1. The version overview shows a warning indicator. The release is flagged in the release list. Everyone with access can see this immediately — no Slack message required.
4:35 PM. The team gets a Slack notification (via the Slack integration) about the new blocker. The notification includes the blocker description, which build it affects, and who reported it.
4:40 PM. The lead developer picks up the blocker, reproduces it, and identifies the issue — a nil check that was missed in a recent refactor. The fix is straightforward.
5:15 PM. The fix is pushed. CI/CD runs, and a new build is automatically uploaded to the version's releases via ta-cli.
5:20 PM. The tester installs the new build from the release link, verifies the fix, and the developer resolves the blocker with notes: "Added nil check for saved payment methods array. Verified on iOS 17.2 simulator and physical device."
5:22 PM. Blocker count drops to 0. Version overview shows no warnings. The Quality tab confirms no open blockers.
5:25 PM. The team reviews the Quality tab one more time, confirms everything looks clean, and moves the version to Ready. The release manager will submit to the App Store on Monday morning.
Everyone goes home on time.
Compare this to the alternative: the bug is reported in Slack, gets buried under replies, someone half-remembers it on Monday, the version ships without the fix, and a one-star review appears by Tuesday.
Best Practices for Blocker Tracking
Here are practical recommendations for getting the most out of blocker tracking:
1. Define What Constitutes a Blocker — and Write It Down
Every team should have a shared definition of what makes something a blocker versus a critical or high-priority bug. Write this down in your team's onboarding docs or wiki. Ambiguity here leads to either over-reporting (which creates noise) or under-reporting (which defeats the purpose).
A simple framework: If this bug were in production, would it cause immediate harm to users or the business? If yes, it's a blocker.
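That framework can even be written down as an explicit checklist, mirroring the blocker examples from earlier in this article. The function is deliberately trivial — the value is in forcing the triage conversation through named criteria rather than gut feel:

```python
def is_blocker(crashes_critical_path: bool, data_loss: bool,
               security_issue: bool, compliance_failure: bool,
               regression_breaks_feature: bool) -> bool:
    """Checklist form of the framework: would shipping this cause
    immediate harm to users or the business? Any 'yes' means blocker.
    """
    return any([crashes_critical_path, data_loss, security_issue,
                compliance_failure, regression_breaks_feature])

# A checkout crash is a blocker; a 2-pixel misaligned button is not.
print(is_blocker(True, False, False, False, False))   # True
print(is_blocker(False, False, False, False, False))  # False
```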
2. Report Blockers from the Release, Not Just from Task Creation
When you report a blocker from a specific release, it's automatically linked to that build. This context is valuable — it tells the developer exactly which build to reproduce the issue on and gives the team traceability from bug to build to fix to verification.
3. Always Write Resolution Notes
"Fixed" is not a resolution note. "Added nil check for savedPaymentMethods array in CheckoutViewController. Crash was caused by force-unwrapping an optional that is nil when user has no saved cards. Verified fix on iOS 17.0, 17.2, and 17.4" — that's a resolution note. Future team members will thank you.
4. Review Blocker Metrics After Every Release
During your retrospective (you are doing retrospectives, right?), pull up the blocker metrics. Look at:
- Total blockers found during this release cycle
- Average time from report to resolution
- At which stage blockers were discovered
- Whether any blockers were found post-release
Trends in these metrics are more informative than any single data point.
5. Don't Override the Quality Gate
It's tempting to ship with an open blocker when there's pressure from stakeholders or a hard deadline. Resist this. The entire point of blocker tracking is to provide an objective signal. If you override it routinely, you've just built a system that everyone ignores.
If a deadline is truly immovable, the correct response is to scope down the release — remove the affected feature or screen — not to ship a known blocker.
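If you want the gate enforced by machinery rather than willpower, a CI step can fail the release job while open blockers exist. This is a sketch under assumptions: in a real pipeline, `open_blockers` would come from your tracker's API (not shown here), and the explicit `override` flag exists only so that any exception is deliberate and visible in the CI log:

```python
def check_quality_gate(open_blockers: int, override: bool = False) -> int:
    """Return a CI exit code: nonzero while open blockers remain.

    `open_blockers` is assumed to be fetched from the tracker's API
    upstream of this check. No silent overrides: shipping past the gate
    requires an explicit flag and leaves a warning in the log.
    """
    if open_blockers > 0 and not override:
        print(f"BLOCKED: {open_blockers} open blocker(s); release is not ready.")
        return 1
    if open_blockers > 0:
        print(f"WARNING: shipping with {open_blockers} open blocker(s) (override).")
    return 0

# Wire this into CI so the release job fails while blockers remain open.
exit_code = check_quality_gate(open_blockers=2)
```

Failing the build is the system-level guardrail the earlier section said most issue trackers lack: nobody has to remember to check the dashboard, because the pipeline refuses to proceed.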
6. Integrate with Your Communication Tools
Connect TestApp.io to Slack or Microsoft Teams so blocker notifications are automatic. The faster the team knows about a blocker, the faster it gets resolved. Slack integration supports channel selection and event configuration, so you can route blocker notifications to a dedicated release channel without spamming your general channel.
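TestApp.io's built-in integration handles delivery for you, but if you also want custom notifications from your own scripts, Slack's standard incoming-webhook format is just a JSON payload with a `text` field. The sketch below only builds the payload (posting it to your webhook URL is left out so the example stays side-effect free):

```python
import json

def blocker_notification(title: str, build: str, reporter: str) -> str:
    """Build a Slack incoming-webhook payload announcing a new blocker.

    Returns the JSON string you would POST to your webhook URL.
    """
    payload = {
        "text": (f":rotating_light: New blocker: {title}\n"
                 f"Build: {build} | Reported by: {reporter}")
    }
    return json.dumps(payload)

msg = blocker_notification("Payment flow crashes on iOS 17",
                           "v3.2.0 (build 118)", "qa.tester@example.com")
print(msg)
```

The build identifier `v3.2.0 (build 118)` is made up for illustration; the useful habit is including the build and reporter in every notification, so the Slack message alone answers "which build, found by whom."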
Building a Culture Around Blocker Discipline
Tools can only do so much. Blocker tracking works best when it's backed by team culture:
- Celebrate blocker reporters. Finding a blocker before release is a win, not an inconvenience. The tester who found the Friday payment crash saved your team a weekend of firefighting and your users from a broken experience.
- Don't shoot the messenger. If developers get defensive when blockers are filed against their code, testers will stop reporting them. That's the worst possible outcome.
- Make resolution a team effort. Blockers aren't one person's problem. When a blocker is filed, the team rallies to resolve it. This is release-critical work.
- Treat post-release blockers as learning opportunities. If a blocker makes it to production, don't blame — investigate. Was it a gap in test coverage? A platform-specific issue that nobody tested? Use the audit trail to understand what happened and prevent it next time.
Wrapping Up
Blocker tracking isn't glamorous. It's not the feature you showcase in a demo. But it's the feature that prevents your most painful days — the emergency hotfixes, the weekend deploys, the apologetic emails to users.
The core idea is simple: critical bugs deserve dedicated, visible, enforceable tracking that's connected to your releases and your version lifecycle. When blockers can't hide in long task lists, when they surface on every dashboard, and when their resolution is documented and measurable — you ship better software.
Not because you have fewer bugs (you'll always have bugs), but because the ones that matter most can't slip through.
Start using TestApp.io to bring structured blocker tracking to your mobile releases. Check the help center for setup guides and detailed documentation.