What if you could upload both your APK and IPA to one place, send a single link to your testers, and get feedback the same day? That’s what TestApp.io is built for.
In this guide, we’ll walk through the full workflow: building your Flutter app for both platforms, uploading to TestApp.io (via the portal, CLI, or CI/CD), and getting your testers up and running in minutes—not days.
Flutter’s cross-platform promise breaks down at distribution time. Here’s what you’re up against:
For a Flutter team shipping to both platforms, managing two separate distribution pipelines is a tax on every release cycle.
TestApp.io was designed for exactly this scenario. Upload your APK and IPA to one place, invite your testers once, and let them install the right build for their device. No app store accounts required. No review gates. No separate workflows for Android and iOS.
Beyond distribution, TestApp.io gives your testers a way to report bugs directly from the app, log blockers, and track feedback—all tied back to the specific release they’re testing. Your team gets a release dashboard, notification integrations with tools like Slack and Microsoft Teams, and task management that syncs with project management tools such as Jira and Linear.
Before uploading anything, you need your build artifacts. Flutter makes this straightforward.
From your Flutter project root, run:
```bash
flutter build apk --release
```

This produces a fat APK (all ABIs) at:

```
build/app/outputs/flutter-apk/app-release.apk
```

Tip: run `flutter build apk --split-per-abi` to generate architecture-specific APKs, then upload the one matching your testers’ devices.

Building for iOS requires a Mac with Xcode installed. Run:

```bash
flutter build ipa --release --export-method ad-hoc
```

This generates the IPA at:

```
build/ios/ipa/<YourApp>.ipa
```

The `--export-method ad-hoc` flag is important. TestApp.io supports Ad Hoc, Development, and Enterprise signed IPAs. If you omit this flag, Flutter defaults to App Store export, which won’t work for direct distribution. Make sure your provisioning profile includes your testers’ device UDIDs for Ad Hoc builds.

You have three ways to get your builds onto TestApp.io: the web portal, the CLI, or your CI/CD pipeline. Pick whichever fits your workflow.
The simplest path—ideal for one-off builds or when you’re just getting started:
Upload your `app-release.apk` and `.ipa` file through the portal. That’s it. Testers receive a link, tap to install, and they’re testing your latest Flutter build within minutes.
For developers who prefer the command line, ta-cli lets you publish directly from your terminal. Install it first:
```bash
curl -Ls https://github.com/testappio/cli/releases/latest/download/install | bash
```

Then publish both platforms in a single command:
```bash
ta-cli publish \
  --api_token=YOUR_API_TOKEN \
  --app_id=YOUR_APP_ID \
  --release=both \
  --apk=build/app/outputs/flutter-apk/app-release.apk \
  --ipa=build/ios/ipa/YourApp.ipa \
  --release_notes="Fixed login bug, improved performance" \
  --notify
```

Key flags explained:
- `--release`: Set to `both`, `android`, or `ios` depending on what you’re uploading
- `--apk` / `--ipa`: Paths to your build artifacts
- `--release_notes`: What changed in this build (up to 1,200 characters)
- `--git_release_notes`: Automatically pull the last commit message as release notes
- `--git_commit_id`: Include the commit hash in the release notes for traceability
- `--notify`: Send push notifications to your team members

You can grab your API token and App ID from your TestApp.io portal under Settings > API Credentials.
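To see what `--git_release_notes` will pick up, and to stay under the 1,200-character cap, you can preview the last commit message locally. Here is a minimal sketch; it builds a throwaway repo so it runs anywhere, and the commit message is invented:

```shell
# Preview roughly what --git_release_notes would pull: the latest commit message.
# The repo and commit below are throwaway examples.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "Fix login crash on Android 14"

notes="$(git log -1 --pretty=%B)"

# Guard against the 1,200-character release-notes limit.
if [ "${#notes}" -gt 1200 ]; then
  echo "Release notes too long: ${#notes} chars (max 1200)" >&2
  exit 1
fi
echo "Release notes: $notes"
```

In a real pipeline you would skip the throwaway repo and run the `git log` line inside your checkout.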
This is where the real time savings kick in. Automate the entire build-and-distribute pipeline so every push to your main branch delivers a fresh build to your testers.
Here’s a GitHub Actions workflow that builds your Flutter app for both platforms and uploads to TestApp.io:
```yaml
name: Build & Distribute Flutter App

on:
  push:
    branches: [main]

jobs:
  build-android:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: subosito/flutter-action@v2
        with:
          flutter-version: "3.x"
      - run: flutter pub get
      - run: flutter build apk --release
      - uses: testappio/github-action@v5
        with:
          api_token: ${{ secrets.TESTAPPIO_API_TOKEN }}
          app_id: ${{ secrets.TESTAPPIO_APP_ID }}
          file: build/app/outputs/flutter-apk/app-release.apk
          release_notes: "Android build from commit ${{ github.sha }}"
          git_release_notes: true
          include_git_commit_id: true
          notify: true

  build-ios:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - uses: subosito/flutter-action@v2
        with:
          flutter-version: "3.x"
      - run: flutter pub get
      - run: flutter build ipa --release --export-method ad-hoc
      - uses: testappio/github-action@v5
        with:
          api_token: ${{ secrets.TESTAPPIO_API_TOKEN }}
          app_id: ${{ secrets.TESTAPPIO_APP_ID }}
          file: build/ios/ipa/YourApp.ipa
          release_notes: "iOS build from commit ${{ github.sha }}"
          git_release_notes: true
          include_git_commit_id: true
          notify: true
```

Store `TESTAPPIO_API_TOKEN` and `TESTAPPIO_APP_ID` as GitHub repository secrets. Never hardcode credentials in your workflow files.

The TestApp.io GitHub Action (`testappio/github-action@v5`) handles installing ta-cli and uploading each artifact. Since the action accepts a single file per step, the workflow runs Android and iOS as parallel jobs for faster builds.
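One optional hardening step before the upload: fail the job early when a secret was never configured, instead of letting the upload step error later with a less obvious message. This is a generic sketch, not part of the official action; the function name and messages are made up:

```shell
# Hypothetical pre-flight check for required CI secrets.
require_secret() {
  eval "val=\${$1:-}"   # indirect lookup of the variable named in $1
  if [ -z "$val" ]; then
    echo "Missing required secret: $1" >&2
    return 1
  fi
  echo "Secret present: $1"
}

# In CI these come from repository secrets; set one here for illustration.
TESTAPPIO_API_TOKEN="example-token"
require_secret TESTAPPIO_API_TOKEN
```

Drop a step like this in front of the `testappio/github-action` step and misconfigured forks fail fast with a readable error.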
GitHub Actions isn’t the only option. TestApp.io integrates with the CI/CD tools Flutter teams already use:
Use the `testappio` Fastlane plugin to upload as part of your lane. It’s a great fit if you’re already using Fastlane for code signing and build management.

In every case, the pattern is the same: build your artifacts, then call ta-cli or the TestApp.io action to upload. Your testers get notified, install from a link, and you get feedback—all without touching an app store.
Here’s an honest look at how the main distribution options stack up for Flutter teams:
| | TestApp.io | TestFlight | Google Play Internal Testing | Firebase App Distribution |
|---|---|---|---|---|
| Android support | ✅ APK upload | ❌ iOS only | ✅ APK + AAB | ✅ APK + AAB |
| iOS support | ✅ IPA upload | ✅ Native | ❌ Android only | ✅ IPA upload |
| Review required | No | Yes (up to 48h) | No (internal track) | No |
| Tester setup | Email invite + link | Apple ID required | Google account + opt-in | Email invite |
| In-app feedback | ✅ Built-in | Basic screenshots | ❌ None | ❌ None |
| Task management | ✅ Built-in + Jira/Linear sync | ❌ None | ❌ None | ❌ None |
| Notification integrations | ✅ Slack, Teams, email | Email only | Email only | Email + Firebase console |
| CLI support | ✅ ta-cli | ✅ Xcode CLI | ✅ Gradle | ✅ Firebase CLI |
| CI/CD integrations | GitHub Actions, Fastlane, Codemagic, + more | Xcode Cloud, Fastlane | Gradle-based | Fastlane, GitHub Actions |
| Both platforms, one dashboard | ✅ | ❌ | ❌ | ✅ |
TestFlight remains the gold standard for iOS-only teams that need tight App Store integration. Firebase App Distribution is a solid choice if your stack is already Firebase-heavy. But for Flutter teams shipping to both platforms, managing a single distribution pipeline saves real time.
A few things we’ve seen work well for teams distributing Flutter apps:
- Automating uploads on every push to `main` eliminates the "Can you send me the latest build?" messages from your Slack channel.
- The `--git_release_notes` flag in ta-cli automatically pulls the last commit message. It takes zero effort and gives testers context on what changed.
- Always distribute `--release` builds. Debug builds behave differently—they’re slower, include debug banners, and may not surface issues that only appear in release mode.

If you’re building with Flutter and tired of juggling TestFlight, Play Console, and a patchwork of tools to get builds to your testers—give TestApp.io a try. Upload your APK and IPA, invite your team, and start collecting feedback today.
Already have a CI/CD pipeline? Check out the GitHub Actions setup guide to plug in TestApp.io in under five minutes.
Have questions about integrating TestApp.io into your Flutter workflow? Check our pricing page for plan details, or reach out—we’re happy to help.
The TestApp.io mobile app does exactly that. It turns every tester's device into a complete testing workstation. Here is how it works.
Most build distribution tools give testers a link. They tap it, a file downloads, and they figure out the rest. On Android that means hunting for the APK in their downloads folder. On iOS it means navigating provisioning profiles and trust settings.
The TestApp.io app eliminates that friction. When a new build is uploaded, testers receive a push notification. Tapping it opens the release directly. On Android, a single tap starts the download and walks through installation automatically. On iOS, the app provides a QR code or direct link that handles the provisioning flow.
After installation, the button switches from "Install" to "Open" — testers can launch the build without leaving the TestApp.io app. If a newer build comes along, the button changes to "Upgrade" so testers always know when they are behind.
Telling testers "go test the app" without specific guidance leads to shallow, unstructured feedback. That is why the TestApp.io app surfaces tasks directly on the tester's phone.
Each app has a Tasks tab showing what needs to be tested. Tasks include status (new, in progress, blocked, done), priority, assignee, and a link to the specific release they apply to. Testers can update task status as they work — marking items in progress, flagging blockers, or completing them — all without switching to a browser.
If your team uses Jira or Linear, tasks sync bidirectionally. A tester marking a task as "blocked" in the app updates the linked Jira or Linear ticket automatically.
The best bug reports include context. The TestApp.io app lets testers submit feedback with up to 10 attachments — screenshots, screen recordings, or any other images and videos captured on their device.
Every release has a Comments tab where testers write feedback and attach files. Attachments upload in the background, so there is no waiting around. The same comments appear in the portal for developers and PMs who are triaging issues from their desk.
This matters because testers are on the device where the bug lives. They can capture exactly what they see — a glitchy animation, a layout issue on their specific screen size, a crash on their OS version — and attach it to their report in seconds.
One of the most common questions during QA is: "Was this bug in the last version too?" With the TestApp.io app, testers can answer that themselves.
The Releases tab shows every build ever uploaded for an app, with platform and status filters. Testers can install any previous version, reproduce the issue, then install the current build to confirm the fix. No need to ask a developer to dig up an old build and re-share it.
This is especially valuable for regression testing — when you ship a fix, your testers can verify it did not break something that was working before by comparing the old and new builds side by side.
The app does not rely on polling. Real-time updates push changes to every connected device immediately:
This means testers always see the current state of the project. No refreshing, no wondering if they are looking at stale data.
Testers who work across multiple projects can switch between teams from the side menu. Each team has its own set of apps, releases, tasks, and notifications. The switch is instant — all data refreshes to show the selected team's workspace.
If a tester receives a deep link or push notification from a different team than the one they currently have open, the app automatically switches context to the right team.
The TestApp.io app is available on iOS and Android. Testers sign in with their existing TestApp.io account (email, Google, or Apple sign-in) and accept a team invitation to start seeing releases.
For the full setup walkthrough, see the Getting Started with the TestApp.io Mobile App guide in the help center.
If you are managing the distribution side — uploading builds, creating tasks, inviting testers — the Getting Started with TestApp.io guide covers the portal workflow end to end.
TestApp.io helps mobile teams distribute builds to testers, collect feedback, and manage releases — all in one place. Support for iOS (IPA) and Android (APK), with integrations for Slack, Microsoft Teams, Jira, Linear, and 10+ CI/CD platforms.
👉 Get started free — or explore the Help Center to learn more.
But mobile app testing creates a gap that Jira alone cannot fill. A tester installs a build, discovers a crash on a specific device, and needs to report it. They can file a Jira issue manually — typing out the reproduction steps, attaching screenshots, and setting the priority. Then the developer fixes it and moves the Jira issue to "Done". Now someone has to go back to the testing tool to update the status there. Or worse, no one does, and the two systems drift apart.
For teams shipping mobile apps on a weekly or biweekly cadence, this manual sync between Jira and your testing workflow becomes a serious drag on velocity. Missed status updates. Duplicate issues. Bug reports filed in the wrong place. Testers wait for developers to update a ticket; developers assume the tester has already verified.
TestApp.io integrates directly with Jira through Atlassian's OAuth 2.0 to provide real-time, bidirectional sync between your testing tasks and Jira issues. Here is how it works and why it matters for teams shipping mobile apps.
Every mobile release involves two distinct workflows running in parallel:
The problem is not that you use two tools. The problem is that both tools need to reflect the same state, and keeping them in sync manually is unreliable.
Consider what happens during a typical QA cycle:
By the end of the sprint, the Jira board says one thing and the testing tool says another. The team loses confidence in both.
The TestApp.io Jira integration connects the two workflows so that changes in either system propagate automatically. No copying, no pasting, no manual bridging.
The setup uses Atlassian's OAuth 2.0 for secure authorisation:
The entire process takes under five minutes. TestApp.io requests only the permissions it needs to read and write issues in your selected project — it does not ask for admin access to your Atlassian organisation.
Jira and TestApp.io use different schemas for statuses and priorities. Field mappings define the translation layer so data moves correctly between both systems.
Status mapping connects each TestApp.io status to its Jira equivalent:
Priority mapping aligns severity levels so a critical issue in one tool has the same urgency in the other. TestApp.io priorities (Blocker, Critical, High, Normal, and Low) map to Jira priorities (Highest, High, Medium, Low, and Lowest) based on your team's definitions.
Both mappings are fully customisable. If your Jira project uses custom statuses or a modified workflow, you can map every status individually. The same applies to priority levels.
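As a mental model, the default positional pairing behaves like a simple lookup. This sketch assumes the one-to-one mapping implied above; your portal configuration is what actually decides:

```shell
# Illustrative TestApp.io → Jira priority lookup (assumed default pairing).
map_priority() {
  case "$1" in
    Blocker)  echo "Highest" ;;
    Critical) echo "High" ;;
    High)     echo "Medium" ;;
    Normal)   echo "Low" ;;
    Low)      echo "Lowest" ;;
    *)        echo "unmapped" ;;
  esac
}

map_priority Blocker   # prints: Highest
```

Anything that falls outside the mapping is exactly the case the customisation screen exists for.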
Once connected and mapped, sync happens automatically via webhooks in near real time:
This is not polling on a schedule. Webhooks trigger on every change, so both systems stay in sync without delay. Failed webhook deliveries are logged in the sync history and can be retried.
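Conceptually, retrying a failed delivery is just a bounded loop over the same webhook call. This toy model is purely illustrative — the real retry is triggered from the sync history, and `deliver` here is a stub that always fails:

```shell
# Toy model of bounded webhook redelivery; deliver() stands in for the real call.
deliver() { return 1; }

attempts=0
max_attempts=3
until deliver || [ "$attempts" -ge "$max_attempts" ]; do
  attempts=$((attempts + 1))
  echo "Delivery failed, retry $attempts of $max_attempts"
done
```

After the retry budget is exhausted, the failure stays visible in the sync history rather than disappearing silently.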
Most teams starting with the integration already have work in progress in both tools. Two features handle this:
Pull existing Jira issues into TestApp.io with the Pull Tasks feature. Browse your Jira project's issues, select the ones relevant to your current testing cycle, and import them. Each imported issue becomes a synced TestApp.io task — future changes in either direction flow automatically.
A practical approach: import only active issues (those in "To Do" or "In Progress" status). There is no need to import your entire Jira backlog on day one.
Going the other direction, you can push TestApp.io tasks to Jira using the Migrate Tasks feature. Select the tasks, review the status and priority mappings, and confirm. Each task is created as a new Jira issue and linked for ongoing sync.
This is particularly useful when your QA team has been logging issues in TestApp.io and now wants developers to see them on the Jira board without re-entering everything.
With the integration running, here is what a typical testing cycle looks like for a team with developers in Jira and testers in TestApp.io:
No one manually bridges the gap. Both tools are always in sync. Developers never leave Jira. Testers never leave TestApp.io.
Every sync event — successful or failed — is logged in the integration's sync history. Each entry shows:
This matters for two reasons. First, it makes debugging straightforward — if a status is not syncing correctly, the history tells you exactly what happened. Second, it provides accountability for teams that need to track who changed what and when across both systems.
Failed sync events can be retried directly from the history view, handling transient errors like network timeouts without manual intervention.
The integration includes several features that make it production-ready for teams at scale:
For the full feature set, see the Integration Power Features guide.
Mobile app releases have a unique challenge that web development does not: the testing environment is fragmented across devices, OS versions, and form factors. A bug might only appear on a specific Android device running a specific OS version. The context around that bug — device info, screenshots, reproduction steps — is critical for the developer to fix it efficiently.
When testers file bugs in TestApp.io during real-device testing, that context is captured at the source. The Jira integration ensures it reaches developers without anyone stripping out details or forgetting to attach the screenshot. The developer gets the full picture in Jira. The tester gets status updates in TestApp.io. Both sides have what they need to do their job.
For teams shipping mobile apps on tight schedules, eliminating the manual overhead of keeping Jira and your testing workflow in sync directly translates to faster release cycles and fewer dropped bugs.
Connect your Jira workspace at portal.testapp.io under Team Settings → Integrations. The setup takes about five minutes.
For the full step-by-step setup guide with screenshots, see the Jira Integration help article. For details on task management features, visit Task Management. And if you are also using Linear, we have a dedicated integration for that too.
Add two more people to the equation and everything changes. Suddenly you need a way to distribute builds so testers can install them. You need to know who tested what. You need a place to collect feedback that is not a group chat full of screenshots with no context. You need to track whether a bug was fixed, verified, and ready for release — not just on your machine, but on the actual devices your testers are using.
Mobile teams that ship reliably have figured out this coordination problem. They have a workflow that moves builds from development to testing to release without gaps. And that workflow is what separates teams that ship weekly from teams that spend half their sprint chasing status updates across Slack, email, and spreadsheets.
The bottleneck for most mobile teams is not writing code. It is everything that happens between "the code is merged" and "the app is live in the store." Specifically:
Each of these problems is small on its own. Together, they compound into the reason many mobile teams can only ship every two to four weeks instead of every week.
Here is the release workflow that teams on TestApp.io follow to ship consistently and quickly. It breaks into four phases.
Every release starts with a build. TestApp.io accepts both Android APKs and iOS IPAs. You can upload manually through the web portal or automate it through CI/CD integrations with GitHub Actions, Fastlane, Bitrise, or any pipeline that can make an API call.
When a build is uploaded, two things happen automatically:
Testers install the build on their physical devices with one tap. Android installs directly. iOS installs via an ad hoc or enterprise provisioning profile.
Each build lives inside a version, and each version moves through a lifecycle: planning, testing, approval, and release. This structure replaces the informal "is this build ready?" conversations with a clear visual status.
Within each version, you can create tasks — either manually or by letting AI generate them from your release notes. Tasks have priorities, assignees, and statuses that sync bidirectionally with Jira or Linear if your team uses either tool.
The dashboard shows you everything at a glance: recent releases, active tasks, team activity, and install metrics. No digging through multiple tools to understand where things stand.
Testing is where most workflows break down. Not the testing itself, but the feedback loop. A tester finds an issue — now what? Where do they report it? How do they include device context? How does the developer know about it?
In TestApp.io, testers file feedback directly from their device during or after a testing session. The report automatically includes device model, OS version, and app version. Testers add screenshots, reproduction steps, and priority levels.
These reports become tasks that developers can see immediately — either in TestApp.io's built-in task board or in their Jira/Linear project via the integration sync. If something is a release blocker, the blocker tracking feature flags it with the appropriate severity level so the team can prioritize accordingly.
The activity feed gives the team lead visibility into everything happening in real-time: who installed, who commented, which tasks were updated, which blockers were resolved. No need to ask "has everyone tested?" — you can see it.
Before submitting to the App Store or Google Play, teams need a structured way to verify that everything is ready. Playbooks are reusable checklists that standardize this process. Define the steps once — check crash reports, verify accessibility, confirm localization, test on minimum supported devices — and use the same playbook for every release.
Once the checklist is complete and all blockers are resolved, the version moves to the approval stage. From there, launches let you track the actual App Store and Google Play submissions: review status, approval timelines, and release dates.
No team uses a single tool. The value of a release workflow is how well it connects with the tools you already use.
The individual features — build distribution, task management, blocker tracking, reusable checklists — are useful on their own. But the real value is how they compound.
When your build is distributed automatically, testers start testing sooner. When feedback flows directly into tasks, developers fix bugs faster. When status is tracked in one place, nobody wastes time asking for updates. When checklists are reusable, release quality stays consistent even as the team grows.
Teams that adopt a structured release workflow typically see their time from build to first tester install drop from days to hours. Not because any single step got faster, but because the gaps between steps disappeared.
If your team is currently stitching together a release workflow across email, Slack, spreadsheets, and TestFlight, here is the fastest path to a structured process:
The Getting Started guide walks through each step in detail. For teams migrating from another platform, check the App Center migration guide, TestFlight alternatives, or Firebase alternatives comparison.
That works when you are the only person writing code. It stops working the moment someone else needs to test your builds, report bugs, or help you decide if a release is ready. And it completely breaks down when you have a team of four or five people all working on the same app, shipping updates every week.
This guide is about the transition from solo mobile development to a team release process — what changes, what breaks, and how to set up a workflow that does not collapse under the weight of coordination.
As a solo developer, your "release process" probably looks something like this:
There is no handoff because there is no one to hand off to. There is no feedback loop because you are the tester. There is no status tracking because you know the status — it is whatever you are currently doing.
Now add a team:
Suddenly you need answers to questions that never existed before:
Most teams solve these problems with whatever tools are already lying around: Slack for bug reports, email for build distribution, a spreadsheet for tracking who tested what. It works until it does not, usually around the third or fourth sprint when a bug slips through because the report was buried in a thread.
Scaling from solo to team is not about adopting a dozen new tools. It is about adding structure to three things that were invisible when you worked alone:
Your tester cannot test if they do not have the build. This sounds obvious, but it is the most common bottleneck for small teams. The developer builds, then has to remember to share the APK or IPA, then the tester has to figure out how to install it.
On iOS, this is especially painful. You need provisioning profiles, device UDIDs, and either TestFlight (with its review delays) or ad hoc distribution (with its device limits and certificate management).
The fix is a distribution platform that handles this automatically. Upload the build — either manually or via CI/CD — and everyone on the team can install it on their device from one place. TestApp.io handles both Android APK and iOS IPA distribution with a simple install flow that works on physical devices.
If you are already using a CI/CD pipeline, you can automate the upload so every merge to your release branch distributes a build without anyone manually doing anything.
When a tester finds a bug, two things matter: the details are complete enough for a developer to reproduce it, and the report does not get lost.
"It crashed" in a Slack message is not a bug report. "Layout broken on the settings screen" with no screenshot is barely better. What the developer needs is: which device, which OS version, which app version, what were the steps, and ideally a screenshot or screen recording.
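Collecting that context by hand amounts to filling a small template for every report. A sketch of what that looks like — the field names and example values are invented, not a TestApp.io format:

```shell
# Hypothetical bug-report header with the minimum context a developer needs.
report_header() {
  printf 'Device: %s\nOS: %s\nApp version: %s\nSteps: %s\n' "$1" "$2" "$3" "$4"
}

report_header "Pixel 7" "Android 14" "2.3.1 (build 148)" "Open Settings, rotate device"
```

The point is not the script; it is that every one of these fields has to come from somewhere, and asking testers to supply them manually is where reports fall apart.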
TestApp.io's task management captures this context at the source. When a tester files feedback, device information is included automatically. They add the reproduction steps, screenshots, and severity level. The result is a task that a developer can act on immediately without a follow-up conversation asking "what phone were you using?"
For teams that use Jira or Linear for development work, the Jira and Linear integrations sync these tasks bidirectionally — so developers see the bug in their tool, and testers see the fix status in theirs.
Solo developers know when the release is ready because they decide. On a team, "ready" requires consensus. Has everyone tested? Are there open blockers? Did someone verify that the login flow still works after last week's refactor?
Two features solve this:
The combination gives you a clear answer to "can we ship?" instead of a vague feeling based on who you last talked to.
Here is a practical sequence for teams transitioning from solo to structured:
This alone eliminates the "how do I get the build?" problem. No more emailing APKs, sharing download links in Slack, or walking over to someone's desk with a USB cable.
Now feedback has a home. Bug reports are structured, tracked, and visible to the entire team. No more digging through chat history to find that screenshot someone sent three days ago.
After three weeks, you have a complete workflow: builds are distributed automatically, feedback is collected in structured tasks, and releases follow a repeatable checklist.
Now the tedious parts are automated, and you can focus on what matters: building the app and making sure it works.
Having seen teams go through this transition, a few patterns consistently cause problems:
The transition from solo to team does not require a big-bang process change. Start with the distribution problem — get your builds to your testers without manual effort. Then layer on structured feedback and release checklists as your team needs them.
Create your team at portal.testapp.io and follow the Getting Started guide. If you are coming from another tool, check the App Center migration guide or the comparison guides for TestFlight, Firebase, and Diawi alternatives.
Enterprise mobile teams need distribution infrastructure that matches their security requirements, team complexity, and release velocity. Here's what that looks like in practice.
Talk to any mobile engineering manager at a company with 50+ people touching mobile apps, and the same requirements come up:
Most app distribution tools are built for indie developers or small teams. They solve the "how do I get this APK to my friend" problem. Enterprise teams need something different.
The single biggest concern enterprise teams raise: where are our builds stored?
When you connect your own S3 bucket or Google Cloud Storage to your distribution platform, you get:
This matters for regulated industries — fintech, healthcare, government contractors — where a third party storing your application binaries creates compliance headaches.
A typical enterprise mobile org looks like this:
That's 15-30 people who need different levels of access to different builds. Setting up your workspace with proper team structure from day one prevents the chaos of everyone seeing every build.
The pattern that works for teams of 10+:
Manual uploads don't scale past 3-4 builds per week. At enterprise velocity (daily builds, multiple variants), you need automated distribution.
The setup is straightforward with any major CI/CD tool:
Once connected, every successful build automatically lands in your testers' hands. No Slack messages, no manual downloads, no "which build should I test?"
When 20 people are involved in a release cycle, communication overhead is the real productivity killer. Automated notifications solve this:
The goal is zero-effort distribution: developer pushes code → CI builds → testers get notified → feedback flows back into your issue tracker. No one has to manage the process manually.
Enterprise releases can't ship on vibes. You need verifiable quality criteria:
This is the gap between "we distributed the app" and "we're confident it's ready to ship."
If you're running a mobile team of 10+ and currently managing distribution via TestFlight + Slack messages + shared drives, the path to enterprise-grade distribution takes about an afternoon:
Your team will have professional distribution infrastructure running by end of day — the same setup used by teams of 50 to 100+ who ship weekly without the chaos.
Then your team grows to 10, 20, 50 people. You start shipping weekly. You have QA, product managers, stakeholders, and external beta testers. And suddenly Firebase's simplicity becomes a limitation.
Here's what teams consistently report when they outgrow Firebase — and what they do about it.
A tester finds a bug in your build. In Firebase, they… send you a Slack message? File a Jira ticket manually? There's no way to create, track, or resolve issues inside the distribution workflow.
Teams need built-in task management where bugs discovered during testing are tracked alongside the build that triggered them. Even better: bidirectional sync with Jira or Linear so issues flow automatically between your testing platform and project management tool.
With Firebase, every tester needs a Google account and must be explicitly invited. That works for internal teams but fails for:
Public install pages let anyone install with a link — no account required. You can still control access, but you remove the friction that blocks your testing velocity.
When is a build ready to ship? Firebase can't answer that. It distributes builds — that's it. There's no concept of:
Quality playbooks turn "I think it's ready" into "all 12 checklist items are verified and all 3 blockers are resolved."
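As a minimal sketch of what such a gate looks like in code — the field names and checklist items here are illustrative, not TestApp.io's actual API:

```python
# Hedged sketch: a release-readiness gate in the spirit of quality playbooks.
# Checklist items and structure are invented for illustration.

def release_ready(checklist: dict, open_blockers: int) -> bool:
    """A build is shippable only when every checklist item is verified
    and no blockers remain open."""
    return all(checklist.values()) and open_blockers == 0

playbook = {
    "crash-free smoke test on iOS": True,
    "crash-free smoke test on Android": True,
    "payment flow regression passed": True,
}

print(release_ready(playbook, open_blockers=0))  # True: everything verified
print(release_ready(playbook, open_blockers=3))  # False: blockers veto the release
```

The point is the veto: a single open blocker overrides an otherwise green checklist, which is exactly the objectivity generic distribution tools lack.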
Firebase App Distribution lives inside the Firebase Console. Your builds are on Google's infrastructure. Your analytics are in Google's format. If your team uses AWS or Azure, you're running a split infrastructure.
Teams that need external storage on their own S3 or GCS buckets can't do that with Firebase. For regulated industries (fintech, healthcare), this is often a dealbreaker.
With Firebase, you upload a build and hope people install it. You can see download counts, but you can't see:
Activity feeds and installation tracking give team leads visibility into the actual testing progress — not just "it was distributed."
Teams typically switch in an afternoon. The process:
Your existing Firebase testers just need a new install link. No migration tool needed — you're not migrating data, you're upgrading your workflow.
To be fair, Firebase App Distribution works well for:
If that describes your team, stick with Firebase. But if you're reading this, you've probably already hit the ceiling.
See our detailed Firebase App Distribution vs TestApp.io comparison for the full feature-by-feature breakdown, or read our Firebase alternatives guide to understand all your options.
The teams that switch typically share the same story: Firebase was great when they were small, but as soon as they needed task management, quality gates, or team workflows, they needed a dedicated platform built specifically for mobile distribution.
Without a system, you get the familiar chaos: "which build has the fix?", "did QA test this?", "I thought we were shipping Thursday?", and the classic "my build is 3 versions behind."
Here's the release management system that works for teams of 10 to 100+.
If anyone on your team is manually uploading builds, you have a bottleneck. At team scale, distribution must be automated:
Set this up once with GitHub Actions, Fastlane, Bitrise, or any CI tool via `ta-cli`, and you never think about it again. Every build reaches testers within minutes of being merged.
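A minimal sketch of what that CI step might look like — a helper that assembles the `ta-cli` invocation. The flag names below are assumptions based on typical `ta-cli` usage; check the official TestApp.io docs for the exact interface before relying on them:

```python
# Hedged sketch of a CI upload step. Flag names are assumptions --
# verify against the ta-cli documentation.
import os

def build_publish_command(artifact_path: str, release_notes: str) -> list:
    """Assemble the ta-cli invocation; credentials come from CI secrets."""
    return [
        "ta-cli", "publish",
        f"--api_token={os.environ.get('TESTAPPIO_API_TOKEN', '')}",
        f"--app_id={os.environ.get('TESTAPPIO_APP_ID', '')}",
        f"--path={artifact_path}",
        f"--release_notes={release_notes}",
    ]

cmd = build_publish_command("build/app-release.apk", "Fixed checkout crash")
print(" ".join(cmd))
# In an actual pipeline you would then run: subprocess.run(cmd, check=True)
```

Wire this (or the equivalent shell one-liner) into the post-build step of your pipeline and every merged commit becomes an installable build.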
The biggest time sink for engineering managers: collecting and organizing feedback from testers, PMs, and stakeholders who all report bugs differently, in different channels.
The fix:
The result: every piece of feedback lives in one place, linked to the build it was found in, and flows into your project management tool automatically.
At team scale, "I think it's ready" is not a release strategy. You need verifiable quality criteria:
Engineering managers use these to answer the daily standup question: "are we on track to ship this week?"
Stop being the person who messages the team every time a build is ready. Automate it:
Here's what a typical week looks like for a team of 15 using this system:
Monday: Developers merge features from the sprint. CI automatically builds and distributes to QA team. Slack notification fires: "Build 4.2.1 (247) ready for testing."
Tuesday-Wednesday: QA tests on iOS and Android. Issues are created in-app, automatically synced to Jira. Two blockers are flagged. Developers see them immediately and start fixing.
Thursday: Blocker fixes are merged. New build is auto-distributed. QA re-tests the specific issues. Both blockers are resolved and marked as fixed in Jira.
Friday: Engineering manager checks the launch playbook: 8/8 items checked. No open blockers. 12 of 15 team members have installed and tested. PM has signed off. Build goes to production.
Total time the engineering manager spent on distribution logistics: approximately zero.
If you're currently managing releases through a combination of TestFlight, Slack, Google Drive, and prayer, here's how to set up a proper system in one afternoon:
The whole setup takes an afternoon. By next week, your team will have the same release infrastructure used by teams of 50+ who ship every week without the chaos.
Is this a blocker? Who decides? Where does it get tracked? Does the release go out anyway because the deadline is today and someone in management said "we committed to this date"?
If your team has shipped a critical bug because a blocker got lost in a Slack thread or buried in a long Jira backlog, you already know the cost. App store review rejections. One-star reviews. Emergency hotfixes on a Saturday. Trust erosion with your users.
Blocker tracking exists to prevent exactly this. Not as another process to follow, but as a dedicated mechanism that ensures critical bugs can't be ignored, forgotten, or deprioritized into oblivion.
Let's define terms clearly, because "blocker" gets used loosely in many teams.
A blocker is an issue with the highest possible priority — one that must be resolved before a release can ship. It's not a "nice to fix." It's not a "we should probably look at this." It's a hard stop.
Common examples of blockers:
Common examples of things that are not blockers (even if they're annoying):
The distinction matters because when everything is a blocker, nothing is. Teams that over-use the blocker label create noise. Teams that under-use it ship broken software. The goal is precision.
Most teams don't lack a way to report bugs. They have Jira, Linear, GitHub Issues, Asana, or a dozen other tools. The problem is that blockers don't get special treatment in these systems. They're just another priority level in a list of hundreds of issues.
Here's what typically goes wrong:
TestApp.io treats blockers as a first-class concept, not just another priority level. Here's how the system works end to end.
There are two primary ways to report a blocker:
1. From task creation. When creating a new task in the task management system, set the priority to Blocker. This is the highest priority level available, above Critical, High, Normal, and Low. The task is immediately flagged across the system.
2. From a release. When a tester is working with a specific build and discovers a blocking issue, they can report the blocker directly from the release. This creates a task with Blocker priority that is automatically linked to the specific release where the issue was found. This linkage is important — it answers the question "which build has this problem?" without any manual effort.
Both paths result in the same outcome: a tracked blocker that surfaces everywhere it needs to.
This is where dedicated blocker tracking diverges from generic issue tracking. In TestApp.io, blockers don't just exist in a task list — they surface prominently across multiple views:
App Dashboard — Blocker Count Badge. The main dashboard for each app shows a blocker count. You don't have to dig into task lists or run filtered searches. The number is right there, impossible to miss. If it's not zero, you know there's a problem.
Version Overview — Warning Indicators. When viewing a version's overview, any open blockers trigger warning indicators. This is critical during the Testing and Ready phases of the version lifecycle. A version with open blockers is visually flagged as not-ready, regardless of what anyone says in a meeting.
Release List — Flagged Releases. Individual releases (builds) that have blockers reported against them are flagged in the release list. When scrolling through builds, you can immediately see which ones have known blocking issues. This prevents testers from wasting time on builds that are already known to be broken.
The design principle here is simple: blockers should be unavoidable. You shouldn't have to go looking for them. They should be in your face until they're resolved.
Finding and reporting blockers is only half the battle. The other half is resolving them with a clear, auditable process.
When a blocker is resolved in TestApp.io, the resolution captures several pieces of information:
This resolution data feeds into the audit trail for the version, creating a complete record of every blocker's lifecycle: when it was reported, on which build, by whom, how it was resolved, by whom, and when.
Over time, blocker data becomes a powerful diagnostic tool for your release process. TestApp.io tracks blocker metrics that help you answer important questions:
SLA tracking adds a time dimension to this. You can monitor whether blockers are being resolved within acceptable timeframes and identify when resolution is lagging behind expectations.
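The arithmetic behind that time dimension is simple. Here's an illustrative sketch — the timestamps and the 4-hour SLA threshold are invented for the example:

```python
# Illustrative SLA arithmetic: mean time-to-resolution and breach count.
# The data and the 4-hour threshold are made up for this example.
from datetime import datetime, timedelta

SLA = timedelta(hours=4)

blockers = [
    {"reported": datetime(2024, 5, 3, 9, 0),  "resolved": datetime(2024, 5, 3, 11, 30)},
    {"reported": datetime(2024, 5, 3, 14, 0), "resolved": datetime(2024, 5, 3, 19, 15)},
]

durations = [b["resolved"] - b["reported"] for b in blockers]
mean_ttr = sum(durations, timedelta()) / len(durations)
breaches = sum(1 for d in durations if d > SLA)

print(f"mean time to resolution: {mean_ttr}")  # 3:52:30
print(f"SLA breaches: {breaches}")             # 1
```

Tracked over weeks, the trend in these two numbers tells you whether blocker resolution is keeping pace with your release cadence.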
Blocker tracking doesn't exist in isolation — it's deeply connected to the version lifecycle. Here's how they interact at each stage:
Planning and Development: Blockers are less common here since there may not be testable builds yet. But they can exist — for example, a known issue carried over from a previous version that must be addressed before this one ships.
Testing: This is where most blockers are discovered. As testers work through builds, they report blockers that surface prominently on the version's Quality tab. The blocker count becomes the primary metric for release readiness during this phase.
Ready: Moving a version to Ready status is a statement that the version is shippable. Open blockers directly contradict this. The blocker count on the version overview serves as a quality gate — it's a clear, objective signal that the version isn't actually ready if the count is greater than zero.
Released: If a blocker is discovered after release (it happens), it can still be tracked against the version. This feeds into post-release metrics and may trigger a hotfix version.
This integration means blocker tracking isn't a separate process bolted onto your workflow. It's woven into the progression of every release.
Let's walk through the scenario from the introduction with proper blocker tracking in place.
4:30 PM Friday. Your team has version v3.2.0 in Testing status. Three builds have been uploaded this week via CI/CD. The latest build, uploaded two hours ago, is the release candidate.
4:32 PM. A tester discovers that the payment flow crashes on iOS 17 when the user has no saved payment methods. They report a blocker directly from the release. The task is created with Blocker priority, linked to the specific build.
4:33 PM. The blocker count on the app dashboard updates to 1. The version overview shows a warning indicator. The release is flagged in the release list. Everyone with access can see this immediately — no Slack message required.
4:35 PM. The team gets a Slack notification (via the Slack integration) about the new blocker. The notification includes the blocker description, which build it affects, and who reported it.
4:40 PM. The lead developer picks up the blocker, reproduces it, and identifies the issue — a nil check that was missed in a recent refactor. The fix is straightforward.
5:15 PM. The fix is pushed. CI/CD runs, and a new build is automatically uploaded to the version's releases via ta-cli.
5:20 PM. The tester installs the new build from the release link, verifies the fix, and the developer resolves the blocker with notes: "Added nil check for saved payment methods array. Verified on iOS 17.2 simulator and physical device."
5:22 PM. Blocker count drops to 0. Version overview shows no warnings. The Quality tab confirms no open blockers.
5:25 PM. The team reviews the Quality tab one more time, confirms everything looks clean, and moves the version to Ready. The release manager will submit to the App Store on Monday morning.
Everyone goes home on time.
Compare this to the alternative: the bug is reported in Slack, gets buried under replies, someone half-remembers it on Monday, the version ships without the fix, and a one-star review appears by Tuesday.
Here are practical recommendations for getting the most out of blocker tracking:
Every team should have a shared definition of what makes something a blocker versus a critical or high-priority bug. Write this down in your team's onboarding docs or wiki. Ambiguity here leads to either over-reporting (which creates noise) or under-reporting (which defeats the purpose).
A simple framework: If this bug were in production, would it cause immediate harm to users or the business? If yes, it's a blocker.
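That framework is mechanical enough to write down. Here's a toy encoding — the signal names are illustrative, not a TestApp.io API:

```python
# Toy encoding of the framework: "would this cause immediate harm in
# production?" Signal names are invented for illustration.

def triage(crashes_core_flow: bool, data_loss: bool, security_issue: bool) -> str:
    """Return 'Blocker' only for issues that would cause immediate harm."""
    if crashes_core_flow or data_loss or security_issue:
        return "Blocker"
    return "High"  # still important, but it doesn't stop the release

print(triage(crashes_core_flow=True, data_loss=False, security_issue=False))   # Blocker
print(triage(crashes_core_flow=False, data_loss=False, security_issue=False))  # High
```

Writing the criteria down like this — in your wiki, not necessarily in code — is what keeps the blocker label precise.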
When you report a blocker from a specific release, it's automatically linked to that build. This context is valuable — it tells the developer exactly which build to reproduce the issue on and gives the team traceability from bug to build to fix to verification.
"Fixed" is not a resolution note. "Added nil check for savedPaymentMethods array in CheckoutViewController. Crash was caused by force-unwrapping an optional that is nil when user has no saved cards. Verified fix on iOS 17.0, 17.2, and 17.4" — that's a resolution note. Future team members will thank you.
During your retrospective (you are doing retrospectives, right?), pull up the blocker metrics. Look at:
Trends in these metrics are more informative than any single data point.
It's tempting to ship with an open blocker when there's pressure from stakeholders or a hard deadline. Resist this. The entire point of blocker tracking is to provide an objective signal. If you override it routinely, you've just built a system that everyone ignores.
If a deadline is truly immovable, the correct response is to scope down the release — remove the affected feature or screen — not to ship a known blocker.
Connect TestApp.io to Slack or Microsoft Teams so blocker notifications are automatic. The faster the team knows about a blocker, the faster it gets resolved. Slack integration supports channel selection and event configuration, so you can route blocker notifications to a dedicated release channel without spamming your general channel.
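Under the hood, this style of notification is a simple JSON POST to a Slack incoming webhook. A minimal sketch — the message fields are placeholders, and TestApp.io's built-in integration does this for you without any custom code:

```python
# Sketch of a blocker notification via a Slack incoming webhook.
# TestApp.io's integration handles this automatically; this only shows
# the payload shape Slack's webhooks expect (JSON with a "text" field).
import json
import urllib.request

def blocker_message(title: str, build: str, reporter: str) -> bytes:
    text = f":rotating_light: Blocker reported on {build}: {title} (by {reporter})"
    return json.dumps({"text": text}).encode("utf-8")

payload = blocker_message("Payment flow crashes on iOS 17", "v3.2.0 (247)", "QA")
print(payload.decode())

# To actually send it (requires a real webhook URL):
# req = urllib.request.Request(
#     "https://hooks.slack.com/services/...", data=payload,
#     headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```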
Tools can only do so much. Blocker tracking works best when it's backed by team culture:
Blocker tracking isn't glamorous. It's not the feature you showcase in a demo. But it's the feature that prevents your most painful days — the emergency hotfixes, the weekend deploys, the apologetic emails to users.
The core idea is simple: critical bugs deserve dedicated, visible, enforceable tracking that's connected to your releases and your version lifecycle. When blockers can't hide in long task lists, when they surface on every dashboard, and when their resolution is documented and measurable — you ship better software.
Not because you have fewer bugs (you'll always have bugs), but because the ones that matter most can't slip through.
Start using TestApp.io to bring structured blocker tracking to your mobile releases. Check the help center for setup guides and detailed documentation.
Version management shouldn't be this painful. But for most mobile teams, it is — because they're stitching together tools that were never designed to track the full lifecycle of a mobile release.
This guide walks through how to manage the complete version lifecycle in TestApp.io, from the first planning session to the final archive. If you're tired of ambiguity around what's shipping, when, and whether it's actually ready, read on.
Before diving into the solution, let's be honest about why version management falls apart. Most teams start with good intentions — a Slack channel, a Notion doc, maybe a Jira epic per release. But these approaches share common failure modes:
The result is predictable: missed bugs, confused testers, delayed releases, and a lot of time spent in "what's the status?" meetings that shouldn't need to exist.
TestApp.io provides a structured version lifecycle that gives every release a clear, trackable progression from initial planning through final archival. Each version moves through defined statuses, and every artifact — builds, tasks, blockers, launch submissions — is connected to the version it belongs to.
Here's what the lifecycle looks like at a high level:
Planning → Development → Testing → Ready → Released → Archived
Each status represents a distinct phase with its own activities, expectations, and quality gates. Let's walk through every stage.
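The progression above can be sketched as a small state machine — transitions only move forward, and the "no open blockers" gate guards the move to Ready. The data shapes here are illustrative, not TestApp.io's internals:

```python
# Minimal state-machine sketch of the version lifecycle described in this
# guide. Shapes are illustrative, not TestApp.io's actual implementation.

TRANSITIONS = {
    "Planning": ["Development"],
    "Development": ["Testing"],
    "Testing": ["Ready"],
    "Ready": ["Released"],
    "Released": ["Archived"],
    "Archived": [],
}

def advance(current: str, target: str, open_blockers: int = 0) -> str:
    """Move a version forward, enforcing the blocker gate before Ready."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current} to {target}")
    if target == "Ready" and open_blockers > 0:
        raise ValueError("open blockers: version is not ready")
    return target

print(advance("Planning", "Development"))            # Development
print(advance("Testing", "Ready", open_blockers=0))  # Ready
```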
Every version starts in the Planning status. This is where you define what's going into the release before any code is written or any builds are uploaded.
During planning, you'll typically:
Name the version (e.g., `v2.5.0`) along with any relevant notes about scope or goals.

The Planning tab within the version dashboard gives you a focused view of all tasks associated with the version. You can see what's assigned, what's prioritized, and what's still unscoped.
If you're creating a version around a set of release notes or a feature description, TestApp.io can generate up to 15 QA tasks automatically using AI. These tasks are platform-aware, meaning they'll account for iOS-specific or Android-specific testing needs. It's a fast way to bootstrap your testing plan without starting from a blank slate.
Once planning is complete, move the version to Development. This signals to the team that active work is underway.
During development, the version dashboard becomes a coordination hub:
Builds can be uploaded automatically from your CI/CD pipeline via `ta-cli`, so every successful build is captured and linked.

The key benefit here is visibility. Instead of asking "did the latest build get uploaded?" in Slack, you can see it directly in the version dashboard.
Moving to Testing status tells the team that the version is ready for QA. Builds are available, and testers should be actively validating.
This is where the version dashboard really shines:
The Testing phase is where quality gates become critical. TestApp.io tracks blockers — the highest-priority issues that must be resolved before a release can ship. We'll cover blocker tracking in depth in a separate post, but the key point is this: blocker counts are visible on the version dashboard, and they serve as a clear signal of release readiness.
If a version has open blockers, it's not ready. Period. This removes the subjective "I think it's fine" conversations and replaces them with objective criteria.
A version moves to Ready when testing is complete and all quality gates are passed. This means:
The Ready status is a holding state — it means the version is approved for release but hasn't been submitted or shipped yet. This is useful for teams that have a scheduled release cadence or need sign-off from a release manager before going live.
Once the version is live — whether that means submitted to the App Store, pushed to Google Play, or distributed to your full user base — it moves to Released.
This is also where Launches come into play. Launches are TestApp.io's way of tracking store submissions attached to a version. A launch progresses through its own statuses:
Draft → In Progress → Submitted → Released
You can track exactly where your App Store or Google Play submission stands without leaving the version dashboard. This is especially useful for teams that submit to multiple stores or have staggered rollouts across platforms.
Before marking a launch as submitted, many teams use Playbooks — reusable checklists that ensure nothing is missed. TestApp.io includes templates for common scenarios:
You can also create custom playbooks with required items, so critical steps can't be skipped. Think of them as pre-flight checklists for your release.
After a version has been released and enough time has passed, move it to Archived. This keeps your active version list clean while preserving the full history of what happened — every build, every task, every comment, every status change.
Archived versions remain fully searchable and browsable. You're not deleting anything; you're decluttering your workspace.
Each version in TestApp.io has a dedicated dashboard with five tabs. Here's what each one gives you:
| Tab | What It Shows |
|---|---|
| Overview | Version summary — current status, key metrics, recent activity, blocker count, and quick links to important artifacts. |
| Planning | All tasks associated with the version. Filter by assignee, priority, or status. Kanban board and table views available. |
| Releases | Every build uploaded for this version. Platform, file info, upload date, install links, and distribution status. |
| Quality | Blocker tracking, testing metrics, and quality indicators. The go-to tab for answering "is this version ready to ship?" |
| Settings | Version configuration — name, description, target dates, and other metadata. |
Having all of this in one place eliminates the context-switching tax of jumping between Jira, Slack, spreadsheets, and your CI dashboard.
Let's compare what release week looks like with and without structured version management.
The difference isn't magic — it's structure. When every artifact, status change, and quality signal lives in one connected system, releases become predictable instead of chaotic.
Every action taken on a version is recorded in an audit trail. This includes:
This isn't just for compliance — though it helps there too. The audit trail is invaluable for post-mortems. When a release goes sideways, you can reconstruct exactly what happened without relying on anyone's memory.
If you're transitioning from ad-hoc release tracking, here are some practical suggestions:
Don't try to retroactively organize past releases. Create a version for your next upcoming release and use it as a pilot. Let the team experience the workflow before rolling it out broadly.
The biggest time-saver is automatic build uploads. Set up ta-cli in your CI/CD pipeline so every successful build automatically appears in the version's Releases tab. This eliminates the "where's the latest build?" question entirely.
Make it a team norm: if a bug could prevent the release from shipping, it's a blocker. Report it as a blocker, not just a high-priority task. The distinction matters because blocker counts are surfaced prominently across the dashboard.
If your team uses project management tools like Jira or Linear, connect them. Two-way sync means tasks created in those tools automatically appear in your version's planning tab, and status changes flow both directions in real time. This avoids duplicate data entry and keeps everyone working in their preferred tool.
Start with the built-in templates for App Store or Google Play submissions. Customize them over time as you learn what your team's specific pre-release checklist looks like. The goal is to make "did we forget something?" a question that never needs to be asked.
Spend 15 minutes after each release reviewing the audit trail. Look for patterns: Are blockers consistently found late in the cycle? Are certain types of tasks always underestimated? The data is there — use it to improve your process.
Version lifecycle management isn't about adding process for the sake of process. It's about replacing ambiguity with clarity. When every team member can look at a version dashboard and immediately understand what's planned, what's built, what's tested, what's blocking, and what's shipped — releases stop being stressful events and start being routine operations.
TestApp.io's version lifecycle gives you the structure to make that happen, without forcing you into a rigid workflow that doesn't fit your team. The six stages are a framework, not a straitjacket. Use them as guardrails, and let the connected dashboard, blocker tracking, and audit trail handle the rest.
Ready to bring order to your release process? Get started with TestApp.io and create your first version today. For detailed setup instructions, visit the help center.
The result? Some features get tested rigorously. Others barely get a glance. And when a bug ships to production, the postmortem always comes back to the same root cause: "We did not test that scenario."
TestApp.io's AI task generation reads your release notes and produces targeted, platform-aware QA tasks that cover the changes in that build. It does not replace your testers' judgment. It gives them a comprehensive starting point so nothing falls through the cracks.
Here is the core concept: when you upload a new build to TestApp.io, you include release notes describing what changed. The AI reads those notes along with your app's context (description, platform, previous patterns) and generates up to 15 QA task suggestions tailored to that specific build.
These are not generic "test the login flow" tasks. They are targeted to the actual changes. If your release notes say "Fixed crash when rotating device on the payment screen," the AI generates tasks like verifying the rotation behavior on the payment screen across different device orientations, checking that the payment flow completes after rotation, and testing edge cases like rotating mid-transaction.
The generated tasks are suggestions, not mandates. You review them, edit what needs adjusting, remove what is irrelevant, and bulk-create the ones you want. They land on your task board as real tasks with priorities and assignees, ready for your testing workflow.
Let us compare the two approaches on a real-world release.
Say your latest build includes these changes:
- Added dark mode support for all main screens
- Fixed crash when uploading images larger than 10MB
- Improved loading time for the dashboard by 40%
- Added pull-to-refresh on the notifications screen
- Fixed incorrect badge count after clearing notifications
- Updated minimum supported iOS version to 15.0

A QA lead reads the notes and creates tasks. On a busy day, this is what gets written:
Four tasks for six changes. The badge count fix and the iOS version update are not covered. Two of the four tasks lack enough detail for a tester to execute them without asking follow-up questions.
This is not because the QA lead is careless. They are busy, they are context-switching between three releases, and writing detailed QA tasks is mentally taxing work that happens at the end of an already full day.
The AI reads the same release notes and generates something closer to this:
Fifteen tasks covering all six changes, with specific test scenarios, edge cases, and platform considerations. A tester can pick up any of these and execute them without ambiguity.
The time investment? A few seconds to click "Generate Tasks" and a couple of minutes to review and adjust. Compare that to 20-30 minutes of manual task writing that still misses scenarios.
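Here is a toy illustration of the input-to-output shape only — turning bulleted release notes into task stubs. The real AI generation is far richer (platform-aware scenarios, edge cases, priorities); this just shows why structured bullets parse into distinct, testable changes:

```python
# Toy sketch of the input-to-output shape: one task stub per release-note
# bullet. This is NOT how TestApp.io's AI works -- just an illustration of
# why bulleted notes map cleanly to distinct test tasks.

def draft_tasks(release_notes: str) -> list:
    """One task stub per bullet line in the release notes."""
    return [
        "Verify: " + line.strip().lstrip("- ").strip()
        for line in release_notes.splitlines()
        if line.strip().startswith("-")
    ]

notes = """- Added dark mode support for all main screens
- Fixed crash when uploading images larger than 10MB"""
for task in draft_tasks(notes):
    print(task)
```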
Here is the step-by-step workflow.
When you upload a new build to TestApp.io — whether through the dashboard, the CLI (ta-cli), or your CI/CD pipeline — include release notes describing what changed in this build.
The more specific your release notes, the better the AI's output. More on this later in the tips section.
Once the build is uploaded and processed, go to the release in your TestApp.io dashboard. You will find the release notes displayed along with the build details.
Look for the Generate Tasks option associated with the release. Clicking it sends the release notes, along with your app's context (app description, platform — iOS or Android), to the AI engine.
The generation takes a few seconds. When it completes, you see a list of suggested QA tasks.
This is the important part. AI-generated tasks are suggestions, not final outputs. Review each one with your tester's eye:
Click into any generated task to modify it before creation. You can change:
Think of this as a review pass, not a rewrite. The AI gives you 80% of the content; you add the 20% that requires human context.
Once you have reviewed and edited the suggestions, select the ones you want to keep and bulk-create them. They immediately appear on your task board as real tasks, ready to be assigned and worked on.
You can create all 15 suggestions, or just the 8 that are most relevant. There is no obligation to accept everything the AI generates.
The quality of AI-generated tasks depends on the context available. Here is what the AI uses:
This is the primary input. The AI parses the release notes to understand what changed, what was fixed, what was added, and what was modified. Structured release notes (bullet points, categorized changes) produce better results than a single paragraph of prose.
Your app's description in TestApp.io provides background context. If your app is described as a "financial services app for iOS and Android," the AI can factor in domain-specific concerns like security, data accuracy, and compliance-related testing.
The AI knows whether the build is for iOS or Android and tailors tasks accordingly. An iOS build might get tasks related to iOS-specific behaviors (like permission dialogs, App Transport Security, or device rotation). An Android build gets tasks relevant to Android's ecosystem (like varied screen sizes, back button behavior, or permission handling).
This platform awareness means you do not have to mentally filter out irrelevant platform suggestions. The tasks are already scoped to the right platform.
Generated tasks do not live in a separate silo. Once created, they are full-fledged tasks on your TestApp.io task board with all the standard capabilities:
This last point is worth emphasizing. If you are using the JIRA or Linear integration, AI-generated tasks flow into your developers' issue trackers just like any other task. The developer does not need to know or care that the task was AI-generated. It appears on their board like any other issue.
The quality of the output directly correlates with the quality of the input. Here are practical tips for getting the most useful task suggestions.
Compare these two versions of the same change:
Vague: "Fixed bugs and improved performance"
Specific: "Fixed crash on payment screen when rotating device during transaction. Improved dashboard load time from 3.2s to 1.8s by optimizing API calls."
The vague version gives the AI almost nothing to work with. The specific version produces targeted, testable tasks.
Structure your release notes as a bulleted list of changes. Each bullet becomes a potential source of one or more test tasks. A paragraph of prose is harder for the AI to parse into distinct, testable changes.
"Added pull-to-refresh on notifications" tells the AI what changed. "Added pull-to-refresh on notifications to resolve user complaints about stale notification data" also tells it why, which can produce more thoughtful edge-case tasks (like testing with stale cache data or poor network conditions).
If a change only affects certain OS versions, device types, or configurations, mention it in the notes. "Updated minimum iOS version to 15.0" gives the AI explicit information to generate version-boundary testing tasks.
"Fixed login bug and redesigned the settings page" is two changes that should be two bullets. Separating them helps the AI generate distinct tasks for each change rather than conflating them.
The best workflow is: generate tasks, take a short break or switch context, then come back and review. Fresh eyes catch the suggestions that are too generic or miss your app's specific edge cases.
AI task generation is most valuable in these scenarios:
To be clear about the boundaries: AI task generation does not replace exploratory testing, domain expertise, or the intuition that experienced testers develop over years. It will not catch the subtle interaction bug that only happens when you navigate between three specific screens in a particular order while on a slow network.
What it does is handle the routine, systematic task creation that takes up a disproportionate amount of QA planning time. It ensures that every change in the release notes has corresponding test coverage. It catches the obvious tasks so your testers can spend their energy on the non-obvious ones.
Think of it as a QA task first draft. A really good first draft that covers the fundamentals, leaving your team free to add the nuanced, experience-driven test scenarios that no AI can generate.
If you are manually creating QA tasks from release notes today, AI task generation can reclaim that time and improve your test coverage simultaneously. The workflow is simple: upload a build with release notes, generate tasks, review, create.
Try it on your next release at portal.testapp.io. Write detailed release notes, generate the tasks, and compare the output to what you would have created manually. Most teams find the AI catches scenarios they would have missed.
For additional details on task management workflows, check the help center.
This disconnect is not just annoying. It costs real time. Every manual copy-paste, every "hey, did you update the ticket?" Slack message, every missed status change adds friction to a process that should be seamless.
TestApp.io's JIRA integration solves this with genuine 2-way, real-time sync. Changes flow in both directions automatically. No middleware, no Zapier workarounds, no cron jobs. Here is how to set it up from scratch, and how to get the most out of it once it is running.
Before diving into setup, here is a clear picture of what you get:
The integration uses Atlassian's OAuth 2.0 flow, which means you are not handing over API tokens or service account credentials. Here is how to get started:
In your TestApp.io dashboard, go to your version's settings and find the Integrations tab. You will see JIRA listed as an available integration.
Click Connect on the JIRA integration card. This redirects you to Atlassian's OAuth consent screen. You will need to:
Once authorized, you are redirected back to TestApp.io with the connection established. The OAuth token is stored securely and handles refresh automatically, so you will not need to re-authorize unless you explicitly revoke access.
The integration requests access to read and write issues, comments, and project metadata. It does not request admin-level permissions for your Atlassian organization. Only the JIRA projects you explicitly select will be accessible.
After connecting, you need to tell TestApp.io which JIRA project to sync with. This is a one-to-one mapping: one TestApp.io version syncs with one JIRA project.
From the integration settings panel:
A few things to keep in mind here:
This is where the integration gets powerful. Field mapping lets you define how statuses and priorities translate between the two systems.
JIRA and TestApp.io likely use different status names. Maybe your JIRA workflow has "To Do," "In Development," "Code Review," "QA," and "Done." TestApp.io uses statuses that are more QA-focused.
The mapping interface lets you pair each JIRA status with a TestApp.io status. For example:
| JIRA Status | TestApp.io Status |
|---|---|
| To Do | Open |
| In Development | Open |
| QA | In Progress |
| Done | Closed |
This mapping works in both directions. When a task moves to "Closed" in TestApp.io, JIRA updates it to "Done" (or whatever you mapped). When a developer moves an issue to "QA" in JIRA, it appears as "In Progress" in TestApp.io.
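One way to picture how a bidirectional mapping behaves, including the many-to-one case where several JIRA statuses collapse into a single TestApp.io status, is a pair of lookup tables. This is an illustrative sketch of the general technique, not TestApp.io's actual implementation; the values come from the example table above.

```python
# Illustrative sketch of a bidirectional status mapping (not TestApp.io's
# actual implementation). Values come from the example table above.

JIRA_TO_TESTAPPIO = {
    "To Do": "Open",
    "In Development": "Open",  # many-to-one: two JIRA statuses share "Open"
    "QA": "In Progress",
    "Done": "Closed",
}

# The reverse direction is ambiguous wherever the mapping is many-to-one,
# so a sync engine must pick one canonical JIRA status per TestApp.io status.
TESTAPPIO_TO_JIRA = {
    "Open": "To Do",
    "In Progress": "QA",
    "Closed": "Done",
}

def to_testappio(jira_status: str) -> str:
    """Translate a JIRA status; an unmapped status is a sync gap."""
    return JIRA_TO_TESTAPPIO[jira_status]

def to_jira(testappio_status: str) -> str:
    """Translate a TestApp.io status back to its canonical JIRA status."""
    return TESTAPPIO_TO_JIRA[testappio_status]
```

One consequence worth noticing: if "In Development" were translated to TestApp.io and back, it would come out as "To Do", because both map to "Open". That is why it pays to agree on the canonical reverse mapping before enabling sync.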
Similarly, map priority levels between the two systems. TestApp.io uses a priority scale of Low, Normal, High, Critical, and Blocker. JIRA typically uses Lowest, Low, Medium, High, and Highest. Set up the mapping that makes sense for your team's conventions:
| JIRA Priority | TestApp.io Priority |
|---|---|
| Highest | Blocker |
| High | Critical |
| Medium | High |
| Low | Normal |
| Lowest | Low |
Take a few minutes to get these mappings right. They determine how accurately your tasks stay in sync across both systems.
Once field mappings are configured, the webhook-based sync is live. Here is what happens in practice:
A developer updates an issue in JIRA — changes the status from "To Do" to "In Development," adds a comment, or bumps the priority. Within seconds, those changes appear on the corresponding task in TestApp.io. Your QA team sees the update without switching tools or asking for a status update.
A tester finds a bug during a testing session, updates the task priority to "Blocker," and adds a comment with reproduction steps. That change flows back to JIRA immediately. The developer sees the priority change on their JIRA board without anyone having to ping them.
What happens if someone edits the same field in both systems simultaneously? The integration uses a last-write-wins approach with the sync history providing full visibility into what changed and when. In practice, true simultaneous edits are rare, but the audit trail ensures nothing is silently overwritten without a record.
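Last-write-wins with an audit trail can be sketched in a few lines: each incoming change carries a timestamp, the later write is kept, and every decision is appended to a log so nothing is overwritten silently. This is a simplified illustration of the general pattern, not TestApp.io's internal code.

```python
from dataclasses import dataclass, field

@dataclass
class SyncedField:
    """Last-write-wins resolution for one field, with an audit trail."""
    value: str
    updated_at: float  # e.g. a Unix timestamp from the source system
    history: list = field(default_factory=list)

    def apply(self, new_value: str, timestamp: float, source: str) -> bool:
        """Apply an incoming change; keep it only if newer. Always log it."""
        accepted = timestamp >= self.updated_at
        self.history.append((source, new_value, timestamp, accepted))
        if accepted:
            self.value = new_value
            self.updated_at = timestamp
        return accepted

status = SyncedField(value="Open", updated_at=100.0)
status.apply("In Progress", timestamp=105.0, source="jira")  # newer: wins
status.apply("Closed", timestamp=103.0, source="testappio")  # older: logged, ignored
```

After both writes, the field holds "In Progress", but the rejected "Closed" write is still visible in the history, which is the record the sync history surfaces.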
Most teams do not start from zero. You probably have an existing backlog of issues in JIRA that relate to your mobile app. Rather than recreating them manually in TestApp.io, use the import feature.
From the integration settings:
Imported issues become full TestApp.io tasks with bidirectional sync enabled. Any future changes in either system stay synchronized.
A practical tip: do not import everything blindly. Start with issues that are actively being tested or are in your current sprint. You can always import more later.
The reverse scenario is also common: you have been using TestApp.io's built-in task management and now want those tasks reflected in JIRA. The migration feature handles this.
After migration, those tasks exist in both systems with sync enabled going forward. The original TestApp.io tasks are not deleted; they become synced tasks linked to their JIRA counterparts.
The sync history is one of those features you do not think about until you need it — and then you really need it. Every sync event is recorded with:
This is invaluable for debugging. If a tester says "I updated the status an hour ago but JIRA still shows the old value," you can check the sync history and see exactly what happened. Failed syncs can be retried directly from the history view.
Even well-configured integrations occasionally run into hiccups. Here are the most common issues and how to resolve them:
If your JIRA admin modifies the project's workflow (adds new statuses, removes old ones, changes transitions), your field mappings may become stale. When a task moves to a status that is not mapped, the sync cannot determine where to put it.
Fix: Go to integration settings and update your status mappings to include the new JIRA statuses. The sync will resume for any pending changes.
If someone revokes the OAuth grant from the Atlassian side, or if the token expires without a successful refresh, the integration will stop syncing.
Fix: Re-authorize by clicking Connect again in the integration settings. Your existing field mappings and sync history are preserved; only the auth token is refreshed.
If you import issues and then also have someone manually create the same tasks, you can end up with duplicates. The integration tracks linked issues by their JIRA issue key, so manually created tasks are not automatically deduplicated.
Fix: Before importing, communicate with your team that JIRA issues are being pulled in automatically. Delete any manually created duplicates after import.
Network issues or temporary outages can cause webhook deliveries to fail. The sync history will show these as failed events.
Fix: Check the sync history for failed events and use the retry option. If failures persist, verify that your network allows outbound webhook traffic and that no firewall rules are blocking the connection.
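A common pattern behind retrying failed deliveries is exponential backoff: wait a little longer after each failure before giving up for good. Here is a generic sketch of that pattern; the `deliver` callable is a hypothetical stand-in for the actual webhook HTTP call, not a TestApp.io API.

```python
import time

def deliver_with_backoff(deliver, payload, max_attempts=4, base_delay=1.0):
    """Retry a webhook delivery with exponential backoff.

    `deliver` is any callable that raises on failure (e.g. a network
    timeout). Delays grow as base_delay * 2**attempt between tries.
    """
    for attempt in range(max_attempts):
        try:
            return deliver(payload)
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure for manual retry
            time.sleep(base_delay * (2 ** attempt))

# Example: a flaky endpoint that fails twice, then succeeds.
calls = {"n": 0}
def flaky(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("temporary outage")
    return "delivered"

result = deliver_with_backoff(flaky, {"event": "status_changed"}, base_delay=0.01)
```

Transient outages are absorbed by the retries; only a delivery that exhausts every attempt ends up as a failed event needing the manual retry option.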
If the Atlassian user who authorized the integration does not have write access to certain JIRA fields or transitions, syncs that try to update those fields will fail.
Fix: Ensure the authorizing user has sufficient permissions in the JIRA project. They need to be able to create issues, edit fields, transition statuses, and add comments.
After working with teams that run this integration daily, here are some patterns that consistently work well:
The JIRA integration turns two separate tools into a unified workflow. Developers stay in JIRA. Testers stay in TestApp.io. Changes flow automatically, and everyone has the same picture of what is happening.
If you are spending time copying issue details between tools, manually updating statuses, or wondering whether your JIRA board reflects reality, this integration eliminates that overhead.
Set up the connection at portal.testapp.io, and check the help center for additional guides on fine-tuning your integration settings.
But here is where things break down: your developers live in Linear, and your QA process lives somewhere else. A tester finds a critical bug during a testing session. They log it in the testing tool. Now someone has to manually create a matching issue in Linear so the developer sees it. The developer fixes it and moves the Linear issue to "Done." Someone has to go back to the testing tool and update the status there too.
Multiply that by every bug, every status change, every priority update across an entire release cycle, and you have a significant amount of time spent on busywork that adds zero value.
TestApp.io integrates directly with Linear to eliminate this entirely. Bidirectional sync via webhooks keeps both tools in lockstep, in real time, without anyone manually bridging the gap.
Let us be specific about what changes when you connect Linear and TestApp.io.
Every step that involves "copies the details" or "updates the other tool" is a failure point. Details get lost. Statuses drift. And nobody trusts either tool to have the current truth.
No copying. No pasting. No "hey, can you update the ticket" messages. Both tools always reflect the same reality.
The setup takes about five minutes. Here is the complete walkthrough.
In your TestApp.io dashboard, navigate to your version's settings and open the Integrations tab. Find the Linear integration card and click Connect.
This launches Linear's OAuth authorization flow. You will be asked to:
The permissions allow TestApp.io to read and write issues, comments, and team metadata in the workspace you select. It does not request admin-level access to your entire Linear organization.
Click Authorize, and you will be redirected back to TestApp.io with the connection established.
Linear organizes work into teams, and TestApp.io needs to know which team's issues to sync with. After authorization, you will see a dropdown listing the teams in your connected workspace.
Select the team that manages your mobile app development. This creates a one-to-one link between your TestApp.io version and the Linear team.
A few considerations:
Linear and TestApp.io use different status and priority schemas. Field mappings tell the integration how to translate between them.
Linear's default workflow statuses are Backlog, Todo, In Progress, In Review, and Done. TestApp.io has its own set of statuses tailored for QA workflows. You need to define the correspondence:
| Linear Status | TestApp.io Status |
|---|---|
| Backlog | Open |
| Todo | Open |
| In Progress | In Progress |
| In Review | In Review |
| Done | Closed |
| Cancelled | Closed |
This mapping is bidirectional. Moving a task to "Closed" in TestApp.io transitions the Linear issue to "Done." Moving a Linear issue to "In Progress" updates the TestApp.io task accordingly.
If your Linear team uses custom statuses (and many do), map those as well. Every unmapped status becomes a potential sync gap.
Linear uses Urgent, High, Medium, Low, and No Priority. TestApp.io uses Blocker, Critical, High, Normal, and Low. Map them according to your team's understanding of severity:
| Linear Priority | TestApp.io Priority |
|---|---|
| Urgent | Blocker |
| High | Critical |
| Medium | High |
| Low | Normal |
| No Priority | Low |
Get agreement from both your developers and testers on these mappings before finalizing. When a tester marks something as "Blocker," they need to know it shows up as "Urgent" in Linear, and the developer needs to treat it accordingly.
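Because every unmapped status is a potential sync gap, it is worth sanity-checking the configured mapping against your team's full Linear workflow before going live. This is a hypothetical pre-flight check for illustration, not part of TestApp.io's or Linear's API; the mapping values come from the status table above.

```python
# Hypothetical pre-flight check (not part of TestApp.io or Linear's API):
# which of the team's workflow statuses have no mapping configured?

STATUS_MAPPING = {
    "Backlog": "Open",
    "Todo": "Open",
    "In Progress": "In Progress",
    "In Review": "In Review",
    "Done": "Closed",
    "Cancelled": "Closed",
}

def unmapped_statuses(team_statuses, mapping=STATUS_MAPPING):
    """Return the workflow statuses that would fall through the sync."""
    return sorted(s for s in team_statuses if s not in mapping)

# A team with two custom statuses beyond Linear's defaults:
team = ["Backlog", "Todo", "Triage", "In Progress", "In Review",
        "QA Verify", "Done", "Cancelled"]
gaps = unmapped_statuses(team)
```

Here `gaps` flags the two custom statuses ("Triage" and "QA Verify") that need mappings added before changes in those states can sync cleanly.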
With field mappings configured, the integration is ready. Webhook-based sync activates automatically. From this point forward:
If your Linear team already has a backlog of issues, you do not have to start from scratch. The import feature lets you pull existing Linear issues into TestApp.io.
Navigate to the integration's pull tasks option and select which issues to import. You can filter by status, assignee, label, or other criteria. Preview the import to verify the field mappings look right, then confirm.
Each imported issue becomes a synced TestApp.io task. Future changes to that issue in either system flow bidirectionally.
A practical approach: start by importing only issues in active statuses (Todo, In Progress). There is no need to import every closed issue from six months ago. You can always import more later.
The opposite scenario is equally common. You have been using TestApp.io's built-in task management, and now you want those tasks visible in Linear so developers can work with them in their normal workflow.
The migration feature handles this:
Migrated tasks are created as new issues in your Linear team with all the relevant details (title, description, priority, status). From the moment of migration, bidirectional sync is active for those tasks.
The original TestApp.io tasks are not deleted. They become synced tasks linked to their new Linear counterparts.
Every sync event is logged and visible in the integration settings. The sync history records:
This audit trail is critical for two scenarios:
Debugging sync issues: If a task's status does not match between the two tools, the sync history tells you exactly what happened. Maybe the sync failed due to a permissions issue. Maybe it succeeded but the field mapping produced an unexpected result. Either way, you have the data to diagnose the problem.
Compliance and accountability: For teams that need to demonstrate who changed what and when, the sync history provides a complete record of all automated changes. You can trace any field change back to its source system and timestamp.
Failed sync events can be retried directly from the history view, which is useful for transient errors like network timeouts.
Beyond the core sync, there are several capabilities worth knowing about:
Need to pause sync temporarily? Maybe your team is doing a major sprint reorganization in Linear and you do not want a flood of sync events. Toggle the integration off without losing your configuration, mappings, or history. Toggle it back on when things stabilize, and sync resumes from where it left off.
Not every task needs to live in both tools. You can control which tasks sync and which stay local. This is useful for internal QA tasks that developers do not need visibility into, or for Linear issues that are not relevant to the testing workflow.
The integration uses webhooks for real-time sync rather than periodic polling. This means changes appear in seconds rather than minutes. Webhook deliveries that fail (due to network issues, temporary outages) are tracked in the sync history and can be retried.
Having worked with teams that run this integration in production, here are patterns that work well:
If your engineering team runs on Linear and your QA process involves mobile app distribution and testing, this integration eliminates the manual overhead of keeping both systems in sync. Setup takes about five minutes. The time savings compound with every sprint.
Connect your Linear workspace at portal.testapp.io and check the help center for detailed guides on field mapping configurations and advanced sync options.
If your compliance team has ever asked "where are our app binaries stored?" and you could not give a precise answer, this article is for you. We will cover why custom storage matters, how TestApp.io's Bring Your Own Storage feature works, and the practical steps to set it up with Amazon S3, Google Cloud Storage, or Backblaze B2.
App builds are not just code. They contain proprietary business logic, API endpoints, embedded credentials (hopefully not, but often yes), and sometimes sensitive data used for testing. Where these files are stored has real compliance and security implications.
Data residency laws require that certain data stays within specific geographic boundaries. GDPR, for instance, has implications for where data belonging to EU citizens can be processed and stored. If your app is built for a European market and your builds are stored in a US data center by default, your compliance team has a legitimate concern.
With custom storage, you control the region. Create an S3 bucket in eu-west-1 or a GCS bucket in europe-west3, and your builds stay where your compliance requirements say they should.
HIPAA, SOC 2, ISO 27001, FedRAMP: these frameworks all have requirements around data handling, access controls, and audit trails. When your builds live in your own cloud storage, you inherit the compliance controls you have already set up for that cloud account. Your existing encryption-at-rest configuration, access logging, lifecycle policies, and IAM rules all apply automatically.
This is significantly easier than trying to validate that a third-party platform's storage meets all your compliance requirements. Your cloud account is already audited. Your builds are just another set of objects in it.
Many organizations have internal security policies that require all production artifacts to reside in company-managed infrastructure, regardless of specific regulatory requirements. This is a reasonable security posture. Fewer third-party storage locations mean a smaller attack surface and simpler access auditing.
TestApp.io supports three storage providers for Bring Your Own Storage:
The most widely used object storage service. If your organization is on AWS, this is the natural choice. You get full control over bucket region, encryption, versioning, lifecycle policies, and IAM-based access controls. S3 also supports compliance-relevant features like Object Lock (WORM storage) and detailed access logging via CloudTrail.
For organizations on Google Cloud Platform, GCS provides equivalent capabilities: regional and multi-regional buckets, customer-managed encryption keys, IAM integration, and audit logging via Cloud Audit Logs. If your CI/CD pipeline already runs on GCP (Cloud Build, for example), keeping your builds in GCS reduces cross-cloud data transfer.
A cost-effective alternative for teams that need custom storage but do not require the full feature set of AWS or GCP. Backblaze B2 offers S3-compatible APIs, straightforward pricing, and data center locations in the US and EU. For teams where budget is a consideration and compliance requirements are moderate, B2 is a practical choice.
The key concept is straightforward: your app builds are stored in your bucket, while TestApp.io handles distribution.
When a build is uploaded (either manually or through the ta-cli command-line tool from your CI/CD pipeline), the binary goes directly to your configured storage bucket. TestApp.io manages the metadata, distribution links, QR codes, install flow, and access control. Testers still install builds through TestApp.io's interface, mobile app, or shared links. They do not need direct access to your S3 or GCS bucket.
This separation is important. You get the compliance benefits of controlling where data lives, without losing the distribution convenience of a purpose-built platform. Your testers do not need AWS credentials or GCP access. They just tap a link and install.
Here is the practical walkthrough for each provider. For the most up-to-date instructions and screenshots, check help.testapp.io.
- Name the bucket something recognizable, for example `yourcompany-testappio-builds`.
- Scope the credentials to `s3:PutObject`, `s3:GetObject`, `s3:DeleteObject`, and `s3:ListBucket` on that specific bucket. Follow the principle of least privilege.

Once configured, TestApp.io provides clear visibility into your external storage status.
Your storage configuration shows one of three states:
One particularly useful feature: you can disable external storage without losing your configuration. If you need to temporarily switch back to default storage (for troubleshooting, during a credential rotation, or for any other reason), you can disable and re-enable without re-entering all your bucket and credential details.
You can update your storage configuration at any time. Need to rotate credentials? Update the access key without changing the bucket. Need to switch regions? Update the bucket configuration. Changes take effect for new uploads; existing builds remain where they were stored.
Before setting up custom storage, consider these practical points.
You are responsible for the storage costs in your cloud account. For most teams, this is negligible. A typical mobile app build is 50-200 MB. Even at 10 builds per week, you are looking at 1-2 GB per week, which costs pennies on any cloud provider. But if you retain builds indefinitely and build frequently, implement lifecycle policies to archive or delete old builds automatically.
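Lifecycle policies are declarative rules in your own cloud account. As an illustration, an S3 rule that archives builds to Glacier after 30 days and deletes them after 180 could be expressed as the configuration dict boto3's `put_bucket_lifecycle_configuration` accepts. The rule ID, bucket name, and day counts below are examples; tune them to your own retention requirements.

```python
# Example S3 lifecycle configuration for a builds bucket: archive to
# Glacier after 30 days, delete after 180. This is the structure that
# boto3's put_bucket_lifecycle_configuration expects; the rule ID and
# day counts are illustrative.

lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire-old-builds",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 180},
        }
    ]
}

# Applying it would look like this (needs AWS credentials, so not run here):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="yourcompany-testappio-builds",
#     LifecycleConfiguration=lifecycle_config,
# )
```

With a rule like this in place, old builds age out automatically and storage costs stay flat no matter how often you ship.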
Treat the credentials you give TestApp.io with the same care as any service credential. Use dedicated IAM users or service accounts with minimum required permissions. Rotate credentials on a regular schedule (quarterly is a reasonable default). Monitor access logs for unexpected activity.
Build uploads go to your storage bucket, so the upload speed is determined by the network path between the uploader and your bucket. If your CI/CD pipeline runs in the same cloud region as your bucket, uploads will be fast. If your developers are uploading manually from a different continent, consider a bucket region that balances compliance requirements with upload performance.
Your standard cloud backup and DR practices apply. Enable versioning to protect against accidental deletion. Set up cross-region replication if your DR requirements demand it. TestApp.io manages the distribution metadata, but the binaries are in your bucket and subject to your backup policies.
Bring Your Own Storage is available on the Pro plan. It is designed for teams where one or more of the following is true:
If none of these apply and default storage works fine for your team, there is no need to add the complexity of managing your own bucket. But if compliance is a concern, this feature exists so you do not have to choose between meeting your requirements and having a functional distribution workflow.
For more details on the Pro plan and its features, visit testapp.io.
Setting up custom storage takes about 15 minutes if you already have a cloud account:
From that point forward, every build uploaded through TestApp.io, whether manually or through your CI/CD pipeline, lands in your bucket. Your compliance team can point to a specific bucket in a specific region managed by your cloud account. Your distribution workflow stays exactly the same.
That is the point. Compliance should not require sacrificing convenience. Your testers still install via link or QR code. Your CI/CD pipeline still uploads via ta-cli. The only difference is where the bytes land, and now you control that.
Visit help.testapp.io for detailed setup guides with screenshots for each storage provider.
But here's the gap that becomes obvious once your team scales past a handful of testers: Firebase App Distribution is only distribution. Everything that happens after a tester installs your build — bug reports, task tracking, blocker management, release sign-offs — happens somewhere else entirely. You end up stitching together Jira, Slack, spreadsheets, and email threads to cover what should be a single workflow.
If that friction sounds familiar, you're not alone. A growing number of mobile teams are looking for Firebase App Distribution alternatives that consolidate distribution and QA into one platform. Let's break down the specific limitations and what to look for instead.
Firebase distributes builds. That's it. There's no built-in way to create QA tasks, assign them to testers, set priorities, or track completion. Every bug your testers find gets reported through a separate tool — Jira, Linear, GitHub Issues, a shared spreadsheet, or worst of all, a group chat. The disconnect between "here's the build" and "here's what to test" creates overhead that compounds with every release cycle.
For small teams shipping once a week, this is manageable. For teams running multiple builds per day across iOS and Android, the context switching becomes a real productivity drain.
When a tester finds a critical bug that should block a release, how do you track that in Firebase? You don't — at least not within the distribution tool itself. There's no concept of blocker status, no dashboard showing unresolved blockers per version, and no resolution workflow with notes. You're relying on your project management tool to surface this information, and hoping everyone checks it before pushing to production.
Blocker tracking isn't a nice-to-have. It's the difference between catching a crash-on-launch bug before your App Store submission and discovering it from 1-star reviews.
Firebase treats every upload as an isolated event. There's no concept of a version moving through stages — Planning, Development, Testing, Ready, Released, Archived. You can't look at a dashboard and see which versions are in testing versus which are ready for store submission. That lifecycle visibility has to be reconstructed manually from build numbers, timestamps, and team memory.
Shipping to the App Store or Google Play involves a repeatable set of steps: screenshots updated, changelog written, compliance checks passed, stakeholder sign-off obtained. Firebase offers no mechanism for release checklists. Every release cycle, someone has to remember (or re-create) the checklist from scratch. Reusable playbook templates — for iOS App Store submissions, TestFlight distributions, Google Play releases — simply don't exist in Firebase's distribution tooling.
Google has a well-documented history of scaling back or sunsetting products, and various Firebase features have been affected over the years. While Firebase App Distribution is still actively maintained, teams building long-term workflows around it should consider the platform risk. Betting your entire release process on a tool that could be deprecated is a decision worth weighing carefully.
Before jumping to a specific tool, here's the criteria that matter most for teams outgrowing Firebase's distribution-only approach:
TestApp.io is purpose-built for the workflow that Firebase App Distribution doesn't cover: everything that happens between uploading a build and shipping to the store.
Distribution: Upload IPA and APK files. Testers install via direct link, QR code, or the TestApp.io mobile app. Uploads use a chunked, resumable upload protocol, so large builds don't fail on flaky connections. No app review process — builds are available to testers instantly.
Task Management: Built-in Kanban board and table view for QA tasks. Set priorities from Low to Blocker. Assign tasks to specific team members with due dates. Link tasks directly to releases so testers know exactly what to verify against which build. AI-powered task generation can create up to 15 platform-aware QA tasks from your release notes — saving time on repetitive test case creation.
Blocker Tracking: Report blockers directly from tasks or releases. A dedicated dashboard shows blocker counts per version, surfaces warnings when unresolved blockers exist, and provides a resolution workflow with notes. No more guessing whether a release is safe to ship.
Version Lifecycle: Every release moves through defined stages — Planning, Development, Testing, Ready, Released, Archived. Dashboard tabs let you see at a glance what's in testing, what's ready, and what's already shipped.
Playbooks: Create reusable release checklists from templates (iOS App Store, TestFlight, Google Play) or build custom ones. Mark items as required so nothing gets skipped. Run a playbook for every release and track completion across your team.
Launches: Track store submissions through their own lifecycle — Draft, In Progress, Submitted, Released — so you have visibility into what's pending review at Apple or Google.
Integrations: Two-way real-time sync with project management tools (such as Jira and Linear) via OAuth and webhooks. Field mapping, task import/migration, and sync history. Slack integration with rich formatted messages and channel selection. Microsoft Teams support via Power Automate with Adaptive Cards. CI/CD via the ta-cli command-line tool, with support for GitHub Actions, Bitrise, CircleCI, Fastlane, Jenkins, Xcode Cloud, GitLab CI, Azure DevOps, Codemagic, and Travis CI.
Collaboration: Real-time activity feed on every release. Threaded comments with @mentions, emoji reactions, and file attachments. Role-based access control and a team leaderboard with points to encourage testing participation.
TestFlight is free with an Apple Developer account ($99/year) and handles iOS, iPadOS, macOS, watchOS, and tvOS beta distribution. Internal testing supports up to 100 users with no review required. External testing allows up to 10,000 testers but requires Beta App Review, which can take 24-48 hours.
The obvious limitation: no Android support whatsoever. If your team ships on both platforms, TestFlight only covers half the picture. There's also no task management, no blocker tracking, no CI/CD API for managing testers programmatically, and no release checklists. TestFlight is excellent for what it does, but it's a distribution channel, not a QA platform.
Diawi is the simplest option on this list: upload an IPA or APK, get a link and QR code, share it. No account required for basic use. It's ideal for solo developers or quick one-off shares during development.
However, Diawi offers no team management, no CI/CD integration, no version tracking, and no task management. Install links can sometimes be unreliable, and there's no upload retry mechanism. For anything beyond sharing a single build with a few people, Diawi's simplicity becomes a limitation rather than an advantage.
| Feature | Firebase App Dist. | TestApp.io | TestFlight | Diawi |
|---|---|---|---|---|
| iOS Distribution | Yes | Yes | Yes | Yes |
| Android Distribution | Yes | Yes (APK) | No | Yes |
| QR Code Sharing | No | Yes | No | Yes |
| Task Management | No | Yes (Kanban + Table) | No | No |
| Blocker Tracking | No | Yes | No | No |
| Version Lifecycle | No | Yes (6 stages) | No | No |
| Release Checklists | No | Yes (Playbooks) | No | No |
| AI Task Generation | No | Yes | No | No |
| Jira/Linear Sync | No | Yes (2-way real-time) | No | No |
| CI/CD Integration | Yes (CLI, Gradle, Fastlane) | Yes (ta-cli + 10 platforms) | Via Xcode/Fastlane | No |
| Slack/Teams Notifications | No native | Yes (both) | No | No |
| Tester App | Via Firebase console | Yes (dedicated app) | Yes (TestFlight app) | No |
| Store Submission Tracking | No | Yes (Launches) | No | No |
If you're currently using Firebase App Distribution and want to migrate, here's a practical path:
Wire your CI/CD pipeline to upload builds with the ta-cli tool. If you're using GitHub Actions, Bitrise, or another supported CI platform, check the help documentation for step-by-step setup guides.

The migration doesn't have to be all-at-once. You can run TestApp.io alongside Firebase for a few release cycles to validate the workflow before fully switching over.
Firebase App Distribution is a solid, no-frills distribution tool. If all you need is to get builds to testers and you're already deep in the Firebase ecosystem, it works. But if you've been spending hours every sprint wrangling bugs across Jira, Slack, and spreadsheets — trying to answer "is this version ready to ship?" — then the problem isn't your testers or your process. It's that your distribution tool stops at distribution.
TestApp.io bridges that gap: distribute builds, manage QA tasks, track blockers, run release checklists, and sync with your existing tools — all in one platform. It's not trying to replace Firebase's analytics or crash reporting. It's replacing the duct tape you've been using to connect distribution to everything else.
Ready to try it? Sign up at portal.testapp.io — free to start, no credit card required.