If you've run into the 100-tester limit, spent time debugging "app not available" errors because a tester wasn't in the right Google Group, or wished your iOS and Android testers could use the same distribution workflow — you already know the friction points. This article breaks down where Google Play internal testing works, where it falls short, and how TestApp.io offers a faster, simpler alternative.
Google Play Console offers three testing tracks:
Internal testing is what most development teams use for day-to-day QA. You upload a build (APK or AAB), add testers by email, and they install via a Play Store link.
The internal testing track was designed for pre-launch validation, not for the fast iteration cycles that development teams actually need.
Internal testing is capped at 100 testers. That sounds like plenty until you count developers, QA engineers, product managers, designers, stakeholders, and client contacts. For a mid-sized team or an agency managing multiple clients, you hit the ceiling fast.
Every tester needs a Google account, and they must be added to a Google Group or individually by email. External stakeholders, clients, or testers who prefer not to use their personal Google accounts create friction. You can't just share a link — the tester must be on the list first.
If your team also builds for iOS, you need a completely separate distribution pipeline. Most teams end up using TestFlight for iOS and Google Play for Android — two tools, two workflows, two sets of feedback to reconcile.
Google Play Console doesn't have a concept of test tasks. Testers install the build and... that's it. There's no structured way to assign testing areas, track coverage, or collect feedback tied to specific features. Bug reports happen in Slack threads, email chains, or spreadsheets.
There's no native connection to Jira, Linear, Slack, or Microsoft Teams. Every bug found during testing must be manually entered into your issue tracker.
Even on the internal track, builds can take minutes to hours to become available after upload. If you're pushing multiple builds per day during a bug-fix sprint, those delays compound.
Testers install via a special Play Store link that often shows "app not available" if they haven't accepted the testing invitation, aren't signed into the right Google account, or if the build is still processing. Debugging these issues wastes everyone's time.
TestApp.io is built for the development and QA phase — the part of your workflow where speed and feedback matter more than store compliance.
| Capability | Google Play Internal Testing | TestApp.io |
|---|---|---|
| Android distribution | Yes (APK or AAB) | Yes (APK) |
| iOS distribution | No | Yes (IPA) |
| Tester limit | 100 (internal track) | Unlimited |
| Google account required | Yes | No |
| Build availability | Minutes to hours after upload | Seconds after upload |
| Install experience | Play Store link (can show errors) | Direct install link — works immediately |
| Task management | No | Yes — create, assign, and track tasks |
| Jira / Linear sync | No | Yes — bidirectional |
| Slack / Teams notifications | No | Yes |
| CI/CD integration | Via Play Console API or fastlane | Via ta-cli — works with any CI/CD |
| Activity feed | No | Yes — installs, feedback, test progress |
| Release lifecycle | Track-based (internal → closed → open → production) | Version-based with plan, test, approve workflow |
| Cost | Free (requires $25 developer account) | Free tier available — see pricing |
Here's what distribution looks like when you replace Google Play internal testing with TestApp.io during development:
When the build is stable and ready for wider distribution or production, you upload to Google Play Console as usual.
If you're already automating your Android builds, adding TestApp.io distribution is a single step. The ta-cli command-line tool works with any CI/CD system.
Your CI builds the APK, ta-cli uploads it, and your team gets notified — all automated.
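As a rough sketch, that single step boils down to two commands in your pipeline script; the artifact path and secret variable names below are illustrative assumptions, not fixed conventions:

```shell
# Install ta-cli, then publish the APK the CI job just built.
curl -Ls https://github.com/testappio/cli/releases/latest/download/install | bash

ta-cli publish \
  --api_token="$TESTAPPIO_API_TOKEN" \
  --app_id="$TESTAPPIO_APP_ID" \
  --release=android \
  --apk=app/build/outputs/apk/release/app-release.apk \
  --git_release_notes \
  --notify
```

Store the token and app ID as CI secrets rather than hardcoding them in the script.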
Google Play internal testing is still the right choice when:
Switch to TestApp.io for your development and QA distribution when:
Setting up TestApp.io for your Android project takes about two minutes:
You don't have to choose one or the other permanently. Use TestApp.io for daily development builds and fast QA cycles and Google Play Console when you're ready to push toward production.
For most iOS teams, TestFlight is the default choice for beta distribution. But "default" doesn't mean "best fit." TestFlight was designed for App Store beta programs, not for the fast, cross-platform iteration cycles that modern mobile teams need. If you've ever waited hours for a build to process, juggled separate distribution tools for iOS and Android, or wished your testers could file bugs directly from a test build — you already know the friction.
This article is a direct, honest comparison between TestFlight and TestApp.io. We'll cover where TestFlight genuinely excels, where it falls short, and how TestApp.io fills the gaps.
Credit where it's due. TestFlight has real strengths:
For teams that only ship iOS, only need basic feedback, and don't mind the processing delays — TestFlight is a reasonable choice.
The problems start when your team's needs go beyond "upload a build and hope testers find bugs."
TestFlight doesn't support Android. If your team builds for both platforms — which most teams do — you need a completely separate distribution pipeline for Android. That means different tools, different workflows, and different places to track feedback.
External TestFlight builds require Apple's Beta App Review, which can take hours or even days. Internal builds skip review but are limited to 100 testers who must be added to your App Store Connect team — fine for your development team, impractical for a QA group or external stakeholders.
TestFlight collects feedback, but there's no way to turn that feedback into tracked tasks. Testers submit screenshots and comments, and then... it sits in the TestFlight console. You manually copy issues into Jira, Linear, or whatever your team uses. There's no bidirectional sync, no status tracking, no way for testers to see if a bug they reported was fixed.
TestFlight doesn't connect to Jira, Linear, Slack, or Microsoft Teams. Every bug report, every status update, every "did this get fixed?" question requires manual effort.
TestFlight builds expire after 90 days. For most active development, this isn't an issue. But if you need to keep a reference build available for compliance, client demos, or regression testing, you'll need another solution.
You can see install counts, but there's no activity feed, no install timeline, and no per-tester engagement data. When your QA lead asks "has everyone on the team actually installed the latest build?" — TestFlight can't answer that directly.
TestApp.io is built for teams that need to move fast across both platforms. Here's how the two tools compare on the things that matter most during development and testing:
| Capability | TestFlight | TestApp.io |
|---|---|---|
| iOS distribution | Yes (via App Store Connect) | Yes (direct IPA upload) |
| Android distribution | No | Yes (APK upload) |
| Build processing time | Minutes to hours | Seconds after upload |
| Beta App Review required | Yes (external testers) | No |
| Tester limit | 100 internal / 10,000 external | Unlimited |
| Task management | No | Yes — create, assign, and track tasks per release |
| Jira / Linear sync | No | Yes — bidirectional sync |
| Slack / Teams notifications | No | Yes |
| CI/CD integration | Via Xcode Cloud or fastlane | Via ta-cli — works with any CI/CD |
| Install page | TestFlight app required | Direct install link — no app needed |
| Activity feed | No | Yes — see installs, feedback, and test activity |
| Build expiry | 90 days | No expiry |
| Release lifecycle | No | Yes — plan, test, approve, release workflow |
| Cost | Free (Apple Developer account required) | Free tier available — see pricing |
The table tells part of the story. The real difference is in how your day-to-day workflow changes.
TestFlight is still a good choice if:
TestApp.io makes more sense when:
If you want to try TestApp.io alongside or instead of TestFlight:
Your first project takes about two minutes to set up. You'll know within one release cycle whether it fits your workflow better than TestFlight alone.
What if you could upload both your APK and IPA to one place, send a single link to your testers, and get feedback the same day? That’s what TestApp.io is built for.
In this guide, we’ll walk through the full workflow: building your Flutter app for both platforms, uploading to TestApp.io (via the portal, CLI, or CI/CD), and getting your testers up and running in minutes—not days.
Flutter’s cross-platform promise breaks down at distribution time. Here’s what you’re up against:
For a Flutter team shipping to both platforms, managing two separate distribution pipelines is a tax on every release cycle.
TestApp.io was designed for exactly this scenario. Upload your APK and IPA to one place, invite your testers once, and let them install the right build for their device. No app store accounts required. No review gates. No separate workflows for Android and iOS.
Beyond distribution, TestApp.io gives your testers a way to report bugs directly from the app, log blockers, and track feedback—all tied back to the specific release they’re testing. Your team gets a release dashboard, notification integrations with tools like Slack and Microsoft Teams, and task management that syncs with project management tools such as Jira and Linear.
Before uploading anything, you need your build artifacts. Flutter makes this straightforward.
From your Flutter project root, run:
flutter build apk --release

This produces a fat APK (all ABIs) at:

build/app/outputs/flutter-apk/app-release.apk

Tip: Use flutter build apk --split-per-abi to generate architecture-specific APKs, then upload the one matching your testers’ devices.

Building for iOS requires a Mac with Xcode installed. Run:
flutter build ipa --release --export-method ad-hoc

This generates the IPA at:

build/ios/ipa/&lt;YourApp&gt;.ipa

Note: The --export-method ad-hoc flag is important. TestApp.io supports Ad Hoc, Development, and Enterprise signed IPAs. If you omit this flag, Flutter defaults to App Store export, which won’t work for direct distribution. Make sure your provisioning profile includes your testers’ device UDIDs for Ad Hoc builds.

You have three ways to get your builds onto TestApp.io: the web portal, the CLI, or your CI/CD pipeline. Pick whichever fits your workflow.
The simplest path—ideal for one-off builds or when you’re just getting started:
Log into the TestApp.io portal, open your app, and drag and drop your app-release.apk and .ipa file.

That’s it. Testers receive a link, tap to install, and they’re testing your latest Flutter build within minutes.
For developers who prefer the command line, ta-cli lets you publish directly from your terminal. Install it first:
curl -Ls https://github.com/testappio/cli/releases/latest/download/install | bash

Then publish both platforms in a single command:
ta-cli publish \
  --api_token=YOUR_API_TOKEN \
  --app_id=YOUR_APP_ID \
  --release=both \
  --apk=build/app/outputs/flutter-apk/app-release.apk \
  --ipa=build/ios/ipa/YourApp.ipa \
  --release_notes="Fixed login bug, improved performance" \
  --notify

Key flags explained:
- --release: Set to both, android, or ios depending on what you’re uploading
- --apk / --ipa: Paths to your build artifacts
- --release_notes: What changed in this build (up to 1,200 characters)
- --git_release_notes: Automatically pull the last commit message as release notes
- --git_commit_id: Include the commit hash in the release notes for traceability
- --notify: Send push notifications to your team members

You can grab your API token and App ID from your TestApp.io portal under Settings > API Credentials.
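Release notes are capped at 1,200 characters, so if you generate them from commit logs it is worth trimming before upload. A small POSIX shell sketch (the NOTES variable is illustrative, standing in for e.g. `git log` output; it is not part of ta-cli):

```shell
# Trim auto-generated release notes to ta-cli's 1,200-character limit.
NOTES="Fixed login bug, improved performance"
NOTES=$(printf '%s' "$NOTES" | cut -c1-1200)
echo "$NOTES"
```

Pass the result to --release_notes in your publish step.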
This is where the real time savings kick in. Automate the entire build-and-distribute pipeline so every push to your main branch delivers a fresh build to your testers.
Here’s a GitHub Actions workflow that builds your Flutter app for both platforms and uploads to TestApp.io:
name: Build & Distribute Flutter App
on:
push:
branches: [main]
jobs:
build-android:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: subosito/flutter-action@v2
with:
flutter-version: "3.x"
- run: flutter pub get
- run: flutter build apk --release
- uses: testappio/github-action@v5
with:
api_token: ${{ secrets.TESTAPPIO_API_TOKEN }}
app_id: ${{ secrets.TESTAPPIO_APP_ID }}
file: build/app/outputs/flutter-apk/app-release.apk
release_notes: "Android build from commit ${{ github.sha }}"
git_release_notes: true
include_git_commit_id: true
notify: true
build-ios:
runs-on: macos-latest
steps:
- uses: actions/checkout@v4
- uses: subosito/flutter-action@v2
with:
flutter-version: "3.x"
- run: flutter pub get
- run: flutter build ipa --release --export-method ad-hoc
- uses: testappio/github-action@v5
with:
api_token: ${{ secrets.TESTAPPIO_API_TOKEN }}
app_id: ${{ secrets.TESTAPPIO_APP_ID }}
file: build/ios/ipa/YourApp.ipa
release_notes: "iOS build from commit ${{ github.sha }}"
git_release_notes: true
include_git_commit_id: true
notify: true

Store TESTAPPIO_API_TOKEN and TESTAPPIO_APP_ID as GitHub repository secrets. Never hardcode credentials in your workflow files.

The TestApp.io GitHub Action (testappio/github-action@v5) handles installing ta-cli and uploading each artifact. Since the action accepts a single file per step, the workflow runs Android and iOS as parallel jobs for faster builds.
GitHub Actions isn’t the only option. TestApp.io integrates with the CI/CD tools Flutter teams already use:
- Fastlane: Use the testappio Fastlane plugin to upload as part of your lane. Great if you’re already using Fastlane for code signing and build management.

In every case, the pattern is the same: build your artifacts, then call ta-cli or the TestApp.io action to upload. Your testers get notified, install from a link, and you get feedback—all without touching an app store.
Here’s an honest look at how the main distribution options stack up for Flutter teams:
| | TestApp.io | TestFlight | Google Play Internal Testing | Firebase App Distribution |
|---|---|---|---|---|
| Android support | ✅ APK upload | ❌ iOS only | ✅ APK + AAB | ✅ APK + AAB |
| iOS support | ✅ IPA upload | ✅ Native | ❌ Android only | ✅ IPA upload |
| Review required | No | Yes (up to 48h) | No (internal track) | No |
| Tester setup | Email invite + link | Apple ID required | Google account + opt-in | Email invite |
| In-app feedback | ✅ Built-in | Basic screenshots | ❌ None | ❌ None |
| Task management | ✅ Built-in + Jira/Linear sync | ❌ None | ❌ None | ❌ None |
| Notification integrations | ✅ Slack, Teams, email | Email only | Email only | Email + Firebase console |
| CLI support | ✅ ta-cli | ✅ Xcode CLI | ✅ Gradle | ✅ Firebase CLI |
| CI/CD integrations | GitHub Actions, Fastlane, Codemagic, + more | Xcode Cloud, Fastlane | Gradle-based | Fastlane, GitHub Actions |
| Both platforms, one dashboard | ✅ | ❌ | ❌ | ✅ |
TestFlight remains the gold standard for iOS-only teams that need tight App Store integration. Firebase App Distribution is a solid choice if your stack is already Firebase-heavy. But for Flutter teams shipping to both platforms, managing a single distribution pipeline saves real time.
A few things we’ve seen work well for teams distributing Flutter apps:
- Automate uploads from your main branch: distributing every merge to main eliminates the “Can you send me the latest build?” messages from your Slack channel.
- Use git-based release notes: the --git_release_notes flag in ta-cli automatically pulls the last commit message. It takes zero effort and gives testers context on what changed.
- Distribute release builds: always test with --release builds. Debug builds behave differently—they’re slower, include debug banners, and may not surface issues that only appear in release mode.

If you’re building with Flutter and tired of juggling TestFlight, Play Console, and a patchwork of tools to get builds to your testers—give TestApp.io a try. Upload your APK and IPA, invite your team, and start collecting feedback today.
Already have a CI/CD pipeline? Check out the GitHub Actions setup guide to plug in TestApp.io in under five minutes.
Have questions about integrating TestApp.io into your Flutter workflow? Check our pricing page for plan details, or reach out—we’re happy to help.
Whether you’re running a bare React Native project or using Expo’s managed workflow, getting test builds into your team’s hands quickly is what separates teams that ship fast from teams stuck waiting on TestFlight review queues. In this guide, we’ll walk through building release artifacts from both workflows, uploading them for distribution, automating the process with CI/CD, and how the available tools stack up against each other.
Before we get into distribution, let’s clarify something: regardless of whether you’re using the bare React Native CLI or Expo’s managed workflow, the end result is the same—an .apk file for Android and an .ipa file for iOS. These are the standard binary formats that any distribution platform, including TestApp.io, can work with.
The difference is how you produce those files, not what you produce.
For a bare React Native project, building a release APK is done through Gradle. From your project root:
cd android
./gradlew clean
./gradlew assembleRelease

Before this works, you need a signing keystore configured in android/app/build.gradle. If you haven’t set one up yet, generate it with:
keytool -genkeypair -v -storetype PKCS12 \
-keystore my-upload-key.keystore \
-alias my-key-alias -keyalg RSA \
-keysize 2048 -validity 10000

Your signed APK will be at android/app/build/outputs/apk/release/app-release.apk.
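The generated keystore also needs to be wired into Gradle's signing config so assembleRelease uses it. One common pattern, following the React Native docs' naming convention (the property names are an assumption about your setup), keeps credentials out of source control in ~/.gradle/gradle.properties:

```properties
MYAPP_UPLOAD_STORE_FILE=my-upload-key.keystore
MYAPP_UPLOAD_KEY_ALIAS=my-key-alias
MYAPP_UPLOAD_STORE_PASSWORD=*****
MYAPP_UPLOAD_KEY_PASSWORD=*****
```

android/app/build.gradle then references these properties from its signingConfigs block.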
For iOS, you’ll need a Mac with Xcode installed. Open your .xcworkspace file:
open ios/YourApp.xcworkspace

Then archive the app (Product > Archive), choose Ad Hoc or Development distribution in the export dialog, and Xcode produces the .ipa file.

For command-line builds (useful in CI), you can use xcodebuild:
xcodebuild -workspace ios/YourApp.xcworkspace \
-scheme YourApp \
-configuration Release \
-archivePath build/YourApp.xcarchive \
archive
xcodebuild -exportArchive \
-archivePath build/YourApp.xcarchive \
-exportPath build/ \
-exportOptionsPlist ExportOptions.plist

If you’re using Expo, EAS Build handles the compilation in the cloud—no local Android Studio or Xcode required for Android builds.
First, install the EAS CLI and configure your project:
npm install -g eas-cli
eas build:configure

To build installable binaries for tester distribution, set your eas.json build profile to use internal distribution:
{
"build": {
"preview": {
"distribution": "internal",
"android": {
"buildType": "apk"
},
"ios": {
"simulator": false
}
}
}
}

Then build for both platforms:

eas build --profile preview --platform all

This produces an .apk for Android and an .ipa for iOS. Once the builds complete, download them from the EAS dashboard or CLI output.
Note: "distribution": "internal" tells EAS to produce an APK for Android and an ad-hoc signed IPA for iOS. This is exactly what you need for tester distribution outside the app stores.

Once you have your .apk and .ipa files, you have three ways to get them to your testers through TestApp.io:
The simplest approach: log into the TestApp.io Portal, navigate to your app, and drag-and-drop your build file. Your team gets notified instantly, and testers can install the build directly from their device’s browser—no app store review, no waiting.
For developers who prefer the command line, ta-cli lets you publish directly from your terminal:
ta-cli publish \
--api_token YOUR_API_TOKEN \
--app_id YOUR_APP_ID \
--release both \
--apk ./android/app/build/outputs/apk/release/app-release.apk \
--ipa ./build/YourApp.ipa \
--release_notes "Fixed login bug, updated onboarding flow" \
--notify \
--git_release_notes \
--git_commit_id

Key flags:
- --release: android, ios, or both
- --apk / --ipa: Path to your build artifact
- --release_notes: What changed in this build (up to 1,200 characters)
- --notify: Automatically notify your team members
- --git_release_notes: Append the latest git commit message to release notes
- --git_commit_id: Include the git commit hash in release notes
- --archive_latest_release: Automatically archive the previous release on upload

Generate your API token from Portal Settings → API Credentials.
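If your pipeline sometimes produces only one platform's artifact (an Android-only CI job, say), you can derive the --release value instead of hardcoding it. A small shell sketch using the artifact paths from above; treat the fallback behaviour as an illustrative choice:

```shell
# Pick the ta-cli --release value from whichever build artifacts exist.
APK="android/app/build/outputs/apk/release/app-release.apk"
IPA="build/YourApp.ipa"
if [ -f "$APK" ] && [ -f "$IPA" ]; then
  RELEASE="both"
elif [ -f "$APK" ]; then
  RELEASE="android"
elif [ -f "$IPA" ]; then
  RELEASE="ios"
else
  echo "no build artifacts found" >&2
  RELEASE=""
fi
echo "$RELEASE"
```

Feed $RELEASE into `ta-cli publish --release "$RELEASE"` and skip the step entirely when it comes back empty.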
This is where it gets powerful. Wire up your CI/CD pipeline to build and distribute on every push. No manual steps, no forgotten uploads—your testers always have the latest build.
Here’s a complete GitHub Actions workflow that builds a React Native Android APK and uploads it to TestApp.io automatically:
name: Build & Distribute to TestApp.io
on:
push:
branches: [main, develop]
jobs:
build-android:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: 20
cache: npm
- name: Install dependencies
run: npm ci
- name: Build Release APK
run: |
cd android
./gradlew assembleRelease
- name: Upload to TestApp.io
uses: testappio/github-action@v5
with:
api_token: ${{ secrets.TESTAPPIO_API_TOKEN }}
app_id: ${{ secrets.TESTAPPIO_APP_ID }}
file: android/app/build/outputs/apk/release/app-release.apk
release_notes: "Build from commit ${{ github.sha }}"
git_release_notes: true
git_commit_id: true
notify: true

For iOS, you’d add a separate job running on macos-latest with the Xcode archive steps, or use Fastlane to simplify the iOS build and signing process.
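That separate iOS job might look like the following sketch, reusing the xcodebuild commands from earlier in this guide. The scheme name, ExportOptions.plist contents, and code-signing setup are assumptions you would adapt to your project:

```yaml
build-ios:
  runs-on: macos-latest
  steps:
    - uses: actions/checkout@v4
    - name: Install dependencies
      run: npm ci
    # A CocoaPods step (cd ios && pod install) and signing-certificate
    # setup would usually be needed here as well.
    - name: Archive and export IPA
      run: |
        xcodebuild -workspace ios/YourApp.xcworkspace \
          -scheme YourApp \
          -configuration Release \
          -archivePath build/YourApp.xcarchive \
          archive
        xcodebuild -exportArchive \
          -archivePath build/YourApp.xcarchive \
          -exportPath build/ \
          -exportOptionsPlist ExportOptions.plist
    - name: Upload to TestApp.io
      uses: testappio/github-action@v5
      with:
        api_token: ${{ secrets.TESTAPPIO_API_TOKEN }}
        app_id: ${{ secrets.TESTAPPIO_APP_ID }}
        file: build/YourApp.ipa
        release_notes: "iOS build from commit ${{ github.sha }}"
        notify: true
```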
If you’re using Expo, your workflow is even simpler since EAS Build handles the compilation remotely:
name: Expo Build & Distribute
on:
push:
branches: [main]
jobs:
build-and-distribute:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
cache: npm
- name: Install dependencies
run: npm ci
- name: Install EAS CLI
run: npm install -g eas-cli
- name: Build APK with EAS
run: eas build --profile preview --platform android --non-interactive
env:
EXPO_TOKEN: ${{ secrets.EXPO_TOKEN }}
- name: Download build artifact
run: |
BUILD_URL=$(eas build:list --platform android --status finished --limit 1 --json --non-interactive | jq -r '.[0].artifacts.buildUrl')
curl -L -o app-release.apk "$BUILD_URL"
- name: Upload to TestApp.io
uses: testappio/github-action@v5
with:
api_token: ${{ secrets.TESTAPPIO_API_TOKEN }}
app_id: ${{ secrets.TESTAPPIO_APP_ID }}
file: app-release.apk
release_notes: "Expo build from ${{ github.sha }}"
notify: true

Let’s be real: there are other ways to get builds to testers. Here’s how they compare for React Native teams:
| | TestApp.io | TestFlight | Expo EAS Update |
|---|---|---|---|
| Android support | ✅ APK upload | ❌ iOS only | ✅ JS-only OTA |
| iOS support | ✅ IPA upload | ✅ Full support | ✅ JS-only OTA |
| Review required | No | Yes (first build per version) | No |
| Native code changes | ✅ Full binary | ✅ Full binary | ❌ JS bundle only |
| Setup complexity | Low — upload and share | Medium — Apple Developer account, provisioning | Medium — Expo ecosystem required |
| CI/CD integration | ✅ GitHub Actions, Fastlane, CLI | ✅ Via Xcode / Fastlane | ✅ Built-in EAS workflows |
| Team notifications | ✅ Built-in (Slack, Teams, email) | ❌ Manual or via App Store Connect | ❌ Manual |
| Tester device management | ✅ UDID collection + provisioning | ✅ Via Apple Developer portal | ❌ Not applicable (OTA) |
| Cost | Free tier available — see pricing | Requires $99/yr Apple Developer account | Free tier + paid plans |
If you’re reading older React Native guides, you’ll see references to CodePush for over-the-air updates. CodePush is no longer available. Microsoft retired Visual Studio App Center on March 31, 2025, and CodePush went down with it. While Microsoft open-sourced the CodePush server for self-hosting, that’s a significant operational burden for most teams.
For OTA JavaScript bundle updates, Expo’s EAS Update is a solid alternative. But for distributing full native builds—which you need any time you change native modules, update React Native versions, or modify native configuration—you need a proper binary distribution tool.
TestFlight is great for what it does, but it comes with friction for React Native teams:
TestApp.io handles both platforms from one dashboard, with no review queue, instant installation links, and built-in notifications to Slack and Microsoft Teams.
Here’s the workflow that React Native teams use to eliminate distribution as a bottleneck:
No app store reviews. No waiting. No “can you re-send me the build?” messages in Slack.
TestApp.io has a free tier that covers small teams, and setup takes about five minutes. Create your account, add your app, generate an API token, and you’re distributing.
If you’re already using CI/CD, add the GitHub Action or drop ta-cli into your pipeline script. Your testers will thank you.
The TestApp.io mobile app turns every tester's device into a complete testing workstation: testers install builds, work through tasks, and report bugs without leaving their phone. Here is how it works.
Most build distribution tools give testers a link. They tap it, a file downloads, and they figure out the rest. On Android that means hunting for the APK in their downloads folder. On iOS it means navigating provisioning profiles and trust settings.
The TestApp.io app eliminates that friction. When a new build is uploaded, testers receive a push notification. Tapping it opens the release directly. On Android, a single tap starts the download and walks through installation automatically. On iOS, the app provides a QR code or direct link that handles the provisioning flow.
After installation, the button switches from "Install" to "Open" — testers can launch the build without leaving the TestApp.io app. If a newer build comes along, the button changes to "Upgrade" so testers always know when they are behind.
Telling testers "go test the app" without specific guidance leads to shallow, unstructured feedback. That is why the TestApp.io app surfaces tasks directly on the tester's phone.
Each app has a Tasks tab showing what needs to be tested. Tasks include status (new, in progress, blocked, done), priority, assignee, and a link to the specific release they apply to. Testers can update task status as they work — marking items in progress, flagging blockers, or completing them — all without switching to a browser.
If your team uses Jira or Linear, tasks sync bidirectionally. A tester marking a task as "blocked" in the app updates the linked Jira or Linear ticket automatically.
The best bug reports include context. The TestApp.io app lets testers submit feedback with up to 10 attachments — screenshots, screen recordings, or any other images and videos captured on their device.
Every release has a Comments tab where testers write feedback and attach files. Attachments upload in the background, so there is no waiting around. The same comments appear in the portal for developers and PMs who are triaging issues from their desk.
This matters because testers are on the device where the bug lives. They can capture exactly what they see — a glitchy animation, a layout issue on their specific screen size, a crash on their OS version — and attach it to their report in seconds.
One of the most common questions during QA is: "Was this bug in the last version too?" With the TestApp.io app, testers can answer that themselves.
The Releases tab shows every build ever uploaded for an app, with platform and status filters. Testers can install any previous version, reproduce the issue, then install the current build to confirm the fix. No need to ask a developer to dig up an old build and re-share it.
This is especially valuable for regression testing — when you ship a fix, your testers can verify it did not break something that was working before by comparing the old and new builds side by side.
The app does not rely on polling. Real-time updates push changes to every connected device immediately:
This means testers always see the current state of the project. No refreshing, no wondering if they are looking at stale data.
Testers who work across multiple projects can switch between teams from the side menu. Each team has its own set of apps, releases, tasks, and notifications. The switch is instant — all data refreshes to show the selected team's workspace.
If a tester receives a deep link or push notification from a different team than the one they currently have open, the app automatically switches context to the right team.
The TestApp.io app is available on iOS and Android. Testers sign in with their existing TestApp.io account (email, Google, or Apple sign-in) and accept a team invitation to start seeing releases.
For the full setup walkthrough, see the Getting Started with the TestApp.io Mobile App guide in the help center.
If you are managing the distribution side — uploading builds, creating tasks, inviting testers — the Getting Started with TestApp.io guide covers the portal workflow end to end.
TestApp.io helps mobile teams distribute builds to testers, collect feedback, and manage releases — all in one place. Support for iOS (IPA) and Android (APK), with integrations for Slack, Microsoft Teams, Jira, Linear, and 10+ CI/CD platforms.
👉 Get started free — or explore the Help Center to learn more.
Mobile app testing creates a gap that Jira alone cannot fill. A tester installs a build, discovers a crash on a specific device, and needs to report it. They can file a Jira issue manually — typing out the reproduction steps, attaching screenshots, and setting the priority. Then the developer fixes it and moves the Jira issue to "Done". Now someone has to go back to the testing tool to update the status there. Or worse, no one does, and the two systems drift apart.
For teams shipping mobile apps on a weekly or biweekly cadence, this manual sync between Jira and your testing workflow becomes a serious drag on velocity. Missed status updates. Duplicate issues. Bug reports filed in the wrong place. Testers wait for developers to update a ticket; developers assume the tester has already verified.
TestApp.io integrates directly with Jira through Atlassian's OAuth 2.0 to provide real-time, bidirectional sync between your testing tasks and Jira issues. Here is how it works and why it matters for teams shipping mobile apps.
Every mobile release involves two distinct workflows running in parallel:
The problem is not that you use two tools. The problem is that both tools need to reflect the same state, and keeping them in sync manually is unreliable.
Consider what happens during a typical QA cycle:
By the end of the sprint, the Jira board says one thing and the testing tool says another. The team loses confidence in both.
The TestApp.io Jira integration connects the two workflows so that changes in either system propagate automatically. No copying, no pasting, no manual bridging.
The setup uses Atlassian's OAuth 2.0 for secure authorisation:
The entire process takes under five minutes. TestApp.io requests only the permissions it needs to read and write issues in your selected project — it does not ask for admin access to your Atlassian organisation.
Jira and TestApp.io use different schemas for statuses and priorities. Field mappings define the translation layer so data moves correctly between both systems.
Status mapping connects each TestApp.io status to its Jira equivalent:
Priority mapping aligns severity levels so a critical issue in one tool has the same urgency in the other. TestApp.io priorities (Blocker, Critical, High, Normal, and Low) map to Jira priorities (Highest, High, Medium, Low, and Lowest) based on your team's definitions.
Both mappings are fully customisable. If your Jira project uses custom statuses or a modified workflow, you can map every status individually. The same applies to priority levels.
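As a concrete illustration of that translation layer, here is one plausible default pairing of TestApp.io's five priorities with Jira's five. The pairing itself is an assumption — as noted above, your team defines the actual mapping:

```shell
# Translate a TestApp.io priority to a Jira priority (illustrative defaults:
# Blocker, Critical, High, Normal, Low -> Highest, High, Medium, Low, Lowest).
map_priority() {
  case "$1" in
    Blocker)  echo "Highest" ;;
    Critical) echo "High" ;;
    High)     echo "Medium" ;;
    Normal)   echo "Low" ;;
    Low)      echo "Lowest" ;;
    *)        echo "Medium" ;;  # fallback for unmapped custom values
  esac
}

map_priority Critical   # prints "High"
```

The webhook-driven sync applies exactly this kind of lookup, in both directions, on every status or priority change.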
Once connected and mapped, sync happens automatically via webhooks in near real time:
This is not polling on a schedule. Webhooks trigger on every change, so both systems stay in sync without delay. Failed webhook deliveries are logged in the sync history and can be retried.
Most teams starting with the integration already have work in progress in both tools. Two features handle this:
Pull existing Jira issues into TestApp.io with the Pull Tasks feature. Browse your Jira project's issues, select the ones relevant to your current testing cycle, and import them. Each imported issue becomes a synced TestApp.io task — future changes in either direction flow automatically.
A practical approach: import only active issues (those in "To Do" or "In Progress" status). There is no need to import your entire Jira backlog on day one.
Going the other direction, you can push TestApp.io tasks to Jira using the Migrate Tasks feature. Select the tasks, review the status and priority mappings, and confirm. Each task is created as a new Jira issue and linked for ongoing sync.
This is particularly useful when your QA team has been logging issues in TestApp.io and now wants developers to see them on the Jira board without re-entering everything.
With the integration running, here is what a typical testing cycle looks like for a team with developers in Jira and testers in TestApp.io:
No one manually bridges the gap. Both tools are always in sync. Developers never leave Jira. Testers never leave TestApp.io.
Every sync event — successful or failed — is logged in the integration's sync history. Each entry shows:
This matters for two reasons. First, it makes debugging straightforward — if a status is not syncing correctly, the history tells you exactly what happened. Second, it provides accountability for teams that need to track who changed what and when across both systems.
Failed sync events can be retried directly from the history view, handling transient errors like network timeouts without manual intervention.
The integration includes several features that make it production-ready for teams at scale:
For the full feature set, see the Integration Power Features guide.
Mobile app releases face a challenge that web development does not: the testing environment is fragmented across devices, OS versions, and form factors. A bug might only appear on a specific Android device running a specific OS version. The context around that bug — device info, screenshots, reproduction steps — is critical for the developer to fix it efficiently.
When testers file bugs in TestApp.io during real-device testing, that context is captured at the source. The Jira integration ensures it reaches developers without anyone stripping out details or forgetting to attach the screenshot. The developer gets the full picture in Jira. The tester gets status updates in TestApp.io. Both sides have what they need to do their job.
For teams shipping mobile apps on tight schedules, eliminating the manual overhead of keeping Jira and your testing workflow in sync directly translates to faster release cycles and fewer dropped bugs.
Connect your Jira workspace at portal.testapp.io under Team Settings → Integrations. The setup takes about five minutes.
For the full step-by-step setup guide with screenshots, see the Jira Integration help article. For details on task management features, visit Task Management. And if you are also using Linear, we have a dedicated integration for that too.
TestApp.io helps mobile teams distribute builds to testers, collect feedback, and manage releases — all in one place. Support for iOS (IPA) and Android (APK), with integrations for Slack, Microsoft Teams, Jira, Linear, and 10+ CI/CD platforms.
👉 Get started free — or explore the Help Center to learn more.
Add two more people to the equation and everything changes. Suddenly you need a way to distribute builds so testers can install them. You need to know who tested what. You need a place to collect feedback that is not a group chat full of screenshots with no context. You need to track whether a bug was fixed, verified, and ready for release — not just on your machine, but on the actual devices your testers are using.
Mobile teams that ship reliably have figured out this coordination problem. They have a workflow that moves builds from development to testing to release without gaps. And that workflow is what separates teams that ship weekly from teams that spend half their sprint chasing status updates across Slack, email, and spreadsheets.
The bottleneck for most mobile teams is not writing code. It is everything that happens between "the code is merged" and "the app is live in the store." Specifically:
Each of these problems is small on its own. Together, they compound into the reason many mobile teams can only ship every two to four weeks instead of every week.
Here is the release workflow that teams on TestApp.io follow to ship consistently and quickly. It breaks into four phases.
Every release starts with a build. TestApp.io accepts both Android APKs and iOS IPAs. You can upload manually through the web portal or automate it through CI/CD integrations with GitHub Actions, Fastlane, Bitrise, or any pipeline that can make an API call.
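For the "any pipeline that can make an API call" path, the upload step reduces to one scripted HTTP request. The sketch below only assembles that request; the endpoint URL, header names, and form fields are placeholders — consult TestApp.io's API documentation or use its CLI for the real interface.

```python
import os

# Placeholder only -- substitute the real upload endpoint from the API docs.
UPLOAD_URL = "https://example.invalid/upload"

def build_upload_request(artifact_path: str, release_notes: str) -> dict:
    """Assemble what a generic HTTP client needs to post a build artifact."""
    return {
        "url": UPLOAD_URL,
        "headers": {
            # The token comes from a CI secret, never hard-coded in the pipeline.
            "Authorization": f"Bearer {os.environ.get('TESTAPPIO_API_TOKEN', '')}",
        },
        "files": {"file": artifact_path},        # the APK or IPA
        "data": {"release_notes": release_notes},
    }
```

From GitHub Actions, Fastlane, or Bitrise, this becomes a single post-build step: compile, then hand the assembled request to whatever HTTP client the pipeline already ships with.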
When a build is uploaded, two things happen automatically:
Testers install the build on their physical devices with one tap. Android installs directly. iOS installs via an ad hoc or enterprise provisioning profile.
Each build lives inside a version, and each version moves through a lifecycle: planning, testing, approval, and release. This structure replaces the informal "is this build ready?" conversations with a clear visual status.
Within each version, you can create tasks — either manually or by letting AI generate them from your release notes. Tasks have priorities, assignees, and statuses that sync bidirectionally with Jira or Linear if your team uses either tool.
The dashboard shows you everything at a glance: recent releases, active tasks, team activity, and install metrics. No digging through multiple tools to understand where things stand.
Testing is where most workflows break down. Not the testing itself, but the feedback loop. A tester finds an issue — now what? Where do they report it? How do they include device context? How does the developer know about it?
In TestApp.io, testers file feedback directly from their device during or after a testing session. The report automatically includes device model, OS version, and app version. Testers add screenshots, reproduction steps, and priority levels.
These reports become tasks that developers can see immediately — either in TestApp.io's built-in task board or in their Jira/Linear project via the integration sync. If something is a release blocker, the blocker tracking feature flags it with the appropriate severity level so the team can prioritize accordingly.
The activity feed gives the team lead visibility into everything happening in real-time: who installed, who commented, which tasks were updated, which blockers were resolved. No need to ask "has everyone tested?" — you can see it.
Before submitting to the App Store or Google Play, teams need a structured way to verify that everything is ready. Playbooks are reusable checklists that standardize this process. Define the steps once — check crash reports, verify accessibility, confirm localization, test on minimum supported devices — and use the same playbook for every release.
Once the checklist is complete and all blockers are resolved, the version moves to the approval stage. From there, launches let you track the actual App Store and Google Play submissions: review status, approval timelines, and release dates.
No team uses a single tool. The value of a release workflow is how well it connects with the tools you already use.
The individual features — build distribution, task management, blocker tracking, reusable checklists — are useful on their own. But the real value is how they compound.
When your build is distributed automatically, testers start testing sooner. When feedback flows directly into tasks, developers fix bugs faster. When status is tracked in one place, nobody wastes time asking for updates. When checklists are reusable, release quality stays consistent even as the team grows.
Teams that adopt a structured release workflow typically see their time from build to first tester install drop from days to hours. Not because any single step got faster, but because the gaps between steps disappeared.
If your team is currently stitching together a release workflow across email, Slack, spreadsheets, and TestFlight, here is the fastest path to a structured process:
The Getting Started guide walks through each step in detail. For teams migrating from another platform, check the App Center migration guide, TestFlight alternatives, or Firebase alternatives comparison.
That works when you are the only person writing code. It stops working the moment someone else needs to test your builds, report bugs, or help you decide if a release is ready. And it completely breaks down when you have a team of four or five people all working on the same app, shipping updates every week.
This guide is about the transition from solo mobile development to a team release process — what changes, what breaks, and how to set up a workflow that does not collapse under the weight of coordination.
As a solo developer, your "release process" probably looks something like this:
There is no handoff because there is no one to hand off to. There is no feedback loop because you are the tester. There is no status tracking because you know the status — it is whatever you are currently doing.
Now add a team:
Suddenly you need answers to questions that never existed before:
Most teams solve these problems with whatever tools are already lying around: Slack for bug reports, email for build distribution, a spreadsheet for tracking who tested what. It works until it does not, usually around the third or fourth sprint when a bug slips through because the report was buried in a thread.
Scaling from solo to team is not about adopting a dozen new tools. It is about adding structure to three things that were invisible when you worked alone:
Your tester cannot test if they do not have the build. This sounds obvious, but it is the most common bottleneck for small teams. The developer builds, then has to remember to share the APK or IPA, then the tester has to figure out how to install it.
On iOS, this is especially painful. You need provisioning profiles, device UDIDs, and either TestFlight (with its review delays) or ad hoc distribution (with its device limits and certificate management).
The fix is a distribution platform that handles this automatically. Upload the build — either manually or via CI/CD — and everyone on the team can install it on their device from one place. TestApp.io handles both Android APK and iOS IPA distribution with a simple install flow that works on physical devices.
If you are already using a CI/CD pipeline, you can automate the upload so every merge to your release branch distributes a build without anyone manually doing anything.
When a tester finds a bug, two things matter: the details are complete enough for a developer to reproduce it, and the report does not get lost.
"It crashed" in a Slack message is not a bug report. "Layout broken on the settings screen" with no screenshot is barely better. What the developer needs is: which device, which OS version, which app version, what were the steps, and ideally a screenshot or screen recording.
TestApp.io's task management captures this context at the source. When a tester files feedback, device information is included automatically. They add the reproduction steps, screenshots, and severity level. The result is a task that a developer can act on immediately without a follow-up conversation asking "what phone were you using?"
For teams that use Jira or Linear for development work, the Jira and Linear integrations sync these tasks bidirectionally — so developers see the bug in their tool, and testers see the fix status in theirs.
Solo developers know when the release is ready because they decide. On a team, "ready" requires consensus. Has everyone tested? Are there open blockers? Did someone verify that the login flow still works after last week's refactor?
Two features solve this:
The combination gives you a clear answer to "can we ship?" instead of a vague feeling based on who you last talked to.
Here is a practical sequence for teams transitioning from solo to structured:
This alone eliminates the "how do I get the build?" problem. No more emailing APKs, sharing download links in Slack, or walking over to someone's desk with a USB cable.
Now feedback has a home. Bug reports are structured, tracked, and visible to the entire team. No more digging through chat history to find that screenshot someone sent three days ago.
After three weeks, you have a complete workflow: builds are distributed automatically, feedback is collected in structured tasks, and releases follow a repeatable checklist.
Now the tedious parts are automated, and you can focus on what matters: building the app and making sure it works.
Having seen teams go through this transition, a few patterns consistently cause problems:
The transition from solo to team does not require a big-bang process change. Start with the distribution problem — get your builds to your testers without manual effort. Then layer on structured feedback and release checklists as your team needs them.
Create your team at portal.testapp.io and follow the Getting Started guide. If you are coming from another tool, check the App Center migration guide or the comparison guides for TestFlight, Firebase, and Diawi alternatives.
Enterprise mobile teams need distribution infrastructure that matches their security requirements, team complexity, and release velocity. Here's what that looks like in practice.
Talk to any mobile engineering manager at a company with 50+ people touching mobile apps, and the same requirements come up:
Most app distribution tools are built for indie developers or small teams. They solve the "how do I get this APK to my friend" problem. Enterprise teams need something different.
The single biggest concern enterprise teams raise: where are our builds stored?
When you connect your own S3 bucket or Google Cloud Storage to your distribution platform, you get:
This matters for regulated industries — fintech, healthcare, government contractors — where a third party storing your application binaries creates compliance headaches.
A typical enterprise mobile org looks like this:
That's 15-30 people who need different levels of access to different builds. Setting up your workspace with proper team structure from day one prevents the chaos of everyone seeing every build.
The pattern that works for teams of 10+:
Manual uploads don't scale past 3-4 builds per week. At enterprise velocity (daily builds, multiple variants), you need automated distribution.
The setup is straightforward with any major CI/CD tool:
Once connected, every successful build automatically lands in your testers' hands. No Slack messages, no manual downloads, no "which build should I test?"
When 20 people are involved in a release cycle, communication overhead is the real productivity killer. Automated notifications solve this:
The goal is zero-effort distribution: developer pushes code → CI builds → testers get notified → feedback flows back into your issue tracker. No one has to manage the process manually.
Enterprise releases can't ship on vibes. You need verifiable quality criteria:
This is the gap between "we distributed the app" and "we're confident it's ready to ship."
If you're running a mobile team of 10+ and currently managing distribution via TestFlight + Slack messages + shared drives, the path to enterprise-grade distribution takes about an afternoon:
Your team will have professional distribution infrastructure running by end of day — the same setup used by teams of 50 to 100+ who ship weekly without the chaos.
Then your team grows to 10, 20, 50 people. You start shipping weekly. You have QA, product managers, stakeholders, and external beta testers. And suddenly Firebase's simplicity becomes a limitation.
Here's what teams consistently report when they outgrow Firebase — and what they do about it.
A tester finds a bug in your build. In Firebase, they… send you a Slack message? File a Jira ticket manually? There's no way to create, track, or resolve issues inside the distribution workflow.
Teams need built-in task management where bugs discovered during testing are tracked alongside the build that triggered them. Even better: bidirectional sync with Jira or Linear so issues flow automatically between your testing platform and project management tool.
With Firebase, every tester needs a Google account and must be explicitly invited. That works for internal teams but fails for:
Public install pages let anyone install with a link — no account required. You can still control access, but you remove the friction that blocks your testing velocity.
When is a build ready to ship? Firebase can't answer that. It distributes builds — that's it. There's no concept of:
Quality playbooks turn "I think it's ready" into "all 12 checklist items are verified and all 3 blockers are resolved."
Firebase App Distribution lives inside the Firebase Console. Your builds are on Google's infrastructure. Your analytics are in Google's format. If your team uses AWS or Azure, you're running a split infrastructure.
Teams that need external storage on their own S3 or GCS buckets can't do that with Firebase. For regulated industries (fintech, healthcare), this is often a dealbreaker.
With Firebase, you upload a build and hope people install it. You can see download counts, but you can't see:
Activity feeds and installation tracking give team leads visibility into the actual testing progress — not just "it was distributed."
Teams typically switch in an afternoon. The process:
Your existing Firebase testers just need a new install link. No migration tool needed — you're not migrating data, you're upgrading your workflow.
To be fair, Firebase App Distribution works well for:
If that describes your team, stick with Firebase. But if you're reading this, you've probably already hit the ceiling.
See our detailed Firebase App Distribution vs TestApp.io comparison for the full feature-by-feature breakdown, or read our Firebase alternatives guide to understand all your options.
The teams that switch typically share the same story: Firebase was great when they were small, but as soon as they needed task management, quality gates, or team workflows, they needed a dedicated platform built specifically for mobile distribution.
Without a system, you get the familiar chaos: "which build has the fix?", "did QA test this?", "I thought we were shipping Thursday?", and the classic "my build is 3 versions behind."
Here's the release management system that works for teams of 10 to 100+.
If anyone on your team is manually uploading builds, you have a bottleneck. At team scale, distribution must be automated:
Set this up once with GitHub Actions, Fastlane, Bitrise, or any CI tool via TA-CLI, and you never think about it again. Every build reaches testers within minutes of being merged.
The biggest time sink for engineering managers: collecting and organizing feedback from testers, PMs, and stakeholders who all report bugs differently, in different channels.
The fix:
The result: every piece of feedback lives in one place, linked to the build it was found in, and flows into your project management tool automatically.
At team scale, "I think it's ready" is not a release strategy. You need verifiable quality criteria:
Engineering managers use these to answer the daily standup question: "are we on track to ship this week?"
Stop being the person who messages the team every time a build is ready. Automate it:
Here's what a typical week looks like for a team of 15 using this system:
Monday: Developers merge features from the sprint. CI automatically builds and distributes to QA team. Slack notification fires: "Build 4.2.1 (247) ready for testing."
Tuesday-Wednesday: QA tests on iOS and Android. Issues are created in-app, automatically synced to Jira. Two blockers are flagged. Developers see them immediately and start fixing.
Thursday: Blocker fixes are merged. New build is auto-distributed. QA re-tests the specific issues. Both blockers are resolved and marked as fixed in Jira.
Friday: Engineering manager checks the launch playbook: 8/8 items checked. No open blockers. 12 of 15 team members have installed and tested. PM has signed off. Build goes to production.
Total time the engineering manager spent on distribution logistics: approximately zero.
If you're currently managing releases through a combination of TestFlight, Slack, Google Drive, and prayer, here's how to set up a proper system in one afternoon:
The whole setup takes an afternoon. By next week, your team will have the same release infrastructure used by teams of 50+ who ship every week without the chaos.
Is this a blocker? Who decides? Where does it get tracked? Does the release go out anyway because the deadline is today and someone in management said "we committed to this date"?
If your team has shipped a critical bug because a blocker got lost in a Slack thread or buried in a long Jira backlog, you already know the cost. App store review rejections. One-star reviews. Emergency hotfixes on a Saturday. Trust erosion with your users.
Blocker tracking exists to prevent exactly this. Not as another process to follow, but as a dedicated mechanism that ensures critical bugs can't be ignored, forgotten, or deprioritized into oblivion.
Let's define terms clearly, because "blocker" gets used loosely in many teams.
A blocker is an issue with the highest possible priority — one that must be resolved before a release can ship. It's not a "nice to fix." It's not a "we should probably look at this." It's a hard stop.
Common examples of blockers:
Common examples of things that are not blockers (even if they're annoying):
The distinction matters because when everything is a blocker, nothing is. Teams that over-use the blocker label create noise. Teams that under-use it ship broken software. The goal is precision.
Most teams don't lack a way to report bugs. They have Jira, Linear, GitHub Issues, Asana, or a dozen other tools. The problem is that blockers don't get special treatment in these systems. They're just another priority level in a list of hundreds of issues.
Here's what typically goes wrong:
TestApp.io treats blockers as a first-class concept, not just another priority level. Here's how the system works end to end.
There are two primary ways to report a blocker:
1. From task creation. When creating a new task in the task management system, set the priority to Blocker. This is the highest priority level available, above Critical, High, Normal, and Low. The task is immediately flagged across the system.
2. From a release. When a tester is working with a specific build and discovers a blocking issue, they can report the blocker directly from the release. This creates a task with Blocker priority that is automatically linked to the specific release where the issue was found. This linkage is important — it answers the question "which build has this problem?" without any manual effort.
Both paths result in the same outcome: a tracked blocker that surfaces everywhere it needs to.
This is where dedicated blocker tracking diverges from generic issue tracking. In TestApp.io, blockers don't just exist in a task list — they surface prominently across multiple views:
App Dashboard — Blocker Count Badge. The main dashboard for each app shows a blocker count. You don't have to dig into task lists or run filtered searches. The number is right there, impossible to miss. If it's not zero, you know there's a problem.
Version Overview — Warning Indicators. When viewing a version's overview, any open blockers trigger warning indicators. This is critical during the Testing and Ready phases of the version lifecycle. A version with open blockers is visually flagged as not-ready, regardless of what anyone says in a meeting.
Release List — Flagged Releases. Individual releases (builds) that have blockers reported against them are flagged in the release list. When scrolling through builds, you can immediately see which ones have known blocking issues. This prevents testers from wasting time on builds that are already known to be broken.
The design principle here is simple: blockers should be unavoidable. You shouldn't have to go looking for them. They should be in your face until they're resolved.
Finding and reporting blockers is only half the battle. The other half is resolving them with a clear, auditable process.
When a blocker is resolved in TestApp.io, the resolution captures several pieces of information:
This resolution data feeds into the audit trail for the version, creating a complete record of every blocker's lifecycle: when it was reported, on which build, by whom, how it was resolved, by whom, and when.
Over time, blocker data becomes a powerful diagnostic tool for your release process. TestApp.io tracks blocker metrics that help you answer important questions:
SLA tracking adds a time dimension to this. You can monitor whether blockers are being resolved within acceptable timeframes and identify when resolution is lagging behind expectations.
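As a sketch of that time dimension: given the (reported, resolved) timestamps for closed blockers, resolution time and SLA breaches are simple derived metrics. The 24-hour window below is an example value, not a TestApp.io default.

```python
from datetime import timedelta

def resolution_times(closed_blockers):
    """Durations for (reported_at, resolved_at) timestamp pairs."""
    return [resolved - reported for reported, resolved in closed_blockers]

def sla_breaches(closed_blockers, sla=timedelta(hours=24)):
    """Count blockers that took longer than the SLA window to resolve."""
    return sum(1 for d in resolution_times(closed_blockers) if d > sla)
```

A rising breach count across releases is the kind of trend worth raising in a retrospective, even when each individual blocker had a reasonable excuse.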
Blocker tracking doesn't exist in isolation — it's deeply connected to the version lifecycle. Here's how they interact at each stage:
Planning and Development: Blockers are less common here since there may not be testable builds yet. But they can exist — for example, a known issue carried over from a previous version that must be addressed before this one ships.
Testing: This is where most blockers are discovered. As testers work through builds, they report blockers that surface prominently on the version's Quality tab. The blocker count becomes the primary metric for release readiness during this phase.
Ready: Moving a version to Ready status is a statement that the version is shippable. Open blockers directly contradict this. The blocker count on the version overview serves as a quality gate — it's a clear, objective signal that the version isn't actually ready if the count is greater than zero.
Released: If a blocker is discovered after release (it happens), it can still be tracked against the version. This feeds into post-release metrics and may trigger a hotfix version.
This integration means blocker tracking isn't a separate process bolted onto your workflow. It's woven into the progression of every release.
Let's walk through the scenario from the introduction with proper blocker tracking in place.
4:30 PM Friday. Your team has version v3.2.0 in Testing status. Three builds have been uploaded this week via CI/CD. The latest build, uploaded two hours ago, is the release candidate.
4:32 PM. A tester discovers that the payment flow crashes on iOS 17 when the user has no saved payment methods. They report a blocker directly from the release. The task is created with Blocker priority, linked to the specific build.
4:33 PM. The blocker count on the app dashboard updates to 1. The version overview shows a warning indicator. The release is flagged in the release list. Everyone with access can see this immediately — no Slack message required.
4:35 PM. The team gets a Slack notification (via the Slack integration) about the new blocker. The notification includes the blocker description, which build it affects, and who reported it.
4:40 PM. The lead developer picks up the blocker, reproduces it, and identifies the issue — a nil check that was missed in a recent refactor. The fix is straightforward.
5:15 PM. The fix is pushed. CI/CD runs, and a new build is automatically uploaded to the version's releases via ta-cli.
5:20 PM. The tester installs the new build from the release link, verifies the fix, and the developer resolves the blocker with notes: "Added nil check for saved payment methods array. Verified on iOS 17.2 simulator and physical device."
5:22 PM. Blocker count drops to 0. Version overview shows no warnings. The Quality tab confirms no open blockers.
5:25 PM. The team reviews the Quality tab one more time, confirms everything looks clean, and moves the version to Ready. The release manager will submit to the App Store on Monday morning.
Everyone goes home on time.
Compare this to the alternative: the bug is reported in Slack, gets buried under replies, someone half-remembers it on Monday, the version ships without the fix, and a one-star review appears by Tuesday.
Here are practical recommendations for getting the most out of blocker tracking:
Every team should have a shared definition of what makes something a blocker versus a critical or high-priority bug. Write this down in your team's onboarding docs or wiki. Ambiguity here leads to either over-reporting (which creates noise) or under-reporting (which defeats the purpose).
A simple framework: If this bug were in production, would it cause immediate harm to users or the business? If yes, it's a blocker.
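Written as code, the framework above becomes a triage helper the whole team can read in one glance. The category names and the mapping are illustrative — each team should encode its own written-down definitions:

```python
def triage(immediate_harm: bool, degrades_core_flow: bool) -> str:
    """Answer 'would this cause immediate harm to users or the business
    in production?' and map it to a priority level."""
    if immediate_harm:
        return "Blocker"       # hard stop: the release cannot ship
    if degrades_core_flow:
        return "Critical"      # fix soon, but not a release gate
    return "Normal"
```

The point is not the code itself but the precedence: the one harm question is evaluated first, so nothing cosmetic can ever be labeled a blocker by accident.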
When you report a blocker from a specific release, it's automatically linked to that build. This context is valuable — it tells the developer exactly which build to reproduce the issue on and gives the team traceability from bug to build to fix to verification.
"Fixed" is not a resolution note. "Added nil check for savedPaymentMethods array in CheckoutViewController. Crash was caused by force-unwrapping an optional that is nil when user has no saved cards. Verified fix on iOS 17.0, 17.2, and 17.4" — that's a resolution note. Future team members will thank you.
During your retrospective (you are doing retrospectives, right?), pull up the blocker metrics. Look at:
Trends in these metrics are more informative than any single data point.
It's tempting to ship with an open blocker when there's pressure from stakeholders or a hard deadline. Resist this. The entire point of blocker tracking is to provide an objective signal. If you override it routinely, you've just built a system that everyone ignores.
If a deadline is truly immovable, the correct response is to scope down the release — remove the affected feature or screen — not to ship a known blocker.
Connect TestApp.io to Slack or Microsoft Teams so blocker notifications are automatic. The faster the team knows about a blocker, the faster it gets resolved. Slack integration supports channel selection and event configuration, so you can route blocker notifications to a dedicated release channel without spamming your general channel.
Tools can only do so much. Blocker tracking works best when it's backed by team culture:
Blocker tracking isn't glamorous. It's not the feature you showcase in a demo. But it's the feature that prevents your most painful days — the emergency hotfixes, the weekend deploys, the apologetic emails to users.
The core idea is simple: critical bugs deserve dedicated, visible, enforceable tracking that's connected to your releases and your version lifecycle. When blockers can't hide in long task lists, when they surface on every dashboard, and when their resolution is documented and measurable — you ship better software.
Not because you have fewer bugs (you'll always have bugs), but because the ones that matter most can't slip through.
Start using TestApp.io to bring structured blocker tracking to your mobile releases. Check the help center for setup guides and detailed documentation.
Version management shouldn't be this painful. But for most mobile teams, it is — because they're stitching together tools that were never designed to track the full lifecycle of a mobile release.
This guide walks through how to manage the complete version lifecycle in TestApp.io, from the first planning session to the final archive. If you're tired of ambiguity around what's shipping, when, and whether it's actually ready, read on.
Before diving into the solution, let's be honest about why version management falls apart. Most teams start with good intentions — a Slack channel, a Notion doc, maybe a Jira epic per release. But these approaches share common failure modes:
The result is predictable: missed bugs, confused testers, delayed releases, and a lot of time spent in "what's the status?" meetings that shouldn't need to exist.
TestApp.io provides a structured version lifecycle that gives every release a clear, trackable progression from initial planning through final archival. Each version moves through defined statuses, and every artifact — builds, tasks, blockers, launch submissions — is connected to the version it belongs to.
Here's what the lifecycle looks like at a high level:
Planning → Development → Testing → Ready → Released → Archived
Each status represents a distinct phase with its own activities, expectations, and quality gates. Let's walk through every stage.
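The six-stage progression can be pictured as a tiny state machine. The status names below come straight from the lifecycle above; the transition rule (versions advance one stage at a time) is an illustrative assumption, not TestApp.io's actual implementation:

```python
# Illustrative sketch of the version lifecycle described above.
# Status names mirror the article; the forward-only transition rule
# is an assumption for illustration.

LIFECYCLE = ["Planning", "Development", "Testing", "Ready", "Released", "Archived"]

def can_advance(current: str, target: str) -> bool:
    """A version normally moves forward one stage at a time."""
    i, j = LIFECYCLE.index(current), LIFECYCLE.index(target)
    return j == i + 1

print(can_advance("Testing", "Ready"))      # True
print(can_advance("Planning", "Released"))  # False: skips three stages
```

In practice you would also allow backward moves (for example, Testing back to Development when a blocker lands); the point is that transitions are explicit rather than implied.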
Every version starts in the Planning status. This is where you define what's going into the release before any code is written or any builds are uploaded.
During planning, you'll typically:
- Name the version (e.g., v2.5.0) along with any relevant notes about scope or goals.

The Planning tab within the version dashboard gives you a focused view of all tasks associated with the version. You can see what's assigned, what's prioritized, and what's still unscoped.
If you're creating a version around a set of release notes or a feature description, TestApp.io can generate up to 15 QA tasks automatically using AI. These tasks are platform-aware, meaning they'll account for iOS-specific or Android-specific testing needs. It's a fast way to bootstrap your testing plan without starting from a blank slate.
Once planning is complete, move the version to Development. This signals to the team that active work is underway.
During development, the version dashboard becomes a coordination hub:
- Builds upload automatically from your CI/CD pipeline via ta-cli, so every successful build is captured and linked.

The key benefit here is visibility. Instead of asking "did the latest build get uploaded?" in Slack, you can see it directly in the version dashboard.
Moving to Testing status tells the team that the version is ready for QA. Builds are available, and testers should be actively validating.
This is where the version dashboard really shines:
The Testing phase is where quality gates become critical. TestApp.io tracks blockers — the highest-priority issues that must be resolved before a release can ship. We'll cover blocker tracking in depth in a separate post, but the key point is this: blocker counts are visible on the version dashboard, and they serve as a clear signal of release readiness.
If a version has open blockers, it's not ready. Period. This removes the subjective "I think it's fine" conversations and replaces them with objective criteria.
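That objective gate is simple enough to express in code. A minimal sketch, assuming a hypothetical task shape with `priority` and `status` fields (not TestApp.io's actual data model):

```python
# Sketch of the objective release gate described above: a version
# with any open blockers is not ready, regardless of opinion.
# The task dictionaries are hypothetical, for illustration only.

def is_release_ready(tasks: list[dict]) -> bool:
    open_blockers = [t for t in tasks
                     if t["priority"] == "Blocker" and t["status"] != "Closed"]
    return len(open_blockers) == 0

tasks = [
    {"title": "Crash on login", "priority": "Blocker", "status": "Open"},
    {"title": "Typo in settings", "priority": "Low", "status": "Open"},
]
print(is_release_ready(tasks))  # False: one open blocker remains
```

Low-priority open tasks don't hold the release; only blockers do — which is exactly why the blocker/high-priority distinction matters.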
A version moves to Ready when testing is complete and all quality gates are passed. This means:
The Ready status is a holding state — it means the version is approved for release but hasn't been submitted or shipped yet. This is useful for teams that have a scheduled release cadence or need sign-off from a release manager before going live.
Once the version is live — whether that means submitted to the App Store, pushed to Google Play, or distributed to your full user base — it moves to Released.
This is also where Launches come into play. Launches are TestApp.io's way of tracking store submissions attached to a version. A launch progresses through its own statuses:
Draft → In Progress → Submitted → Released
You can track exactly where your App Store or Google Play submission stands without leaving the version dashboard. This is especially useful for teams that submit to multiple stores or have staggered rollouts across platforms.
Before marking a launch as submitted, many teams use Playbooks — reusable checklists that ensure nothing is missed. TestApp.io includes templates for common scenarios:
You can also create custom playbooks with required items, so critical steps can't be skipped. Think of them as pre-flight checklists for your release.
After a version has been released and enough time has passed, move it to Archived. This keeps your active version list clean while preserving the full history of what happened — every build, every task, every comment, every status change.
Archived versions remain fully searchable and browsable. You're not deleting anything; you're decluttering your workspace.
Each version in TestApp.io has a dedicated dashboard with five tabs. Here's what each one gives you:
| Tab | What It Shows |
|---|---|
| Overview | Version summary — current status, key metrics, recent activity, blocker count, and quick links to important artifacts. |
| Planning | All tasks associated with the version. Filter by assignee, priority, or status. Kanban board and table views available. |
| Releases | Every build uploaded for this version. Platform, file info, upload date, install links, and distribution status. |
| Quality | Blocker tracking, testing metrics, and quality indicators. The go-to tab for answering "is this version ready to ship?" |
| Settings | Version configuration — name, description, target dates, and other metadata. |
Having all of this in one place eliminates the context-switching tax of jumping between Jira, Slack, spreadsheets, and your CI dashboard.
Let's compare what release week looks like with and without structured version management.
The difference isn't magic — it's structure. When every artifact, status change, and quality signal lives in one connected system, releases become predictable instead of chaotic.
Every action taken on a version is recorded in an audit trail. This includes:
This isn't just for compliance — though it helps there too. The audit trail is invaluable for post-mortems. When a release goes sideways, you can reconstruct exactly what happened without relying on anyone's memory.
If you're transitioning from ad-hoc release tracking, here are some practical suggestions:
Don't try to retroactively organize past releases. Create a version for your next upcoming release and use it as a pilot. Let the team experience the workflow before rolling it out broadly.
The biggest time-saver is automatic build uploads. Set up ta-cli in your CI/CD pipeline so every successful build automatically appears in the version's Releases tab. This eliminates the "where's the latest build?" question entirely.
Make it a team norm: if a bug could prevent the release from shipping, it's a blocker. Report it as a blocker, not just a high-priority task. The distinction matters because blocker counts are surfaced prominently across the dashboard.
If your team uses project management tools like Jira or Linear, connect them. Two-way sync means tasks created in those tools automatically appear in your version's planning tab, and status changes flow in both directions in real time. This avoids duplicate data entry and keeps everyone working in their preferred tool.
Start with the built-in templates for App Store or Google Play submissions. Customize them over time as you learn what your team's specific pre-release checklist looks like. The goal is to make "did we forget something?" a question that never needs to be asked.
Spend 15 minutes after each release reviewing the audit trail. Look for patterns: Are blockers consistently found late in the cycle? Are certain types of tasks always underestimated? The data is there — use it to improve your process.
Version lifecycle management isn't about adding process for the sake of process. It's about replacing ambiguity with clarity. When every team member can look at a version dashboard and immediately understand what's planned, what's built, what's tested, what's blocking, and what's shipped — releases stop being stressful events and start being routine operations.
TestApp.io's version lifecycle gives you the structure to make that happen, without forcing you into a rigid workflow that doesn't fit your team. The six stages are a framework, not a straitjacket. Use them as guardrails, and let the connected dashboard, blocker tracking, and audit trail handle the rest.
Ready to bring order to your release process? Get started with TestApp.io and create your first version today. For detailed setup instructions, visit the help center.
The result? Some features get tested rigorously. Others barely get a glance. And when a bug ships to production, the postmortem always comes back to the same root cause: "We did not test that scenario."
TestApp.io's AI task generation reads your release notes and produces targeted, platform-aware QA tasks that cover the changes in that build. It does not replace your testers' judgment. It gives them a comprehensive starting point so nothing falls through the cracks.
Here is the core concept: when you upload a new build to TestApp.io, you include release notes describing what changed. The AI reads those notes along with your app's context (description, platform, previous patterns) and generates up to 15 QA task suggestions tailored to that specific build.
These are not generic "test the login flow" tasks. They are targeted to the actual changes. If your release notes say "Fixed crash when rotating device on the payment screen," the AI generates tasks like verifying the rotation behavior on the payment screen across different device orientations, checking that the payment flow completes after rotation, and testing edge cases like rotating mid-transaction.
The generated tasks are suggestions, not mandates. You review them, edit what needs adjusting, remove what is irrelevant, and bulk-create the ones you want. They land on your task board as real tasks with priorities and assignees, ready for your testing workflow.
Let us compare the two approaches on a real-world release.
Say your latest build includes these changes:
- Added dark mode support for all main screens
- Fixed crash when uploading images larger than 10MB
- Improved loading time for the dashboard by 40%
- Added pull-to-refresh on the notifications screen
- Fixed incorrect badge count after clearing notifications
- Updated minimum supported iOS version to 15.0

A QA lead reads the notes and creates tasks. On a busy day, this is what gets written:
Four tasks for six changes. The badge count fix and the iOS version update are not covered. Two of the four tasks lack enough detail for a tester to execute them without asking follow-up questions.
This is not because the QA lead is careless. They are busy, they are context-switching between three releases, and writing detailed QA tasks is mentally taxing work that happens at the end of an already full day.
The AI reads the same release notes and generates something closer to this:
Fifteen tasks covering all six changes, with specific test scenarios, edge cases, and platform considerations. A tester can pick up any of these and execute them without ambiguity.
The time investment? A few seconds to click "Generate Tasks" and a couple of minutes to review and adjust. Compare that to 20-30 minutes of manual task writing that still misses scenarios.
Here is the step-by-step workflow.
When you upload a new build to TestApp.io — whether through the dashboard, the CLI (ta-cli), or your CI/CD pipeline — include release notes describing what changed in this build.
The more specific your release notes, the better the AI's output. More on this later in the tips section.
Once the build is uploaded and processed, go to the release in your TestApp.io dashboard. You will find the release notes displayed along with the build details.
Look for the Generate Tasks option associated with the release. Clicking it sends the release notes, along with your app's context (app description, platform — iOS or Android), to the AI engine.
The generation takes a few seconds. When it completes, you see a list of suggested QA tasks.
This is the important part. AI-generated tasks are suggestions, not final outputs. Review each one with your tester's eye:
Click into any generated task to modify it before creation. You can change:
Think of this as a review pass, not a rewrite. The AI gives you 80% of the content; you add the 20% that requires human context.
Once you have reviewed and edited the suggestions, select the ones you want to keep and bulk-create them. They immediately appear on your task board as real tasks, ready to be assigned and worked on.
You can create all 15 suggestions, or just the 8 that are most relevant. There is no obligation to accept everything the AI generates.
The quality of AI-generated tasks depends on the context available. Here is what the AI uses:
This is the primary input. The AI parses the release notes to understand what changed, what was fixed, what was added, and what was modified. Structured release notes (bullet points, categorized changes) produce better results than a single paragraph of prose.
Your app's description in TestApp.io provides background context. If your app is described as a "financial services app for iOS and Android," the AI can factor in domain-specific concerns like security, data accuracy, and compliance-related testing.
The AI knows whether the build is for iOS or Android and tailors tasks accordingly. An iOS build might get tasks related to iOS-specific behaviors (like permission dialogs, App Transport Security, or device rotation). An Android build gets tasks relevant to Android's ecosystem (like varied screen sizes, back button behavior, or permission handling).
This platform awareness means you do not have to mentally filter out irrelevant platform suggestions. The tasks are already scoped to the right platform.
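As a rough mental model of that scoping, here is a sketch that pairs each change with the platform concerns named above. The function name, data structure, and task wording are hypothetical — TestApp.io's actual generation is AI-driven, not a lookup table:

```python
# Illustrative sketch of platform-aware task scoping: a baseline
# verification task plus the platform-specific concerns listed in
# the article. Not TestApp.io's generation logic.

PLATFORM_CHECKS = {
    "ios": ["permission dialogs", "App Transport Security", "device rotation"],
    "android": ["varied screen sizes", "back button behavior", "permission handling"],
}

def scope_tasks(change: str, platform: str) -> list[str]:
    tasks = [f"Verify: {change}"]
    tasks += [f"Check {concern} around: {change}"
              for concern in PLATFORM_CHECKS[platform]]
    return tasks

for task in scope_tasks("pull-to-refresh on notifications", "android"):
    print(task)
```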
Generated tasks do not live in a separate silo. Once created, they are full-fledged tasks on your TestApp.io task board with all the standard capabilities:
This last point is worth emphasizing. If you are using the JIRA or Linear integration, AI-generated tasks flow into your developers' issue trackers just like any other task. The developer does not need to know or care that the task was AI-generated. It appears on their board like any other issue.
The quality of the output directly correlates with the quality of the input. Here are practical tips for getting the most useful task suggestions.
Compare these two versions of the same change:
Vague: "Fixed bugs and improved performance"
Specific: "Fixed crash on payment screen when rotating device during transaction. Improved dashboard load time from 3.2s to 1.8s by optimizing API calls."
The vague version gives the AI almost nothing to work with. The specific version produces targeted, testable tasks.
Structure your release notes as a bulleted list of changes. Each bullet becomes a potential source of one or more test tasks. A paragraph of prose is harder for the AI to parse into distinct, testable changes.
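The reason bullets help is mechanical: each one is a cleanly separable change. A minimal sketch of that first parsing step (illustrative only — not TestApp.io's parser):

```python
# Why bulleted notes split cleanly into distinct, testable changes,
# while a paragraph of prose does not. Illustrative sketch only.

def split_changes(release_notes: str) -> list[str]:
    """Each '- ' bullet becomes one candidate source of test tasks."""
    return [line.lstrip("- ").strip()
            for line in release_notes.splitlines()
            if line.strip().startswith("-")]

notes = """\
- Added dark mode support for all main screens
- Fixed crash when uploading images larger than 10MB
- Added pull-to-refresh on the notifications screen
"""
print(split_changes(notes))  # three distinct, testable changes
```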
"Added pull-to-refresh on notifications" tells the AI what changed. "Added pull-to-refresh on notifications to resolve user complaints about stale notification data" also tells it why, which can produce more thoughtful edge-case tasks (like testing with stale cache data or poor network conditions).
If a change only affects certain OS versions, device types, or configurations, mention it in the notes. "Updated minimum iOS version to 15.0" gives the AI explicit information to generate version-boundary testing tasks.
"Fixed login bug and redesigned the settings page" is two changes that should be two bullets. Separating them helps the AI generate distinct tasks for each change rather than conflating them.
The best workflow is: generate tasks, take a short break or switch context, then come back and review. Fresh eyes catch the suggestions that are too generic or miss your app's specific edge cases.
AI task generation is most valuable in these scenarios:
To be clear about the boundaries: AI task generation does not replace exploratory testing, domain expertise, or the intuition that experienced testers develop over years. It will not catch the subtle interaction bug that only happens when you navigate between three specific screens in a particular order while on a slow network.
What it does is handle the routine, systematic task creation that takes up a disproportionate amount of QA planning time. It ensures that every change in the release notes has corresponding test coverage. It catches the obvious tasks so your testers can spend their energy on the non-obvious ones.
Think of it as a QA task first draft. A really good first draft that covers the fundamentals, leaving your team free to add the nuanced, experience-driven test scenarios that no AI can generate.
If you are manually creating QA tasks from release notes today, AI task generation can reclaim that time and improve your test coverage simultaneously. The workflow is simple: upload a build with release notes, generate tasks, review, create.
Try it on your next release at portal.testapp.io. Write detailed release notes, generate the tasks, and compare the output to what you would have created manually. Most teams find the AI catches scenarios they would have missed.
For additional details on task management workflows, check the help center.
This disconnect is not just annoying. It costs real time. Every manual copy-paste, every "hey, did you update the ticket?" Slack message, every missed status change adds friction to a process that should be seamless.
TestApp.io's JIRA integration solves this with genuine 2-way, real-time sync. Changes flow in both directions automatically. No middleware, no Zapier workarounds, no cron jobs. Here is how to set it up from scratch, and how to get the most out of it once it is running.
Before diving into setup, here is a clear picture of what you get:
The integration uses Atlassian's OAuth 2.0 flow, which means you are not handing over API tokens or service account credentials. Here is how to get started:
In your TestApp.io dashboard, go to your version's settings and find the Integrations tab. You will see JIRA listed as an available integration.
Click Connect on the JIRA integration card. This redirects you to Atlassian's OAuth consent screen. You will need to:
Once authorized, you are redirected back to TestApp.io with the connection established. The OAuth token is stored securely and handles refresh automatically, so you will not need to re-authorize unless you explicitly revoke access.
The integration requests access to read and write issues, comments, and project metadata. It does not request admin-level permissions for your Atlassian organization. Only the JIRA projects you explicitly select will be accessible.
After connecting, you need to tell TestApp.io which JIRA project to sync with. This is a one-to-one mapping: one TestApp.io version syncs with one JIRA project.
From the integration settings panel:
A few things to keep in mind here:
This is where the integration gets powerful. Field mapping lets you define how statuses and priorities translate between the two systems.
JIRA and TestApp.io likely use different status names. Maybe your JIRA workflow has "To Do," "In Development," "Code Review," "QA," and "Done." TestApp.io uses statuses that are more QA-focused.
The mapping interface lets you pair each JIRA status with a TestApp.io status. For example:
| JIRA Status | TestApp.io Status |
|---|---|
| To Do | Open |
| In Development | Open |
| QA | In Progress |
| Done | Closed |
This mapping works in both directions. When a task moves to "Closed" in TestApp.io, JIRA updates it to "Done" (or whatever you mapped). When a developer moves an issue to "QA" in JIRA, it appears as "In Progress" in TestApp.io.
Similarly, map priority levels between the two systems. TestApp.io uses a priority scale of Low, Normal, High, Critical, and Blocker. JIRA typically uses Lowest, Low, Medium, High, and Highest. Set up the mapping that makes sense for your team's conventions:
| JIRA Priority | TestApp.io Priority |
|---|---|
| Highest | Blocker |
| High | Critical |
| Medium | High |
| Low | Normal |
| Lowest | Low |
Take a few minutes to get these mappings right. They form the backbone of how accurately your tasks stay in sync across both systems.
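Conceptually, the two mapping tables behave like lookup dictionaries applied in each sync direction. A sketch using the example pairs above (the data structures are illustrative, not the integration's internals):

```python
# The example mapping tables above, expressed as plain dictionaries.
# Pairs come from the article's tables; the lookup code is illustrative.

JIRA_TO_TA_STATUS = {"To Do": "Open", "In Development": "Open",
                     "QA": "In Progress", "Done": "Closed"}
JIRA_TO_TA_PRIORITY = {"Highest": "Blocker", "High": "Critical",
                       "Medium": "High", "Low": "Normal", "Lowest": "Low"}

def to_testappio(jira_status: str) -> str:
    return JIRA_TO_TA_STATUS[jira_status]

# Reverse direction: several JIRA statuses can map to one TestApp.io
# status ("To Do" and "In Development" both become "Open"), so the
# inverse lookup must pick a canonical target for each.
TA_TO_JIRA_STATUS = {"Open": "To Do", "In Progress": "QA", "Closed": "Done"}

print(to_testappio("QA"))           # In Progress
print(TA_TO_JIRA_STATUS["Closed"])  # Done
```

The many-to-one case is the subtle part of getting mappings right: moving a task to "Open" in TestApp.io has to land on exactly one JIRA status, so choose that canonical target deliberately.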
Once field mappings are configured, the webhook-based sync is live. Here is what happens in practice:
A developer updates an issue in JIRA — changes the status from "To Do" to "In Development," adds a comment, or bumps the priority. Within seconds, those changes appear on the corresponding task in TestApp.io. Your QA team sees the update without switching tools or asking for a status update.
A tester finds a bug during a testing session, updates the task priority to "Blocker," and adds a comment with reproduction steps. That change flows back to JIRA immediately. The developer sees the priority change on their JIRA board without anyone having to ping them.
What happens if someone edits the same field in both systems simultaneously? The integration uses a last-write-wins approach with the sync history providing full visibility into what changed and when. In practice, true simultaneous edits are rare, but the audit trail ensures nothing is silently overwritten without a record.
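Last-write-wins is easy to picture: every edit is recorded, and when two edits touch the same field, the one with the newer timestamp survives while the full list remains as the audit trail. The event shape below is hypothetical, for illustration only:

```python
# Sketch of last-write-wins conflict resolution. The surviving edit
# is simply the most recent one; nothing is discarded from history.
# Event dictionaries are a hypothetical shape, not the real payload.

def resolve(edits: list[dict]) -> dict:
    """Return the surviving edit; the full list is the audit trail."""
    return max(edits, key=lambda e: e["timestamp"])

history = [
    {"source": "jira", "field": "priority", "value": "High", "timestamp": 100},
    {"source": "testappio", "field": "priority", "value": "Blocker", "timestamp": 105},
]
winner = resolve(history)
print(winner["value"])  # Blocker — the later write wins
```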
Most teams do not start from zero. You probably have an existing backlog of issues in JIRA that relate to your mobile app. Rather than recreating them manually in TestApp.io, use the import feature.
From the integration settings:
Imported issues become full TestApp.io tasks with bidirectional sync enabled. Any future changes in either system stay synchronized.
A practical tip: do not import everything blindly. Start with issues that are actively being tested or are in your current sprint. You can always import more later.
The reverse scenario is also common: you have been using TestApp.io's built-in task management and now want those tasks reflected in JIRA. The migration feature handles this.
After migration, those tasks exist in both systems with sync enabled going forward. The original TestApp.io tasks are not deleted; they become synced tasks linked to their JIRA counterparts.
The sync history is one of those features you do not think about until you need it — and then you really need it. Every sync event is recorded with:
This is invaluable for debugging. If a tester says "I updated the status an hour ago but JIRA still shows the old value," you can check the sync history and see exactly what happened. Failed syncs can be retried directly from the history view.
Even well-configured integrations occasionally run into hiccups. Here are the most common issues and how to resolve them:
If your JIRA admin modifies the project's workflow (adds new statuses, removes old ones, changes transitions), your field mappings may become stale. When a task moves to a status that is not mapped, the sync cannot determine where to put it.
Fix: Go to integration settings and update your status mappings to include the new JIRA statuses. The sync will resume for any pending changes.
If someone revokes the OAuth grant from the Atlassian side, or if the token expires without a successful refresh, the integration will stop syncing.
Fix: Re-authorize by clicking Connect again in the integration settings. Your existing field mappings and sync history are preserved; only the auth token is refreshed.
If you import issues and then also have someone manually create the same tasks, you can end up with duplicates. The integration tracks linked issues by their JIRA issue key, so manually created tasks are not automatically deduplicated.
Fix: Before importing, communicate with your team that JIRA issues are being pulled in automatically. Delete any manually created duplicates after import.
Network issues or temporary outages can cause webhook deliveries to fail. The sync history will show these as failed events.
Fix: Check the sync history for failed events and use the retry option. If failures persist, verify that your network allows outbound webhook traffic and that no firewall rules are blocking the connection.
If the Atlassian user who authorized the integration does not have write access to certain JIRA fields or transitions, syncs that try to update those fields will fail.
Fix: Ensure the authorizing user has sufficient permissions in the JIRA project. They need to be able to create issues, edit fields, transition statuses, and add comments.
After working with teams who run this integration daily, here are some patterns that consistently work well:
The JIRA integration turns two separate tools into a unified workflow. Developers stay in JIRA. Testers stay in TestApp.io. Changes flow automatically, and everyone has the same picture of what is happening.
If you are spending time copying issue details between tools, manually updating statuses, or wondering whether your JIRA board reflects reality, this integration eliminates that overhead.
Set up the connection at portal.testapp.io, and check the help center for additional guides on fine-tuning your integration settings.