TL;DR
- Zonka Feedback's Flutter SDK lets you collect in-app surveys across iOS and Android from a single integration: one pubspec.yaml dependency, no platform channel bridging.
- Minimum requirements: Flutter 3.0+, Android compileSdk 34+, iOS 14+.
- Setup takes under 30 minutes: add the dependency, initialize with your SDK token and region, call startSurvey().
- Custom attributes let you tag responses by platform, screen, or user ID to separate iOS data from Android data in your dashboard.
App Store reviews tell you what went wrong. In-app feedback tells you when — and why, and on which platform, and at which exact moment in your user's session.
That difference matters more in Flutter than anywhere else. You've built one codebase. But your users are on two platforms, dozens of device configurations, and multiple OS versions. When something breaks or frustrates, the blended average in your dashboard won't tell you it's an Android 12 rendering issue. You need platform-tagged, event-triggered feedback, collected while the user is still in the app.
This guide covers how to do that with Zonka Feedback's Flutter SDK. Prerequisites, installation, initialization, optional parameters, user identification, and the trigger patterns that actually work in Flutter apps.
Why Flutter Creates a Specific Feedback Problem
Most developers think about Flutter's cross-platform promise as a collection benefit: one integration, data from everywhere. That's partly true. But the architecture that makes Flutter so portable also creates feedback blind spots that iOS-native and Android-native apps don't have.
Flutter doesn't use native UI components. It renders everything through its own engine: Skia, and more recently Impeller. When a user on iOS says a button "feels off" or a gesture doesn't respond the way they expected, it's not an iOS HIG compliance issue you can trace through standard iOS tooling. It's something happening inside Flutter's widget tree. You need feedback tied to the specific screen and interaction, not a one-line App Store review two weeks later.
Single codebase, single test surface. Your QA team ran through the same code path on both platforms. But users experience layout differently by device size, OS version, and platform convention. Android users expect back navigation. iOS users expect swipe gestures. The user who rated your onboarding a 2 on Android — was that because the flow itself was confusing, or because the back button behavior felt wrong? Without platform-segmented data, you're guessing.
Then there's hot reload. It's one of Flutter's best development features. It's also a feedback data problem — if your survey SDK fires during development sessions, those responses contaminate your production feedback pool. You need an integration that respects environment boundaries.
None of this is a Flutter flaw. It's the feedback consequence of the architecture. And in-app surveys are the most direct fix. This guide covers the implementation layer for Flutter specifically.
Triggering Surveys at the Right Flutter Lifecycle Moments
Generic survey timing advice ("trigger after key interactions") doesn't translate well to Flutter. Flutter has specific lifecycle mechanisms. Use them.
AppLifecycleState Transitions
Flutter's WidgetsBindingObserver gives you access to AppLifecycleState. Two states matter most for survey timing. (Full lifecycle state reference in the Flutter docs.)
AppLifecycleState.resumed fires when your app returns to foreground. Right moment for re-engagement surveys, but only after a meaningful dormancy threshold. A user gone for 30+ days who comes back is worth asking. One who backgrounded the app for 10 minutes isn't. Use a SharedPreferences timestamp check to gate it.
AppLifecycleState.paused fires just before the app backgrounds. Last viable moment to catch abandonment intent. Keep it to one question. The user is leaving.
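A minimal sketch of that gate, using WidgetsBindingObserver with the shared_preferences package (the 30-day threshold and the last_seen_ms key are illustrative choices, not SDK requirements):

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:shared_preferences/shared_preferences.dart';
import 'package:zonkafeedback_sdk/zonkafeedback_sdk.dart';

class MyApp extends StatefulWidget {
  const MyApp({super.key});

  @override
  State<MyApp> createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> with WidgetsBindingObserver {
  // 30 days is an example threshold - tune it to your retention curve.
  static const Duration dormancyThreshold = Duration(days: 30);

  @override
  void initState() {
    super.initState();
    WidgetsBinding.instance.addObserver(this);
  }

  @override
  void dispose() {
    WidgetsBinding.instance.removeObserver(this);
    super.dispose();
  }

  @override
  Future<void> didChangeAppLifecycleState(AppLifecycleState state) async {
    final prefs = await SharedPreferences.getInstance();
    if (state == AppLifecycleState.resumed) {
      final lastSeenMs = prefs.getInt('last_seen_ms');
      final dormantLongEnough = lastSeenMs != null &&
          DateTime.now()
                  .difference(DateTime.fromMillisecondsSinceEpoch(lastSeenMs)) >
              dormancyThreshold;
      if (dormantLongEnough && !kDebugMode) {
        // Re-engagement survey: user came back after 30+ days away.
        ZFSurvey().startSurvey();
      }
    } else if (state == AppLifecycleState.paused) {
      // Record when the user left so the next resume can check dormancy.
      await prefs.setInt('last_seen_ms', DateTime.now().millisecondsSinceEpoch);
    }
  }

  @override
  Widget build(BuildContext context) => const MaterialApp(home: Scaffold());
}
```

The same observer handles both states: paused writes the timestamp, resumed reads it and decides whether the dormancy window justifies a survey.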
Navigator Route Changes
Flutter's RouteObserver combined with RouteAware lets you attach survey logic to specific named routes. The pattern that works: trigger a feature-usage survey when a user pops back from a feature route, not when they enter it. Exit means the interaction completed. That's when feedback is most accurate.
```dart
// Assumes a RouteObserver registered on MaterialApp's navigatorObservers,
// e.g. final routeObserver = RouteObserver<ModalRoute<void>>();
class _MyFeatureScreenState extends State<MyFeatureScreen> with RouteAware {
  @override
  void didChangeDependencies() {
    super.didChangeDependencies();
    // RouteAware callbacks only fire after subscribing to the observer.
    routeObserver.subscribe(this, ModalRoute.of(context)!);
  }

  @override
  void dispose() {
    routeObserver.unsubscribe(this);
    super.dispose();
  }

  @override
  void didPopNext() {
    // The route pushed on top of this screen was popped - the user has
    // completed the feature flow and returned here. Trigger feedback.
    ZFSurvey().startSurvey();
  }
}
```
Widget Dispose Events
Overriding dispose() in a StatefulWidget gives you precise survey timing at screen close, more reliable than polling for exit intent.
```dart
@override
void dispose() {
  ZFSurvey().startSurvey();
  super.dispose();
}
```
Use it selectively. Not every screen close needs a survey. Reserve it for high-value exits: checkout completion, onboarding final step, key feature screen.
Environment Gating With kDebugMode
Hot reload in development will fire surveys if you let it. Gate your survey logic so development sessions stay out of your production data:
```dart
import 'package:flutter/foundation.dart'; // provides kDebugMode

if (!kDebugMode) {
  ZFSurvey().startSurvey();
}
```
One line. Worth it.
For mobile app survey timing and placement best practices beyond Flutter-specific mechanics, Zonka's broader mobile survey guide covers that context.
How to Set Up the Zonka Feedback Flutter SDK
1. Prerequisites
Before integrating, you'll need:
- An active Zonka Feedback account. Start a free trial if you don't have one.
- A survey created in your account with your preferred question types.
- Your SDK token: go to Distribute > In-App tab, enable the toggle, copy the token.
2. Minimum System Requirements
| Component | Minimum requirement |
| --- | --- |
| Flutter | 3.0.0 or higher |
| Android compileSdk | 34 or higher |
| Android Gradle Plugin | 8.1.0 or higher |
| iOS | 14 or higher |
3. Installing the SDK
Install from pub.dev: the package is listed as zonkafeedback_sdk and linked from the in-app feedback SDK page. Add it to your pubspec.yaml:
```yaml
dependencies:
  zonkafeedback_sdk: ^<latest_version>
```
Run flutter pub get after adding the dependency. No platform channel bridging required. The package is built natively for Flutter, not wrapped from a native iOS or Android SDK.
4. Initializing the SDK
Initialize once. lib/main.dart is the right place.
```dart
import 'package:zonkafeedback_sdk/zonkafeedback_sdk.dart';

class _MyAppState extends State<MyApp> {
  @override
  void initState() {
    super.initState();
    ZFSurvey().init(
      token: '<your_sdk_token>',
      zfRegion: '<your_region>',
      context: context,
    );
  }
}
```
Specify the region your account belongs to:
- US – United States
- EU – Europe
- IN – India
5. Starting a Survey
```dart
import 'package:zonkafeedback_sdk/zonkafeedback_sdk.dart';

ZFSurvey().startSurvey();
```
Pair this with the lifecycle trigger patterns from the section above rather than a timer.
6. Optional: Device Data and Custom Attributes
Device Data
Enable sendDeviceDetails to capture OS version, device type, and screen resolution alongside every response.
```dart
ZFSurvey().sendDeviceDetails(true);
```
Particularly useful in Flutter because the same codebase runs on hardware with wildly different characteristics. A bug reproducible on Android 11 but not 13 is much faster to isolate when responses are tagged with OS version.
Custom Attributes
Pass any context you want tied to a response: screen name, feature flag state, subscription tier. And, critically, which platform the user is on.
```dart
import 'dart:io' show Platform; // needed for Platform.isIOS

Map<String, String> properties = {
  'platform': Platform.isIOS ? 'ios' : 'android',
  'screen': 'onboarding_step_3',
  'subscription': 'pro',
};
ZFSurvey().sendCustomAttributes(properties);
```
The platform attribute is the most important one for Flutter apps. Without it, your dashboard shows blended iOS + Android data. With it, you can filter the view and spot platform-specific problems the aggregate was hiding.
Custom attributes also let you:
- Identify specific users instead of collecting anonymous responses by default.
- Trigger surveys based on user actions: feature engagement, subscription changes, screen sequences.
- Filter and segment feedback by any dimension you care about.
7. Identifying Logged-In Users
Pass user details to associate responses with specific accounts:
| Parameter | Type | Example |
| --- | --- | --- |
| contact_name | string | "Robin James" |
| contact_email | string | "robin@example.com" |
| contact_mobile | string | "+14234XXXX" |
| contact_uniqueid | string | "1XJ2" |
```dart
Map<String, dynamic> properties = {
  'contact_name': 'Robin James',
  'contact_email': 'robin@example.com',
  'contact_uniqueid': '1XJ2',
  'contact_mobile': '+14234XXXX',
};
ZFSurvey().userInfo(properties);
```
On logout: Clear visitor data so the next session starts clean.
```dart
ZFSurvey().clear();
```
Without this, feedback from a new user session can be attributed to the previous user's account. Small thing. Worth doing every time.
Flutter Survey Packages: What's Actually Out There
Search pub.dev for feedback or survey SDKs and you'll find the selection is thin. Most CX and feedback platforms offer web SDKs or native iOS/Android SDKs. When Flutter developers want to use them, they typically have to wrap the native SDK using Flutter's platform channel interface, which works but creates a maintenance dependency every time the native SDK updates. Every new native version potentially means revisiting your platform channel implementation.
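For a sense of what that overhead looks like, here is a minimal hypothetical MethodChannel bridge (the channel name, class, and method names are invented for illustration); every Dart method needs a matching Kotlin and Swift handler, and each must track the native SDK's releases:

```dart
import 'package:flutter/services.dart';

// Hypothetical Dart side of a platform-channel wrapper around a native
// survey SDK. Each invokeMethod call below requires a corresponding
// MethodChannel handler in the Android (Kotlin) and iOS (Swift) host
// code, updated whenever the wrapped native SDK's API changes.
class WrappedSurveySdk {
  static const MethodChannel _channel =
      MethodChannel('com.example/survey_sdk');

  Future<void> init(String token) =>
      _channel.invokeMethod('init', {'token': token});

  Future<void> startSurvey() => _channel.invokeMethod('startSurvey');
}
```

Two extra native codebases to maintain for one Dart-facing class is exactly the long-term cost the wrapper approach carries.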
Zonka Feedback's Flutter SDK is built natively for Flutter, not wrapped. That means a single pubspec.yaml entry, no MethodChannel boilerplate, and initialization that doesn't require separate iOS and Android configuration files beyond what Flutter already manages.
Worth being clear about what comparison actually matters here. It's not between the handful of survey packages on pub.dev. It's between integration approaches. Platform channel wrappers are real options. They just add maintenance overhead your team has to own long-term.
If you're running a mixed mobile stack (Flutter alongside React Native, or native iOS and Android apps in the same portfolio), Zonka covers all four with consistent initialization and a single dashboard. The React Native SDK, iOS SDK, and Android SDK follow the same configuration pattern and push data to the same account.
One case where the mobile SDK isn't the right tool: if you're deploying a Flutter web build, use the web SDK instead. The mobile SDK is built for Flutter iOS and Android targets.
Best Practices for Flutter In-App Feedback
1. Tie Triggers to Flutter Lifecycle Events, Not Timers
A 5-second timer firing on app open is the least useful survey trigger in Flutter. The user might be mid-onboarding, or just backgrounding briefly. Use AppLifecycleState.resumed with a session count gate. Or dispose() on a specific feature screen. Both give you timing tied to actual user behavior, not a guess about when users might be receptive.
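A sketch of that session-count gate, assuming the shared_preferences package (the key name and the five-session threshold are arbitrary examples); call it from your AppLifecycleState.resumed handler:

```dart
import 'package:flutter/foundation.dart';
import 'package:shared_preferences/shared_preferences.dart';
import 'package:zonkafeedback_sdk/zonkafeedback_sdk.dart';

// Increment a persisted session counter on each resume and only show
// the survey once the user has accumulated enough real sessions.
Future<void> maybeShowSurveyOnResume() async {
  final prefs = await SharedPreferences.getInstance();
  final sessions = (prefs.getInt('session_count') ?? 0) + 1;
  await prefs.setInt('session_count', sessions);

  // Five sessions is an example threshold, not an SDK requirement.
  if (sessions >= 5 && !kDebugMode) {
    ZFSurvey().startSurvey();
  }
}
```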
2. Microsurveys Outperform Long Forms on Mobile
Flutter's widget overlay renders surveys on top of your existing UI. Long surveys compete for screen real estate, especially on smaller Android devices. One to three questions is the right ceiling. A single NPS or CSAT question at the right moment gets more honest responses than a 10-question form at the wrong one.
For NPS: trigger after 30+ days of active usage using a SharedPreferences gate on AppLifecycleState.resumed. For CSAT: trigger on dispose() after a support interaction or key feature completion. Metric-appropriate timing matters more than convenient timing. If you need a starting point for survey questions, the mobile app feedback survey template has pre-built question sets for common mobile feedback scenarios.
3. Always Pass the platform Custom Attribute
This is the one most Flutter teams skip. It's also the one that changes what your data tells you most dramatically. If iOS satisfaction averages 4.3 and Android averages 2.9, the 3.6 blended average is hiding an Android problem. Pass Platform.isIOS ? 'ios' : 'android' on every survey call. Then segment in your dashboard. You'll see things the aggregate was burying.
4. Gate Surveys with kDebugMode
Hot reload sessions will trigger surveys if you don't guard against it. Wrap every startSurvey() call:
```dart
if (!kDebugMode) {
  ZFSurvey().startSurvey();
}
```
Keeps development activity out of your production feedback pool. Your survey response rate metrics will reflect real users, not QA sessions.
5. Read Platform-Segmented Data, Not Aggregate Averages
Points 3 and 4 pay off together: two distinct signals instead of one blended one. When iOS and Android scores diverge significantly, you have a platform-specific issue — not a general UX problem. That distinction changes the fix. It changes which team owns it. It changes how urgent it is.
After the Integration: Reading Your Flutter Feedback Data
Setting up the SDK is the fast part. What most Flutter teams underuse is what the data actually tells them.
Raw responses aren't enough. Platform-tagged, event-tagged feedback is only useful when you can find patterns. The platform custom attribute you passed in Step 6 created the data foundation. Now the product team needs to read it.
In Zonka's dashboard, filter responses by the custom attributes you passed during setup: platform, screen, appVersion. Look for patterns. Does your onboarding screen consistently score lower on Android? Are there specific versions where satisfaction dropped? Is there a feature screen where users exit fast and rate low?
Microsurveys in Flutter give you screen-level signal, not just app-level signal. A thumbs up/down on a feature screen. A 1–5 rating after onboarding. An NPS trigger after 30 days of active usage. Each one maps to a specific moment in the user journey. Read them together and you get a picture of where the experience holds up and where it doesn't. That's what the product feedback guide describes as closing the loop: not just collecting responses, but connecting them to what gets built next.
If you're reading patterns across multiple mobile platforms (Flutter alongside React Native or native apps), Zonka's product feedback analytics gives you one unified view across all SDK sources. One dashboard, not three tabs in three tools.
The integration is the start. The habit of reviewing platform-segmented data after every release is where the investment actually returns.
One thing most teams don't do until they've been running in-app feedback for a few months: set a review cadence. Not a quick dashboard refresh. A deliberate session after each release where someone asks three questions. Did any platform's score shift? Did any screen's satisfaction drop? Did any app version show a pattern the previous one didn't? Those three questions, answered consistently after every release, are what turn a survey integration into a product improvement system.
Developers who set this up and walk away rarely see the value. Teams that build the review habit catch an Android 12 rendering issue before it becomes an App Store rating problem. They notice that onboarding step 3 loses iOS users at twice the Android rate. They find out a new feature landed well on tablets but poorly on phones. None of that shows up in aggregate metrics. But all of it shows up in platform-segmented, screen-tagged, version-filtered feedback data. That's exactly what the SDK was set up to produce.
Start Small. One Platform Attribute. One Screen.
The SDK setup takes under 30 minutes. The part that takes longer is deciding what to actually measure.
Most Flutter teams who get this right don't start by instrumenting every screen. They pick one moment that matters: the end of onboarding, the exit from their most-used feature, the 30-day re-engagement — and wire one survey to it. They pass the platform custom attribute from day one. They gate with kDebugMode. Then they watch what comes back after the next release.
That first round of segmented, lifecycle-triggered data is usually enough to identify one real problem that aggregate metrics were hiding. That's the moment the integration stops feeling like a setup task and starts feeling like an actual feedback system.
Start there. One trigger, one question, one platform tag. Build the review habit around it. The rest follows.
Start a free 14-day trial or book a demo to see the Flutter SDK in action.