I’ve been integrating FlutterFlow’s new AI Agents into a production app and wanted to share what I learned: the good, the bad, and the stuff that wasn’t in the docs.
What Worked
The built-in UI actions were a quick way to get basic chat functionality up and running. Sending messages to OpenAI (I used GPT-4 1106-preview) and getting responses back was smooth.
System messages help set the tone/role for the agent. That part’s easy if your use case is simple.
What Caught Me Off Guard
1. Conversation context isn’t truly managed
You’re given a `conversationId`, but it’s on you to persist and reuse it. I ended up storing this in Firestore under each user’s session doc. If you forget to include it, the agent starts from scratch.
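Something along these lines works for persisting it; a rough sketch with firebase-admin, where the `userSessions` collection and field name are my own placeholders, not anything FlutterFlow provides:

```typescript
import { initializeApp } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";

initializeApp();
const db = getFirestore();

// Returns the stored conversationId for this user, or saves a new one
// after the first agent call so later calls can reuse it.
export async function getOrSaveConversationId(
  userId: string,
  newConversationId?: string
): Promise<string | undefined> {
  const sessionRef = db.collection("userSessions").doc(userId);

  if (newConversationId) {
    await sessionRef.set({ conversationId: newConversationId }, { merge: true });
    return newConversationId;
  }

  const snap = await sessionRef.get();
  return snap.get("conversationId"); // undefined on the very first turn
}
```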
2. Image inputs were hit-or-miss
Ran into silent failures uploading JPGs. Turns out FlutterFlow was sending them as `application/octet-stream`. Had to write a Cloud Function to sniff and correct the MIME type to `image/jpeg`.
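The fix itself can be small: sniff the JPEG magic bytes and rewrite the object’s metadata. A sketch using firebase-admin’s Storage client; the path handling is simplified and assumes the upload already landed in your default bucket:

```typescript
import { initializeApp } from "firebase-admin/app";
import { getStorage } from "firebase-admin/storage";

initializeApp();

// Corrects the content type on an uploaded file if it is actually a JPEG.
export async function fixJpegContentType(filePath: string): Promise<void> {
  const file = getStorage().bucket().file(filePath);

  // Download the bytes and check the JPEG signature (FF D8 FF).
  const [buffer] = await file.download();
  const isJpeg =
    buffer.length > 2 &&
    buffer[0] === 0xff &&
    buffer[1] === 0xd8 &&
    buffer[2] === 0xff;

  if (isJpeg) {
    // Overwrite the application/octet-stream metadata FlutterFlow sent.
    await file.setMetadata({ contentType: "image/jpeg" });
  }
}
```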
3. No error feedback
If your API call fails (invalid key, file too big, model issue), you get no error message, just a blank screen.
I added basic logging in my Cloud Functions (`console.error` and Firestore writes) to catch issues like missing API keys or size overages.
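A minimal version of that wrapper, assuming a v2 callable Cloud Function; the `errorLogs` collection and the `callAgent` stub are placeholder names:

```typescript
import { initializeApp } from "firebase-admin/app";
import { getFirestore, FieldValue } from "firebase-admin/firestore";
import { onCall, HttpsError } from "firebase-functions/v2/https";

initializeApp();

// Stand-in for the actual OpenAI / agent request.
async function callAgent(data: unknown): Promise<unknown> {
  throw new Error("replace with your agent call");
}

export const askAgent = onCall(async (request) => {
  try {
    return await callAgent(request.data);
  } catch (err) {
    // Log and persist the failure instead of letting the UI show a blank screen.
    console.error("Agent call failed", err);
    await getFirestore().collection("errorLogs").add({
      uid: request.auth?.uid ?? null,
      message: err instanceof Error ? err.message : String(err),
      createdAt: FieldValue.serverTimestamp(),
    });
    throw new HttpsError("internal", "Agent request failed; check server logs.");
  }
});
```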
4. Static system prompts
You can set a system message for the agent, but it’s static. If you want different instructions for admins vs. guests, for example, you’ll need to skip the default UI action and hit your Cloud Function directly.
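A rough sketch of what that looks like with the OpenAI Node SDK; the role names and prompt text are placeholders for whatever your app actually needs:

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Placeholder prompts; swap in your real instructions per role.
const SYSTEM_PROMPTS: Record<string, string> = {
  admin: "You are an internal assistant. You may reference admin-only features.",
  guest: "You are a public assistant. Keep answers short and avoid internal details.",
};

export async function askWithRole(role: "admin" | "guest", userMessage: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4-1106-preview",
    messages: [
      { role: "system", content: SYSTEM_PROMPTS[role] },
      { role: "user", content: userMessage },
    ],
  });
  return completion.choices[0].message.content;
}
```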
5. Model differences
GPT-4 (1106-preview) was best for structured responses (e.g. JSON parsing or form generation).
Gemini (via Vertex AI) worked, but responses were more vague and slower. Didn’t try Claude yet.
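For what it’s worth, 1106-preview supports JSON mode, which likely helped with the structured responses. A quick illustration (the prompt and parsing are made up for the example):

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Ask for a machine-readable form spec; JSON mode guarantees parseable output.
export async function generateFormSpec(description: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4-1106-preview",
    response_format: { type: "json_object" },
    messages: [
      { role: "system", content: "Return a JSON object describing form fields." },
      { role: "user", content: description },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```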