I've recently been working on an AI video tool that generates videos from text or images using multiple models in one place. The idea is to make it easier to compare how different models interpret the same prompt, instead of relying on a single output.
The workflow is kept simple and browser-based, so it's geared toward testing ideas and iterating quickly rather than heavy editing. It's been interesting to see how differently the models render the same input, each with its own distinct style.
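For anyone curious what the fan-out part looks like, here's a minimal sketch of the pattern: send one prompt to several model backends in parallel and collect the results side by side. The model functions here are placeholders, not real APIs — in practice each would wrap a call to a specific provider's video-generation endpoint.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder model callables standing in for real text-to-video API clients.
# Each takes a prompt and returns some result (a job ID, URL, etc.).
def model_a(prompt: str) -> str:
    return f"model_a result for: {prompt}"

def model_b(prompt: str) -> str:
    return f"model_b result for: {prompt}"

def fan_out(prompt: str, models: dict) -> dict:
    """Send the same prompt to every model concurrently; return results keyed by model name."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in models.items()}
        return {name: future.result() for name, future in futures.items()}

results = fan_out("a cat surfing at sunset", {"model_a": model_a, "model_b": model_b})
for name, output in results.items():
    print(name, "->", output)
```

The nice part is that adding a new model is just another entry in the dict, which keeps side-by-side comparison cheap.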
Curious whether anyone here has built similar multi-model workflows or integrated multiple AI APIs into FlutterFlow apps. Would love to hear how others approach this.