The Flutter Kit
Tutorial

How to Build an AI Flutter App with OpenAI & ChatGPT in 2026

Build an AI chat interface in Flutter with streaming responses, image generation, and the OpenAI API.

Ahmed Gagan
15 min read

AI features are no longer a novelty -- users expect them. This guide shows you how to build an AI-powered Flutter app with the OpenAI API, including a ChatGPT-style chat interface, streaming responses, and DALL-E image generation. I will walk you through the secure backend proxy pattern, the Flutter UI implementation, and the architecture that scales from a prototype to a production app on both iOS and Android.

Why You Need a Backend Proxy

This is the most important architectural decision in any AI-powered mobile app: never embed your OpenAI API key in client-side code. If the key ships in your Flutter binary, anyone can decompile the app, extract it, and drain your OpenAI account within hours.

The solution is a backend proxy -- a lightweight server that stores your API key securely and forwards requests to OpenAI on behalf of your app. Your Flutter app talks to your server. Your server talks to OpenAI. The API key never leaves the server.

For indie Flutter developers, Firebase Cloud Functions is the best choice for this proxy. It is serverless (no infrastructure to manage), scales automatically, has a generous free tier, and integrates seamlessly with Firebase Auth so you can authenticate requests. Here is the architecture:

Flutter App → Firebase Cloud Function → OpenAI API
     ↑                    ↓
     └──── Response ──────┘

The Cloud Function:
1. Receives the user's message
2. Validates the Firebase Auth token
3. Forwards the request to OpenAI
4. Streams the response back to Flutter

Setting Up the Firebase Cloud Function

First, create a Cloud Function that proxies requests to OpenAI. This function handles authentication, rate limiting, and the actual API call:

// functions/src/index.ts
import * as functions from 'firebase-functions';
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export const chatCompletion = functions
  // Bind the secret so it is available as process.env.OPENAI_API_KEY
  .runWith({ secrets: ['OPENAI_API_KEY'] })
  .https.onCall(async (data, context) => {
    // Verify the user is authenticated
    if (!context.auth) {
      throw new functions.https.HttpsError(
        'unauthenticated',
        'Must be signed in to use AI features.'
      );
    }

    const { messages, model = 'gpt-4o' } = data;

    // Allowlist models so clients cannot request arbitrary (pricier) ones
    const allowedModels = ['gpt-4o', 'gpt-4o-mini'];
    if (!allowedModels.includes(model)) {
      throw new functions.https.HttpsError(
        'invalid-argument',
        'Unsupported model.'
      );
    }

    if (!messages || !Array.isArray(messages)) {
      throw new functions.https.HttpsError(
        'invalid-argument',
        'Messages array is required.'
      );
    }

    try {
      const completion = await openai.chat.completions.create({
        model,
        messages,
        max_tokens: 2048,
        temperature: 0.7,
      });

      return {
        content: completion.choices[0]?.message?.content ?? '',
        usage: completion.usage,
      };
    } catch (error: any) {
      throw new functions.https.HttpsError(
        'internal',
        error.message ?? 'OpenAI request failed.'
      );
    }
  }
);

Store your OpenAI API key as a Firebase secret (it lives in Google Cloud Secret Manager, never in your code or repo):

firebase functions:secrets:set OPENAI_API_KEY

Deploy the function:

firebase deploy --only functions

Building the AI Service in Dart

On the Flutter side, create a service that communicates with your Cloud Function:

// ai_service.dart
import 'package:cloud_functions/cloud_functions.dart';

class AIService {
  final FirebaseFunctions _functions = FirebaseFunctions.instance;

  Future<String> sendMessage(List<Map<String, String>> messages) async {
    try {
      final callable = _functions.httpsCallable(
        'chatCompletion',
        options: HttpsCallableOptions(
          timeout: const Duration(seconds: 60),
        ),
      );

      final result = await callable.call({
        'messages': messages,
        'model': 'gpt-4o',
      });

      return result.data['content'] as String;
    } on FirebaseFunctionsException catch (e) {
      throw AIException(e.message ?? 'AI request failed');
    }
  }

  Future<String> generateImage(String prompt) async {
    try {
      final callable = _functions.httpsCallable(
        'generateImage',
        options: HttpsCallableOptions(
          timeout: const Duration(seconds: 120),
        ),
      );

      final result = await callable.call({
        'prompt': prompt,
        'size': '1024x1024',
      });

      return result.data['url'] as String;
    } on FirebaseFunctionsException catch (e) {
      throw AIException(e.message ?? 'Image generation failed');
    }
  }
}

class AIException implements Exception {
  final String message;
  AIException(this.message);

  @override
  String toString() => message;
}
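
Calling the service is straightforward. A quick usage sketch (askAssistant is an illustrative name; the maps follow OpenAI's role/content message shape):

```dart
import 'package:flutter/foundation.dart';

import 'ai_service.dart';

// Hypothetical call site -- any async context (button handler,
// view model method) works the same way.
Future<void> askAssistant() async {
  final ai = AIService();
  try {
    final reply = await ai.sendMessage([
      {'role': 'system', 'content': 'You are a helpful assistant.'},
      {'role': 'user', 'content': 'Summarize Flutter in one sentence.'},
    ]);
    debugPrint(reply);
  } on AIException catch (e) {
    debugPrint('AI error: $e');
  }
}
```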

Building the Chat UI

The chat interface is where users interact with the AI. Here is a production-ready chat screen with message history, loading states, and error handling:

// chat_screen.dart
import 'package:flutter/material.dart';

import 'ai_service.dart';

class ChatMessage {
  final String role; // 'user' or 'assistant'
  final String content;
  final DateTime timestamp;

  ChatMessage({
    required this.role,
    required this.content,
    DateTime? timestamp,
  }) : timestamp = timestamp ?? DateTime.now();

  Map<String, String> toApiMessage() => {
        'role': role,
        'content': content,
      };
}

class ChatScreen extends StatefulWidget {
  const ChatScreen({super.key});

  @override
  State<ChatScreen> createState() => _ChatScreenState();
}

class _ChatScreenState extends State<ChatScreen> {
  final _controller = TextEditingController();
  final _scrollController = ScrollController();
  final _messages = <ChatMessage>[];
  bool _isLoading = false;

  @override
  void dispose() {
    _controller.dispose();
    _scrollController.dispose();
    super.dispose();
  }

  Future<void> _sendMessage() async {
    final text = _controller.text.trim();
    if (text.isEmpty || _isLoading) return;

    setState(() {
      _messages.add(ChatMessage(role: 'user', content: text));
      _isLoading = true;
    });
    _controller.clear();
    _scrollToBottom();

    try {
      final aiService = AIService(); // Or inject via DI
      final response = await aiService.sendMessage(
        _messages.map((m) => m.toApiMessage()).toList(),
      );

      if (!mounted) return; // Screen may have been disposed during the await
      setState(() {
        _messages.add(
          ChatMessage(role: 'assistant', content: response),
        );
      });
    } catch (e) {
      if (!mounted) return;
      setState(() {
        _messages.add(
          ChatMessage(
            role: 'assistant',
            content: 'Error: ${e.toString()}',
          ),
        );
      });
    } finally {
      if (mounted) {
        setState(() => _isLoading = false);
        _scrollToBottom();
      }
    }
  }

  void _scrollToBottom() {
    WidgetsBinding.instance.addPostFrameCallback((_) {
      if (_scrollController.hasClients) {
        _scrollController.animateTo(
          _scrollController.position.maxScrollExtent,
          duration: const Duration(milliseconds: 300),
          curve: Curves.easeOut,
        );
      }
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: const Text('AI Chat')),
      body: Column(
        children: [
          Expanded(
            child: ListView.builder(
              controller: _scrollController,
              padding: const EdgeInsets.all(16),
              itemCount: _messages.length,
              itemBuilder: (context, index) {
                final msg = _messages[index];
                return _MessageBubble(message: msg);
              },
            ),
          ),
          if (_isLoading)
            const Padding(
              padding: EdgeInsets.all(8),
              child: CircularProgressIndicator(),
            ),
          _InputBar(
            controller: _controller,
            onSend: _sendMessage,
            isLoading: _isLoading,
          ),
        ],
      ),
    );
  }
}
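
The build method above references two private widgets. A minimal sketch of each (the styling choices are illustrative, not part of any framework API):

```dart
class _MessageBubble extends StatelessWidget {
  const _MessageBubble({required this.message});
  final ChatMessage message;

  @override
  Widget build(BuildContext context) {
    final isUser = message.role == 'user';
    final colors = Theme.of(context).colorScheme;
    return Align(
      alignment: isUser ? Alignment.centerRight : Alignment.centerLeft,
      child: Container(
        margin: const EdgeInsets.symmetric(vertical: 4),
        padding: const EdgeInsets.all(12),
        constraints: const BoxConstraints(maxWidth: 280),
        decoration: BoxDecoration(
          color: isUser ? colors.primaryContainer : colors.secondaryContainer,
          borderRadius: BorderRadius.circular(12),
        ),
        child: Text(message.content),
      ),
    );
  }
}

class _InputBar extends StatelessWidget {
  const _InputBar({
    required this.controller,
    required this.onSend,
    required this.isLoading,
  });

  final TextEditingController controller;
  final VoidCallback onSend;
  final bool isLoading;

  @override
  Widget build(BuildContext context) {
    return SafeArea(
      child: Padding(
        padding: const EdgeInsets.fromLTRB(16, 8, 8, 8),
        child: Row(
          children: [
            Expanded(
              child: TextField(
                controller: controller,
                decoration: const InputDecoration(
                  hintText: 'Ask anything...',
                ),
                onSubmitted: (_) => onSend(),
              ),
            ),
            IconButton(
              icon: const Icon(Icons.send),
              // Disable sending while a response is in flight
              onPressed: isLoading ? null : onSend,
            ),
          ],
        ),
      ),
    );
  }
}
```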

Implementing Streaming Responses

For a ChatGPT-like experience, you want responses to appear word by word. Callable functions buffer the entire response, so streaming needs a plain HTTP (onRequest) Cloud Function that relays OpenAI's Server-Sent Events (SSE) to the client:

// streaming_ai_service.dart
import 'dart:async';
import 'dart:convert';
import 'package:http/http.dart' as http;
import 'package:firebase_auth/firebase_auth.dart';

import 'ai_service.dart' show AIException; // Reuse the exception type

class StreamingAIService {
  final String _functionUrl;

  StreamingAIService(this._functionUrl);

  Stream<String> streamChat(
    List<Map<String, String>> messages,
  ) async* {
    final token =
        await FirebaseAuth.instance.currentUser?.getIdToken();
    if (token == null) throw AIException('Not authenticated');

    final request = http.Request('POST', Uri.parse(_functionUrl));
    request.headers.addAll({
      'Content-Type': 'application/json',
      'Authorization': 'Bearer $token',
    });
    request.body = jsonEncode({
      'messages': messages,
      'stream': true,
    });

    // Keep a reference so the client can be closed when the stream ends
    final client = http.Client();
    final response = await client.send(request);

    try {
      await for (final chunk in response.stream
          .transform(utf8.decoder)
          .transform(const LineSplitter())) {
        if (chunk.startsWith('data: ') && chunk != 'data: [DONE]') {
          final json = jsonDecode(chunk.substring(6));
          final content =
              json['choices']?[0]?['delta']?['content'] as String?;
          if (content != null) {
            yield content;
          }
        }
      }
    } finally {
      client.close();
    }
  }
}
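
The line parsing inside streamChat is the fiddly part: the `data: ` prefix, the `[DONE]` sentinel, and keep-alive comment lines. The same extraction logic as a standalone TypeScript function, handy for unit-testing on the server side (extractDelta is a hypothetical helper, not part of any SDK):

```typescript
// Extract the content delta from one SSE line of an OpenAI
// chat-completions stream. Returns null for non-data lines,
// the [DONE] sentinel, and chunks without content.
function extractDelta(line: string): string | null {
  if (!line.startsWith('data: ')) return null;
  const payload = line.slice('data: '.length).trim();
  if (payload === '[DONE]') return null;
  try {
    const parsed = JSON.parse(payload);
    return parsed.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null; // Malformed or partial JSON -- skip the line
  }
}
```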

Use this in your chat screen with a StreamBuilder or by appending to the current message as chunks arrive:

// In your chat screen
Future<void> _sendMessageStreaming() async {
  final text = _controller.text.trim();
  if (text.isEmpty) return;

  setState(() {
    _messages.add(ChatMessage(role: 'user', content: text));
    _messages.add(ChatMessage(role: 'assistant', content: ''));
    _isLoading = true;
  });
  _controller.clear();

  try {
    final stream = _streamingService.streamChat(
      _messages
          .where((m) => m.content.isNotEmpty)
          .map((m) => m.toApiMessage())
          .toList(),
    );

    await for (final chunk in stream) {
      if (!mounted) return; // Stop if the screen was disposed mid-stream
      setState(() {
        _messages.last = ChatMessage(
          role: 'assistant',
          content: _messages.last.content + chunk,
        );
      });
      _scrollToBottom();
    }
  } catch (e) {
    if (!mounted) return;
    setState(() {
      _messages.last = ChatMessage(
        role: 'assistant',
        content: 'Error: ${e.toString()}',
      );
    });
  } finally {
    if (mounted) setState(() => _isLoading = false);
  }
}

Adding Image Generation with DALL-E

Image generation adds a compelling visual AI feature. Add a Cloud Function for DALL-E:

// functions/src/index.ts (add to existing file)
export const generateImage = functions
  .runWith({ secrets: ['OPENAI_API_KEY'] })
  .https.onCall(async (data, context) => {
    if (!context.auth) {
      throw new functions.https.HttpsError(
        'unauthenticated',
        'Must be signed in.'
      );
    }

    const { prompt, size = '1024x1024' } = data;

    if (!prompt || typeof prompt !== 'string') {
      throw new functions.https.HttpsError(
        'invalid-argument',
        'Prompt is required.'
      );
    }

    const response = await openai.images.generate({
      model: 'dall-e-3',
      prompt,
      size,
      n: 1,
    });

    return { url: response.data?.[0]?.url ?? '' };
  }
);

Display the generated image in Flutter:

// image_generation_screen.dart
import 'package:flutter/material.dart';

import 'ai_service.dart';
class ImageGenerationScreen extends StatefulWidget {
  const ImageGenerationScreen({super.key});

  @override
  State<ImageGenerationScreen> createState() =>
      _ImageGenerationScreenState();
}

class _ImageGenerationScreenState
    extends State<ImageGenerationScreen> {
  final _controller = TextEditingController();
  String? _imageUrl;
  bool _isGenerating = false;

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  Future<void> _generate() async {
    if (_controller.text.isEmpty) return;
    setState(() => _isGenerating = true);

    try {
      final aiService = AIService();
      final url =
          await aiService.generateImage(_controller.text);
      if (!mounted) return; // Context may be gone after the await
      setState(() => _imageUrl = url);
    } catch (e) {
      if (!mounted) return;
      ScaffoldMessenger.of(context).showSnackBar(
        SnackBar(content: Text(e.toString())),
      );
    } finally {
      if (mounted) setState(() => _isGenerating = false);
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: const Text('Image Generator')),
      body: Padding(
        padding: const EdgeInsets.all(16),
        child: Column(
          children: [
            TextField(
              controller: _controller,
              decoration: const InputDecoration(
                hintText: 'Describe the image you want...',
              ),
            ),
            const SizedBox(height: 16),
            FilledButton(
              onPressed: _isGenerating ? null : _generate,
              child: _isGenerating
                  ? const CircularProgressIndicator()
                  : const Text('Generate Image'),
            ),
            const SizedBox(height: 24),
            if (_imageUrl != null)
              ClipRRect(
                borderRadius: BorderRadius.circular(16),
                child: Image.network(
                  _imageUrl!,
                  fit: BoxFit.cover,
                ),
              ),
          ],
        ),
      ),
    );
  }
}

Architecture Best Practices

Here is how to structure AI features in a production Flutter app:

  • Use BLoC for state management — Keep AI chat state in a dedicated ChatBloc that handles message sending, streaming, and error states. This keeps widgets thin and testable.
  • Cache conversations locally — Use Hive or SQLite to persist chat history. Users expect to see previous conversations when they reopen the app.
  • Implement rate limiting — Add rate limiting in your Cloud Function to prevent abuse. Track usage per user and show remaining credits in the UI.
  • Handle token limits — GPT-4o has a 128K-token context window. Trim older messages from the conversation as you approach the limit, keeping the system message and the most recent turns.
  • Gate AI features behind premium — Use RevenueCat entitlements to restrict AI features to paying subscribers. Offer a free tier with limited messages per day.
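
The token-limit bullet can be sketched as a small server-side helper. The character-based count below is a rough stand-in for a real tokenizer such as tiktoken, and trimMessages is an illustrative name, not an SDK function:

```typescript
interface ChatMsg {
  role: string;
  content: string;
}

// Very rough heuristic: ~4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep the system message (if present) plus as many of the most
// recent messages as fit under the token budget.
function trimMessages(messages: ChatMsg[], maxTokens: number): ChatMsg[] {
  const system = messages[0]?.role === 'system' ? messages[0] : null;
  const rest = system ? messages.slice(1) : messages;

  let budget = maxTokens - (system ? estimateTokens(system.content) : 0);
  const kept: ChatMsg[] = [];
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (cost > budget) break;
    kept.unshift(rest[i]);
    budget -= cost;
  }
  return system ? [system, ...kept] : kept;
}
```

Run this on the incoming messages array in the Cloud Function before calling OpenAI, so clients cannot blow past the context window.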

Cost Management

OpenAI API costs can add up quickly. Here is what to expect:

Model        | Input Cost       | Output Cost       | Best For
GPT-4o mini  | $0.15/1M tokens  | $0.60/1M tokens   | Simple tasks, high volume
GPT-4o       | $2.50/1M tokens  | $10.00/1M tokens  | Complex reasoning, quality
DALL-E 3     | $0.04-0.08/image (flat, no token pricing) | Image generation

For a typical AI chat app with 1,000 daily active users sending 10 messages each, expect $50-200/month with GPT-4o mini or $500-2,000/month with GPT-4o. Gate premium models behind your subscription paywall to ensure the cost is covered by revenue.
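
Those ranges are easy to sanity-check with back-of-envelope math. The per-message token counts below (500 input, 300 output) are assumptions for illustration:

```typescript
// Rough monthly cost for a chat feature, given per-million-token pricing.
function monthlyCost(
  dailyUsers: number,
  messagesPerUserPerDay: number,
  inputTokensPerMsg: number,
  outputTokensPerMsg: number,
  inputPricePerM: number,
  outputPricePerM: number,
): number {
  const messagesPerMonth = dailyUsers * messagesPerUserPerDay * 30;
  const inputCost = (messagesPerMonth * inputTokensPerMsg / 1_000_000) * inputPricePerM;
  const outputCost = (messagesPerMonth * outputTokensPerMsg / 1_000_000) * outputPricePerM;
  return inputCost + outputCost;
}

// 1,000 DAU x 10 messages/day, assuming 500 input / 300 output tokens per message
const mini = monthlyCost(1000, 10, 500, 300, 0.15, 0.60);  // ~$76.5/month
const full = monthlyCost(1000, 10, 500, 300, 2.50, 10.00); // ~$1,275/month
```

Both figures land inside the ranges quoted above under those per-message assumptions.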

Skip the Setup -- Use The Flutter Kit

Everything I covered -- the Cloud Functions proxy, the AI service layer, the streaming chat UI, image generation, conversation caching, and rate limiting -- is already built and tested in The Flutter Kit. You get a complete AI integration that works on both iOS and Android, with the OpenAI key securely stored server-side and all the UI components ready to customize.

Get The Flutter Kit for $69 and start building AI features into your app today. Or browse the feature list to see everything that is included.

