# Generation Hooks

TanStack AI provides framework hooks for every generation type: image, speech, transcription, summarization, and video. Each hook connects to a server endpoint and manages loading, error, and result state for you.

## Overview

Generation hooks share a consistent API across all media types:

| Hook | Input | Result Type |
| --- | --- | --- |
| `useGenerateImage` | `ImageGenerateInput` | `ImageGenerationResult` |
| `useGenerateSpeech` | `SpeechGenerateInput` | `TTSResult` |
| `useTranscription` | `TranscriptionGenerateInput` | `TranscriptionResult` |
| `useSummarize` | `SummarizeGenerateInput` | `SummarizationResult` |
| `useGenerateVideo` | `VideoGenerateInput` | `VideoGenerateResult` |
| `useGeneration` | Generic `TInput` | Generic `TResult` |

Every hook returns the same core shape: `generate`, `result`, `isLoading`, `error`, `status`, `stop`, and `reset`. You provide either a `connection` (streaming transport) or a `fetcher` (direct async call).

## Server Setup

Before using the hooks on the client, you need a server endpoint that runs the generation and streams the result back as Server-Sent Events (SSE). Here's a minimal image generation endpoint:

```typescript
// routes/api/generate/image.ts
import { generateImage, toServerSentEventsResponse } from '@tanstack/ai'
import { openaiImage } from '@tanstack/ai-openai'

export async function POST(req: Request) {
  const { prompt, size, numberOfImages } = await req.json()

  const stream = generateImage({
    adapter: openaiImage('dall-e-3'),
    prompt,
    size,
    numberOfImages,
    stream: true,
  })

  return toServerSentEventsResponse(stream)
}
```

The same pattern applies to all generation types -- swap `generateImage` for `generateSpeech`, `generateTranscription`, `summarize`, or `generateVideo`. See the individual media guides for server-side details.

## useGenerateImage

Trigger image generation and render the results.

```tsx
import { useGenerateImage, fetchServerSentEvents } from '@tanstack/ai-react'
import { useState } from 'react'

function ImageGenerator() {
  const [prompt, setPrompt] = useState('')
  const { generate, result, isLoading, error, reset } = useGenerateImage({
    connection: fetchServerSentEvents('/api/generate/image'),
  })

  return (
    <div>
      <input
        value={prompt}
        onChange={(e) => setPrompt(e.target.value)}
        placeholder="Describe an image..."
      />
      <button
        onClick={() => generate({ prompt })}
        disabled={isLoading || !prompt.trim()}
      >
        {isLoading ? 'Generating...' : 'Generate'}
      </button>

      {error && <p>Error: {error.message}</p>}

      {result?.images.map((img, i) => (
        <img
          key={i}
          src={img.url || `data:image/png;base64,${img.b64Json}`}
          alt={img.revisedPrompt || 'Generated image'}
        />
      ))}

      {result && <button onClick={reset}>Clear</button>}
    </div>
  )
}
```

The `generate` function accepts an `ImageGenerateInput`:

| Field | Type | Description |
| --- | --- | --- |
| `prompt` | `string` | Text description of the desired image (required) |
| `numberOfImages` | `number` | Number of images to generate |
| `size` | `string` | Image size in `WIDTHxHEIGHT` format (e.g., `"1024x1024"`) |
| `modelOptions` | `Record<string, any>` | Model-specific options |
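As a sketch of how the `size` field is shaped, here is a small validation helper. It is a hypothetical utility, not part of TanStack AI, that could guard a call to `generate` against malformed size strings:

```typescript
// Hypothetical helper (not part of TanStack AI): validate the
// WIDTHxHEIGHT size string before passing it to generate().
function parseImageSize(size: string): { width: number; height: number } {
  const match = /^(\d+)x(\d+)$/.exec(size)
  if (!match) {
    throw new Error(`Invalid size "${size}" -- expected WIDTHxHEIGHT, e.g. "1024x1024"`)
  }
  return { width: Number(match[1]), height: Number(match[2]) }
}
```

You might call `parseImageSize(size)` before `generate({ prompt, size, numberOfImages })` to fail fast on a bad value instead of round-tripping to the server.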

## useGenerateSpeech

Convert text to speech and play it back.

```tsx
import { useGenerateSpeech, fetchServerSentEvents } from '@tanstack/ai-react'
import { useRef } from 'react'

function SpeechGenerator() {
  const audioRef = useRef<HTMLAudioElement>(null)
  const { generate, result, isLoading, error } = useGenerateSpeech({
    connection: fetchServerSentEvents('/api/generate/speech'),
  })

  return (
    <div>
      <button
        onClick={() => generate({ text: 'Hello, welcome to TanStack AI!', voice: 'alloy' })}
        disabled={isLoading}
      >
        {isLoading ? 'Generating...' : 'Generate Speech'}
      </button>

      {error && <p>Error: {error.message}</p>}

      {result && (
        <audio
          ref={audioRef}
          src={`data:audio/${result.format};base64,${result.audio}`}
          controls
          autoPlay
        />
      )}
    </div>
  )
}
```

The `generate` function accepts a `SpeechGenerateInput`:

| Field | Type | Description |
| --- | --- | --- |
| `text` | `string` | The text to convert to speech (required) |
| `voice` | `string` | The voice to use (e.g., `"alloy"`, `"echo"`) |
| `format` | `'mp3' \| 'opus' \| 'aac' \| 'flac' \| 'wav' \| 'pcm'` | Output audio format |
| `speed` | `number` | Audio speed (0.25 to 4.0) |
| `modelOptions` | `Record<string, any>` | Model-specific options |

The `TTSResult` contains `audio` (base64-encoded), `format`, and optionally `duration` and `contentType`.
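The data-URL string built inline in the example above can be factored into a small helper. This is an illustrative utility, not a library export:

```typescript
// Illustrative utility (not part of TanStack AI): build a playable
// data URL from a TTSResult's base64 audio and format fields.
function audioDataUrl(audio: string, format: string): string {
  return `data:audio/${format};base64,${audio}`
}
```

With it, the audio element becomes `<audio src={audioDataUrl(result.audio, result.format)} controls />`.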

## useTranscription

Transcribe audio files to text.

```tsx
import { useTranscription, fetchServerSentEvents } from '@tanstack/ai-react'

function Transcriber() {
  const { generate, result, isLoading, error } = useTranscription({
    connection: fetchServerSentEvents('/api/transcribe'),
  })

  const handleFile = (e: React.ChangeEvent<HTMLInputElement>) => {
    const file = e.target.files?.[0]
    if (file) {
      const reader = new FileReader()
      reader.onload = () => {
        generate({ audio: reader.result as string, language: 'en' })
      }
      reader.readAsDataURL(file)
    }
  }

  return (
    <div>
      <input type="file" accept="audio/*" onChange={handleFile} />

      {isLoading && <p>Transcribing...</p>}
      {error && <p>Error: {error.message}</p>}

      {result && (
        <div>
          <h3>Transcription</h3>
          <p>{result.text}</p>
          {result.language && <p>Language: {result.language}</p>}
          {result.duration && <p>Duration: {result.duration}s</p>}
        </div>
      )}
    </div>
  )
}
```

The `generate` function accepts a `TranscriptionGenerateInput`:

| Field | Type | Description |
| --- | --- | --- |
| `audio` | `string \| File \| Blob` | Audio data -- base64 string, File, or Blob (required) |
| `language` | `string` | Language in ISO-639-1 format (e.g., `"en"`) |
| `prompt` | `string` | Optional prompt to guide the transcription |
| `responseFormat` | `'json' \| 'text' \| 'srt' \| 'verbose_json' \| 'vtt'` | Output format |
| `modelOptions` | `Record<string, any>` | Model-specific options |

## useSummarize

Summarize long text with configurable output styles.

```tsx
import { useSummarize, fetchServerSentEvents } from '@tanstack/ai-react'
import { useState } from 'react'

function Summarizer() {
  const [text, setText] = useState('')
  const { generate, result, isLoading, error } = useSummarize({
    connection: fetchServerSentEvents('/api/summarize'),
  })

  return (
    <div>
      <textarea
        value={text}
        onChange={(e) => setText(e.target.value)}
        placeholder="Paste text to summarize..."
        rows={8}
      />
      <button
        onClick={() => generate({ text, style: 'bullet-points', maxLength: 200 })}
        disabled={isLoading || !text.trim()}
      >
        {isLoading ? 'Summarizing...' : 'Summarize'}
      </button>

      {error && <p>Error: {error.message}</p>}

      {result && (
        <div>
          <h3>Summary</h3>
          <p>{result.summary}</p>
        </div>
      )}
    </div>
  )
}
```

The `generate` function accepts a `SummarizeGenerateInput`:

| Field | Type | Description |
| --- | --- | --- |
| `text` | `string` | The text to summarize (required) |
| `maxLength` | `number` | Maximum length of the summary |
| `style` | `'bullet-points' \| 'paragraph' \| 'concise'` | Summary style |
| `focus` | `Array<string>` | Topics to focus on |
| `modelOptions` | `Record<string, any>` | Model-specific options |
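To see a few of these fields together, here is a hypothetical input builder (not part of the library) that defaults the style and rounds `maxLength` down to a positive integer:

```typescript
// Hypothetical builder (not part of TanStack AI): assemble a
// SummarizeGenerateInput-shaped object with sensible defaults.
type SummaryStyle = 'bullet-points' | 'paragraph' | 'concise'

function buildSummarizeInput(
  text: string,
  style: SummaryStyle = 'concise',
  maxLength?: number,
) {
  return {
    text,
    style,
    // Normalize to a positive whole number; leave undefined if unset.
    maxLength: maxLength != null ? Math.max(1, Math.floor(maxLength)) : undefined,
  }
}
```

A call like `generate(buildSummarizeInput(text, 'bullet-points', 200))` then matches the button handler in the example above.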

## useGenerateVideo

Video generation is asynchronous -- a job is created on the server, then polled for status until completion. The hook manages the full lifecycle and exposes `jobId` and `videoStatus` so you can show progress.

```tsx
import { useGenerateVideo, fetchServerSentEvents } from '@tanstack/ai-react'

function VideoGenerator() {
  const { generate, result, jobId, videoStatus, isLoading, error } =
    useGenerateVideo({
      connection: fetchServerSentEvents('/api/generate/video'),
      onStatusUpdate: (status) => {
        console.log(`Video ${status.jobId}: ${status.status} (${status.progress}%)`)
      },
    })

  return (
    <div>
      <button
        onClick={() => generate({ prompt: 'A flying car over a city', duration: 5 })}
        disabled={isLoading}
      >
        {isLoading ? 'Generating...' : 'Generate Video'}
      </button>

      {isLoading && videoStatus && (
        <div>
          <p>Job: {jobId}</p>
          <p>Status: {videoStatus.status}</p>
          {videoStatus.progress != null && (
            <progress value={videoStatus.progress} max={100} />
          )}
        </div>
      )}

      {error && <p>Error: {error.message}</p>}

      {result && (
        <video src={result.url} controls autoPlay style={{ maxWidth: '100%' }} />
      )}
    </div>
  )
}
```

The `generate` function accepts a `VideoGenerateInput`:

| Field | Type | Description |
| --- | --- | --- |
| `prompt` | `string` | Text description of the desired video (required) |
| `size` | `string` | Video size -- format depends on provider (e.g., `"16:9"`, `"1280x720"`) |
| `duration` | `number` | Video duration in seconds |
| `modelOptions` | `Record<string, any>` | Model-specific options |

`useGenerateVideo` returns two extra properties beyond the standard set:

| Property | Type | Description |
| --- | --- | --- |
| `jobId` | `string \| null` | The current job ID, set when the server creates a video job |
| `videoStatus` | `VideoStatusInfo \| null` | Live status updates with status, progress, and jobId |

The `VideoStatusInfo` type:

```typescript
interface VideoStatusInfo {
  jobId: string
  status: 'pending' | 'processing' | 'completed' | 'failed'
  progress?: number     // 0-100
  url?: string          // Set when completed
  error?: string        // Set when failed
}
```

The hook also accepts `onJobCreated` and `onStatusUpdate` callbacks for fine-grained tracking.
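For rendering, the status object maps naturally to a one-line label. The formatter below is an illustrative sketch, not a library export; it mirrors the `VideoStatusInfo` shape shown above for self-containment:

```typescript
// Mirrors the documented VideoStatusInfo shape.
interface VideoStatusInfo {
  jobId: string
  status: 'pending' | 'processing' | 'completed' | 'failed'
  progress?: number
  url?: string
  error?: string
}

// Illustrative formatter (not part of TanStack AI): turn a
// VideoStatusInfo into a short UI label for a progress panel.
function videoStatusLabel(info: VideoStatusInfo): string {
  switch (info.status) {
    case 'pending':
      return 'Queued...'
    case 'processing':
      return `Processing (${info.progress ?? 0}%)`
    case 'completed':
      return 'Done'
    case 'failed':
      return `Failed: ${info.error ?? 'unknown error'}`
  }
}
```

In the component above, `<p>Status: {videoStatusLabel(videoStatus)}</p>` would replace the raw `videoStatus.status` readout.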

## Base Hook: useGeneration

All specialized hooks are built on `useGeneration`. Use it directly when you have a custom generation type that doesn't fit the built-in hooks.

```tsx
import { useGeneration, fetchServerSentEvents } from '@tanstack/ai-react'

interface EmbeddingInput {
  text: string
  model?: string
}

interface EmbeddingResult {
  embedding: Array<number>
  model: string
  usage: { totalTokens: number }
}

function EmbeddingGenerator() {
  const { generate, result, isLoading, error } = useGeneration<
    EmbeddingInput,
    EmbeddingResult
  >({
    connection: fetchServerSentEvents('/api/generate/embedding'),
  })

  return (
    <div>
      <button onClick={() => generate({ text: 'Hello world' })} disabled={isLoading}>
        Generate Embedding
      </button>
      {result && <p>Dimensions: {result.embedding.length}</p>}
    </div>
  )
}
```

### Options

`UseGenerationOptions<TInput, TResult>` accepts:

| Option | Type | Description |
| --- | --- | --- |
| `connection` | `ConnectConnectionAdapter` | Streaming transport (SSE, HTTP stream, custom) |
| `fetcher` | `GenerationFetcher<TInput, TResult>` | Direct async function (no streaming protocol needed) |
| `id` | `string` | Unique identifier for this generation instance |
| `body` | `Record<string, any>` | Additional body parameters sent with connection requests |
| `onResult` | `(result: TResult) => TOutput \| null \| void` | Transform or react to results |
| `onError` | `(error: Error) => void` | Error callback |
| `onProgress` | `(progress: number, message?: string) => void` | Progress updates (0-100) |
| `onChunk` | `(chunk: StreamChunk) => void` | Per-chunk callback (connection mode only) |
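In fetcher mode, no streaming transport is involved: you supply a plain async function. A minimal factory might look like the following sketch, assuming a JSON POST endpoint; `makeJsonFetcher` is a hypothetical helper, not a library export:

```typescript
// Hypothetical factory (not part of TanStack AI): build a fetcher --
// a plain async function that POSTs the input as JSON and resolves
// with the parsed result, matching the shape the fetcher option expects.
function makeJsonFetcher<TInput, TResult>(url: string) {
  return async (input: TInput): Promise<TResult> => {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(input),
    })
    if (!res.ok) throw new Error(`Generation failed: ${res.status}`)
    return (await res.json()) as TResult
  }
}
```

With it, the embedding example above could swap its `connection` for `fetcher: makeJsonFetcher('/api/generate/embedding')` on a server that returns plain JSON instead of SSE.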

### Return Value

`UseGenerationReturn<TOutput>` provides:

| Property | Type | Description |
| --- | --- | --- |
| `generate` | `(input: TInput) => Promise<void>` | Trigger a generation request |
| `result` | `TOutput \| null` | The generation result, or null |
| `isLoading` | `boolean` | Whether a generation is in progress |
| `error` | `Error \| undefined` | Current error, if any |
| `status` | `GenerationClientState` | `'idle' \| 'generating' \| 'success' \| 'error'` |
| `stop` | `() => void` | Abort the current generation |
| `reset` | `() => void` | Clear result, error, and return to idle |

## Result Transforms

The `onResult` callback can transform what gets stored in `result`:

```tsx
const { result } = useGenerateImage({
  connection: fetchServerSentEvents('/api/generate/image'),
  onResult: (raw) => raw.images.map((img) => img.url || img.b64Json),
})
// result is now string[] instead of ImageGenerationResult
```

## Framework Variants

All generation hooks are available across React, Vue, and Svelte with the same capabilities. The API shapes are identical -- only the naming convention and reactive primitives differ.

| Generation Type | React (`@tanstack/ai-react`) | Vue (`@tanstack/ai-vue`) | Svelte (`@tanstack/ai-svelte`) |
| --- | --- | --- | --- |
| Image | `useGenerateImage` | `useGenerateImage` | `createGenerateImage` |
| Speech | `useGenerateSpeech` | `useGenerateSpeech` | `createGenerateSpeech` |
| Transcription | `useTranscription` | `useTranscription` | `createTranscription` |
| Summarization | `useSummarize` | `useSummarize` | `createSummarize` |
| Video | `useGenerateVideo` | `useGenerateVideo` | `createGenerateVideo` |
| Base (generic) | `useGeneration` | `useGeneration` | `createGeneration` |

All three packages re-export `fetchServerSentEvents`, `fetchHttpStream`, and `stream` from `@tanstack/ai-client` for convenience.

Vue note: Return values are wrapped in `DeepReadonly<ShallowRef<>>` -- access them with `.value` in both `<script>` and `<template>`.

Svelte note: Functions use the `create*` naming convention and return Svelte 5 reactive state via `$state`.

## Next Steps