Getting Started
The MLC provider enables you to run large language models directly on-device in React Native applications. Popular models such as Llama, Phi-3, Mistral, and Qwen run entirely locally, giving you privacy, performance, and offline capability.
Installation
Install the MLC provider:
```bash
npm install @react-native-ai/mlc
cd ios && pod install
```
While you can use the MLC provider standalone, we recommend using it with the Vercel AI SDK for a much better developer experience: unified APIs, streaming support, and advanced features. To use it with the AI SDK, you'll need v5 and the required polyfills, installed as shown below.
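A sketch of that setup, assuming the standard `ai` package for AI SDK v5. Which polyfills you need depends on the web APIs your React Native runtime is missing, so treat the polyfill package below as an illustrative example rather than a fixed requirement:

```bash
# Vercel AI SDK (v5 or later)
npm install ai

# React Native runtimes often lack web APIs the SDK relies on
# (e.g. ReadableStream, TextEncoderStream); web-streams-polyfill is one
# common option. Check the AI SDK docs for the exact set your setup needs.
npm install web-streams-polyfill
```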
Requirements
- React Native New Architecture - Required for native module functionality (see the note after this list for enabling it)
- Increased Memory Limit capability - Required for large model loading
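On React Native 0.76 and later the New Architecture is enabled by default. On earlier versions you opt in per platform; a sketch of the standard flags (verify against the React Native docs for your version):

```bash
# Android: opt in via gradle.properties
echo "newArchEnabled=true" >> android/gradle.properties

# iOS: reinstall pods with the New Architecture flag
cd ios && RCT_NEW_ARCH_ENABLED=1 pod install
```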
Configuration
iOS
Add the "Increased Memory Limit" capability in Xcode (or set the entitlement directly, as shown after these steps):
- Open your iOS project in Xcode
- Go to Signing & Capabilities tab
- Add "Increased Memory Limit" capability
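This capability maps to the com.apple.developer.kernel.increased-memory-limit entitlement, so if you keep entitlements in source control you can add it to your app's .entitlements file directly. The contents below assume an otherwise empty entitlements file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Raises the per-process memory limit so large models can load -->
  <key>com.apple.developer.kernel.increased-memory-limit</key>
  <true/>
</dict>
</plist>
```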

Basic Usage
Import the MLC provider and use it with the AI SDK:
```typescript
import { mlc } from '@react-native-ai/mlc';
import { generateText } from 'ai';

// Create a model instance, then download the weights and load them into memory
const model = mlc.languageModel('Llama-3.2-3B-Instruct');
await model.download();
await model.prepare();

const result = await generateText({
  model,
  prompt: 'Explain quantum computing in simple terms',
});

console.log(result.text);
```
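Streaming works the same way once the model is prepared; a minimal sketch using the AI SDK's streamText with the same model setup:

```typescript
import { mlc } from '@react-native-ai/mlc';
import { streamText } from 'ai';

const model = mlc.languageModel('Llama-3.2-3B-Instruct');
await model.download();
await model.prepare();

// Stream tokens as they arrive instead of waiting for the full response
const { textStream } = streamText({
  model,
  prompt: 'Explain quantum computing in simple terms',
});

for await (const chunk of textStream) {
  console.log(chunk);
}
```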
Next Steps
- Model Management - Complete guide to model lifecycle, available models, and API reference
- Generating - Learn how to generate text and stream responses