100% Apple Foundation Models β SDK Compatible Implementation
OpenFoundationModels is a complete open-source implementation of Apple's Foundation Models framework (iOS 26 / macOS 26, Xcode 26 beta 3), providing 100% API compatibility while enabling use outside Apple's ecosystem.
Apple Foundation Models is an excellent framework, but it has significant limitations:
- Apple Intelligence Required: Only available on Apple Intelligence-enabled devices
- Apple Platform Exclusive: Works only on iOS 26+ and macOS 26+
- Provider Locked: Only Apple-provided models supported
- On-Device Only: No integration with external LLM services
OpenFoundationModels solves these limitations as an Apple-compatible alternative implementation:
// Apple Foundation Models (Apple ecosystem only)
import FoundationModels
// OpenFoundationModels (works everywhere)
import OpenFoundationModels
// 🎯 100% API Compatible - No code changes required
let session = LanguageModelSession(
    model: SystemLanguageModel.default,
    guardrails: .default,
    tools: [],
    instructions: nil
)
✅ Apple Official API Compliant: Code migration with just an import change
✅ Multi-Platform: Works on Linux, Windows, Android, etc.
✅ Provider Choice: OpenAI, Anthropic, local models, and more
✅ Enterprise Ready: Integrates with existing infrastructure
Get started with OpenFoundationModels in minutes:
# Clone and run sample chat applications
git clone https://github.com/1amageek/OpenFoundationModels-Samples.git
cd OpenFoundationModels-Samples
# Option 1: On-device chat (no setup required)
swift run foundation-chat
# Option 2: OpenAI-powered chat
export OPENAI_API_KEY="your_api_key_here"
swift run openai-chat
import OpenFoundationModels
// Apple's official API - works everywhere
let session = LanguageModelSession()
let response = try await session.respond {
    Prompt("Hello, OpenFoundationModels!")
}
print(response.content)
import OpenFoundationModels
import OpenFoundationModelsOpenAI
let provider = OpenAIProvider(apiKey: "your_key")
let session = LanguageModelSession(model: provider.gpt4o)
let response = try await session.respond {
    Prompt("Explain Swift concurrency")
}
┌─────────────────────────────────────────────────────────┐
│ Application Layer │
├─────────────────────────────────────────────────────────┤
│ LanguageModelSession │ SystemLanguageModel │ Tools │
├─────────────────────────────────────────────────────────┤
│ Response<T> │ ResponseStream<T> │ @Macro │
├─────────────────────────────────────────────────────────┤
│ Generable Protocol │ GenerationSchema │ Transcript │
├─────────────────────────────────────────────────────────┤
│ Provider Abstraction │
├─────────────────────────────────────────────────────────┤
│ OpenAI │ Anthropic │ Local Models │ Mock │
└─────────────────────────────────────────────────────────┘
Apple's official model access point
public final class SystemLanguageModel: LanguageModel, Observable, Sendable {
    /// Apple Official: Single default model instance
    public static let `default`: SystemLanguageModel
    /// Apple Official: Model availability status
    public var availability: AvailabilityStatus { get }
    /// Apple Official: Convenience availability property
    public var isAvailable: Bool { get }
}
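A quick availability check before creating a session might look like this (a sketch that uses only the properties declared above; error-handling details are omitted):

```swift
import OpenFoundationModels

let model = SystemLanguageModel.default

if model.isAvailable {
    // Safe to create a session backed by the default model
    let session = LanguageModelSession(model: model)
    // ... use the session
} else {
    // `availability` carries the detailed status when the model is unavailable
    print("Model unavailable: \(model.availability)")
}
```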
Main class managing conversation state and context
public final class LanguageModelSession: Observable, @unchecked Sendable {
    /// Apple Official initialization pattern
    public convenience init(
        model: SystemLanguageModel = SystemLanguageModel.default,
        guardrails: Guardrails = .default,
        tools: [any Tool] = [],
        instructions: Instructions? = nil
    )

    /// Apple Official response generation (closure-based)
    public func respond(
        options: GenerationOptions = .default,
        isolation: isolated (any Actor)? = nil,
        prompt: () throws -> Prompt
    ) async throws -> Response<String>

    /// Apple Official structured generation
    public func respond<Content: Generable>(
        generating: Content.Type,
        options: GenerationOptions = .default,
        includeSchemaInPrompt: Bool = true,
        isolation: isolated (any Actor)? = nil,
        prompt: () throws -> Prompt
    ) async throws -> Response<Content>
}
Core protocol for type-safe structured data generation
public protocol Generable: ConvertibleFromGeneratedContent,
                           ConvertibleToGeneratedContent,
                           PartiallyGenerable,
                           Sendable,
                           SendableMetatype {
    /// Apple Official: Compile-time schema generation
    static var generationSchema: GenerationSchema { get }
    /// Apple Official: Conversion from GeneratedContent
    static func from(generatedContent: GeneratedContent) throws -> Self
}
Protocol for LLM function execution
public protocol Tool: Sendable, SendableMetatype {
    associatedtype Arguments: Generable
    /// Apple Official: Tool name
    static var name: String { get }
    /// Apple Official: Tool description
    static var description: String { get }
    /// Apple Official: Execution method
    func call(arguments: Arguments) async throws -> ToolOutput
}
Type-safe response processing and streaming
/// Apple Official: Generic response
public struct Response<Content: Sendable>: Sendable {
    public let content: Content
    public let transcriptEntries: ArraySlice<Transcript.Entry>
}

/// Apple Official: Streaming response
public struct ResponseStream<Content: Sendable>: AsyncSequence, Sendable {
    public typealias Element = Response<Content>.Partial
}
dependencies: [
    .package(url: "https://github.com/1amageek/OpenFoundationModels.git", from: "1.0.0")
]
dependencies: [
    .package(url: "https://github.com/1amageek/OpenFoundationModels.git", from: "1.0.0"),
    .package(url: "https://github.com/1amageek/OpenFoundationModels-OpenAI.git", from: "1.0.0")
]
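For reference, a complete Package.swift wiring the dependency into an executable target might look like the sketch below. The package name, target name, and product name are placeholders/assumptions, not values taken from the repository:

```swift
// swift-tools-version: 6.1
import PackageDescription

let package = Package(
    name: "MyChatApp", // placeholder name
    dependencies: [
        .package(url: "https://github.com/1amageek/OpenFoundationModels.git", from: "1.0.0")
    ],
    targets: [
        .executableTarget(
            name: "MyChatApp", // placeholder name
            dependencies: [
                // Product name assumed to match the package name
                .product(name: "OpenFoundationModels", package: "OpenFoundationModels")
            ]
        )
    ]
)
```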
Try the complete sample applications immediately:
# Clone samples repository
git clone https://github.com/1amageek/OpenFoundationModels-Samples.git
cd OpenFoundationModels-Samples
# Run on-device chat (no API key required)
swift run foundation-chat
# Run OpenAI chat (requires API key)
export OPENAI_API_KEY="your_api_key_here"
swift run openai-chat
import OpenFoundationModels
// Check model availability
let model = SystemLanguageModel.default
guard model.isAvailable else {
    print("Model not available")
    return
}
// Create session (Apple Official API)
let session = LanguageModelSession(
    model: model,
    guardrails: .default,
    tools: [],
    instructions: nil
)

// Apple Official closure-based prompt
let response = try await session.respond {
    Prompt("Tell me about Swift 6.1 new features")
}
print(response.content)
// Apple Official @Generable macro (fully implemented)
@Generable
struct ProductReview {
    @Guide(description: "Product name", .pattern("^[A-Za-z0-9\\s]+$"))
    let productName: String
    @Guide(description: "Rating score", .range(1...5))
    let rating: Int
    @Guide(description: "Review comment", .count(50...500))
    let comment: String
    @Guide(description: "Recommendation", .enumeration(["Highly Recommend", "Recommend", "Neutral", "Not Recommend"]))
    let recommendation: String
}
// Generate structured data
let response = try await session.respond(
    generating: ProductReview.self,
    includeSchemaInPrompt: true
) {
    Prompt("Generate a review for iPhone 15 Pro")
}
// Type-safe access
print("Product: \(response.content.productName)")
print("Rating: \(response.content.rating)/5")
print("Comment: \(response.content.comment)")
// Apple Official streaming API
let stream = session.streamResponse {
    Prompt("Explain the history of Swift programming language in detail")
}

for try await partial in stream {
    print(partial.content, terminator: "")
    if partial.isComplete {
        print("\n--- Generation Complete ---")
        break
    }
}
@Generable
struct BlogPost {
    let title: String
    let content: String
    let tags: [String]
}

let stream = session.streamResponse(
    generating: BlogPost.self
) {
    Prompt("Write a blog post about Swift Concurrency")
}
for try await partial in stream {
    // Partial values arrive as the model generates; fields fill in over time
    let post = partial.content
    print("Title: \(post.title)")
    print("Progress: \(post.content.count) characters")
    if partial.isComplete {
        print("Article generation complete!")
    }
}
// Apple Official Tool protocol implementation
struct WeatherTool: Tool {
    typealias Arguments = WeatherQuery

    static let name = "get_weather"
    static let description = "Get current weather for a city"

    func call(arguments: WeatherQuery) async throws -> ToolOutput {
        // Weather API call (implementation example)
        let weather = try await fetchWeather(city: arguments.city)
        return ToolOutput("🌤️ Weather in \(arguments.city): \(weather)")
    }
}

@Generable
struct WeatherQuery {
    @Guide(description: "City name", .pattern("^[\\p{L}\\s]+$"))
    let city: String
}
// Session with tools
let session = LanguageModelSession(
    model: SystemLanguageModel.default,
    guardrails: .default,
    tools: [WeatherTool()],
    instructions: nil
)

let response = try await session.respond {
    Prompt("What's the weather like in Tokyo today?")
}
// LLM automatically calls WeatherTool and incorporates results
print(response.content)
// Apple Official @InstructionsBuilder pattern
let session = LanguageModelSession {
    "You are a helpful and knowledgeable Swift programming instructor."
    "Explain concepts clearly with practical examples for beginners."
    "Include appropriate comments in code samples."
}
// Guardrails configuration
let guardrails = Guardrails(
    allowedTopics: ["programming", "swift", "technology"],
    restrictedContent: ["personal_info", "financial_advice"],
    maxResponseLength: 1000
)

let session = LanguageModelSession(
    model: SystemLanguageModel.default,
    guardrails: guardrails,
    tools: [],
    instructions: Instructions("Swift technical advisor specialist")
)
import OpenFoundationModels
import OpenFoundationModelsOpenAI
// Initialize OpenAI provider
let openAIProvider = OpenAIProvider(apiKey: "your_api_key_here")
// Create session with OpenAI model (same Apple API!)
let session = LanguageModelSession(
    model: openAIProvider.gpt4o, // GPT-4o model
    guardrails: .default,
    tools: [],
    instructions: nil
)

// Same Apple API, powered by OpenAI
let response = try await session.respond {
    Prompt("Explain quantum computing in simple terms")
}
print(response.content)
// Structured generation with OpenAI
@Generable
struct TechnicalExplanation {
    @Guide(description: "Main concept", .count(20...100))
    let concept: String
    @Guide(description: "Simple explanation", .count(100...300))
    let explanation: String
    @Guide(description: "Real-world applications", .count(50...200))
    let applications: [String]
}

let structuredResponse = try await session.respond(
    generating: TechnicalExplanation.self
) {
    Prompt("Explain quantum computing")
}
print("Concept: \(structuredResponse.content.concept)")
print("Explanation: \(structuredResponse.content.explanation)")
# Run all tests
swift test
# Category-specific tests
swift test --filter tag:generable # Structured generation tests
swift test --filter tag:core # Core API tests
swift test --filter tag:integration # Integration tests
swift test --filter tag:performance # Performance tests
- ✅ SystemLanguageModel: 100% Apple official specification compliance
- ✅ LanguageModelSession: All initialization patterns supported
- ✅ Tool Protocol: SendableMetatype conformance verified
- ✅ Generable Protocol: Fully implemented
- ✅ Response/ResponseStream: Generic type support
- ✅ @Generable Macro: Complete functionality verified
- ✅ Transcript: All nested types implemented
For detailed verification information, see TESTING.md.
# Build the framework
swift build
# Format source code
swift-format --in-place --recursive Sources/ Tests/
# Generate DocC documentation
swift package generate-documentation
OpenFoundationModels provides a complete ecosystem with core framework, provider integrations, and sample applications:
- OpenFoundationModels - Apple Foundation Models compatible core framework
- 100% API compatibility with Apple's official specification
- 154 tests passing with comprehensive coverage
- OpenFoundationModels-OpenAI ✅ Complete
- Full GPT model support (GPT-4o, GPT-4o Mini, GPT-4 Turbo, o1, o1-pro, o3, o3-pro, o4-mini)
- Streaming and multimodal capabilities
- Production-ready with rate limiting and error handling
- OpenFoundationModels-Samples ✅ Complete
- foundation-chat: On-device chat using Apple's SystemLanguageModel
- openai-chat: Cloud-based chat using OpenAI models
- Interactive CLI applications with full streaming support
Provider adapters can be added for:
- Anthropic (Claude 3 Haiku, Sonnet, Opus, etc.)
- Google (Gemini Pro, Ultra, etc.)
- Local Models (Ollama, llama.cpp, etc.)
- Azure OpenAI Service
- AWS Bedrock
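As a rough illustration of what such an adapter could look like, the skeleton below targets a local Ollama server. The `LanguageModel` protocol's requirements are not shown in this README, so the method shape here is an assumption, not the real conformance surface; the endpoint and field names follow Ollama's public HTTP API:

```swift
import Foundation
import OpenFoundationModels

// Hypothetical skeleton for a local Ollama adapter.
// NOTE: the generate(prompt:) requirement below is illustrative only;
// a real adapter would conform to the framework's model abstraction.
struct OllamaModel {
    let endpoint = URL(string: "http://localhost:11434/api/generate")!
    let modelName: String // e.g. "llama3"

    func generate(prompt: String) async throws -> String {
        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        // "stream": false asks Ollama for a single JSON object
        // instead of a stream of chunked responses.
        let body: [String: Any] = ["model": modelName, "prompt": prompt, "stream": false]
        request.httpBody = try JSONSerialization.data(withJSONObject: body)
        let (data, _) = try await URLSession.shared.data(for: request)
        // Parsing the JSON "response" field is elided for brevity.
        return String(decoding: data, as: UTF8.self)
    }
}
```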
- Warning-Free Compilation: Zero compiler warnings
- Memory Efficient: Proper memory management with transcript compaction
- Concurrent: Full Swift 6.1+ concurrency support
- 154 Tests Passing: Comprehensive test coverage
- Type Safe: Generic response system with compile-time checking
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
- Clone the repository
- Run swift test to verify everything works
- Implement your changes
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- Apple for the Foundation Models framework design and API
- The Swift community for excellent concurrency and macro tools
- Contributors and early adopters
- OpenFoundationModels-OpenAI - Complete OpenAI provider integration
- OpenFoundationModels-Samples - Sample chat applications and demos
- Swift OpenAI - OpenAI API client
- LangChain Swift - LangChain for Swift
- Ollama Swift - Ollama client for Swift
Note: This is an independent open-source implementation and is not affiliated with Apple Inc. Apple, Foundation Models, and related trademarks are property of Apple Inc.