Run Llama 3.2 and Gemma 2 locally in your iOS and macOS apps in 3 lines of code. Open-source, offline, and private.
To add Lightpack to your Xcode project:
- In Xcode, select "File" → "Add Packages..."
- In the search bar, enter the URL of the Lightpack repository: https://github.com/lightpack-run/lightpack.git
- Set the Dependency Rule: choose a version rule such as "Up to Next Major Version", or select "Branch" → "main" for the latest code
- In the "Add to Target" section, select your app target (e.g., HelloLightpack2)
- Click "Add Package"
After installation, you can import Lightpack in your Swift files:
```swift
import Lightpack
```
If you're developing a Swift package, add the following line to your Package.swift file's dependencies:
```swift
.package(url: "https://github.com/lightpack-run/lightpack.git", from: "0.0.6")
```
Then, include "Lightpack" as a dependency for your target:
```swift
.target(name: "YourTarget", dependencies: ["Lightpack"]),
```
You can then import Lightpack in your Swift files:
```swift
import Lightpack
```
Before you can use Lightpack, you'll need to obtain an API key. Follow these steps:
- Visit https://lightpack.run
- Sign up for an account if you haven't already
- Go to API Keys
- Click on "Create New" to generate a new API key
- Copy the generated API key
Once you have your API key, you can initialize Lightpack in your code like this:
```swift
let lightpack = Lightpack(apiKey: "your_api_key")
```
Replace "your_api_key" with the actual API key you copied from the Lightpack website.
Important: Keep your API key secure and never share it publicly or commit it to version control systems. Usage of API keys is subject to our Privacy Policy, which includes information on how we collect and process API usage data.
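One common way to keep the key out of your Swift sources is to read it from configuration at runtime. Here's a minimal sketch that loads it from Info.plist; the "LightpackAPIKey" entry name is illustrative and not part of the Lightpack API (for full secrecy you would typically inject the value via an untracked .xcconfig file):

```swift
import Foundation
import Lightpack

// A minimal sketch, assuming you add a "LightpackAPIKey" entry to Info.plist
// (the key name is illustrative). This keeps the secret out of your sources.
func makeLightpack() -> Lightpack? {
    guard let apiKey = Bundle.main.object(forInfoDictionaryKey: "LightpackAPIKey") as? String,
          !apiKey.isEmpty else {
        return nil // Missing key: fail gracefully instead of shipping a hardcoded secret
    }
    return Lightpack(apiKey: apiKey)
}
```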
Here's a simple example to get you started with Lightpack:
```swift
import Lightpack

let lightpack = Lightpack(apiKey: "your_api_key")

// chatModel is async, so call it from an async context such as a Task.
Task {
    do {
        let messages = [LPChatMessage(role: .user, content: "Why is the sky blue?")]
        var response = ""
        try await lightpack.chatModel("23a77013-fe73-4f26-9ab2-33d315a71924", messages: messages) { token in
            response += token
            print("Received token: \(token)")
        }
        print("Full response: \(response)")
    } catch {
        print("[Lightpack] Error: \(error)")
    }
}
```
This example initializes Lightpack with your API key and then uses the chatModel function to stream a response, token by token, from the specified model.
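If you omit the model ID, Lightpack falls back to the default model; this is the form the SwiftUI example below uses. A minimal variant:

```swift
// Same call without an explicit model ID; Lightpack uses the default model.
Task {
    do {
        let messages = [LPChatMessage(role: .user, content: "Why is the sky blue?")]
        try await lightpack.chatModel(messages: messages) { token in
            print(token, terminator: "")
        }
    } catch {
        print("[Lightpack] Error: \(error)")
    }
}
```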
Here's a basic SwiftUI example that demonstrates how to use Lightpack in a chat interface:
```swift
import SwiftUI
import Lightpack

struct ContentView: View {
    @StateObject private var lightpack = Lightpack(apiKey: "your_api_key")
    @State private var userInput = ""
    @State private var chatMessages: [LPChatMessage] = []
    @State private var isLoading = false

    var body: some View {
        VStack {
            ScrollView {
                ForEach(chatMessages, id: \.content) { message in
                    MessageView(message: message)
                }
            }
            HStack {
                TextField("Type a message", text: $userInput)
                    .textFieldStyle(RoundedBorderTextFieldStyle())
                Button("Send") {
                    sendMessage()
                }
                .disabled(userInput.isEmpty || isLoading)
            }
            .padding()
        }
    }

    func sendMessage() {
        let userMessage = LPChatMessage(role: .user, content: userInput)
        chatMessages.append(userMessage)
        userInput = ""
        isLoading = true
        Task {
            do {
                var assistantResponse = ""
                // Omitting the model ID uses the default model.
                try await lightpack.chatModel(messages: chatMessages) { token in
                    assistantResponse += token
                }
                let assistantMessage = LPChatMessage(role: .assistant, content: assistantResponse)
                chatMessages.append(assistantMessage)
            } catch {
                print("[Lightpack] Error: \(error)")
            }
            isLoading = false
        }
    }
}

struct MessageView: View {
    let message: LPChatMessage

    var body: some View {
        HStack {
            // Push user messages right, assistant messages left
            if message.role == .user {
                Spacer()
            }
            Text(message.content)
                .padding()
                .background(message.role == .user ? Color.blue : Color.gray)
                .foregroundColor(.white)
                .cornerRadius(10)
            if message.role == .assistant {
                Spacer()
            }
        }
        .padding(.horizontal)
    }
}

#Preview {
    ContentView()
}
```
This SwiftUI example creates a simple chat interface where:
- Users can type messages and send them to the AI model.
- The AI model's responses are displayed in the chat.
- Messages are visually differentiated between user and AI.
- A loading state is managed to prevent sending multiple messages while waiting for a response.
Remember to replace "your_api_key" with your actual Lightpack API key.
Lightpack supports various model families. Here's an overview of the available families:
| Family | Author | Parameters | License | Paper | Family ID |
|---|---|---|---|---|---|
| Llama 3.1 | Meta | 8B | Custom | Meta AI | 3dbcfe36-17fc-45b8-acb6-b3af2c320431 |
| Llama 3 | Meta | 8B | Custom | Meta AI | 4dd3eef8-c83e-4338-b7b9-17a9ae2a557e |
| Gemma 2 | Google | 9B | Custom | DeepMind | 50be08ec-d6a1-45c8-8c6f-efa34ee9ba17 |
| Gemma 1 | Google | 2B | Custom | DeepMind | 4464c014-d2ed-4be6-a8c5-0cf86c6c87ab |
| Phi 3 | Microsoft | Mini-4K | Custom | arXiv | 7d64ec31-667f-45bb-8d6e-fdb0dffe7fe4 |
| Mistral v0.3 | Mistral | 7B | Apache 2.0 | Mistral Docs | 3486641f-27ee-4eee-85be-68a1826873ca |
| Qwen2 | Alibaba | 0.5B, 1.5B, 7B | Apache 2.0 | arXiv | a9a97695-2573-4d12-99e0-371aae7ac009 |
| TinyLlama | Zhang Peiyuan | 1.1B | Apache 2.0 | arXiv | 75f98968-be6d-48c8-9b32-e76598e262be |
For more detailed information about each model, including available versions and specific capabilities, please refer to Lightpack Models.
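To illustrate how the Family IDs above are used, here's a minimal sketch that fetches the models in the Llama 3.1 family via getModels (shown in full later in this README); it assumes the other filter parameters have defaults:

```swift
// A minimal sketch, assuming getModels' other filter parameters are optional.
// "3dbcfe36-..." is the Llama 3.1 Family ID from the table above.
lightpack.getModels(familyIds: ["3dbcfe36-17fc-45b8-acb6-b3af2c320431"]) { result in
    switch result {
    case .success((let response, _)):
        response.models.forEach { print("Llama 3.1 model: \($0.title)") }
    case .failure(let error):
        print("Error: \(error)")
    }
}
```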
Lightpack is designed to be flexible and user-friendly. Here's an overview of the typical workflow:
- Get the model ID: You can use getModels() to show users a list of models to select from. If no model ID is provided, we use a default model.
- Download the model: Use downloadModel(modelId) to download the selected model.
- Load the model: Use loadModel(modelId) to load the model into the chat context window.
- Chat with the model: Use chatModel(modelId, messages) to interact with the loaded model.

For convenience, you can skip directly to using chatModel(). If any prerequisite steps (like downloading or loading) haven't been completed, Lightpack will handle them automatically. This means you can provide a seamless experience by just calling chatModel(), and we'll take care of using the default model ID, downloading, loading, and chat setup all at once.
However, for the best user experience, especially with larger models, we recommend handling these steps explicitly in your app's UI to provide progress feedback to the user.
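For example, here's a minimal sketch of the explicit flow, using the same function signatures as the snippets later in this README and the example model ID used throughout (progress UI omitted):

```swift
// A minimal sketch of the explicit download → load → chat workflow.
Task {
    do {
        let modelId = "23a77013-fe73-4f26-9ab2-33d315a71924"
        try await lightpack.downloadModel(modelId) // Download the model weights
        try await lightpack.loadModel(modelId)     // Load them into the chat context
        let messages = [LPChatMessage(role: .user, content: "Hello!")]
        try await lightpack.chatModel(modelId, messages: messages) { token in
            print(token, terminator: "")           // Stream the response token by token
        }
    } catch {
        print("[Lightpack] Error: \(error)")
    }
}
```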
Online vs. Offline Functionality

Here's a breakdown of which functions require an internet connection and which can work offline.

Online-only functions:
- downloadModel(modelId)
- getModels()
- getModelFamilies()
- resumeDownloadModel()

Offline-capable functions (if the model is already downloaded or downloading):
- loadModel(modelId)
- chatModel()
- pauseDownloadModel()
- cancelDownloadModel()
- removeModels()
- clearChat()
Note that while chatModel() can work offline with a downloaded model, it will automatically attempt to download the model if it's not available locally, which requires an internet connection.
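If your app needs guaranteed offline chat, one approach is to pre-download the model while a connection is available. Here's a minimal sketch using Apple's NWPathMonitor; the connectivity check is an assumption about your app's needs, not part of Lightpack:

```swift
import Network

// Wait for connectivity, then pre-download the example model so that
// chatModel() can later run fully offline.
let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    guard path.status == .satisfied else { return } // Still offline; keep waiting
    monitor.cancel() // Only trigger the download once
    Task {
        do {
            try await lightpack.downloadModel("23a77013-fe73-4f26-9ab2-33d315a71924")
            print("Model ready for offline use")
        } catch {
            print("[Lightpack] Pre-download failed: \(error)")
        }
    }
}
monitor.start(queue: .main)
```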
Here are quick code snippets for each of the core functions provided by Lightpack:
```swift
lightpack.getModels(
    bitMax: 8,
    bitMin: 0,
    familyIds: ["3dbcfe36-17fc-45b8-acb6-b3af2c320431"],
    modelIds: nil,
    page: 1,
    pageSize: 10,
    parameterIds: ["8B"],
    quantizationIds: ["Q4_K_M"],
    sizeMax: 5, // 5 GB
    sizeMin: 1, // 1 GB
    sort: "size:desc"
) { result in
    switch result {
    case .success((let response, let updatedModelIds)):
        print("Fetched \(response.models.count) models")
        print("Updated model IDs: \(updatedModelIds)")
        response.models.forEach { model in
            print("Model: \(model.title), Size: \(model.size) GB")
        }
    case .failure(let error):
        print("Error fetching models: \(error)")
    }
}
```
```swift
lightpack.getModelFamilies(
    familyIds: ["3dbcfe36-17fc-45b8-acb6-b3af2c320431"],
    modelParameterIds: ["8B"],
    page: 1,
    pageSize: 5,
    sort: "title:asc"
) { result in
    switch result {
    case .success((let response, let updatedFamilyIds)):
        print("Fetched \(response.modelFamilies.count) model families")
        print("Updated family IDs: \(updatedFamilyIds)")
        response.modelFamilies.forEach { family in
            print("Family: \(family.title), Parameters: \(family.modelParameterIds)")
        }
    case .failure(let error):
        print("Error fetching model families: \(error)")
    }
}
```
Replace the example model ID "23a77013-fe73-4f26-9ab2-33d315a71924" in the snippets below with your actual model ID.
```swift
// Download a model
Task {
    do {
        try await lightpack.downloadModel("23a77013-fe73-4f26-9ab2-33d315a71924")
        print("Model downloaded successfully")
    } catch {
        print("Error downloading model: \(error)")
    }
}
```

```swift
// Pause an in-progress download
Task {
    do {
        try await lightpack.pauseDownloadModel("23a77013-fe73-4f26-9ab2-33d315a71924")
        print("Model download paused")
    } catch {
        print("Error pausing download: \(error)")
    }
}
```

```swift
// Resume a paused download
Task {
    do {
        try await lightpack.resumeDownloadModel("23a77013-fe73-4f26-9ab2-33d315a71924")
        print("Model download resumed")
    } catch {
        print("Error resuming download: \(error)")
    }
}
```

```swift
// Cancel a download
Task {
    do {
        try await lightpack.cancelDownloadModel("23a77013-fe73-4f26-9ab2-33d315a71924")
        print("Model download cancelled")
    } catch {
        print("Error cancelling download: \(error)")
    }
}
```

```swift
// Remove downloaded models
Task {
    do {
        // Remove specific models
        try await lightpack.removeModels(modelIds: ["model_id_1", "model_id_2"], removeAll: false)
        // Or remove all models
        // try await lightpack.removeModels(removeAll: true)
        print("Models removed successfully")
    } catch {
        print("Error removing models: \(error)")
    }
}
```

```swift
// Load a model into the chat context
Task {
    do {
        try await lightpack.loadModel("23a77013-fe73-4f26-9ab2-33d315a71924")
        print("Model loaded and set as active")
    } catch {
        print("Error loading model: \(error)")
    }
}
```

```swift
// Chat with a model
Task {
    do {
        let messages = [
            LPChatMessage(role: .user, content: "Why is water blue?")
        ]
        try await lightpack.chatModel("23a77013-fe73-4f26-9ab2-33d315a71924", messages: messages) { token in
            print(token)
        }
    } catch {
        print("Error in chat: \(error)")
    }
}
```

```swift
// Clear the chat history
Task {
    do {
        try await lightpack.clearChat()
        print("Chat history cleared")
    } catch {
        print("Error clearing chat: \(error)")
    }
}
```
Lightpack provides a way to check for model updates using the getModels or getModelFamilies functions. If any updated models are returned, you can use this information to prompt users to update their local models or automatically update them in the background.
Here's how you can check for model updates:
```swift
func checkForModelUpdates() {
    lightpack.getModels(modelIds: ["23a77013-fe73-4f26-9ab2-33d315a71924"]) { result in
        switch result {
        case .success((_, let updatedModelIds)):
            if !updatedModelIds.isEmpty {
                print("Updates available for \(updatedModelIds.count) models")
                // Prompt the user to update, or update automatically
                for modelId in updatedModelIds {
                    updateModel(modelId: modelId)
                }
            } else {
                print("All models are up to date")
            }
        case .failure(let error):
            print("Error checking for updates: \(error)")
        }
    }
}

func updateModel(modelId: String) {
    Task {
        do {
            try await lightpack.downloadModel(modelId)
            print("Model \(modelId) updated successfully")
        } catch {
            print("Error updating model \(modelId): \(error)")
        }
    }
}
```
In this example:
- We use the getModels function to fetch the latest model information.
- We check the updatedModelIds array in the result to see if any models have updates available.
- If updates are available, we either prompt the user or automatically update the models using the downloadModel function.
You can call the checkForModelUpdates function periodically (e.g., once a day or when your app starts) to ensure your local models are up to date.
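For instance, here's a minimal sketch of a once-a-day gate around that check; the UserDefaults key name is illustrative and not part of Lightpack:

```swift
import Foundation

// Run the update check at most once per 24 hours, e.g., at app launch.
// "lastModelUpdateCheck" is an illustrative UserDefaults key.
func checkForModelUpdatesIfNeeded() {
    let key = "lastModelUpdateCheck"
    let lastCheck = UserDefaults.standard.object(forKey: key) as? Date ?? .distantPast
    guard Date().timeIntervalSince(lastCheck) > 86_400 else { return }
    checkForModelUpdates()
    UserDefaults.standard.set(Date(), forKey: key)
}
```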
Remember to handle the update process gracefully, especially for large model files, by showing progress to the user and allowing them to pause or cancel the update if needed.
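Pause and cancel can be wired to simple controls using the download functions shown earlier. A minimal sketch follows; how you surface numeric progress depends on the fields LPModel exposes, which this sketch doesn't assume:

```swift
import SwiftUI
import Lightpack

// Pause/resume/cancel controls for an in-flight update, using only the
// download-control functions shown earlier in this README.
struct UpdateControls: View {
    let lightpack: Lightpack
    let modelId: String

    var body: some View {
        HStack {
            Button("Pause") { Task { try? await lightpack.pauseDownloadModel(modelId) } }
            Button("Resume") { Task { try? await lightpack.resumeDownloadModel(modelId) } }
            Button("Cancel") { Task { try? await lightpack.cancelDownloadModel(modelId) } }
        }
    }
}
```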
Lightpack exposes several public variables that provide information about the current state of models and families. These variables are marked with @Published and can be observed in SwiftUI views or used in UIKit applications.

- models: [String: LPModel] is a dictionary of all models, keyed by model ID. Each LPModel contains information about a specific model, including its status, size, and other metadata.
- families: [String: LPModelFamily] is a dictionary of all model families, keyed by family ID. Each LPModelFamily contains information about a group of related models.
- loadedModel: LPModel? is the currently loaded model, if any. It is nil if no model is currently loaded.
- totalModelSize: Float is the total size of all downloaded models in GB.
You can access these variables directly from your Lightpack instance. Here's an example of how to use them:
```swift
let lightpack = Lightpack(apiKey: "your_api_key")

// Print information about all models
for (modelId, model) in lightpack.models {
    print("Model ID: \(modelId)")
    print("Model Title: \(model.title)")
    print("Model Status: \(model.status)")
    print("Model Size: \(model.size) GB")
    print("---")
}

// Print information about the currently loaded model
if let loadedModel = lightpack.loadedModel {
    print("Loaded Model: \(loadedModel.title)")
} else {
    print("No model currently loaded")
}

// Print the total size of all downloaded models
print("Total size of downloaded models: \(lightpack.totalModelSize) GB")
```
In SwiftUI, you can observe changes to these variables by creating a @StateObject of your Lightpack instance:
```swift
import SwiftUI
import Lightpack

struct ContentView: View {
    @StateObject var lightpack = Lightpack(apiKey: "your_api_key")

    var body: some View {
        VStack {
            Text("Number of models: \(lightpack.models.count)")
            Text("Number of families: \(lightpack.families.count)")
            if let loadedModel = lightpack.loadedModel {
                Text("Loaded model: \(loadedModel.title)")
            } else {
                Text("No model loaded")
            }
            Text("Total model size: \(lightpack.totalModelSize) GB")
        }
    }
}
```
This way, your view will automatically update whenever these variables change.
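In UIKit, where there is no automatic view invalidation, one option is to observe the instance through Combine. A minimal sketch, relying only on the fact that Lightpack works as a @StateObject above (and therefore conforms to ObservableObject):

```swift
import Combine

// objectWillChange fires just before a @Published property changes, so hop to
// the next main-queue cycle before reading the updated values.
var cancellables = Set<AnyCancellable>()
lightpack.objectWillChange
    .receive(on: DispatchQueue.main)
    .sink { _ in
        print("Models: \(lightpack.models.count), loaded: \(lightpack.loadedModel?.title ?? "none")")
    }
    .store(in: &cancellables)
```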
For more detailed information on each function, please refer to the inline documentation or the full API reference.
We welcome contributions to Lightpack! If you'd like to contribute, please follow these steps:
- Fork the repository on GitHub.
- Create a new branch for your feature or bug fix.
- Make your changes and commit them with clear, descriptive commit messages.
- Push your changes to your fork.
- Create a pull request from your fork to the main Lightpack repository.
To create a pull request:
- Navigate to the main page of the Lightpack repository.
- Click on "Pull requests" and then on the "New pull request" button.
- Select your fork and the branch containing your changes.
- Fill out the pull request template with a clear title and description of your changes.
- Click "Create pull request".
We'll review your pull request and provide feedback as soon as possible. Thank you for your contribution!
If you encounter a bug while using Lightpack, we appreciate your help in reporting it. Please follow these steps to submit a bug report:
- Go to the Issues page of the Lightpack repository on GitHub.
- Click on the "New issue" button.
- Choose the "Bug report" template if available, or start a blank issue.
- Provide a clear and descriptive title for the issue.
- In the body of the issue, please include:
- A detailed description of the bug
- Steps to reproduce the issue
- What you expected to happen
- What actually happened
- Your environment (OS version, Xcode version, Lightpack version, etc.)
- Any relevant code snippets or screenshots
- Click "Submit new issue" when you're done.
Before submitting a new bug report, please search the existing issues to see if someone has already reported the problem. If you find a similar issue, you can add additional information in the comments.
We appreciate your help in improving Lightpack!
Lightpack is designed with privacy in mind, operating primarily on-device. However, we do collect certain analytics and usage data to improve our services. For full details on what data we collect and how we use it, please refer to our Privacy Policy.
By using Lightpack, you agree to our Terms of Service. We encourage you to read these terms and our Privacy Policy to understand your rights and responsibilities when using our service.
Please see the model licenses in Available Model Families or Lightpack Models. Different models may have different licensing terms, so it's important to review the specific license for each model you intend to use.
If you need assistance or have any questions about Lightpack, including inquiries about our privacy practices or terms of service, please don't hesitate to reach out. You can contact the founders directly at founders@lightpack.run.
We strive to respond to all inquiries as quickly as possible and appreciate your feedback and questions. For more information on how we handle your data, please see our Privacy Policy.