AI Providers 🤖
Add powerful AI to your mobile app - without exposing your API keys!
ProtectMyAPI lets you use 20+ AI providers (ChatGPT, Claude, Stable Diffusion, etc.) directly from your iOS, Android, or Flutter app. Your API keys stay safe on our servers, and every request is verified to come from your real app.
Why use ProtectMyAPI for AI?
- ✅ Your API keys are NEVER in your app (can’t be stolen!)
- ✅ Every request is verified from a real device
- ✅ No backend code needed - call AI directly from your app
- ✅ Works with 20+ providers out of the box
How It Works (Super Simple!)
Your App → ProtectMyAPI (adds your API key) → AI Provider (OpenAI, etc.)
                     ↓
         Verifies it's really your app
               on a real device

1. You add your API keys in the ProtectMyAPI Dashboard (one-time setup)
2. Your app calls ProtectMyAPI’s SDK
3. ProtectMyAPI verifies the request is legitimate
4. ProtectMyAPI adds your API key and forwards the request to the AI provider
5. The response comes back to your app
Your app never sees the API key! 🎉
Supported Providers
Click any provider to jump to its documentation!
| Provider | What It Does | Best For |
|---|---|---|
| 🧠 OpenAI | ChatGPT, GPT-4, DALL-E, Whisper | Chatbots, image generation, transcription |
| 🤖 Anthropic | Claude AI | Long documents, analysis, coding |
| ✨ Google Gemini | Text, vision, audio, video | Multimodal apps, Google integration |
| 🎨 Stability AI | Image generation, editing, 3D | Art, product photos, visual content |
| 🔊 ElevenLabs | Voice synthesis, cloning | Audiobooks, voice assistants |
| 🚀 Replicate | 1000s of ML models | Anything you can imagine |
| ⚡ Together AI | Fast open-source models | Cost-effective AI |
| 🔍 Perplexity | AI-powered search | Research, fact-checking |
| 💨 Groq | Ultra-fast inference | Real-time chat |
| 🇫🇷 Mistral | European AI models | Code generation, EU compliance |
| 🧮 DeepSeek | Reasoning AI (R1) | Complex problems, math |
| 🌐 OpenRouter | 200+ models | Model flexibility |
| 🖼️ Fal.ai | Fast image generation | Quick prototypes |
| 🔥 Fireworks | Fast LLM inference | Production apps |
| 🌍 DeepL | Translation | Multi-language apps |
| 🦁 Brave Search | Web search API | Search features |
| 🌤️ Open-Meteo | Weather data | Weather apps |
| 🔄 EachAI | Workflows | Automation |
| ☁️ Azure OpenAI | Enterprise OpenAI | Compliance needs |
| ☁️ Azure Anthropic | Enterprise Claude | Compliance needs |
Quick Start
Already have ProtectMyAPI set up? Here’s how to add AI:
import ProtectMyAPI
// Make sure ProtectMyAPI is initialized first!
// ProtectMyAPI.shared.configure(appId: "your-app-id")
// Then use any AI provider:
let openai = ProtectMyAPI.openAIService()
let answer = try await openai.chat(
message: "Hello! What can you do?",
model: "gpt-4o-mini"
)
print(answer)

Don’t forget: Add your AI provider API keys in the ProtectMyAPI Dashboard first! Go to your app → Settings → API Keys.
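Every SDK call can throw (network failures, device verification failures, provider errors), so wrap calls in do/catch. A minimal sketch reusing the chat call above; the helper name is illustrative:

```swift
import ProtectMyAPI

// Sketch: surface failures as a user-visible message instead of crashing.
// Uses the same openAIService()/chat(message:model:) shape shown above.
func askAI(_ question: String) async -> String {
    do {
        let openai = ProtectMyAPI.openAIService()
        return try await openai.chat(message: question, model: "gpt-4o-mini")
    } catch {
        return "AI request failed: \(error.localizedDescription)"
    }
}
```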
Complete Examples By Provider
Below you’ll find detailed examples for every AI provider. Each section shows you exactly how to use that provider in iOS, Android, and Flutter.
Quick Navigation
| Category | Providers |
|---|---|
| 💬 Chat & Language | OpenAI · Anthropic · Gemini · DeepSeek · Groq · Mistral · Together AI · OpenRouter · Fireworks |
| 🎨 Image Generation | Stability AI · Replicate · Fal.ai |
| 🔍 Search & Data | Perplexity · Brave Search · Open-Meteo |
| 🔊 Audio & Voice | ElevenLabs |
| 🌍 Translation | DeepL |
| 🔄 Workflows | EachAI |
| ☁️ Enterprise | Azure AI |
Tip: Click any provider above to jump directly to its documentation!
Gemini (Google) ✨
Google’s powerful multimodal AI - handles text, images, audio, and video!
Text Generation
let gemini = ProtectMyAPI.geminiService()
// Simple text generation
let text = try await gemini.generateText(
prompt: "Write a haiku about coding",
systemInstruction: "You are a creative poet",
maxTokens: 100
)
// Full request with all options
let request = GeminiGenerateContentRequest(
contents: [
GeminiContent(parts: [.text("Analyze this data")])
],
generationConfig: GeminiGenerationConfig(
maxOutputTokens: 1024,
temperature: 0.7
),
safetySettings: GeminiSafetySettings.permissive
)
let response = try await gemini.generateContent(body: request, model: "gemini-2.0-flash")
print(response.text ?? "")

Image Analysis (Vision)
let gemini = ProtectMyAPI.geminiService()
// Load image data
guard let imageData = UIImage(named: "photo")?.jpegData(compressionQuality: 0.8) else { return }
let response = try await gemini.analyzeImage(
prompt: "What do you see in this image?",
imageData: imageData,
mimeType: "image/jpeg"
)
print(response.text ?? "")

Audio Transcription
let gemini = ProtectMyAPI.geminiService()
let audioURL = Bundle.main.url(forResource: "recording", withExtension: "m4a")!
let audioData = try Data(contentsOf: audioURL)
let transcript = try await gemini.transcribeAudio(
audioData: audioData,
mimeType: "audio/mp4"
)
print(transcript)

Multi-turn Chat
let gemini = ProtectMyAPI.geminiService()
// Create a chat session
let chat = gemini.chat(
systemInstruction: "You are a helpful coding assistant",
model: "gemini-2.0-flash"
)
// Have a conversation
let response1 = try await chat.send("What is Swift?")
print("Assistant: \(response1)")
let response2 = try await chat.send("Show me a simple example")
print("Assistant: \(response2)")
// Clear history if needed
chat.clearHistory()

Structured Output (JSON)
let gemini = ProtectMyAPI.geminiService()
// Define the schema
let schema: [String: ProtectMyAPIJSONValue] = [
"type": "array",
"items": [
"type": "object",
"properties": [
"name": ["type": "string"],
"ingredients": ["type": "array", "items": ["type": "string"]]
]
]
]
// Get structured output
let data = try await gemini.generateStructuredOutput(
prompt: "List 3 simple recipes",
schema: schema
)
// Or decode directly to a type
struct Recipe: Decodable {
let name: String
let ingredients: [String]
}
let recipes: [Recipe] = try await gemini.generateStructuredOutput(
prompt: "List 3 simple recipes",
type: [Recipe].self,
schema: schema
)

OpenAI (ChatGPT) 🧠
The most popular AI - powers ChatGPT! Great for chatbots, image generation, text-to-speech, and more.
Chat Completions
let openai = ProtectMyAPI.openAIService()
// Simple chat
let response = try await openai.chat(
message: "Explain machine learning",
model: "gpt-4o-mini",
systemPrompt: "You are a teacher"
)
// Multi-turn conversation
let chat = openai.chatSession(model: "gpt-4o-mini", systemPrompt: "You are helpful")
let reply1 = try await chat.send("Hello!")
let reply2 = try await chat.send("Tell me a joke")

Image Generation (DALL-E)
let openai = ProtectMyAPI.openAIService()
let result = try await openai.generateImage(
prompt: "A futuristic city at sunset",
model: "dall-e-3",
size: .square1024,
quality: .hd
)
if let url = result.data.first?.url {
print("Image URL: \(url)")
}

Anthropic (Claude) 🤖
Claude is known for being helpful, harmless, and honest. Great for analysis, long documents, and coding assistance.
Messages
let anthropic = ProtectMyAPI.anthropicService()
// Simple message
let response = try await anthropic.chat(
message: "What are the benefits of functional programming?",
model: "claude-3-5-sonnet-20241022",
systemPrompt: "You are a programming expert"
)
// Multi-turn conversation
let chat = anthropic.chatSession(
model: "claude-3-5-sonnet-20241022",
systemPrompt: "You are a helpful assistant"
)
let reply = try await chat.send("Hello Claude!")

Advanced Features
The ProtectMyAPI SDKs include extended features for every AI provider, matching or exceeding what other AI proxy services offer.
Streaming Responses
All providers support real-time streaming for responsive UIs.
// Gemini streaming
let gemini = ProtectMyAPI.geminiService()
for try await text in gemini.streamText(prompt: "Tell me a story") {
print(text, terminator: "")
}
// OpenAI streaming
let openai = ProtectMyAPI.openAIService()
for try await text in openai.streamChat(message: "Tell me a story") {
print(text, terminator: "")
}
// Anthropic streaming
let anthropic = ProtectMyAPI.anthropicService()
for try await text in anthropic.streamChat(message: "Tell me a story") {
print(text, terminator: "")
}
// Streaming with tool support
let request = OpenAIChatRequest(model: "gpt-4o", messages: messages, tools: tools)
for try await text in openai.streamChatWithTools(body: request) { name, args in
let result = await executeFunction(name, args)
print("Tool \(name) returned: \(result)")
} {
print(text, terminator: "")
}

Streaming Chat Sessions
Maintain conversation history with streaming responses.
// Create a streaming session
let session = anthropic.streamingChatSession(
model: "claude-3-5-sonnet-20241022",
systemPrompt: "You are a helpful assistant"
)
// Stream each response
for try await text in session.send("Hello!") {
print(text, terminator: "")
}
print()
// Continue the conversation
for try await text in session.send("Tell me more") {
print(text, terminator: "")
}

Google Search Grounding (Gemini)
Get answers with real-time web information.
let gemini = ProtectMyAPI.geminiService()
// Search grounded response
let response = try await gemini.searchGrounded(
prompt: "What are the latest AI developments?",
model: "gemini-2.0-flash"
)
print(response.text)
// Access sources
if let metadata = response.groundingMetadata {
for chunk in metadata.groundingChunks {
print("Source: \(chunk.title ?? "") - \(chunk.uri ?? "")")
}
}
// Stream search grounded content
for try await text in gemini.streamSearchGrounded(prompt: "Latest tech news") {
print(text, terminator: "")
}

Web Search (OpenAI)
Use OpenAI’s web search capability.
let openai = ProtectMyAPI.openAIService()
let result = try await openai.webSearch(
query: "Latest news about Apple Vision Pro",
model: "gpt-4o",
searchOptions: OpenAIWebSearchOptions(
searchContextSize: "medium",
userLocation: OpenAIUserLocation(
type: "approximate",
country: "US"
)
)
)
print(result)

Content Moderation (OpenAI)
Moderate text and images for policy violations.
let openai = ProtectMyAPI.openAIService()
// Text moderation
let result = try await openai.moderateText("Check this content...")
if result.flagged {
print("Content flagged!")
for (category, flagged) in result.categories where flagged {
print(" - \(category): \(result.scores[category] ?? 0)")
}
}
// Image moderation
let imageResult = try await openai.moderateImage("https://example.com/image.jpg")

Image Editing (OpenAI)
Edit images using gpt-image-1.
let openai = ProtectMyAPI.openAIService()
let editedImages = try await openai.editImage(
prompt: "Add a rainbow to the sky",
imageData: originalImageData,
maskData: maskData, // Optional: define area to edit
model: "gpt-image-1",
n: 1,
size: "1024x1024",
quality: "hd"
)
for url in editedImages {
print("Edited image: \(url)")
}

Video Processing (Gemini)
Upload and analyze videos.
let gemini = ProtectMyAPI.geminiService()
// Upload a video
let uploadedFile = try await gemini.uploadVideo(
data: videoData,
mimeType: "video/mp4",
displayName: "my-video.mp4"
)
// Wait for processing
let readyFile = try await gemini.waitForFileProcessing(
name: uploadedFile.name,
timeout: 300
)
// Analyze the video
let analysis = try await gemini.analyzeVideo(
fileUri: readyFile.uri!,
prompt: "Describe what happens in this video"
)
print(analysis)
// Clean up
try await gemini.deleteFile(name: readyFile.name)

Imagen Image Generation (Gemini)
Generate images using Google’s Imagen 3.
let gemini = ProtectMyAPI.geminiService()
let images = try await gemini.generateImageWithImagen(
prompt: "A serene Japanese garden at sunrise",
model: "imagen-3.0-generate-002",
numberOfImages: 2,
aspectRatio: "16:9",
negativePrompt: "blurry, low quality"
)
for imageData in images {
let image = UIImage(data: imageData)
// Display image
}
// Edit images with Imagen
let editedImages = try await gemini.editImageWithImagen(
prompt: "Remove the background",
imageData: originalImageData,
editMode: .inpaintRemove
)

PDF Analysis (Anthropic)
Analyze PDF documents with Claude.
let anthropic = ProtectMyAPI.anthropicService()
// Load PDF
let pdfURL = Bundle.main.url(forResource: "document", withExtension: "pdf")!
let pdfData = try Data(contentsOf: pdfURL)
// Analyze PDF
let analysis = try await anthropic.analyzePDF(
prompt: "Summarize the key points of this document",
pdfData: pdfData,
model: "claude-3-5-sonnet-20241022",
maxTokens: 4096
)
print(analysis)
// Stream PDF analysis
for try await text in anthropic.streamAnalyzePDF(
prompt: "Extract all dates and deadlines",
pdfData: pdfData
) {
print(text, terminator: "")
}

Prompt Caching (Anthropic)
Cache system prompts for efficiency.
let anthropic = ProtectMyAPI.anthropicService()
// Long system prompt (e.g., documentation, code context)
let systemPrompt = """
You are an expert in the following codebase...
[Large context here - 1000+ tokens benefit from caching]
"""
// Multiple requests reuse the cached system prompt
let response1 = try await anthropic.chatWithCachedSystem(
message: "How do I add a new feature?",
systemPrompt: systemPrompt,
cacheTTL: .fiveMinutes
)
let response2 = try await anthropic.chatWithCachedSystem(
message: "What tests should I write?",
systemPrompt: systemPrompt,
cacheTTL: .fiveMinutes
)

File Storage & Vector Stores (OpenAI)
Manage files and vector stores for RAG applications.
let openai = ProtectMyAPI.openAIService()
// Upload a file
let file = try await openai.uploadFile(
data: documentData,
filename: "knowledge-base.txt",
purpose: "assistants"
)
// Create a vector store
let vectorStore = try await openai.createVectorStore(
name: "My Knowledge Base",
fileIds: [file.id],
metadata: ["category": "documentation"]
)
// Add more files
let newFile = try await openai.addFileToVectorStore(
vectorStoreId: vectorStore.id,
fileId: anotherFile.id
)
// List files
let files = try await openai.listFiles(purpose: "assistants")
// Clean up
try await openai.deleteVectorStore(vectorStoreId: vectorStore.id)
try await openai.deleteFile(fileId: file.id)

Responses API (OpenAI)
Use OpenAI’s new Responses API with built-in tools.
let openai = ProtectMyAPI.openAIService()
// Create a response with web search
let response = try await openai.createResponse(
input: "What's happening in AI today?",
model: "gpt-4o",
tools: [
.webSearch(),
.codeInterpreter()
],
instructions: "Be concise and factual"
)
print(response.textContent)
// Stream a response
for try await event in openai.streamResponse(
input: "Analyze this data and create a chart",
tools: [.codeInterpreter()]
) {
switch event.type {
case "response.output_text.delta":
print(event.data["delta"] ?? "", terminator: "")
default:
break
}
}

Realtime API Configuration (OpenAI)
Get WebSocket configuration for real-time audio applications.
let openai = ProtectMyAPI.openAIService()
let config = try await openai.realtimeSessionConfig(
model: "gpt-4o-realtime-preview",
modalities: ["text", "audio"],
voice: "alloy",
instructions: "You are a friendly assistant"
)
// Use config.webSocketUrl and config.clientSecret
// to establish a WebSocket connection
print("WebSocket URL: \(config.webSocketUrl)")
print("Token expires at: \(config.expiresAt)")

Code Execution (Gemini)
Execute code within Gemini for calculations and data analysis.
let gemini = ProtectMyAPI.geminiService()
let result = try await gemini.generateWithCodeExecution(
prompt: "Calculate the fibonacci sequence up to 100 and plot it",
model: "gemini-2.0-flash"
)
print("Response: \(result.text)")
for code in result.executedCode {
print("Language: \(code.language)")
print("Code: \(code.code)")
print("Output: \(code.output ?? "No output")")
}

DeepSeek 🧮
Powerful reasoning AI from China! DeepSeek excels at complex problems, math, and coding with transparent chain-of-thought reasoning.
What makes DeepSeek special: The DeepSeek-R1 model shows you its “thinking process” step by step, making it great for understanding how it arrives at answers. Excellent for math, logic puzzles, and complex reasoning!
Chat Completions
let deepseek = ProtectMyAPI.deepSeekService()
// Simple chat
let response = try await deepseek.chat(
message: "Explain machine learning",
model: .chat,
systemPrompt: "You are a helpful assistant"
)
print(response)
// Streaming
for try await chunk in deepseek.streamChat(message: "Write a story") {
print(chunk, terminator: "")
}

Reasoning (DeepSeek-Reasoner)
The DeepSeek-Reasoner model provides transparent reasoning with visible chain-of-thought tokens.
let deepseek = ProtectMyAPI.deepSeekService()
// Get reasoning with chain-of-thought
let result = try await deepseek.reason(
message: "Solve: If 3x + 5 = 20, what is x?",
systemPrompt: "Show your work step by step"
)
print("Answer: \(result.content)")
if let reasoning = result.reasoningContent {
print("Reasoning: \(reasoning)")
}
// Track reasoning tokens
if let details = result.usage?.completionTokensDetails {
print("Reasoning tokens: \(details.reasoningTokens ?? 0)")
}
// Stream reasoning
for try await chunk in deepseek.streamReason(message: "Explain quantum entanglement") {
if let reasoning = chunk.reasoningContent {
print("[Reasoning] \(reasoning)", terminator: "")
}
if let content = chunk.content {
print(content, terminator: "")
}
}

Multi-turn Chat Sessions
let deepseek = ProtectMyAPI.deepSeekService()
// Create a session
let session = deepseek.chatSession(
model: .chat,
systemPrompt: "You are a math tutor"
)
let response1 = try await session.send("What is calculus?")
let response2 = try await session.send("Can you give me an example?")
// Streaming session
let streamingSession = deepseek.streamingChatSession(
model: .chat,
systemPrompt: "You are creative"
)
for try await chunk in streamingSession.send("Tell me a joke") {
print(chunk, terminator: "")
}

Brave Search 🦁
Privacy-focused search API! Get web, news, image, video, and local search results without tracking.
What makes Brave Search special: Unlike Google, Brave doesn’t track users. Great for building search features that respect privacy. Includes web, news, images, videos, and local business search!
Web Search
let brave = ProtectMyAPI.braveSearchService()
// Simple web search
let results = try await brave.webSearch(
query: "Swift programming language",
count: 10,
safesearch: .moderate
)
for result in results.web {
print("\(result.title): \(result.url)")
print(result.description ?? "")
}
// With AI summary
let searchWithSummary = try await brave.webSearch(
query: "Latest tech news",
summary: true
)
if let summary = searchWithSummary.summary?.summary {
print("Summary: \(summary)")
}

News Search
let brave = ProtectMyAPI.braveSearchService()
let news = try await brave.newsSearch(
query: "technology",
freshness: .pastDay,
count: 20
)
for article in news.results {
print("\(article.title) - \(article.age ?? "")")
print("Source: \(article.source ?? "Unknown")")
}

Image & Video Search
let brave = ProtectMyAPI.braveSearchService()
// Image search
let images = try await brave.imageSearch(
query: "nature photography",
safesearch: .strict
)
for image in images.results {
print("\(image.title): \(image.url)")
print("Size: \(image.properties?.width ?? 0)x\(image.properties?.height ?? 0)")
}
// Video search
let videos = try await brave.videoSearch(
query: "Swift tutorials",
freshness: .pastMonth
)
for video in videos.results {
print("\(video.title) - \(video.duration ?? "")")
print("Views: \(video.views ?? 0)")
}

Local Search (Places)
let brave = ProtectMyAPI.braveSearchService()
let places = try await brave.localSearch(
query: "coffee shops near Times Square",
count: 10
)
for place in places.results {
print("\(place.title)")
print("Address: \(place.address ?? "N/A")")
print("Rating: \(place.rating?.ratingValue ?? 0)/\(place.rating?.bestRating ?? 5)")
print("Phone: \(place.phone ?? "N/A")")
}

Search Suggestions
let brave = ProtectMyAPI.braveSearchService()
let suggestions = try await brave.suggest(
query: "how to",
rich: true
)
for suggestion in suggestions.suggestions {
print(suggestion.query)
if suggestion.isEntity == true {
print(" Title: \(suggestion.title ?? "")")
}
}

DeepL Translation 🌍
The most accurate AI translator! Translate text in 35+ languages with natural-sounding results.
What makes DeepL special: DeepL consistently beats Google Translate in blind tests. Supports document translation, custom glossaries for brand terms, and preserves formatting. Your users can communicate globally!
Text Translation
let deepl = ProtectMyAPI.deepLService()
// Simple translation
let translated = try await deepl.translateText(
text: "Hello, how are you?",
targetLang: .german,
formality: .more
)
print(translated) // "Hallo, wie geht es Ihnen?"
// Batch translation
let response = try await deepl.translate(
text: ["Hello", "Goodbye", "Thank you"],
targetLang: .spanish
)
for translation in response.translations {
print("\(translation.text) (detected: \(translation.detectedSourceLanguage ?? "unknown"))")
}

Document Translation
Translate entire documents (PDF, DOCX, PPTX, etc.) while preserving formatting.
let deepl = ProtectMyAPI.deepLService()
// Load document
guard let documentData = try? Data(contentsOf: documentURL) else { return }
// Translate and wait for result
let translatedData = try await deepl.translateDocument(
document: documentData,
filename: "report.pdf",
targetLang: .french,
formality: .less,
pollInterval: 2.0,
timeout: 300.0
)
// Save translated document
try translatedData.write(to: outputURL)
// Or use manual control
let handle = try await deepl.uploadDocument(
document: documentData,
filename: "report.docx",
targetLang: .german
)
// Check status periodically
let status = try await deepl.getDocumentStatus(
documentId: handle.documentId,
documentKey: handle.documentKey
)
print("Status: \(status.status), Remaining: \(status.secondsRemaining ?? 0)s")
// Download when ready
if status.status == .done {
let result = try await deepl.downloadDocument(
documentId: handle.documentId,
documentKey: handle.documentKey
)
}

Glossaries
Create custom glossaries to ensure consistent translation of domain-specific terms.
let deepl = ProtectMyAPI.deepLService()
// Create a glossary
let glossary = try await deepl.createGlossary(
name: "Tech Terms",
sourceLang: .english,
targetLang: .german,
entries: [
"API": "API",
"cloud computing": "Cloud-Computing",
"machine learning": "maschinelles Lernen"
]
)
print("Created glossary: \(glossary.glossaryId)")
// Use glossary in translation
let translated = try await deepl.translate(
text: ["Our API uses machine learning"],
targetLang: .german,
glossaryId: glossary.glossaryId
)
// List all glossaries
let glossaries = try await deepl.listGlossaries()
// Get glossary entries
let entries = try await deepl.getGlossaryEntries(glossaryId: glossary.glossaryId)
// Delete glossary
try await deepl.deleteGlossary(glossaryId: glossary.glossaryId)

Usage Statistics
let deepl = ProtectMyAPI.deepLService()
let usage = try await deepl.getUsage()
print("Characters used: \(usage.characterCount)/\(usage.characterLimit)")
print("Usage: \(String(format: "%.1f", usage.usagePercentage))%")

Azure AI Services ☁️
Enterprise-grade AI with your Azure account! Use OpenAI and Anthropic models through your existing Azure infrastructure.
What makes Azure special: If your company already uses Azure, you can run AI through your existing contracts, compliance, and billing. Same great models, your cloud!
Azure OpenAI
// Create Azure OpenAI service
let azureOpenAI = ProtectMyAPI.azureOpenAIService(
resourceName: "my-resource",
deploymentId: "gpt-4o",
apiVersion: "2024-08-01-preview"
)
// Simple chat
let response = try await azureOpenAI.chat(
message: "Hello from Azure!",
systemPrompt: "You are helpful"
)
print(response)
// Streaming
for try await chunk in azureOpenAI.streamChat(message: "Tell me a story") {
print(chunk, terminator: "")
}
// Embeddings
let embeddings = try await azureOpenAI.createEmbeddings(
input: ["Hello world", "Azure AI"]
)

Azure Anthropic
// Create Azure Anthropic service
let azureAnthropic = ProtectMyAPI.azureAnthropicService(
resourceName: "my-resource",
deploymentId: "claude-3-5-sonnet"
)
// Simple chat
let response = try await azureAnthropic.chat(
message: "Hello from Azure!",
systemPrompt: "You are helpful",
model: "claude-3-5-sonnet-20241022"
)
print(response)
// Streaming
for try await chunk in azureAnthropic.streamChat(message: "Explain AI") {
print(chunk, terminator: "")
}

Backend Configuration
To use AI providers, you need to configure the corresponding endpoints in your ProtectMyAPI dashboard:
- Go to your app’s Endpoints tab
- Add a new endpoint for each AI provider you want to use:
  - `ai/gemini/*` → `https://generativelanguage.googleapis.com/v1beta`
  - `ai/openai/*` → `https://api.openai.com/v1`
  - `ai/anthropic/*` → `https://api.anthropic.com/v1`
  - `ai/deepseek/*` → `https://api.deepseek.com/v1`
  - `ai/brave/*` → `https://api.search.brave.com/res/v1`
  - `ai/deepl/*` → `https://api.deepl.com/v2` (or `api-free.deepl.com` for the free tier)
  - `azure-openai/*` → `https://{resource}.openai.azure.com`
  - `azure-anthropic/*` → `https://{resource}.anthropic.azure.com`
- Add your API keys in the Secrets section
- Configure request transforms to inject the API key
Never expose your AI provider API keys in your mobile app. ProtectMyAPI handles this securely by storing keys server-side and injecting them into requests.
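Conceptually, the proxy strips the endpoint prefix, appends the rest of the path to the upstream base URL, and injects your secret as an auth header. That routing happens entirely server-side; the sketch below only illustrates the path mapping from the table above and is not app code you would ever ship:

```swift
// Illustrative only - the real mapping lives in your ProtectMyAPI dashboard,
// never in your app. Maps a prefixed request path to its upstream URL.
func upstreamURL(for path: String) -> String? {
    let routes: [(prefix: String, base: String)] = [
        ("ai/openai/", "https://api.openai.com/v1/"),
        ("ai/anthropic/", "https://api.anthropic.com/v1/"),
        ("ai/gemini/", "https://generativelanguage.googleapis.com/v1beta/"),
    ]
    for route in routes where path.hasPrefix(route.prefix) {
        // Keep everything after the prefix and graft it onto the upstream base.
        return route.base + path.dropFirst(route.prefix.count)
    }
    return nil
}

// upstreamURL(for: "ai/openai/chat/completions")
// → "https://api.openai.com/v1/chat/completions"
```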
ElevenLabs 🔊
The most realistic AI voices! Turn text into natural speech, clone voices, and generate sound effects.
What makes ElevenLabs special: These voices sound human - not robotic. Clone any voice with just a few minutes of audio. Perfect for podcasts, audiobooks, game characters, and accessibility features!
Text-to-Speech
let elevenLabs = ProtectMyAPI.elevenLabsService()
// Simple text-to-speech
let audioData = try await elevenLabs.textToSpeech(
text: "Hello, welcome to ProtectMyAPI!",
voice: .rachel,
model: .multilingualV2
)
// Play the audio or save to file
try audioData.write(to: URL(fileURLWithPath: "output.mp3"))
// With voice settings
let customAudio = try await elevenLabs.textToSpeech(
text: "This is more expressive speech",
voice: .adam,
voiceSettings: ElevenLabsVoiceSettings(
stability: 0.5,
similarityBoost: 0.75,
style: 0.5,
useSpeakerBoost: true
)
)
// Stream audio for real-time playback
for try await chunk in elevenLabs.streamTextToSpeech(
text: "This is a longer text that will be streamed",
voice: .charlie
) {
// Process audio chunks in real-time
audioPlayer.enqueue(chunk)
}

Speech-to-Speech
Transform one voice into another while preserving the speech content:
// Transform voice
let transformedAudio = try await elevenLabs.speechToSpeech(
audioData: sourceAudioData,
targetVoice: .rachel,
model: .multilingualV2
)
// Stream transformation
for try await chunk in elevenLabs.streamSpeechToSpeech(
audioData: sourceAudio,
targetVoice: .adam
) {
audioPlayer.enqueue(chunk)
}

Voice Cloning
Create custom voices from audio samples:
// Clone a voice from samples
let clonedVoice = try await elevenLabs.createVoiceClone(
name: "My Custom Voice",
audioSamples: [sample1Data, sample2Data, sample3Data],
description: "A warm, friendly voice"
)
// Use the cloned voice
let audio = try await elevenLabs.textToSpeech(
text: "Hello from my cloned voice!",
voiceId: clonedVoice.voiceId
)
// Delete when done
try await elevenLabs.deleteVoice(clonedVoice.voiceId)

Sound Effects
Generate sound effects from text descriptions:
// Generate a sound effect
let soundEffect = try await elevenLabs.generateSoundEffect(
text: "A thunderstorm with heavy rain and distant thunder",
durationSeconds: 10.0,
promptInfluence: 0.5
)

Audio Isolation
Remove background noise from audio:
let cleanAudio = try await elevenLabs.isolateAudio(noisyAudioData)

Fal.ai 🖼️
Blazing-fast image generation! Create images in seconds with Stable Diffusion XL, Flux, and more.
What makes Fal.ai special: Speed! Fal.ai optimizes models for fast inference. Generate high-quality images without making users wait. Great for real-time creative apps!
Image Generation
let fal = ProtectMyAPI.falService()
// Fast SDXL generation
let sdxlImages = try await fal.generateFastSDXL(
prompt: "A serene mountain landscape at sunset",
negativePrompt: "blurry, low quality",
imageSize: .landscapeHd,
numImages: 2
)
// Flux generation (high quality)
let fluxImages = try await fal.generateFlux(
prompt: "A photorealistic portrait of a robot",
imageSize: .squareHd,
numInferenceSteps: 28,
guidanceScale: 3.5
)
// Flux Schnell (fast)
let fastImages = try await fal.generateFluxSchnell(
prompt: "A cute cat wearing a hat",
numInferenceSteps: 4
)
// Flux Pro (premium quality)
let proImages = try await fal.generateFluxPro(
prompt: "Professional product photography"
)
// With LoRA
let loraImages = try await fal.generateFluxLoRA(
prompt: "A portrait in my custom style",
loras: [FalLoRAWeight(path: "path/to/lora", scale: 1.0)]
)

Virtual Try-On
// Virtual try-on with IDM-VTON
let tryOnResult = try await fal.createTryOn(
humanImageUrl: "https://example.com/person.jpg",
garmentImageUrl: "https://example.com/shirt.jpg",
category: .upperBody
)

LoRA Training
Train custom LoRA models:
// Train a Flux LoRA
let training = try await fal.trainFluxLoRA(
imagesDataUrl: "https://example.com/training-images.zip",
triggerWord: "mystyle",
steps: 1000,
isStyle: true
)
print("LoRA available at: \(training.result?.diffusersLoraFile ?? "")")

Image Manipulation
// Image-to-Image transformation
let transformed = try await fal.imageToImage(
prompt: "Transform into a watercolor painting",
imageUrl: "https://example.com/photo.jpg",
strength: 0.75
)
// Inpainting
let inpainted = try await fal.inpaint(
prompt: "A red car",
imageUrl: "https://example.com/image.jpg",
maskUrl: "https://example.com/mask.png"
)
// Upscaling
let upscaled = try await fal.upscale(
imageUrl: "https://example.com/small.jpg",
scale: 4,
faceEnhance: true
)
// Background removal
let noBackground = try await fal.removeBackground(
imageUrl: "https://example.com/photo.jpg"
)
// Face swap
let swapped = try await fal.faceSwap(
baseImageUrl: "https://example.com/base.jpg",
swapImageUrl: "https://example.com/face.jpg"
)

Fireworks AI 🔥
Fast open-source model hosting! Run Llama, Mixtral, and DeepSeek R1 at blazing speeds.
What makes Fireworks special: They optimize open-source models for speed. Get Llama 3 70B responses faster than most providers. Plus function calling and JSON mode support!
Chat Completions
let fireworks = ProtectMyAPI.fireworksService()
// Simple chat
let response = try await fireworks.chat(
message: "Explain machine learning",
model: .llama3_70b,
systemPrompt: "You are a helpful AI teacher"
)
// Streaming
for try await chunk in fireworks.streamChat(
message: "Write a story about a robot",
model: .mixtral8x22b
) {
print(chunk, terminator: "")
}
// Full request with all options
let fullResponse = try await fireworks.createChatCompletion(
body: FireworksChatRequest(
model: FireworksModel.llama3_1_405b.value,
messages: [
FireworksMessage.system("You are helpful"),
FireworksMessage.user("Hello!")
],
maxTokens: 1000,
temperature: 0.7,
topP: 0.9
)
)

DeepSeek R1
Access DeepSeek R1 reasoning model through Fireworks:
// Buffered response
let r1Response = try await fireworks.deepSeekR1(
message: "Solve: What is 15% of 340?",
systemPrompt: "Show your work step by step"
)
print(r1Response.choices.first?.message.content ?? "")
// Streaming for real-time thinking
for try await chunk in fireworks.streamDeepSeekR1(
message: "Explain the theory of relativity"
) {
if let content = chunk.choices.first?.delta?.content {
print(content, terminator: "")
}
}

Embeddings
let embeddings = try await fireworks.createEmbeddings(
input: ["Hello world", "Machine learning is fascinating"],
model: "nomic-ai/nomic-embed-text-v1.5"
)
for embedding in embeddings.data {
print("Embedding \(embedding.index): \(embedding.embedding.count) dimensions")
}

Image Generation
let images = try await fireworks.generateImage(
prompt: "A futuristic city at night",
negativePrompt: "blurry, low quality",
width: 1024,
height: 1024,
steps: 30,
guidanceScale: 7.0
)
for image in images.data {
print("Image URL: \(image.url ?? "base64 encoded")")
}

Chat Sessions
// Create a chat session for multi-turn conversations
let session = fireworks.chatSession(
model: .llama3_70b,
systemPrompt: "You are a helpful coding assistant"
)
let response1 = try await session.send("What is recursion?")
let response2 = try await session.send("Can you give me an example in Python?")
let response3 = try await session.send("Now in JavaScript")
// Clear history if needed
session.clearHistory()
EachAI 🔄
AI workflow automation! Build complex AI pipelines without code.
What makes EachAI special: Chain multiple AI operations together. Create workflows that combine text analysis, image generation, and data processing. Trigger them from your app with one API call!
Triggering Workflows
let eachAI = ProtectMyAPI.eachAIService()
// Trigger a workflow
let trigger = try await eachAI.triggerWorkflow(
workflowId: "workflow-123",
inputs: ["text": "Process this content"]
)
print("Execution ID: \(trigger.executionId)")
// Poll for completion
let result = try await eachAI.pollForCompletion(
executionId: trigger.executionId,
timeoutSeconds: 300,
pollIntervalSeconds: 2
)
print("Status: \(result.status)")
print("Outputs: \(result.outputs ?? [:])")
Run Workflow and Wait
Convenience method that combines triggering and polling:
// Run and wait in one call
let execution = try await eachAI.runWorkflow(
workflowId: "workflow-123",
inputs: ["prompt": "Generate a summary"],
timeoutSeconds: 300
)
if execution.status == "completed" {
print("Result: \(execution.outputs ?? [:])")
} else if let error = execution.error {
print("Error: \(error.message ?? "Unknown")")
}
// Or get outputs directly
let outputs = try await eachAI.runWorkflowGetOutputs(
workflowId: "workflow-123",
inputs: ["prompt": "Analyze this data"]
)
Workflow Management
// List all workflows
let workflows = try await eachAI.listWorkflows()
for workflow in workflows.workflows {
print("\(workflow.name): \(workflow.description ?? "")")
}
// Get workflow details
let workflow = try await eachAI.getWorkflow("workflow-123")
print("Inputs: \(workflow.inputs ?? [])")
print("Outputs: \(workflow.outputs ?? [])")
// List executions
let executions = try await eachAI.listExecutions(
workflowId: "workflow-123",
status: "completed",
limit: 10
)
// Get execution logs
let logs = try await eachAI.getExecutionLogs("execution-456")
for log in logs.logs {
print("[\(log.level)] \(log.message)")
}
// Cancel an execution
try await eachAI.cancelExecution("execution-789")
Groq 💨
The fastest AI inference on the planet! Get responses in milliseconds, not seconds.
What makes Groq special: Custom LPU chips make Groq 10x+ faster than GPUs. Llama 3 70B generates 250+ tokens/second. Perfect for real-time apps where speed matters!
Chat Completions
let groq = ProtectMyAPI.groqService()
// Simple chat
let response = try await groq.chat(
message: "What is the meaning of life?",
model: .llama3_3_70b
)
// With system prompt
let poemResponse = try await groq.chat(
message: "Write a poem about AI",
model: .mixtral8x7b,
systemPrompt: "You are a creative poet",
maxTokens: 500,
temperature: 0.8
)
Streaming
// Stream chat response
for try await chunk in groq.streamChat(
message: "Tell me a story",
model: .llama3_1_8b
) {
print(chunk, terminator: "")
}
Audio Transcription (Whisper)
// Transcribe audio
let transcription = try await groq.transcribe(
audioData: audioData,
filename: "recording.mp3",
model: .whisperLargeV3,
language: "en"
)
// Translate to English
let translation = try await groq.translate(
audioData: audioData,
filename: "speech.wav"
)
Vision
// Chat with image
let analysis = try await groq.chatWithVision(
message: "What's in this image?",
imageUrl: "https://example.com/image.jpg",
model: .llama3_2_90bVision
)
JSON Mode
// Get structured JSON response
let json = try await groq.chatWithJson(
message: "List 3 programming languages with their year of creation",
model: .llama3_3_70b
)
Tool Use
let tools = [
GroqTool(
function: GroqFunction(
name: "get_weather",
description: "Get current weather for a location",
parameters: [
"type": "object",
"properties": [
"location": ["type": "string", "description": "City name"]
],
"required": ["location"]
]
)
)
]
let response = try await groq.chatWithTools(
messages: [GroqMessage.user("What's the weather in Tokyo?")],
tools: tools,
model: .llama3_3_70b
)
Groq Models
| Model | ID | Context | Use Case |
|---|---|---|---|
| Llama 3.3 70B | llama-3.3-70b-versatile | 128K | Best quality, versatile |
| Llama 3.1 405B | llama-3.1-405b-reasoning | 128K | Complex reasoning |
| Llama 3.1 70B | llama-3.1-70b-versatile | 128K | High quality |
| Llama 3.1 8B | llama-3.1-8b-instant | 128K | Fast, efficient |
| Llama 3.2 90B Vision | llama-3.2-90b-vision-preview | 128K | Vision + text |
| Llama 3.2 11B Vision | llama-3.2-11b-vision-preview | 128K | Fast vision |
| Mixtral 8x7B | mixtral-8x7b-32768 | 32K | MoE, code |
| Gemma 2 9B | gemma2-9b-it | 8K | Compact |
| Whisper Large V3 | whisper-large-v3 | - | Best transcription |
| Whisper Large V3 Turbo | whisper-large-v3-turbo | - | Fast transcription |
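When a tool call comes back (see Tool Use above), the `function.arguments` field arrives as a JSON string rather than a parsed object. A minimal Foundation sketch for decoding it into a typed value before you execute the function — `WeatherArgs` and the sample JSON are illustrative, shaped to match the tool schema you declared:

```swift
import Foundation

// Matches the "get_weather" parameters schema declared in the tool definition.
struct WeatherArgs: Codable {
    let location: String
}

func decodeWeatherArgs(_ arguments: String) throws -> WeatherArgs {
    // The arguments string is plain JSON, so a standard JSONDecoder works.
    try JSONDecoder().decode(WeatherArgs.self, from: Data(arguments.utf8))
}

let args = try decodeWeatherArgs(#"{"location": "Tokyo"}"#)
print(args.location) // Tokyo
```

Decoding into a struct up front means a malformed tool call throws immediately instead of failing later inside your function.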
Mistral 🇫🇷
Europe’s leading AI lab! Powerful models with Codestral for code and great embeddings.
What makes Mistral special: Best-in-class code generation with Codestral. Multilingual models that excel at European languages. GDPR-friendly option for EU apps!
Chat Completions
let mistral = ProtectMyAPI.mistralService()
// Simple chat
let response = try await mistral.chat(
message: "Explain machine learning",
model: .mistralLarge
)
// With parameters
let codeResponse = try await mistral.chat(
message: "Write a function to sort an array",
model: .codestral,
systemPrompt: "You are an expert programmer",
maxTokens: 1000,
temperature: 0.3
)
Code Generation (Codestral)
// Generate code
let code = try await mistral.generateCode(
prompt: "Write a binary search function in Python",
suffix: "# Test the function",
maxTokens: 500
)
// Fill in the middle
let filled = try await mistral.generateCode(
prompt: "def fibonacci(n):\n ",
suffix: "\n return result"
)
Embeddings
// Single embedding
let embedding = try await mistral.createEmbedding(
input: "Machine learning is fascinating"
)
// Multiple embeddings
let embeddings = try await mistral.createEmbeddings(
inputs: ["Hello world", "Goodbye world"],
model: .mistralEmbed
)
Vision (Pixtral)
// Chat with image
let analysis = try await mistral.chatWithVision(
message: "Describe this image in detail",
imageUrl: "https://example.com/photo.jpg",
model: .pixtralLarge
)
Mistral Models
| Model | ID | Context | Use Case |
|---|---|---|---|
| Mistral Large | mistral-large-latest | 128K | Best quality |
| Mistral Small | mistral-small-latest | 32K | Balanced |
| Mistral NeMo | open-mistral-nemo | 128K | Long context |
| Codestral | codestral-latest | 32K | Code generation |
| Codestral Mamba | open-codestral-mamba | 256K | Fast code |
| Pixtral Large | pixtral-large-latest | 128K | Vision + text |
| Pixtral 12B | pixtral-12b-2409 | 128K | Fast vision |
| Mistral Embed | mistral-embed | 8K | Embeddings |
| Ministral 8B | ministral-8b-latest | 128K | Efficient |
| Ministral 3B | ministral-3b-latest | 128K | Compact |
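Embedding responses are plain arrays of floats, so similarity search needs no SDK support. A self-contained cosine-similarity sketch you can apply to the vectors returned by `createEmbeddings`:

```swift
// Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    precondition(a.count == b.count, "vectors must have the same dimension")
    let dot = zip(a, b).map { $0.0 * $0.1 }.reduce(0, +)
    let magA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let magB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    guard magA > 0, magB > 0 else { return 0 }
    return dot / (magA * magB)
}

let score = cosineSimilarity([1, 0, 1], [1, 0, 1])
print(score) // 1.0
```

In practice you would rank stored document embeddings by their similarity to a query embedding and keep the top matches.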
OpenRouter 🌐
One API, 200+ models! Access any AI model with automatic fallback and routing.
What makes OpenRouter special: Why pick one model? Access GPT-4, Claude, Llama, and 200+ more through a single API. Automatic fallback if a provider is down. Compare models easily!
Chat Completions
let openRouter = ProtectMyAPI.openRouterService()
// Simple chat with single model
let response = try await openRouter.chat(
message: "What is quantum computing?",
models: [.gpt4o]
)
// Multi-model with fallback
let fallbackResponse = try await openRouter.chat(
message: "Explain relativity",
models: [.claudeSonnet, .gpt4o, .geminiFlash],
route: .fallback,
systemPrompt: "You are a physics professor"
)
Routing Modes
OpenRouter supports different routing strategies:
- fallback: Try models in order until one succeeds
- price: Choose the cheapest available model
- latency: Choose the fastest available model
// Price-optimized routing
let response = try await openRouter.chat(
message: "Simple question",
models: [.gpt4o, .claudeSonnet, .mistralLarge],
route: .price
)
// Latency-optimized routing
let fastResponse = try await openRouter.chat(
message: "Quick response needed",
models: [.geminiFlash, .gpt4oMini, .claudeHaiku],
route: .latency
)
Structured Outputs (JSON Schema)
let schema: [String: Any] = [
"type": "object",
"properties": [
"name": ["type": "string"],
"age": ["type": "integer"],
"skills": [
"type": "array",
"items": ["type": "string"]
]
],
"required": ["name", "age", "skills"]
]
let json = try await openRouter.chatWithSchema(
message: "Create a profile for a senior Python developer",
schema: schema,
schemaName: "developer_profile",
models: [.gpt4o]
)
Vision
let analysis = try await openRouter.chatWithVision(
message: "What objects are in this image?",
imageUrl: "https://example.com/photo.jpg",
models: [.gpt4o, .claudeSonnet]
)
List Available Models
// List all available models
let models = try await openRouter.listModels()
for model in models.data {
print("\(model.name): \(model.id)")
print(" Context: \(model.contextLength ?? 0)")
print(" Price: \(model.pricing?.prompt ?? "N/A")/1K tokens")
}
OpenRouter Models
| Provider | Model | ID |
|---|---|---|
| OpenAI | GPT-4o | openai/gpt-4o |
| OpenAI | GPT-4o Mini | openai/gpt-4o-mini |
| OpenAI | o1 | openai/o1 |
| OpenAI | o3-mini | openai/o3-mini |
| Anthropic | Claude 3.5 Sonnet | anthropic/claude-3.5-sonnet |
| Anthropic | Claude 3.5 Haiku | anthropic/claude-3.5-haiku |
| Google | Gemini 2.0 Flash | google/gemini-2.0-flash-exp:free |
| Google | Gemini Flash 1.5 | google/gemini-flash-1.5 |
| Meta | Llama 3.3 70B | meta-llama/llama-3.3-70b-instruct |
| Meta | Llama 3.1 405B | meta-llama/llama-3.1-405b-instruct |
| DeepSeek | DeepSeek Chat | deepseek/deepseek-chat |
| DeepSeek | DeepSeek R1 | deepseek/deepseek-r1 |
| xAI | Grok 2 | x-ai/grok-2 |
| Mistral | Mistral Large | mistralai/mistral-large |
| Qwen | Qwen 2.5 72B | qwen/qwen-2.5-72b-instruct |
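Because `chatWithSchema` constrains the output to your JSON schema, the response can be decoded straight into a matching Codable struct. A sketch using the `developer_profile` schema shown above — the sample JSON string stands in for a real model response:

```swift
import Foundation

// Mirrors the "developer_profile" schema: name, age, skills are all required.
struct DeveloperProfile: Codable {
    let name: String
    let age: Int
    let skills: [String]
}

let json = #"{"name": "Ada", "age": 36, "skills": ["Python", "Django"]}"#
let profile = try JSONDecoder().decode(DeveloperProfile.self, from: Data(json.utf8))
print(profile.name) // Ada
```

Defining the Codable struct alongside the schema keeps the two in sync and turns schema violations into decode errors you can catch.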
Open-Meteo 🌤️
Free weather data for your app! Forecasts, historical data, air quality, and marine conditions.
What makes Open-Meteo special: Completely free for non-commercial use! No API key needed for basic weather. Great for hobby projects, weather widgets, and location-aware apps.
Weather Forecast
let openMeteo = ProtectMyAPI.openMeteoService()
// Simple forecast
let weather = try await openMeteo.getSimpleForecast(
latitude: 40.7128,
longitude: -74.0060,
days: 7
)
// Current weather
if let current = weather.current {
print("Temperature: \(current.temperature2m ?? 0)°C")
print("Humidity: \(current.relativeHumidity2m ?? 0)%")
}
// Daily forecast
if let daily = weather.daily {
for (index, date) in (daily.time ?? []).enumerated() {
let max = daily.temperatureMax?[index] ?? 0
let min = daily.temperatureMin?[index] ?? 0
print("\(date): \(min)°C - \(max)°C")
}
}
Custom Variables
let weather = try await openMeteo.getForecast(
latitude: 51.5074,
longitude: -0.1278,
hourly: [.temperature2m, .precipitation, .windSpeed10m, .uvIndex],
daily: [.temperatureMax, .temperatureMin, .sunrise, .sunset],
current: [.temperature2m, .weatherCode, .isDay],
temperatureUnit: .celsius,
windSpeedUnit: .kmh,
forecastDays: 14
)
Historical Weather
let historical = try await openMeteo.getHistoricalWeather(
latitude: 48.8566,
longitude: 2.3522,
startDate: "2024-01-01",
endDate: "2024-01-31",
hourly: [.temperature2m, .precipitation],
daily: [.temperatureMax, .temperatureMin]
)
Air Quality
// Simple air quality
let airQuality = try await openMeteo.getSimpleAirQuality(
latitude: 35.6762,
longitude: 139.6503
)
if let current = airQuality.current {
print("European AQI: \(current.europeanAqi ?? 0)")
print("US AQI: \(current.usAqi ?? 0)")
print("PM2.5: \(current.pm25 ?? 0) µg/m³")
}
// Custom variables including pollen
let detailed = try await openMeteo.getAirQuality(
latitude: 35.6762,
longitude: 139.6503,
hourly: [.pm10, .pm25, .ozone, .grassPollen, .birchPollen],
current: [.europeanAqi, .usAqi]
)
Marine Forecast
let marine = try await openMeteo.getMarineForecast(
latitude: 25.0343,
longitude: -77.3963,
hourly: [.waveHeight, .waveDirection, .wavePeriod],
daily: [.waveHeightMax, .waveDirectionDominant]
)
if let hourly = marine.hourly {
for (index, time) in (hourly.time ?? []).enumerated() {
let height = hourly.waveHeight?[index] ?? 0
print("\(time): \(height)m waves")
}
}
Geocoding
// Search for locations
let results = try await openMeteo.searchLocations(
name: "New York",
count: 5,
language: "en"
)
for location in results.results ?? [] {
print("\(location.name), \(location.country ?? "")")
print(" Coordinates: \(location.latitude), \(location.longitude)")
print(" Population: \(location.population ?? 0)")
}
Elevation
// Single location
let elevation = try await openMeteo.getElevation(
latitude: 27.9881,
longitude: 86.9250 // Mt. Everest
)
print("Elevation: \(elevation.elevation.first ?? 0)m")
// Multiple locations
let elevations = try await openMeteo.getElevations([
(latitude: 27.9881, longitude: 86.9250),
(latitude: 45.8326, longitude: 6.8652)
])
Flood Forecast
let flood = try await openMeteo.getFloodForecast(
latitude: 51.5074,
longitude: -0.1278,
daily: [.riverDischarge, .riverDischargeMean, .riverDischargeMax],
forecastDays: 30
)
if let daily = flood.daily {
for (index, date) in (daily.time ?? []).enumerated() {
let discharge = daily.riverDischarge?[index] ?? 0
print("\(date): \(discharge) m³/s")
}
}
Perplexity 🔍
AI-powered search that actually cites its sources! Perfect for research, fact-checking, and getting up-to-date information.
What makes Perplexity special: Unlike regular chatbots, Perplexity searches the web in real-time and shows you exactly where it got its information. Great for getting accurate, current facts!
Chat Completions with Search
let perplexity = ProtectMyAPI.perplexityService()
// Simple search-enhanced chat
let response = try await perplexity.chat(
message: "What are the latest developments in AI?",
model: .sonarPro
)
print(response.choices.first?.message.content ?? "")
// With citations
if let citations = response.citations {
print("Sources:")
for (index, url) in citations.enumerated() {
print(" [\(index + 1)] \(url)")
}
}
Streaming
let perplexity = ProtectMyAPI.perplexityService()
for try await text in perplexity.streamChat(
message: "Explain quantum computing",
model: .sonar
) {
print(text, terminator: "")
}
Full Request Options
let request = PerplexityChatRequest(
model: .sonarReasoningPro,
messages: [
.system("You are a helpful research assistant"),
.user("What is the current state of fusion energy?")
],
maxTokens: 2048,
temperature: 0.7,
searchRecencyFilter: .month, // week, month, year
returnCitations: true,
returnImages: true,
returnRelatedQuestions: true
)
let response = try await perplexity.createChatCompletion(request: request)
// Access related questions
if let relatedQuestions = response.relatedQuestions {
print("Related questions:")
for question in relatedQuestions {
print(" - \(question)")
}
}
// Access images from search
if let images = response.images {
for image in images {
print("Image: \(image)")
}
}
Available Models
| Model | Description |
|---|---|
| sonar | Fast search-enhanced model |
| sonarPro | Advanced search with better reasoning |
| sonarReasoning | Chain-of-thought reasoning with search |
| sonarReasoningPro | Best reasoning capabilities |
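Perplexity marks claims in its answers with bracketed indices that line up with the `citations` array. A small helper for rendering them as a numbered source list — the inputs here are illustrative:

```swift
// Appends a numbered "Sources" list matching the [1], [2] markers in the answer.
func appendCitations(to answer: String, citations: [String]) -> String {
    guard !citations.isEmpty else { return answer }
    let sources = citations.enumerated()
        .map { "[\($0.offset + 1)] \($0.element)" }
        .joined(separator: "\n")
    return answer + "\n\nSources:\n" + sources
}

let text = appendCitations(
    to: "Fusion research advanced in 2024 [1].",
    citations: ["https://example.com/fusion"]
)
print(text)
```

This keeps the index-to-URL mapping intact, so `[1]` in the answer always refers to the first listed source.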
Replicate 🚀
Run ANY machine learning model - thousands to choose from! Replicate hosts everything from image generators to video models to audio processors.
What makes Replicate special: It’s like an app store for AI models. Want to try the latest FLUX image model? A video generator? A music creator? Replicate probably has it. You can even train your own models!
Image Generation (Flux)
let replicate = ProtectMyAPI.replicateService()
// Quick Flux Schnell generation
let images = try await replicate.generateFluxSchnell(
prompt: "A majestic castle on a mountain",
aspectRatio: "16:9",
numOutputs: 1
)
for imageUrl in images {
print("Image: \(imageUrl)")
}
// Flux Dev (higher quality)
let devImages = try await replicate.generateFluxDev(
prompt: "Hyperrealistic portrait of a robot",
guidance: 3.5,
numInferenceSteps: 28
)
// Flux Pro (best quality)
let proImages = try await replicate.generateFluxPro(
prompt: "Professional product photo",
aspectRatio: "1:1",
safetyTolerance: 2
)
Running Any Model
let replicate = ProtectMyAPI.replicateService()
// Create a prediction
let prediction = try await replicate.createPrediction(
model: "stability-ai/sdxl",
input: [
"prompt": "A beautiful sunset over the ocean",
"negative_prompt": "blurry, low quality",
"num_outputs": 2
]
)
// Wait for completion
let result = try await replicate.waitForPrediction(
id: prediction.id,
pollingInterval: 1.0
)
if let outputs = result.output?.asArray {
for output in outputs {
print("Output: \(output)")
}
}
Model Discovery
let replicate = ProtectMyAPI.replicateService()
// Get model info
let model = try await replicate.getModel(
owner: "black-forest-labs",
name: "flux-schnell"
)
print("Model: \(model.name)")
print("Description: \(model.description ?? "")")
// Search models
let results = try await replicate.searchModels(query: "image generation")
for model in results.results {
print("\(model.owner)/\(model.name)")
}
// List available hardware
let hardware = try await replicate.listHardware()
for hw in hardware {
print("\(hw.name): \(hw.sku)")
}
Training Custom Models
let replicate = ProtectMyAPI.replicateService()
// Create a training
let training = try await replicate.createTraining(
modelOwner: "ostris",
modelName: "flux-dev-lora-trainer",
versionId: "latest",
destination: "your-username/my-model",
input: [
"input_images": "https://example.com/training-images.zip",
"steps": 1000,
"trigger_word": "mytoken"
]
)
// Check training status
let status = try await replicate.getTraining(id: training.id)
print("Training status: \(status.status)")
Stability AI 🎨
The kings of image generation! Create stunning images, edit photos, upscale images, and even generate 3D models.
What makes Stability AI special: They created Stable Diffusion, one of the most popular AI image generators. Their latest models (Ultra, Core, SD3.5) produce incredibly detailed, photorealistic images. Plus they have editing tools (remove backgrounds, inpaint, outpaint) and even 3D model generation!
Image Generation
let stability = ProtectMyAPI.stabilityService()
// Ultra (best quality)
let ultraImage = try await stability.generateUltra(
prompt: "A serene Japanese garden with cherry blossoms",
negativePrompt: "blurry, distorted",
aspectRatio: "16:9",
seed: 12345,
outputFormat: "webp"
)
print("Ultra image: \(ultraImage.count) bytes")
// Core (balanced)
let coreImage = try await stability.generateCore(
prompt: "Abstract digital art",
stylePreset: "digital-art"
)
// SD3.5 (fast)
let sd35Image = try await stability.generateSD35(
prompt: "Photorealistic landscape",
model: "sd3.5-large",
cfgScale: 7.0
)
Image Upscaling
let stability = ProtectMyAPI.stabilityService()
// Fast upscale (4x)
let upscaled = try await stability.upscaleFast(
image: imageData,
outputFormat: "png"
)
// Conservative upscale (preserves details)
let conservative = try await stability.upscaleConservative(
image: imageData,
prompt: "high quality photo"
)
// Creative upscale (async - use for best results)
let creativeResult = try await stability.upscaleCreative(
image: imageData,
prompt: "ultra detailed, 8k",
creativity: 0.3
)
// Poll for result
if case .pending(let generationId) = creativeResult {
let result = try await stability.getAsyncResult(
id: generationId,
endpoint: "upscale/creative"
)
}
Image Editing
let stability = ProtectMyAPI.stabilityService()
// Search and replace objects
let replaced = try await stability.searchAndReplace(
image: imageData,
prompt: "a golden retriever",
searchPrompt: "the dog"
)
// Remove background
let noBg = try await stability.removeBackground(image: imageData)
// Erase objects (with mask)
let erased = try await stability.erase(
image: imageData,
mask: maskData
)
// Inpaint (fill masked area with prompt)
let inpainted = try await stability.inpaint(
image: imageData,
mask: maskData,
prompt: "beautiful flowers"
)
// Outpaint (extend image)
let extended = try await stability.outpaint(
image: imageData,
prompt: "continued landscape",
left: 200,
right: 200
)
// Recolor objects
let recolored = try await stability.searchAndRecolor(
image: imageData,
prompt: "bright red",
selectPrompt: "the car"
)
// Replace background and relight
let relit = try await stability.replaceBackgroundAndRelight(
image: imageData,
backgroundPrompt: "sunset beach scene"
)
Control Features
let stability = ProtectMyAPI.stabilityService()
// Sketch to image
let fromSketch = try await stability.controlSketch(
image: sketchData,
prompt: "modern house architectural render",
controlStrength: 0.7
)
// Structure control (maintain composition)
let structured = try await stability.controlStructure(
image: referenceImage,
prompt: "oil painting style"
)
// Style transfer
let styled = try await stability.controlStyle(
image: contentImage,
prompt: "Van Gogh starry night style",
fidelity: 0.5
)
3D Generation
let stability = ProtectMyAPI.stabilityService()
// Fast 3D from image
let model3D = try await stability.generate3DFast(
image: objectImage,
textureResolution: 1024,
foregroundRatio: 0.85
)
// Returns .glb file data
// Point-aware 3D (better for complex objects)
let detailedModel = try await stability.generate3DPointAware(
image: objectImage,
pixelDensity: 48,
remeshOption: "quad"
)
Together AI ⚡
Fast, affordable access to the best open-source AI models! Run Llama, Qwen, Mistral, and more at lightning speed.
What makes Together AI special: They host all the best open-source models (Llama 3.3, Qwen 2.5, Mixtral, etc.) with super fast inference. Often cheaper than OpenAI, and you get access to the latest open-source breakthroughs. Plus they support tool calling and structured JSON outputs!
Chat Completions
let together = ProtectMyAPI.togetherService()
// Simple chat
let response = try await together.chat(
message: "Explain the theory of relativity",
model: .llama3_3_70b,
systemPrompt: "You are a physics professor"
)
print(response)
// Full request with options
let request = TogetherChatRequest(
model: .qwen2_5_72b,
messages: [
.system("You are a helpful assistant"),
.user("Write a haiku about coding")
],
maxTokens: 100,
temperature: 0.7,
topP: 0.9
)
let result = try await together.createChatCompletion(request: request)
print(result.choices.first?.message.content ?? "")
Streaming
let together = ProtectMyAPI.togetherService()
for try await text in together.streamChat(
message: "Tell me a long story",
model: .llama3_1_70b
) {
print(text, terminator: "")
}
Tool Calling (Function Calling)
let together = ProtectMyAPI.togetherService()
// Define tools
let tools = [
TogetherTool(
function: TogetherToolFunction(
name: "get_weather",
description: "Get the current weather for a location",
parameters: TogetherFunctionParameters(
properties: [
"location": TogetherPropertySchema(
type: "string",
description: "City name"
),
"unit": TogetherPropertySchema(
type: "string",
enumValues: ["celsius", "fahrenheit"]
)
],
required: ["location"]
)
)
)
]
let request = TogetherChatRequest(
model: .llama3_1_70b,
messages: [.user("What's the weather in Tokyo?")],
tools: tools,
toolChoice: "auto"
)
let response = try await together.createChatCompletion(request: request)
// Handle tool calls
if let toolCalls = response.choices.first?.message.toolCalls {
for toolCall in toolCalls {
print("Function: \(toolCall.function.name)")
print("Arguments: \(toolCall.function.arguments)")
// Execute function and continue conversation
let result = executeFunction(toolCall.function.name, toolCall.function.arguments)
// Add tool result to messages
var messages = request.messages
messages.append(.assistant("", toolCalls: toolCalls))
messages.append(.tool(result, toolCallId: toolCall.id))
// Continue conversation
let followUp = TogetherChatRequest(
model: .llama3_1_70b,
messages: messages
)
let finalResponse = try await together.createChatCompletion(request: followUp)
}
}
JSON Mode / Structured Outputs
let together = ProtectMyAPI.togetherService()
// Simple JSON mode
let jsonRequest = TogetherChatRequest(
model: .llama3_1_70b,
messages: [.user("List 3 programming languages as JSON")],
responseFormat: .json
)
// With schema
let schemaRequest = TogetherChatRequest(
model: .llama3_1_70b,
messages: [.user("Analyze this text sentiment")],
responseFormat: .jsonSchema(
TogetherJSONSchema(
properties: [
"sentiment": TogetherPropertySchema(
type: "string",
enumValues: ["positive", "negative", "neutral"]
),
"confidence": TogetherPropertySchema(type: "number"),
"keywords": TogetherPropertySchema(type: "array")
],
required: ["sentiment", "confidence"]
)
)
)
let result = try await together.createChatCompletion(request: schemaRequest)
let jsonString = result.choices.first?.message.content ?? ""
Embeddings
let together = ProtectMyAPI.togetherService()
let response = try await together.createEmbeddings(
request: TogetherEmbeddingRequest(
model: .bgeLarge,
input: ["Hello world", "How are you?"]
)
)
for embedding in response.data {
print("Embedding \(embedding.index): \(embedding.embedding.count) dimensions")
}
Image Generation
let together = ProtectMyAPI.togetherService()
let response = try await together.generateImage(
request: TogetherImageRequest(
model: .fluxSchnell,
prompt: "A cyberpunk cityscape at night",
n: 1,
width: 1024,
height: 1024,
steps: 4
)
)
for image in response.data {
if let url = image.url {
print("Image URL: \(url)")
}
}
Available Models
Chat Models:
| Model | Description |
|---|---|
| llama3_3_70b | Meta Llama 3.3 70B Instruct Turbo |
| llama3_1_405b | Meta Llama 3.1 405B Instruct |
| qwen2_5_72b | Qwen 2.5 72B Instruct |
| qwen2_5_coder_32b | Qwen 2.5 Coder 32B |
| mixtral8x22b | Mixtral 8x22B Instruct |
| deepseekCoder_33b | DeepSeek Coder 33B |
Image Models:
| Model | Description |
|---|---|
| fluxSchnell | FLUX.1 Schnell (free tier) |
| fluxDev | FLUX.1 Dev |
| fluxPro | FLUX.1 Pro |
| stableDiffusionXL | Stable Diffusion XL |
Embedding Models:
| Model | Description |
|---|---|
| bgeLarge | BGE Large EN v1.5 |
| e5MistralInstruct | E5 Mistral 7B Instruct |
Need Help? 🆘
Questions? Join our Discord community or email [email protected]. We’re happy to help!
Common Issues
“API key not found” error → Make sure you’ve added your API key in the ProtectMyAPI Dashboard. Go to your app → Settings → API Keys.
“Attestation failed” error → You’re probably testing on a simulator/emulator. Use a real device!
“Rate limited” error → You’ve hit your usage limits. Check your plan in the dashboard or wait a bit before trying again.
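Rate limits are usually transient, so retrying with exponential backoff often resolves them without user-visible failures. A minimal sketch — the helper name and its defaults are our own, not part of the SDK:

```swift
import Foundation

// Retries a throwing operation with exponential backoff between attempts.
func withRetry<T>(maxAttempts: Int = 3,
                  baseDelay: TimeInterval = 0.5,
                  _ operation: () throws -> T) throws -> T {
    var lastError: Error?
    for attempt in 0..<maxAttempts {
        do { return try operation() }
        catch {
            lastError = error
            if attempt < maxAttempts - 1 {
                // Backoff doubles each time: 0.5s, 1s, 2s, ...
                Thread.sleep(forTimeInterval: baseDelay * pow(2, Double(attempt)))
            }
        }
    }
    throw lastError!
}

// Example: a fake operation that succeeds on the third attempt.
struct RateLimited: Error {}
var calls = 0
let value = try withRetry(baseDelay: 0.01) { () -> Int in
    calls += 1
    if calls < 3 { throw RateLimited() }
    return 42
}
print(value) // 42
```

Wrap your provider calls in `withRetry` only for errors worth retrying; attestation or missing-key errors will not fix themselves and should fail fast.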