iOS SDK
Build secure iOS apps with hardware-backed device attestation and 20+ AI providers.
How it works: Every API request from your app is cryptographically verified using Apple's App Attest (Secure Enclave). This proves the request came from YOUR legitimate app on a real iPhone or iPad, not from an attacker's script, a bot, or a jailbroken device. Your API keys never touch the device.
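In code, the whole flow is two steps; here is a minimal sketch using the configure and get calls documented below (the app ID, URL, and User type are placeholders):

```swift
import ProtectMyAPI

// 1. Configure once at launch with the app ID from your dashboard
ProtectMyAPI.shared.configure(appId: "app_your_id_here")

// 2. Make requests through the SDK; attestation is attached automatically
struct User: Decodable { let id: Int; let name: String }
let users: [User] = try await ProtectMyAPI.shared.get("https://api.example.com/users")
```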
Requirements
| Requirement | Details |
|---|---|
| iOS 14.0+ | App Attest was introduced in iOS 14 |
| Xcode 15+ | For building your app |
| Physical Device | Simulator doesn't support App Attest |
| ProtectMyAPI Account | Sign up free |
Testing: You MUST test on a real iPhone or iPad. The iOS Simulator cannot perform device attestation.
Installation
Add the SDK
Swift Package Manager (Recommended)
- Open your project in Xcode
- Go to File → Add Package Dependencies
- Paste: https://github.com/protectmyapi/ios-sdk
- Click Add Package
Package.swift:
dependencies: [
.package(url: "https://github.com/protectmyapi/ios-sdk", from: "1.0.0")
]
Enable App Attest
- Select your app target in Xcode
- Go to Signing & Capabilities
- Click + Capability
- Add App Attest
Initialize the SDK
SwiftUI:
import SwiftUI
import ProtectMyAPI
@main
struct MyApp: App {
init() {
ProtectMyAPI.shared.configure(appId: "app_your_id_here")
}
var body: some Scene {
WindowGroup {
ContentView()
}
}
}
UIKit:
import UIKit
import ProtectMyAPI
@main
class AppDelegate: UIResponder, UIApplicationDelegate {
func application(_ application: UIApplication,
didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
ProtectMyAPI.shared.configure(appId: "app_your_id_here")
return true
}
}
Configuration Options
The SDK offers extensive configuration for security and behavior:
let config = ProtectMyAPIConfiguration(
appId: "app_your_id_here",
environment: .production, // .production or .development
// Security Options
enableAppAttest: true, // Use Apple's App Attest (default: true)
enableCertificatePinning: true, // Pin TLS certificates (default: true)
pinnedPublicKeyHashes: [ // Custom certificate hashes
"sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
],
enableSecurityChecks: true, // Run security checks (default: true)
allowJailbrokenDevices: false, // Block jailbroken devices (default: false)
allowSimulator: false, // Block simulator (default: false)
enableRequestSigning: true, // Sign all requests (default: true)
// Network Options
timeout: 30.0, // Request timeout in seconds
maxRetryAttempts: 3, // Auto-retry failed requests
retryDelay: 1.0, // Initial retry delay
// Logging
logLevel: .info // .none, .error, .warning, .info, .debug
)
ProtectMyAPI.shared.configure(config)
Configuration Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| appId | String | Required | Your app ID from the dashboard |
| environment | Environment | .production | .production or .development |
| enableAppAttest | Bool | true | Enable Apple App Attest verification |
| enableCertificatePinning | Bool | true | Pin TLS certificates to prevent MITM attacks |
| pinnedPublicKeyHashes | [String] | nil | Custom certificate hash pins |
| enableSecurityChecks | Bool | true | Run jailbreak, debugger, and tampering checks |
| allowJailbrokenDevices | Bool | false | Allow requests from jailbroken devices |
| allowSimulator | Bool | false | Allow requests from iOS Simulator |
| enableRequestSigning | Bool | true | Cryptographically sign all requests |
| timeout | TimeInterval | 30.0 | Network request timeout |
| maxRetryAttempts | Int | 3 | Number of automatic retries |
| retryDelay | TimeInterval | 1.0 | Initial delay between retries (exponential backoff) |
| logLevel | LogLevel | .info | Logging verbosity |
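As a concrete illustration of the retry settings, here is roughly the schedule they imply, assuming a simple doubling backoff (the exact multiplier the SDK uses is not documented here, so treat the factor of 2 as an assumption):

```swift
import Foundation

// Delay before each retry attempt, doubling from the initial retryDelay
func retryDelays(initial: TimeInterval, attempts: Int) -> [TimeInterval] {
    (0..<attempts).map { initial * pow(2.0, Double($0)) }
}

// With the defaults (retryDelay: 1.0, maxRetryAttempts: 3)
print(retryDelays(initial: 1.0, attempts: 3)) // [1.0, 2.0, 4.0]
```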
Making Secure Requests
HTTP Methods
// GET request
let users: [User] = try await ProtectMyAPI.shared.get("https://api.example.com/users")
// POST request
let newUser = try await ProtectMyAPI.shared.post(
"https://api.example.com/users",
body: ["name": "John", "email": "[email protected]"]
)
// PUT request
let updated = try await ProtectMyAPI.shared.put(
"https://api.example.com/users/123",
body: ["name": "John Updated"]
)
// DELETE request
try await ProtectMyAPI.shared.delete("https://api.example.com/users/123")
// Generic request with full control
let response = try await ProtectMyAPI.shared.request(
url: "https://api.example.com/data",
method: "PATCH",
headers: ["X-Custom-Header": "value"],
body: someData
)
Using URLRequest
var request = URLRequest(url: URL(string: "https://api.example.com/users")!)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try JSONEncoder().encode(["name": "John"])
let (data, response) = try await ProtectMyAPISession.shared.data(for: request)
Fetching Secrets
Retrieve secrets stored in your ProtectMyAPI dashboard securely:
// Get a single secret
let stripeKey = try await ProtectMyAPI.shared.getSecret("stripe_publishable_key")
// Get multiple secrets
let secrets = try await ProtectMyAPI.shared.getSecrets(["stripe_key", "pusher_key"])
Device Registration
Register the device for attestation (done automatically, but can be called manually):
let registration = try await ProtectMyAPI.shared.registerDevice()
print("Device ID: \(registration.deviceId)")
print("Attestation valid: \(registration.isValid)")
Security Features
Security Report
Get a comprehensive security assessment of the device:
let report = ProtectMyAPI.shared.getSecurityReport()
print("Jailbroken: \(report.isJailbroken)")
print("Debugger attached: \(report.isDebuggerAttached)")
print("Running in Simulator: \(report.isRunningInEmulator)")
print("Reverse engineered: \(report.isReverseEngineered)")
print("Tampered: \(report.isTampered)")
print("Failed checks: \(report.failedChecks)")
Manual Security Checks
let checker = SecurityChecker.shared
// Individual checks
let isJailbroken = checker.isDeviceJailbroken()
let isDebugged = checker.isDebuggerAttached()
let isSimulator = checker.isRunningInSimulator()
let isTampered = checker.isAppTampered()
// Block if compromised
if isJailbroken || isDebugged {
showSecurityAlert()
return
}
What's Detected
| Check | Description |
|---|---|
| Jailbreak | Cydia, Sileo, Unc0ver, checkra1n, and 50+ jailbreak indicators |
| Debugger | LLDB, Xcode debugging, ptrace detection |
| Simulator | iOS Simulator environment |
| Hooking | Frida, Cycript, MobileSubstrate, and other hooking frameworks |
| Tampering | Modified binaries, resigned apps, sandbox violations |
AI Providers
ProtectMyAPI provides native integrations with 20+ AI providers. Your API keys are stored securely on our servers and never touch the device.
OpenAI (GPT-4, DALL-E, Whisper)
let openai = ProtectMyAPI.openAIService()
// Simple chat
let answer = try await openai.chat(
message: "Explain quantum computing",
model: .gpt4oMini,
systemPrompt: "You are a helpful assistant"
)
// Streaming response
for try await chunk in openai.streamChat(message: "Write a story") {
print(chunk, terminator: "")
}
// Full request with all options
let request = OpenAIChatRequest(
model: "gpt-4o",
messages: [
.system("You are a coding expert"),
.user("Write a Swift function to sort an array")
],
maxTokens: 1000,
temperature: 0.7,
topP: 0.9,
presencePenalty: 0.1,
frequencyPenalty: 0.1
)
let response = try await openai.chatCompletion(body: request)
// Vision - analyze images
let imageData = UIImage(named: "photo")!.jpegData(compressionQuality: 0.8)!
let description = try await openai.analyzeImage(
prompt: "What's in this image?",
imageData: imageData,
model: .gpt4o
)
// Image generation with DALL-E
let images = try await openai.generateImage(
prompt: "A futuristic city at sunset",
model: "dall-e-3",
size: .landscape1792x1024,
quality: .hd,
style: .vivid
)
// Text to speech
let audioData = try await openai.textToSpeech(
text: "Hello, welcome to my app!",
voice: .nova,
model: "tts-1-hd",
speed: 1.0
)
// Audio transcription (Whisper)
let audioFile: Data = ... // load your recorded audio data here
let transcription = try await openai.transcribeAudio(
audioData: audioFile,
language: "en",
prompt: "Technical discussion about APIs"
)
// Embeddings
let embeddings = try await openai.createEmbedding(
input: "Hello world",
model: "text-embedding-3-small"
)
// Multi-turn conversation
let session = openai.chatSession(model: .gpt4o, systemPrompt: "You are helpful")
let response1 = try await session.send("What is Swift?")
let response2 = try await session.send("How does it compare to Kotlin?")
session.clearHistory()
Anthropic (Claude)
let claude = ProtectMyAPI.anthropicService()
// Simple chat
let answer = try await claude.chat(
message: "Explain machine learning",
model: .claude35Sonnet,
maxTokens: 1024
)
// Streaming
for try await chunk in claude.streamChat(message: "Write an essay") {
print(chunk, terminator: "")
}
// With system prompt
let response = try await claude.chat(
message: "Review this code",
model: .claude35Sonnet,
systemPrompt: "You are an expert code reviewer"
)
// Vision - analyze images
let analysis = try await claude.analyzeImage(
prompt: "Describe this diagram",
imageData: imageData,
mimeType: "image/png"
)
// Tool use (function calling)
let tools = [
AnthropicTool(
name: "get_weather",
description: "Get the current weather in a location",
inputSchema: [
"type": "object",
"properties": [
"location": ["type": "string", "description": "City name"]
],
"required": ["location"]
]
)
]
let toolResponse = try await claude.createMessageWithTools(
message: "What's the weather in San Francisco?",
tools: tools
)
// Multi-turn conversation
let session = claude.chatSession(model: .claude35Sonnet)
let r1 = try await session.send("What is Rust?")
let r2 = try await session.send("Show me an example")
Google Gemini
let gemini = ProtectMyAPI.geminiService()
// Simple text generation
let text = try await gemini.generateText(
prompt: "Write a haiku about coding",
model: "gemini-2.0-flash"
)
// Streaming
for try await chunk in gemini.streamContent(prompt: "Explain relativity") {
print(chunk, terminator: "")
}
// Vision - analyze images
let response = try await gemini.analyzeImage(
prompt: "What's in this photo?",
imageData: imageData,
mimeType: "image/jpeg",
model: "gemini-2.0-flash"
)
// Audio transcription
let transcript = try await gemini.transcribeAudio(
audioData: audioFile,
mimeType: "audio/mp4"
)
// Structured output (JSON mode)
let structuredResponse = try await gemini.generateStructuredOutput(
prompt: "List 3 programming languages with their use cases",
schema: [
"type": "array",
"items": [
"type": "object",
"properties": [
"name": ["type": "string"],
"useCase": ["type": "string"]
]
]
]
)
// Image generation
let imageData = try await gemini.generateImage(
prompt: "A serene mountain landscape",
model: "gemini-2.0-flash-exp-image-generation"
)
// Multi-turn chat
let chat = gemini.chat(systemInstruction: "You are a math tutor")
let a1 = try await chat.send("What is calculus?")
let a2 = try await chat.send("Give me an example")
Mistral AI
let mistral = ProtectMyAPI.mistralService()
// Chat completion
let answer = try await mistral.chat(
message: "Explain neural networks",
model: .mistralLarge
)
// Streaming
for try await chunk in mistral.streamChat(message: "Write a poem") {
print(chunk, terminator: "")
}
// Code generation
let code = try await mistral.chat(
message: "Write a Python quicksort",
model: .codestral,
systemPrompt: "You are an expert programmer"
)
// Embeddings
let embeddings = try await mistral.createEmbedding(
input: "Hello world",
model: .mistralEmbed
)
// JSON mode
let json = try await mistral.chat(
message: "List top 5 cities",
model: .mistralLarge,
responseFormat: .json
)
Groq (Ultra-Fast Inference)
let groq = ProtectMyAPI.groqService()
// Lightning-fast chat (Llama 3.1 70B in milliseconds!)
let answer = try await groq.chat(
message: "What is the speed of light?",
model: .llama31_70b
)
// Streaming
for try await chunk in groq.streamChat(message: "Tell a joke", model: .mixtral) {
print(chunk, terminator: "")
}
// Whisper transcription (fastest in the world!)
let transcript = try await groq.transcribeAudio(
audioData: audioFile,
model: .whisperLargeV3
)
DeepSeek
let deepseek = ProtectMyAPI.deepSeekService()
// Chat with DeepSeek V3
let answer = try await deepseek.chat(
message: "Explain transformer architecture",
model: .deepSeekChat
)
// Code generation
let code = try await deepseek.chat(
message: "Implement binary search in Swift",
model: .deepSeekCoder
)
// Reasoning (DeepSeek R1)
let reasoning = try await deepseek.chat(
message: "Solve this math problem step by step...",
model: .deepSeekReasoner
)
Stability AI (Image Generation)
let stability = ProtectMyAPI.stabilityService()
// Stable Image Ultra (highest quality)
let ultraImage = try await stability.generateUltra(
request: StabilityUltraRequest(
prompt: "A majestic castle on a cliff",
negativePrompt: "blurry, low quality",
aspectRatio: .landscape16x9
),
apiKey: "your-key" // or use ProtectMyAPI proxy
)
// Stable Image Core (balanced)
let coreImage = try await stability.generateCore(
request: StabilityCoreRequest(
prompt: "A cute robot",
stylePreset: .digitalArt
)
)
// SD3.5 (latest model)
let sd35Image = try await stability.generateSD35(
request: StabilitySD35Request(
prompt: "Abstract art",
model: .sd35Large
)
)
// Upscale images (4x)
let upscaled = try await stability.upscaleFast(
imageData: originalImage,
outputFormat: .png
)
// Conservative upscale (preserves details)
let upscaledConservative = try await stability.upscaleConservative(
imageData: originalImage,
prompt: "A detailed landscape photo"
)
// Image editing
let edited = try await stability.searchAndReplace(
imageData: originalImage,
searchPrompt: "cat",
prompt: "dog"
)
// Remove background
let noBackground = try await stability.removeBackground(imageData: photo)
// Inpainting
let inpainted = try await stability.inpaint(
imageData: originalImage,
maskData: maskImage,
prompt: "A beautiful garden"
)
// Outpainting (expand image)
let expanded = try await stability.outpaint(
imageData: originalImage,
direction: .right,
prompt: "Continue the landscape"
)
// Control generation (sketch to image)
let fromSketch = try await stability.controlSketch(
imageData: sketchData,
prompt: "A detailed house"
)
// Style transfer
let stylized = try await stability.controlStyle(
imageData: contentImage,
styleImageData: styleImage
)
// Video generation (Stable Video)
let video = try await stability.imageToVideo(
imageData: startImage,
seed: 12345
)
ElevenLabs (Voice & Audio)
let elevenlabs = ProtectMyAPI.elevenLabsService()
// Text to speech
let audio = try await elevenlabs.textToSpeech(
text: "Hello, welcome to my app!",
voiceId: "EXAVITQu4vr4xnSDxMaL", // Sarah
modelId: .multilingualV2,
voiceSettings: ElevenLabsVoiceSettings(
stability: 0.5,
similarityBoost: 0.75,
style: 0.5,
useSpeakerBoost: true
)
)
// Streaming TTS
for try await chunk in elevenlabs.streamTextToSpeech(text: "Long text...") {
audioPlayer.append(chunk)
}
// Speech to speech (voice conversion)
let converted = try await elevenlabs.speechToSpeech(
audio: originalAudio,
voiceId: "targetVoiceId",
removeBackgroundNoise: true
)
// Voice cloning
let clonedVoice = try await elevenlabs.createVoiceClone(
name: "My Custom Voice",
files: [audioSample1, audioSample2, audioSample3],
description: "A warm, friendly voice"
)
// Sound effects generation
let soundEffect = try await elevenlabs.generateSoundEffect(
text: "A thunderstorm with heavy rain",
durationSeconds: 10
)
// Audio isolation (remove background noise)
let cleanAudio = try await elevenlabs.isolateAudio(audio: noisyAudio)
Perplexity (AI Search)
let perplexity = ProtectMyAPI.perplexityService()
// Web-grounded search
let response = try await perplexity.createChatCompletion(
request: PerplexityChatRequest(
model: .sonarLarge,
messages: [.user("What are the latest iPhone 16 features?")],
searchRecencyFilter: .week // Only results from past week
)
)
// Get the answer with citations
print(response.choices.first?.message.content ?? "")
for citation in response.citations ?? [] {
print("Source: \(citation)")
}
// Simple search
let results = try await perplexity.search(
queries: ["Latest AI developments"],
options: PerplexitySearchOptions(
model: .sonarPro,
recencyFilter: .month
)
)
// Streaming search
for try await chunk in perplexity.createChatCompletionStream(request: request) {
print(chunk.choices.first?.delta?.content ?? "", terminator: "")
}
Together AI
let together = ProtectMyAPI.togetherService()
// Chat with open-source models
let answer = try await together.createChatCompletion(
request: TogetherChatRequest(
model: "meta-llama/Llama-3.1-70B-Instruct",
messages: [.user("Explain transformers")]
)
)
// Image generation (FLUX)
let image = try await together.generateImage(
prompt: "A cyberpunk city",
model: "black-forest-labs/FLUX.1-schnell",
width: 1024,
height: 768
)
// Embeddings
let embeddings = try await together.createEmbeddings(
request: TogetherEmbeddingRequest(
model: "togethercomputer/m2-bert-80M-8k-retrieval",
input: ["Hello world"]
)
)
Replicate
let replicate = ProtectMyAPI.replicateService()
// Run any model
let prediction = try await replicate.createPrediction(
request: ReplicatePredictionRequest(
model: "stability-ai/sdxl",
input: ["prompt": "A beautiful sunset"]
),
waitForCompletion: true
)
// Flux Schnell (fast image gen)
let fluxImage = try await replicate.generateFluxSchnell(
prompt: "A serene lake"
)
// Flux Pro (high quality)
let fluxProImage = try await replicate.generateFluxPro(
prompt: "Detailed portrait"
)
// List and manage predictions
let predictions = try await replicate.listPredictions()
let status = try await replicate.getPrediction(id: "prediction_id")
try await replicate.cancelPrediction(id: "prediction_id")
Fireworks AI
let fireworks = ProtectMyAPI.fireworksService()
// Chat completion
let answer = try await fireworks.chat(
message: "Explain quantum computing",
model: .llama3_70b
)
// DeepSeek R1 via Fireworks
let reasoning = try await fireworks.deepSeekR1(
message: "Solve: If x + 5 = 12, what is x?"
)
// Streaming
for try await chunk in fireworks.streamChat(message: "Write a story") {
print(chunk, terminator: "")
}
// Image generation
let image = try await fireworks.generateImage(
prompt: "A fantasy landscape",
model: .stableDiffusionXL
)
// Embeddings
let embeddings = try await fireworks.createEmbedding(input: "Hello world")
OpenRouter (200+ Models)
let openrouter = ProtectMyAPI.openRouterService()
// Chat with automatic fallback
let answer = try await openrouter.chat(
message: "Tell me a joke",
models: [.gpt4o, .claudeSonnet, .llama3_70b], // Will try in order
route: .fallback
)
// Vision across models
let description = try await openrouter.chatWithVision(
message: "What's in this image?",
imageURL: "https://example.com/image.jpg",
models: [.gpt4o, .claudeSonnet]
)
// Streaming
for try await chunk in openrouter.streamChat(message: "Write a poem") {
print(chunk, terminator: "")
}
// JSON mode
let json = try await openrouter.chatWithSchema(
message: "List 3 countries",
schema: ["type": "array", "items": ["type": "string"]]
)
Brave Search
let brave = ProtectMyAPI.braveSearchService()
// Web search
let webResults = try await brave.webSearch(
query: "best programming languages 2024",
count: 10,
safesearch: .moderate,
freshness: .month
)
// News search
let news = try await brave.newsSearch(
query: "AI developments",
freshness: .day
)
// Image search
let images = try await brave.imageSearch(
query: "cute cats",
count: 20
)
// Video search
let videos = try await brave.videoSearch(query: "Swift tutorials")
// Local search (businesses)
let localResults = try await brave.localSearch(
query: "coffee shops near me",
country: "US"
)
// Autocomplete suggestions
let suggestions = try await brave.suggest(query: "how to")
DeepL (Translation)
let deepl = ProtectMyAPI.deepLService()
// Simple translation
let translated = try await deepl.translate(
text: "Hello, how are you?",
to: .german,
from: .english,
formality: .formal
)
// Batch translation
let translations = try await deepl.translate(
texts: ["Hello", "Goodbye", "Thank you"],
to: .spanish
)
// With language detection
let detailed = try await deepl.translateWithDetails(
text: "Bonjour le monde",
to: .english
)
print("Detected: \(detailed.detectedSourceLanguage)")
print("Translation: \(detailed.text)")
// Document translation
let docHandle = try await deepl.translateDocument(
documentData: pdfData,
filename: "document.pdf",
targetLang: .french
)
// Wait for document translation
let translatedDoc = try await deepl.translateDocumentAndWait(
documentData: pdfData,
filename: "document.pdf",
targetLang: .japanese
)Fal.ai (Fast Image Gen)
let fal = ProtectMyAPI.falService()
// Fast SDXL
let sdxlImage = try await fal.generateFastSDXL(
prompt: "A beautiful landscape",
imageSize: .squareHD,
numInferenceSteps: 25,
guidanceScale: 7.5
)
// Flux (high quality)
let fluxImage = try await fal.generateFlux(
prompt: "Detailed portrait",
imageSize: .landscapeHD
)
// Flux Schnell (fastest)
let fastImage = try await fal.generateFluxSchnell(
prompt: "Quick sketch",
numInferenceSteps: 4
)
// Flux Pro
let proImage = try await fal.generateFluxPro(
prompt: "Professional quality",
guidanceScale: 2.5
)
// With custom LoRA
let loraImage = try await fal.generateFluxLoRA(
prompt: "A portrait in custom style",
loras: [
FalLoRAWeight(path: "url-to-lora", scale: 0.8)
]
)
// Virtual try-on
let tryOnResult = try await fal.virtualTryOn(
personImage: personData,
garmentImage: clothingData
)
// LoRA training
let trainingURL = try await fal.uploadTrainingData(zipData: imagesZip)
let trainingJob = try await fal.trainFluxLoRA(
imagesDataURL: trainingURL.absoluteString,
triggerWord: "mysubject",
steps: 1000
)
Open-Meteo (Weather)
let weather = ProtectMyAPI.openMeteoService()
// Simple forecast
let forecast = try await weather.getSimpleForecast(
latitude: 37.7749,
longitude: -122.4194,
days: 7
)
// Detailed forecast with specific variables
let detailed = try await weather.getForecast(
latitude: 37.7749,
longitude: -122.4194,
hourly: [.temperature2m, .precipitationProbability, .windSpeed10m],
daily: [.temperatureMax, .temperatureMin, .sunrise, .sunset],
current: [.temperature2m, .weatherCode, .isDay],
temperatureUnit: .fahrenheit,
forecastDays: 14
)
// Historical weather
let historical = try await weather.getHistoricalWeather(
latitude: 37.7749,
longitude: -122.4194,
startDate: "2024-01-01",
endDate: "2024-01-31",
daily: [.temperatureMax, .temperatureMin, .precipitationSum]
)
// Air quality
let airQuality = try await weather.getAirQuality(
latitude: 37.7749,
longitude: -122.4194,
current: [.europeanAqi, .usAqi, .pm10, .pm25]
)
// Marine forecast
let marine = try await weather.getMarineForecast(
latitude: 37.7749,
longitude: -122.4194,
hourly: [.waveHeight, .wavePeriod, .waveDirection]
)
// Geocoding (find coordinates)
let locations = try await weather.searchLocation(name: "San Francisco")
Error Handling
do {
let response = try await ProtectMyAPI.shared.get("https://api.example.com/data")
// Success!
} catch ProtectMyAPIError.attestationFailed(let reason) {
// Device couldn't be verified (jailbroken, simulator, etc.)
switch reason {
case .jailbreakDetected:
showAlert("Please use an unmodified device")
case .simulatorDetected:
showAlert("Please test on a real device")
case .serverRejected(let message):
showAlert("Verification failed: \(message)")
}
} catch ProtectMyAPIError.networkError(let error) {
showAlert("Network error: \(error.localizedDescription)")
} catch ProtectMyAPIError.serverError(let code, let message) {
showAlert("Server error \(code): \(message)")
} catch ProtectMyAPIError.rateLimited(let retryAfter) {
showAlert("Too many requests. Try again in \(retryAfter)s")
} catch ProtectMyAPIError.unauthorized {
showAlert("Invalid or expired credentials")
} catch ProtectMyAPIError.invalidConfiguration(let message) {
showAlert("Configuration error: \(message)")
} catch {
showAlert("Error: \(error.localizedDescription)")
}
Best Practices
Do:
- Initialize ProtectMyAPI as early as possible
- Always test on a real device before releasing
- Handle all error cases gracefully
- Use streaming for long AI responses
- Enable certificate pinning in production
- Keep allowJailbrokenDevices and allowSimulator false in production
Don't:
- Test only on the Simulator
- Put API keys in your app code
- Ignore error handling
- Disable security checks in production
- Trust attestation results from development builds
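Put together, the lists above amount to a production configuration along these lines (a sketch: it assumes the ProtectMyAPIConfiguration initializer provides defaults for any parameters you omit):

```swift
import ProtectMyAPI

let productionConfig = ProtectMyAPIConfiguration(
    appId: "app_your_id_here",
    environment: .production,
    enableAppAttest: true,           // hardware-backed attestation on
    enableCertificatePinning: true,  // defends against MITM
    allowJailbrokenDevices: false,   // keep false in production
    allowSimulator: false,           // keep false in production
    logLevel: .error                 // no verbose logging in release builds
)
ProtectMyAPI.shared.configure(productionConfig)
```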
How It Works
┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│     Your App     │     │   ProtectMyAPI   │     │   AI Provider    │
│                  │     │      Server      │     │  (OpenAI, etc)   │
└────────┬─────────┘     └────────┬─────────┘     └────────┬─────────┘
         │                        │                        │
         │ 1. Request + attestation                        │
         │ ──────────────────────>│                        │
         │                        │                        │
         │                        │ 2. Verify with Apple   │
         │                        │                        │
         │                        │ 3. Add API key & forward
         │                        │ ──────────────────────>│
         │                        │                        │
         │                        │      4. Get response   │
         │                        │<───────────────────────│
         │                        │                        │
         │   5. Return response   │                        │
         │<───────────────────────│                        │
- Your app makes a request → SDK adds cryptographic proof from the Secure Enclave
- ProtectMyAPI verifies with Apple → Apple confirms the request is from your legitimate app
- API key is added server-side → Your secrets never touch the device
- Request is forwarded → ProtectMyAPI proxies to the AI provider
- Response returns → Your app gets the result
FAQ
Q: Why can't I test on the Simulator?
App Attest requires the Secure Enclave hardware, which the Simulator doesn't have.
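If you want to detect support at runtime instead of hitting an attestation failure, Apple's DeviceCheck framework (independent of this SDK) exposes a flag for it:

```swift
import DeviceCheck

// isSupported is false on the Simulator and on hardware
// without a Secure Enclave
if DCAppAttestService.shared.isSupported {
    // Device attestation is available
} else {
    print("App Attest is not supported in this environment")
}
```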
Q: Will this work on jailbroken devices?
No. Jailbroken devices fail attestation by design, protecting you from compromised environments.
Q: Does this slow down my app?
The first request takes ~200ms extra for attestation setup. Subsequent requests add ~20ms overhead.
Q: What if ProtectMyAPI servers are down?
We maintain 99.9% uptime with global redundancy. Always implement error handling for edge cases.
Q: Can I use my own backend alongside ProtectMyAPI?
Yes! Use ProtectMyAPI for AI providers and secrets, and call your backend directly for other endpoints.
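For example, you can fetch a publishable key through the SDK while calling your own server with plain URLSession (the URL and secret name here are placeholders):

```swift
import Foundation
import ProtectMyAPI

// Attested and proxied: secrets and AI providers go through ProtectMyAPI
let stripeKey = try await ProtectMyAPI.shared.getSecret("stripe_publishable_key")

// Direct: your own endpoints use an ordinary URLSession
let url = URL(string: "https://api.yourserver.com/profile")!
let (data, _) = try await URLSession.shared.data(from: url)
```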
Next Steps
- Android SDK → Add Android support
- Flutter SDK → Cross-platform development
- AI Providers → Integrate 20+ AI services
- Dashboard Guide → Manage your apps