
AnyLanguageModel: A Unified Swift API for Local and Remote LLMs on Apple Platforms

Summary

The core problem Apple developers face when building AI apps is the complexity of model integration and the high cost of experimentation. AnyLanguageModel is a Swift package released to address this. The library is a drop-in replacement for Apple's Foundation Models framework and supports both local and remote LLMs through a single API. It integrates Core ML, MLX, and llama.cpp as well as cloud providers such as OpenAI and Anthropic, so developers can easily experiment with and switch between models without changing their code.

Key Points

  • AnyLanguageModel is designed as a drop-in replacement for Apple's Foundation Models framework, so existing code needs only minimal changes.
  • It supports eight or more model providers through a single API, including Core ML, MLX, llama.cpp (GGUF), and Ollama as well as cloud providers such as OpenAI and Anthropic.
  • It uses Swift 6.1 package traits to include only the backends you need, addressing the dependency bloat problem.
  • Going beyond Foundation Models' current limitations, it adds vision-language (image input) support first through cloud APIs such as Anthropic's, improving developer convenience.

Introducing AnyLanguageModel: One API for Local and Remote LLMs on Apple Platforms

LLMs have become essential tools for building software. But for Apple developers, integrating them remains unnecessarily painful.

Developers building AI-powered apps typically take a hybrid approach, adopting some combination of:

  • Local models using Core ML or MLX for privacy and offline capability
  • Cloud providers like OpenAI or Anthropic for frontier capabilities
  • Apple's Foundation Models as a system-level fallback

Each comes with different APIs, different requirements, different integration patterns. It's a lot, and it adds up quickly. When I interviewed developers about building AI-powered apps, friction with model integration came up immediately. One developer put it bluntly:

I thought I'd quickly use the demo for a test and maybe a quick and dirty build but instead wasted so much time. Drove me nuts.

The cost to experiment is high, which discourages developers from discovering that local, open-source models might actually work great for their use case.

Today we're announcing AnyLanguageModel, a Swift package that provides a drop-in replacement for Apple's Foundation Models framework with support for multiple model providers. Our goal is to reduce the friction of working with LLMs on Apple platforms and make it easier to adopt open-source models that run locally.

The core idea is simple:

Swap your import statement, keep the same API.

- import FoundationModels
+ import AnyLanguageModel

Here's what that looks like in practice. Start with Apple's built-in model:

let model = SystemLanguageModel.default
let session = LanguageModelSession(model: model)
let response = try await session.respond(to: "Explain quantum computing in one sentence")
print(response.content)

Now try an open-source model running locally via MLX:

let model = MLXLanguageModel(modelId: "mlx-community/Qwen3-4B-4bit")
let session = LanguageModelSession(model: model)
let response = try await session.respond(to: "Explain quantum computing in one sentence")
print(response.content)

AnyLanguageModel supports a range of providers:

  • Apple Foundation Models: Native integration with Apple's system model (macOS 26+ / iOS 26+)
  • Core ML: Run converted models with Neural Engine acceleration
  • MLX: Run quantized models efficiently on Apple Silicon
  • llama.cpp: Load GGUF models via the llama.cpp backend
  • Ollama: Connect to locally-served models via Ollama's HTTP API
  • OpenAI, Anthropic, Google Gemini: Cloud providers for comparison and fallback
  • Hugging Face Inference Providers: Hundreds of cloud models powered by world-class inference providers.

The focus is on local models that you can download from the Hugging Face Hub. Cloud providers are included to lower the barrier to getting started and to provide a migration path. Make it work, then make it right.
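
Every provider plugs into the same session API, so trying an alternative is mostly a matter of constructing a different model value. Here is a rough sketch of what that might look like for two other backends; the type and parameter names (OllamaLanguageModel, OpenAILanguageModel) follow the pattern of the examples above but are assumptions here, so check the package README for the exact spellings:

// Hypothetical: a model served locally by Ollama (type and parameter names assumed)
let localModel = OllamaLanguageModel(model: "qwen3:4b")

// Hypothetical: a hosted model for comparison or fallback (type and parameter names assumed)
let cloudModel = OpenAILanguageModel(
    apiKey: ProcessInfo.processInfo.environment["OPENAI_API_KEY"]!,
    model: "gpt-4o-mini"
)

// The session code stays identical no matter which model you pass in
let session = LanguageModelSession(model: localModel)
let response = try await session.respond(to: "Explain quantum computing in one sentence")
print(response.content)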

Design Choices: Building on Existing APIs

When designing AnyLanguageModel, we faced a choice: create a new abstraction that tries to capture everything, or build on an existing API. We chose the latter, using Apple's Foundation Models framework as the template.

This might seem counterintuitive. Why tie ourselves to Apple's choices? A few reasons:

  • Foundation Models is genuinely well-designed. It leverages Swift features like macros for an ergonomic developer experience, and its abstractions around sessions, tools, and generation map well to how LLMs actually work.
  • It's intentionally limited. Foundation Models represents something like a lowest common denominator for language model capabilities. Rather than seeing this as a weakness, we treat it as a stable foundation (hyuk hyuk). Every Swift developer targeting Apple platforms will encounter this API, so building on it directly means less conceptual overhead.
  • It keeps us grounded. Each additional layer of abstraction takes you further from the problem you're actually solving. Abstractions are powerful, but stack too many and they become a problem in themselves.

The result is that switching between providers requires minimal code changes, and the core abstractions remain clean and predictable.

One challenge with multi-backend libraries is dependency bloat. If you only want to run MLX models, you shouldn't have to pull in llama.cpp and all its dependencies.

AnyLanguageModel uses Swift 6.1 package traits to solve this. You opt in to only the backends you need:

dependencies: [
    .package(
        url: "https://github.com/mattt/AnyLanguageModel.git",
        from: "0.4.0",
        traits: ["MLX"] // Pull in MLX dependencies only
    )
]

Available traits include CoreML, MLX, and Llama (for llama.cpp / llama.swift). By default, no heavy dependencies are included. You get the base API plus cloud providers, which only require standard URLSession networking.

For Xcode projects (which don't yet support trait declarations directly), you can create a small internal Swift package that depends on AnyLanguageModel with the traits you need, then add that package as a local dependency. The README has detailed instructions.
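
A minimal wrapper package might look like the sketch below; the package name, platform versions, and target layout are placeholders rather than requirements, so adapt them to your project and defer to the README where it differs:

// swift-tools-version: 6.1
// Package.swift for a small local wrapper package ("ModelSupport" is an illustrative name)
import PackageDescription

let package = Package(
    name: "ModelSupport",
    platforms: [.macOS(.v15), .iOS(.v18)], // placeholder; match your deployment targets
    products: [
        .library(name: "ModelSupport", targets: ["ModelSupport"])
    ],
    dependencies: [
        .package(
            url: "https://github.com/mattt/AnyLanguageModel.git",
            from: "0.4.0",
            traits: ["MLX"] // opt in to only the backends you need
        )
    ],
    targets: [
        .target(
            name: "ModelSupport",
            dependencies: [
                .product(name: "AnyLanguageModel", package: "AnyLanguageModel")
            ]
        )
    ]
)

Adding that wrapper as a local dependency gives your app the selected traits without waiting for first-class trait support in Xcode.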

Extending Beyond Foundation Models

Vision-language models are incredibly capable and widely used. They can describe images, extract text from screenshots, analyze charts, and answer questions about visual content. Unfortunately, Apple's Foundation Models framework doesn't currently support sending images with prompts.

Building on an existing API means accepting its constraints. Apple will likely add image support in a future release (iOS 27, perhaps?), but vision-language models are too useful to wait for. So we've extended beyond what Foundation Models offers today.

Here's an example sending an image to Claude:

let model = AnthropicLanguageModel(
    apiKey: ProcessInfo.processInfo.environment["ANTHROPIC_API_KEY"]!,
    model: "claude-sonnet-4-5-20250929"
)
let session = LanguageModelSession(model: model)
let response = try await session.respond(
    to: "What's in this image?",
    image: .init(url: URL(fileURLWithPath: "/path/to/image.png"))
)

We're taking a calculated risk here; we might design something that conflicts with Apple's eventual implementation. But that's what deprecation warnings are for. Sometimes you have to write the API for the framework that doesn't exist yet.

To see AnyLanguageModel in action, check out chat-ui-swift, a SwiftUI chat application that demonstrates the library's capabilities.

The app includes:

  • Apple Intelligence integration via Foundation Models (macOS 26+)
  • Hugging Face OAuth authentication for accessing gated models
  • Streaming responses
  • Chat persistence

It's meant as a starting point: Fork it, extend it, swap in different models. See how the pieces fit together and adapt it to your needs.
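
Streaming in particular follows the Foundation Models pattern of iterating over partial responses as they arrive. A minimal sketch, assuming AnyLanguageModel mirrors Apple's streamResponse(to:) API (the exact element type is worth confirming against the library's documentation):

let model = SystemLanguageModel.default
let session = LanguageModelSession(model: model)

// Assumption: each element is a snapshot of the response generated so far,
// matching Foundation Models' streamResponse(to:) behavior
for try await partial in session.streamResponse(to: "Write a haiku about Swift") {
    print(partial)
}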

What's Next?

AnyLanguageModel is currently pre-1.0. The core API is stable, but we're actively working on bringing the full feature set of Foundation Models to all adapters, namely:

  • Tool calling across all providers
  • MCP integration for tools and elicitations
  • Guided generation for structured outputs (sketched after this list)
  • Performance optimizations for local inference
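
Guided generation is worth a concrete illustration. In the Foundation Models API that AnyLanguageModel mirrors, you describe the output shape with the @Generable macro and ask the session to produce it directly. A minimal sketch, assuming the respond(to:generating:) overload behaves the same way once it reaches the other adapters:

// Output shape described with Foundation Models' @Generable and @Guide macros
@Generable
struct BookRecommendation {
    @Guide(description: "Title of the recommended book")
    var title: String

    @Guide(description: "One-sentence reason for the recommendation")
    var reason: String
}

let session = LanguageModelSession(model: SystemLanguageModel.default)
let response = try await session.respond(
    to: "Recommend a book about compilers",
    generating: BookRecommendation.self
)
print(response.content.title)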

This library is the first step toward something larger. A unified inference API provides the scaffolding needed to build seamless agentic workflows on Apple platforms — applications where models can use tools, access system resources, and accomplish complex tasks. More on that soon. 🤫

AI-Generated Content

This content is an AI-generated summary, translation, and analysis of the original Hugging Face Blog post. Copyright belongs to the original authors; please refer to the original article for the authoritative version.
