Last updated April 10, 2026

ExecuTorch vs Core ML: Meta's Framework vs Apple's Native ML

ExecuTorch is Meta's cross-platform on-device framework with 12+ hardware backends and PyTorch integration. Core ML is Apple's native framework with the deepest Neural Engine access. ExecuTorch works across Apple and Android; Core ML delivers maximum performance on Apple hardware only. The choice depends on platform scope.

ExecuTorch

ExecuTorch is Meta's production-grade on-device inference framework that powers AI across Instagram, WhatsApp, and Facebook. It supports 12+ hardware backends including CoreML, Metal, XNNPACK, Vulkan, Qualcomm QNN, and MediaTek. ExecuTorch integrates deeply with PyTorch for model export and optimization.

Core ML

Core ML is Apple's native ML framework built into every Apple device. It provides direct access to the Neural Engine, GPU, and CPU with automatic hardware selection. Core ML requires no additional frameworks and is the most optimized way to run ML models on Apple hardware, supporting iOS, macOS, watchOS, and tvOS.

Feature comparison

Feature                  | ExecuTorch | Core ML
-------------------------|------------|----------------
LLM Text Generation      | Yes        | Yes
Speech-to-Text           | Yes        | Yes
Vision / Multimodal      | Yes        | Yes
Embeddings               | Yes        | Yes
Hybrid Cloud + On-Device | No         | No
Streaming Responses      | Yes        | No
Tool / Function Calling  | No         | No
NPU Acceleration         | Yes        | Yes
INT4/INT8 Quantization   | Yes        | Yes
iOS                      | Yes        | Yes
Android                  | Yes        | No
macOS                    | Yes        | Yes
Linux                    | Yes        | No
Python SDK               | Yes        | Via coremltools
Swift SDK                | Yes        | Yes
Kotlin SDK               | Yes        | No
Open Source              | Yes        | No

Performance & Latency

Core ML has the most direct Neural Engine access on Apple devices, which gives it a performance edge for ANE-compatible models on Apple hardware. ExecuTorch can use a CoreML delegate on Apple devices and also supports XNNPACK, Metal, and Vulkan. On Android, ExecuTorch with QNN or XNNPACK delegates performs well where Core ML is unavailable.

Model Support

ExecuTorch works with PyTorch-exported models through torch.export and supports LLMs, vision, and audio models. Core ML accepts models converted via coremltools from PyTorch, TensorFlow, and ONNX. ExecuTorch has tighter PyTorch integration; Core ML has broader source framework support through coremltools.

Platform Coverage

ExecuTorch covers iOS, Android, macOS, and Linux. Core ML covers iOS, macOS, watchOS, and tvOS. ExecuTorch's decisive advantage is Android; Core ML's is watchOS and tvOS. For cross-platform mobile apps, ExecuTorch is the only option of the two.

Pricing & Licensing

ExecuTorch is released by Meta under a BSD license. Core ML is proprietary but free with an Apple developer account. Both are free to use. ExecuTorch's open-source nature allows inspection and modification; Core ML is a closed-source Apple system framework.

Developer Experience

Core ML integrates seamlessly with Xcode and Swift. ExecuTorch requires familiarity with PyTorch's export workflow. For Apple developers, Core ML is more natural. For PyTorch teams targeting multiple platforms, ExecuTorch provides a unified workflow. ExecuTorch's learning curve is steeper, but the payoff is cross-platform reach.

Strengths & limitations

ExecuTorch

Strengths

  • Battle-tested at Meta scale serving billions of users
  • 12+ hardware backends including all major mobile chipsets
  • Deep PyTorch integration for model export
  • Production-grade stability and performance
  • Active development with strong Meta backing

Limitations

  • No hybrid cloud routing — on-device only
  • Requires PyTorch model export workflow
  • No built-in function calling or tool use
  • Steeper learning curve for mobile developers new to PyTorch
  • Heavier framework compared to llama.cpp

Core ML

Strengths

  • Best Neural Engine utilization on Apple devices
  • Zero added dependencies on Apple platforms, since it ships with the OS
  • Automatic hardware selection (ANE, GPU, CPU)
  • Tight integration with Apple developer ecosystem

Limitations

  • Apple-only — no Android, Linux, or Windows
  • Requires model conversion via coremltools
  • No hybrid cloud routing
  • No built-in function calling or LLM-specific features
  • Limited community compared to cross-platform solutions

The Verdict

Use Core ML if you are building Apple-only apps and want the absolute best Neural Engine performance with zero framework overhead. Use ExecuTorch if you need Android support, are in the PyTorch ecosystem, or want Meta-scale production reliability across platforms. ExecuTorch can even use Core ML as a delegate on Apple devices. For teams wanting simpler cross-platform integration with LLM focus, Cactus offers native SDKs without PyTorch dependencies.

Frequently asked questions

Can ExecuTorch use Core ML as a backend?

Yes. ExecuTorch has a CoreML delegate that routes inference through Core ML on Apple devices, enabling Neural Engine access while maintaining ExecuTorch's cross-platform model export workflow.

Which is better for LLM deployment?

ExecuTorch sees more active development around LLM deployment, backed by Meta's production use. Core ML can run LLMs but lacks dedicated LLM features such as token streaming. ExecuTorch is the more LLM-ready of the two.

Does Core ML work on Android?

No. Core ML is exclusively for Apple devices. For Android deployment, use ExecuTorch, TensorFlow Lite, ONNX Runtime, or another cross-platform framework.

Which has more hardware backend options?

ExecuTorch supports 12+ backends including Apple, Qualcomm, Arm, and MediaTek. Core ML supports Apple's Neural Engine, GPU, and CPU. ExecuTorch has far broader hardware reach.

Do I need PyTorch for ExecuTorch?

Yes. ExecuTorch uses PyTorch's torch.export for model preparation. Core ML uses coremltools, which accepts models from PyTorch, TensorFlow, and other frameworks. ExecuTorch is PyTorch-exclusive.

Try Cactus today

On-device AI inference with automatic cloud fallback. One unified API for LLMs, transcription, vision, and embeddings across every platform.
