Apple's Core AI Will Replace Core ML at WWDC 2026: What Developers Need to Know
Apple plans to introduce Core AI at WWDC 2026, replacing the decade-old Core ML framework with a modernized platform designed for today's AI ecosystem and third-party model integration.
Apple Signals a New Developer AI Strategy
On March 1, 2026, Bloomberg's Mark Gurman reported in his Power On newsletter that Apple is preparing to retire Core ML, its long-running machine learning framework for developers, and replace it with a modernized system called Core AI. The transition is expected to be announced at WWDC in June 2026 as part of the iOS 27 and macOS 27 rollout.
The new name is symbolic, but the intent behind it is substantive. Core ML has served Apple developers since 2017, providing a standardized interface for integrating machine learning models into iOS and macOS applications. The rename to Core AI acknowledges that the terminology and philosophy around on-device intelligence have changed fundamentally since Core ML was designed.
What Core AI Is and Why It Matters
The Naming Is Not Cosmetic
According to Gurman, Apple's decision to move from the "ML" label to "AI" is deliberate and reflects recognition that "machine learning" is a dated term that no longer resonates with developers or consumers. This is not simply a rebranding exercise. The shift accompanies expanded functionality and a new design focus that aligns with how the broader industry now thinks about AI integration in applications.
Core ML was built when the dominant use case for on-device ML was running Apple-provided or third-party pre-trained models for discrete tasks: image classification, natural language processing, object detection. Core AI is being designed for an era in which applications are expected to integrate large language models, agentic workflows, and multi-model pipelines.
Third-Party Model Integration as the Core Mandate
The most significant reported change in Core AI is its expanded emphasis on helping developers integrate third-party AI models into their apps. Core ML supported third-party model formats through conversion tools, but the process was manual and often friction-heavy. Core AI is reportedly being built with third-party integration as a primary design goal rather than an afterthought.
Reporting from Chinese technology media suggests that Apple may adopt technical standards compatible with the Model Context Protocol (MCP)—the interoperability standard developed by Anthropic and now widely adopted across the AI industry—to enable smoother cross-model collaboration. This has not been confirmed by Apple or Gurman, and should be treated as speculation until official documentation is released at WWDC.
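For context on what MCP compatibility would mean in practice: MCP messages are JSON-RPC 2.0 requests, such as the `tools/call` request below, which a host sends to ask a server to execute a tool. The tool name and arguments here are invented for illustration; Apple has not published any Core AI wire format.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Cupertino" }
  }
}
```

If Core AI speaks this format, any MCP-compliant model or tool server could plug into an Apple app without a bespoke adapter.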
Google Gemini as the Foundation
The Core AI announcement arrives in the context of Apple's broader strategy around its partnership with Google. Apple confirmed in January 2026 that Google Gemini will power the next generation of Siri, with Apple's Private Cloud Compute infrastructure providing privacy guarantees. Core AI is expected to reflect this partnership by making Apple Foundation Models—the updated models trained with Google Gemini—accessible to developers as first-class citizens of the framework.
Core ML and Core AI will reportedly coexist for some transition period, with Core ML remaining functional for developers who have existing integrations. Apple has historically maintained backward compatibility across major framework transitions, and there is no indication that Core ML will be immediately deprecated.
Developer Implications
Reduced Integration Friction
For iOS and macOS developers, the practical appeal of Core AI is reduced friction when adding AI capabilities to their applications. Currently, integrating a third-party LLM or AI service requires developers to build their own abstraction layers, manage model context manually, and handle the inconsistencies between different providers' APIs. Core AI is intended to provide a standardized interface that reduces the amount of tooling developers need to build from scratch.
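The abstraction layers developers currently hand-roll typically look like the sketch below: a protocol that hides each provider's request format behind a uniform interface. Every name here is illustrative (the stubs stand in for real network adapters); nothing is an Apple or provider API.

```swift
import Foundation

// Hypothetical abstraction layer of the kind developers currently build
// themselves to paper over differences between AI providers.
// All names are illustrative, not part of any Apple framework.

protocol ChatModel {
    var providerName: String { get }
    func complete(prompt: String) -> String
}

// Each provider adapter hides its own request format behind the protocol.
struct StubGeminiModel: ChatModel {
    let providerName = "gemini"
    func complete(prompt: String) -> String {
        // A real adapter would issue an HTTPS request; stubbed for illustration.
        "gemini-response-to:\(prompt)"
    }
}

struct StubLocalModel: ChatModel {
    let providerName = "local"
    func complete(prompt: String) -> String {
        "local-response-to:\(prompt)"
    }
}

// Application code targets the protocol, not a concrete provider,
// so swapping models does not ripple through the codebase.
func summarize(_ text: String, using model: ChatModel) -> String {
    model.complete("Summarize: \(text)")
}

print(summarize("Core AI replaces Core ML", using: StubGeminiModel()))
```

A standardized Core AI interface would, in effect, ship this kind of protocol so each team no longer maintains its own.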
Alignment With Modern AI Development Patterns
The shift to Core AI also signals Apple's intent to align its developer frameworks with patterns that have become standard in the broader AI ecosystem: tool use, agentic task completion, multi-turn context management, and structured output. These capabilities are present in frameworks like LangChain, the OpenAI Agents SDK, and Google's ADK, but they have no equivalent in Core ML's current feature set. Core AI appears designed to bring Apple's native frameworks into parity with the ecosystem developers are already using.
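The agentic pattern those frameworks standardize, and Core ML never offered, is a host-side loop: the model requests a tool call, the host executes it, and the result is fed back until the model produces a final answer. The sketch below is a toy illustration of that loop under stated assumptions (the "model" is a hard-coded stand-in); no Apple API is implied.

```swift
import Foundation

// Minimal sketch of the generic agentic tool-use loop.
// Everything here is a toy illustration, not a real framework API.

enum ModelStep {
    case toolCall(name: String, argument: String)
    case finalAnswer(String)
}

// A stand-in "model" that requests one tool call, then answers.
func fakeModel(history: [String]) -> ModelStep {
    if history.isEmpty {
        return .toolCall(name: "lookup", argument: "WWDC 2026 date")
    }
    return .finalAnswer("Answer based on: \(history.joined(separator: "; "))")
}

// The host-side loop: execute tools the model requests, feed results back.
func runAgent(tools: [String: (String) -> String]) -> String {
    var history: [String] = []
    while true {
        switch fakeModel(history: history) {
        case .toolCall(let name, let argument):
            let result = tools[name]?(argument) ?? "tool not found"
            history.append("\(name)(\(argument)) -> \(result)")
        case .finalAnswer(let answer):
            return answer
        }
    }
}

print(runAgent(tools: ["lookup": { _ in "June 2026" }]))
```

Whether Core AI exposes this loop as a first-class API, or stops at single-inference calls, is one of the open questions listed below.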
Enterprise and Regulated-Industry Readiness
Apple has historically positioned its privacy architecture—including on-device processing and Private Cloud Compute—as an advantage for enterprise deployments where data residency and confidentiality are requirements. Core AI, with its expanded third-party model support and modern framework design, is likely to be positioned as the pathway for enterprise developers who need AI capabilities within Apple's privacy guarantees.
What Is Not Yet Known
Gurman's reporting establishes that Core AI exists as a planned framework for WWDC 2026 but does not provide detailed technical specifications. Several critical questions remain unanswered until Apple makes the formal announcement in June:
- What model formats and providers will Core AI support natively at launch?
- How will Apple handle the privacy architecture for third-party models that do not run on Private Cloud Compute?
- Will Core AI provide APIs for agentic workflows, or will it remain focused on single-inference model calls?
- What is the deprecation timeline for Core ML, and how will Apple support developers during the transition?
- Will Core AI include any form of model fine-tuning or personalization on-device?
Pros and Cons
Strengths
The announcement signals that Apple is treating developer AI tooling as a strategic priority, not merely a branding update. Integrating third-party model support into a native framework could meaningfully reduce the effort required to build AI-capable iOS and macOS applications. Alignment with the Google Gemini partnership gives Core AI a foundation in one of the industry's most capable model families. Apple's privacy infrastructure provides a differentiated proposition for enterprise deployments.
Limitations
The transition from Core ML to Core AI creates short-term uncertainty for developers with existing Core ML integrations, even if Apple maintains backward compatibility. The framework is not expected to be available until iOS 27 launches in fall 2026, leaving months of speculation before developers can evaluate the actual capabilities. Reported MCP compatibility has not been confirmed by Apple and should be treated as unverified. Apple's historically controlled ecosystem may limit how extensively developers can integrate models and services outside Apple's approved partnerships.
Outlook
Core AI represents the clearest signal yet that Apple views the developer AI ecosystem as a competitive battleground distinct from its consumer AI features. While Apple Intelligence—the consumer-facing AI layer—has attracted the most public attention, Core AI is the infrastructure that will determine whether developers choose to build their most AI-intensive applications for Apple platforms or prefer the more permissive environments offered by Android or cross-platform frameworks.
WWDC 2026 in June will provide the first complete public picture of what Core AI actually does. Given the strategic importance of AI to Apple's ecosystem at this moment, it is among the more consequential developer framework announcements Apple has made in recent years.
Conclusion
Apple's planned replacement of Core ML with Core AI at WWDC 2026 is a significant indicator of where Apple sees the competitive landscape for developer AI tooling. For developers building AI-capable iOS and macOS applications, the announcement is worth watching closely. For enterprise technology teams evaluating Apple platforms for AI deployment, Core AI's expanded third-party model integration and alignment with Apple's privacy architecture could materially affect platform selection decisions. Full technical details will not be available until WWDC in June 2026.
Pros
- First-class third-party AI model integration could significantly reduce the effort required for developers to build AI-capable iOS and macOS applications
- Alignment with Google Gemini partnership gives the framework a foundation in one of the industry's most capable model families
- Apple's Private Cloud Compute privacy architecture provides a differentiated proposition for enterprise deployments with data residency requirements
- Backward compatibility with Core ML during transition period reduces migration risk for developers with existing integrations
Cons
- Full technical specifications will not be available until WWDC in June 2026, creating months of planning uncertainty for developers
- MCP compatibility and other third-party integration details remain unconfirmed speculation ahead of the official announcement
- Apple's controlled ecosystem may limit the breadth of third-party model and service integrations compared to more permissive platforms
- Transition from Core ML to Core AI introduces short-term complexity for development teams managing existing machine learning integrations
Key Features
Apple will introduce Core AI at WWDC 2026 (June) as a replacement for the decade-old Core ML framework. Reported by Bloomberg's Mark Gurman on March 1, 2026, Core AI is designed around third-party AI model integration, aligns with the Google Gemini partnership powering next-generation Siri, and targets the modern AI development patterns that Core ML was not designed to support. Both frameworks will coexist during a transition period. Full technical specifications will be available at WWDC in June 2026.
Key Insights
- The ML-to-AI naming change signals a fundamental design philosophy shift: from discrete model inference to integrated, multi-model AI workflows
- Third-party model integration as a primary design mandate distinguishes Core AI from Core ML, which treated external models as secondary use cases
- Potential MCP (Model Context Protocol) compatibility, if confirmed, would place Apple's native framework in the same interoperability ecosystem as most major AI providers
- The Google Gemini partnership shapes Core AI's foundation: Apple Foundation Models trained with Gemini will be first-class citizens of the framework
- WWDC 2026 in June will be the first opportunity for developers to evaluate actual capabilities—the gap between announcement and availability creates platform planning uncertainty
- Apple's Private Cloud Compute architecture provides a potential enterprise differentiation for Core AI: AI capabilities within Apple's privacy guarantees
- Core ML will coexist with Core AI during transition, following Apple's pattern of maintaining backward compatibility across major framework shifts
- The announcement positions Core AI as Apple's answer to frameworks like LangChain and Google ADK that have become standard in the broader AI developer ecosystem