Xcode 26.3 Lets AI Agents From Anthropic and OpenAI Build Apps Autonomously - MacRumors

With Xcode 26.3, Apple is adding support for agentic coding, allowing developers to use tools like Anthropic's Claude Agent and OpenAI's Codex right in Xcode for app creation.

Agentic coding will allow Xcode to complete more complex app development tasks autonomously. Claude, ChatGPT, and other AI models have been available in Xcode since Apple added intelligence features in Xcode 26, but until now, AI assistance was limited and could not take action on its own. That changes with the option to use an AI coding agent.

AI models can access more of Xcode's features to work toward a project goal, and Apple worked directly with Anthropic and OpenAI to configure their agents for use in Xcode. Agents can create new files, examine the structure of a project in Xcode, build a project directly and run tests, take image snapshots to double-check work, and access full Apple developer documentation that has been designed for AI agents.

Adding an agent to Xcode takes a single click in Xcode's settings, and agents can be updated automatically as AI companies release new versions. Developers will need an Anthropic or OpenAI account to use those coding tools in Xcode, paying fees based on API usage.

Apple says that it aimed to ensure that Claude Agent and Codex run efficiently, with reduced token usage. It is simple to swap between agents in the same project, giving developers the flexibility to choose the agent best suited for a particular task.

While Apple worked with OpenAI and Anthropic on Xcode integration, the Xcode 26.3 features can be used with any agent or tool that supports the open Model Context Protocol (MCP) standard. Apple is releasing documentation so that developers can configure and connect MCP agents to Xcode.
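Xcode's exact configuration format will be defined in Apple's documentation, but MCP clients conventionally discover servers through a small JSON file that names each server and the command that launches it. As a rough illustration only (the server name, package, and environment variable below are invented, not Apple's or any real vendor's), a typical MCP server entry looks like this:

```json
{
  "mcpServers": {
    "docs-server": {
      "command": "npx",
      "args": ["-y", "@example/docs-mcp-server"],
      "env": {
        "DOCS_API_KEY": "<your key here>"
      }
    }
  }
}
```

The client reads this file at startup, launches each listed command as a subprocess, and communicates with it over the MCP wire protocol, so any tool that speaks MCP can be plugged in without the client knowing anything about it in advance.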

Using natural language commands, developers are able to instruct AI agents to complete a project, such as adding a new feature to an app. Xcode then works with the agent to break down the instructions into small tasks, and the agent is able to work on its own from there. Here's how the process works:

  • A developer asks an integrated agent to add a new feature to an app.
  • The agent looks at the current project to see how it's organized.
  • The agent checks all relevant documentation, looking at code snippets, code samples, and the latest APIs.
  • The agent begins working on the project, adding code as it goes.
  • The agent builds the project, then uses Xcode to verify its work.
  • If there are errors or warnings, the agent continues working until all issues are addressed, accessing build logs and revising until the project builds cleanly.
  • The agent wraps up by providing a summary of everything that happened so developers have a clear view of the implementation.

In a project's sidebar, developers can follow along with the agent's work via a transcript, and can click through to see exactly where code is being added. At any point, developers can roll back to before an agent or model made a modification, making it possible to undo unwanted results or try multiple approaches to a new feature.

Apple says that agentic coding will allow developers to simplify workflows, make changes more quickly, and bring new ideas to life. Apple also sees it as a learning tool, giving developers the opportunity to discover new ways to build something or to implement an API in an app.

"At Apple, our goal is to make tools that put industry-leading technologies directly in developers' hands so they can build the very best apps," said Susan Prescott, Apple's vice president of Worldwide Developer Relations. "Agentic coding supercharges productivity and creativity, streamlining the development workflow so developers can focus on innovation."

The release candidate of Xcode 26.3 is available for developers as of today, and a launch will likely follow in the next week or so.


Top Rated Comments

gimmesomemo
7 weeks ago
To be clear, I trust AI slop code, and therefore “agents,” about as much as I trust Siri to turn on the correct light switch.
Score: 25 Votes
gimmesomemo
7 weeks ago
This whole agentic coding thing is giving strong four horsemen of the apocalypse vibes.
Not in any kind of Skynet way, more the complete dumbification of good software (codebases, SWE skill, and user experience).
Score: 25 Votes
verniesgarden
7 weeks ago

As someone who has been coding for 20 years and now does a lot of vibe coding, you couldn't possibly be more misinformed.

Edit: LOVE the downvotes by the people being left behind in the dust.
Learning to properly code is the first step to effectively vibe coding.
Score: 20 Votes
7 weeks ago

> Now put AI in the hands of those skilled people, and you will watch them increase their productivity

Hi, I am one of those skilled people who uses the models, and all I can say is I'm spending less time coding and more time screaming at the model for lying and testing for bugs. The overall production time is about the same, but with the stress of talking to a computer ghost.
Score: 15 Votes
7 weeks ago

> As someone who has been coding for 20 years and now does a lot of vibe coding, you couldn't possibly be more misinformed.
>
> Edit: LOVE the downvotes by the people being left behind in the dust.

I think the downvotes are for your use of the term vibe coding. The way I treat that term is if you perform ZERO checks on what the AI produces: literally telling AI to write an app, then submitting that app for approval / checking the code into git as-is.

However, I hope you do what I do and ground the AI quite heavily and scrutinize the output heavily. That to me is not vibe coding. We have literal advertising career people doing vibe coding who have zero programming skills. That is what vibe coding truly means.
Score: 14 Votes
BGPL
7 weeks ago
Embrace this; it's the future. I have coded with AI, and it's undeniable. If you're not leveraging AI, you're going to be left behind.
Score: 13 Votes