Xcode is moving into agent-based coding with deeper integration of OpenAI and Anthropic

Apple is bringing agentic coding to Xcode. On Tuesday, the company announced the release of Xcode 26.3, which lets developers use agentic tools, including Anthropic's Claude and OpenAI's Codex, directly inside Apple's official development suite.

The Xcode 26.3 Release Candidate is available today to all Apple developers on Apple's developer website and will come to the App Store later.

The update follows last year's release of Xcode 26, which introduced support for ChatGPT and Claude in Apple's integrated development environment (IDE), used by developers building apps for iPhone, iPad, Mac, Apple Watch, and other Apple hardware platforms.

Integrating agentic coding tools lets AI models use more of Xcode's features to carry out their tasks and handle more complex automation.

Models will also have access to current Apple development documentation to ensure they are using the latest APIs and following best practices when building.

At the start of a session, agents can help developers explore a project and understand its structure and metadata, then build it and run tests to look for bugs, fixing any they find.

To prepare for the launch, Apple worked closely with Anthropic and OpenAI to design the new experience. In particular, the company says it put significant work into optimizing token usage and tool calling so that agents run efficiently inside Xcode.

Xcode uses the Model Context Protocol (MCP) to expose its capabilities to agents and connect them to its tools. That means Xcode can now work with any MCP-compatible external agent, which can discover projects, make changes, manage files, work with previews and snippets, and access the latest documentation.
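MCP is a JSON-RPC-based protocol, so an agent's requests to Xcode are structured messages rather than anything Xcode-specific. As a rough illustration only, here is a minimal Swift sketch of what an MCP-style "tools/call" request might look like; the tool name "build_project" and its arguments are hypothetical examples, not the actual tool names Xcode exposes.

```swift
import Foundation

// Minimal sketch of an MCP-style JSON-RPC 2.0 "tools/call" request.
// The tool name and arguments below are hypothetical, not Apple's
// actual Xcode tool names.
struct MCPToolCall: Encodable {
    struct Params: Encodable {
        let name: String                 // tool to invoke
        let arguments: [String: String]  // tool-specific arguments
    }
    let jsonrpc = "2.0"
    let id = 1
    let method = "tools/call"
    let params: Params
}

let request = MCPToolCall(
    params: .init(
        name: "build_project",                 // hypothetical tool name
        arguments: ["scheme": "MyApp",         // hypothetical arguments
                    "configuration": "Debug"]
    )
)

let encoder = JSONEncoder()
encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
let data = try! encoder.encode(request)
print(String(data: data, encoding: .utf8)!)
```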

Developers who want to try the agentic coding features should first download the agents they want to use from Xcode's settings. They can then connect their AI provider accounts by signing in or adding an API key. An in-app drop-down menu lets developers pick the model version they want to use (e.g., GPT-5.2-Codex vs. GPT-5.1 mini).

In the prompt field on the left side of the screen, developers can use natural-language commands to tell the agent what kind of project they want to build or what changes they want made to existing code. For example, they might ask Xcode to add a feature to their app that uses one of Apple's frameworks and describe how it should look and work.
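To make that concrete, here is a minimal sketch of the kind of code such a prompt might produce, assuming a request along the lines of "add a screen that shows a map centered on Apple Park" and the use of Apple's SwiftUI and MapKit frameworks. The view name and coordinates are illustrative, not output from Xcode's agents.

```swift
import SwiftUI
import MapKit
import CoreLocation

// Hypothetical result of a prompt like "add a screen that shows a map
// centered on Apple Park" — the view name and coordinates are illustrative.
struct CampusMapView: View {
    // Start the camera over Apple Park in Cupertino.
    @State private var position: MapCameraPosition = .region(
        MKCoordinateRegion(
            center: CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090),
            span: MKCoordinateSpan(latitudeDelta: 0.02, longitudeDelta: 0.02)
        )
    )

    var body: some View {
        Map(position: $position) {
            Marker("Apple Park", coordinate:
                CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090))
        }
        .mapStyle(.standard)
    }
}
```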

Once the agent starts running, it breaks tasks down into smaller steps, making it easy to see what's happening and how the code is changing. It also looks up the documentation it needs before it starts writing code. Changes are visually highlighted in the code, and a transcript on the side of the screen shows developers what's going on under the hood.

Apple believes this transparency can be especially helpful to new developers who are learning to code. To that end, the company is hosting a "collaborative coding" workshop on Thursday on its developer site, where users can watch and learn how to use the agentic coding tools while following along in real time in their own copy of Xcode.

At the end of the process, the AI agent checks whether the code it created works as expected. Armed with those test results, it can keep working on the project to fix bugs or other problems if needed. (Apple notes that asking an agent to think through its plan before writing code can sometimes streamline the process.)
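For a sense of what that self-checking can look like, here is a small sketch of an XCTest unit test an agent might generate and run against code it just wrote. The TipCalculator type is a hypothetical example, not part of any Apple framework.

```swift
import XCTest

// A hypothetical type the agent might have just written...
struct TipCalculator {
    func tip(on amount: Double, percent: Double) -> Double {
        amount * percent / 100.0
    }
}

// ...and the kind of test it could generate and run to verify its work.
final class TipCalculatorTests: XCTestCase {
    func testTwentyPercentTip() {
        let calculator = TipCalculator()
        XCTAssertEqual(calculator.tip(on: 50.0, percent: 20), 10.0, accuracy: 0.001)
    }
}
```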

Additionally, if developers aren't satisfied with the results, they can easily revert their code to an earlier state at any time, because Xcode creates a milestone each time an agent makes a change.
