At its inaugural developer conference on Thursday, Anthropic introduced two new AI models that the startup claims are among the best in the industry, at least in terms of how they score on popular benchmarks.
Claude Opus 4 and Claude Sonnet 4, part of Anthropic's new Claude 4 family of models, can analyze large datasets, execute long-horizon tasks, and take complex actions, according to the company. Anthropic says both models were tuned to perform well on programming tasks, making them well suited for writing and editing code.
Both paying users and users of Anthropic's free chatbot apps will get access to Sonnet 4, but only paying users will get access to Opus 4. Through Anthropic's API, Amazon's Bedrock platform, and Google's Vertex AI, Opus 4 will be priced at $15/$75 per million tokens (input/output) and Sonnet 4 at $3/$15 per million tokens (input/output).
Tokens are the raw bits of data that AI models work with. A million tokens is equivalent to about 750,000 words, roughly 163,000 words longer than “War and Peace.”
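For a rough sense of how that pricing plays out, here is a minimal sketch that turns the per-million-token rates quoted above into a cost estimate for a single request. The model labels and token counts are illustrative assumptions, not figures from Anthropic.

```python
# Rough cost estimate for one Claude API call, using the per-million-token
# rates quoted above. The dictionary keys are informal labels and the token
# counts in the example are made-up values.

PRICES_PER_MILLION = {
    "claude-opus-4": {"input": 15.00, "output": 75.00},
    "claude-sonnet-4": {"input": 3.00, "output": 15.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    rates = PRICES_PER_MILLION[model]
    return (input_tokens / 1_000_000) * rates["input"] \
         + (output_tokens / 1_000_000) * rates["output"]

# Example: a 2,000-token prompt producing a 1,000-token reply on Opus 4.
print(f"${estimate_cost('claude-opus-4', 2_000, 1_000):.4f}")  # $0.1050
```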
Anthropic's Claude 4 models arrive as the company looks to substantially grow revenue. The outfit, founded by former OpenAI researchers, reportedly aims to reach $12 billion in revenue in 2027, up from a projected $2.2 billion this year. Anthropic recently secured a $2.5 billion credit facility and has raised billions of dollars from Amazon and other investors in anticipation of the rising costs of developing frontier models.
Rivals haven't made it easy for Anthropic to hold its pole position in the AI race. While Anthropic launched a new flagship AI model earlier this year, Claude Sonnet 3.7, along with an agentic coding tool called Claude Code, competitors, including OpenAI and Google, have raced to outdo the company with powerful models and tools of their own.
Anthropic is playing for keeps with Claude 4.
Anthropic says the more capable of the two models introduced today, Opus 4, can maintain “focused effort” across many steps in a workflow. Meanwhile, Sonnet 4, designed as a replacement for Sonnet 3.7, improves on coding and math compared to Anthropic's previous models and follows instructions more closely, according to the company.
The Claude 4 family is also less prone than Sonnet 3.7 to engage in “reward hacking,” says Anthropic. Reward hacking, also known as specification gaming, is a behavior in which models take shortcuts and exploit loopholes to complete tasks.
To be clear, these improvements haven't produced the world's best models by every benchmark. For example, while Opus 4 beats Google's Gemini 2.5 Pro and OpenAI's o3 and GPT-4.1 on SWE-bench Verified, which is designed to evaluate a model's coding abilities, it can't surpass o3 on the multimodal evaluation MMMU or on GPQA Diamond, which tests knowledge of biology, physics, and chemistry.

Nevertheless, Anthropic is releasing Opus 4 under stricter safeguards, including beefed-up harmful content detectors and cybersecurity defenses. The company claims internal testing showed that Opus 4 could “substantially increase” the ability of someone with a STEM background to obtain, produce, or deploy chemical, biological, or nuclear weapons, triggering the safeguards of Anthropic's ASL-3 model specification.
Anthropic says that both Opus 4 and Sonnet 4 are “hybrid” models, capable of near-instant responses as well as extended thinking for deeper reasoning (to the extent AI can “reason” and “think” as people understand these concepts). With reasoning mode switched on, the models can take more time to consider possible solutions to a given problem before responding.
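For readers curious what toggling that mode looks like in practice, here is a minimal sketch using Anthropic's Python SDK. The model ID string and the thinking-budget value are assumptions for illustration, not details from the article.

```python
# Minimal sketch: requesting an "extended thinking" response via the anthropic
# Python SDK. The model ID and budget_tokens value are illustrative
# assumptions; check Anthropic's documentation for exact identifiers.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID
    max_tokens=2048,
    thinking={"type": "enabled", "budget_tokens": 1024},  # reasoning mode on
    messages=[{"role": "user", "content": "Refactor this function to be iterative."}],
)

# With thinking enabled, the response interleaves "thinking" blocks (the
# summarized reasoning) with ordinary "text" blocks.
for block in response.content:
    print(block.type)
```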
When reasoning is engaged, the models will display a “user-friendly” summary of their thought process, says Anthropic. Why not show the whole thing? Partly to protect the company's “competitive advantages,” Anthropic admits in a draft blog post provided to TechCrunch.
Opus 4 and Sonnet 4 can use multiple tools, such as search engines, in parallel, and alternate between reasoning and tool use to improve the quality of their answers. They can also extract and save facts in “memory” to handle tasks more reliably, building what Anthropic describes as “tacit knowledge” over time.
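Tool use of this kind is exposed through the API as tool definitions the model can choose to call. The sketch below declares a single hypothetical web-search tool; the tool name, schema, and model ID are assumptions, not features described in the article.

```python
# Minimal sketch of declaring a tool the model may call. The web_search tool
# here is a hypothetical custom tool; the caller is responsible for actually
# executing it and returning the result in a follow-up turn.
import anthropic

client = anthropic.Anthropic()

tools = [{
    "name": "web_search",
    "description": "Search the web and return the top results as text.",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}]

response = client.messages.create(
    model="claude-opus-4-20250514",  # assumed model ID
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What changed in Claude 4?"}],
)

# If the model decides to call the tool, the response contains a tool_use
# block and stop_reason is "tool_use".
print(response.stop_reason)
```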
To make the models more programmer-friendly, Anthropic is rolling out improvements to the aforementioned Claude Code. Claude Code, which lets developers run specific tasks through Anthropic's models directly from the terminal, now integrates with IDEs and offers an SDK that lets developers pair it with third-party applications.
The Claude Code SDK, announced earlier this week, enables running Claude Code as a subprocess on supported operating systems, providing a way to build AI-powered coding assistants and tools that draw on the capabilities of Claude models.
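As a rough sketch of what “running Claude Code as a subprocess” can look like, the snippet below shells out to the claude CLI in non-interactive print mode. The -p flag and the example prompt are assumptions based on the CLI's documented usage, not details taken from the article.

```python
# Minimal sketch: driving Claude Code as a subprocess from Python. Assumes the
# `claude` CLI is installed and that its non-interactive print mode (-p) is
# available; treat the flag and output handling as assumptions.
import subprocess

def ask_claude_code(prompt: str) -> str:
    result = subprocess.run(
        ["claude", "-p", prompt],  # -p: print the response and exit
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(ask_claude_code("Explain what this repository's build script does."))
```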
Anthropic has launched Claude Code extensions and connectors for Microsoft's VS Code, JetBrains, and GitHub. The GitHub connector lets developers tag Claude Code to respond to reviewer feedback, as well as to attempt to fix errors in, or otherwise modify, code.
AI models still struggle to produce quality software. AI-generated code tends to introduce security vulnerabilities and bugs, owing to weaknesses in areas such as understanding programming logic. Yet the promise of boosting coding productivity is pushing companies, and developers, to adopt these models quickly.
Anthropic, well aware of this, is promising more frequent model updates.
“We're moving to more frequent model updates, delivering a steady stream of improvements that bring breakthrough capabilities to customers faster,” the startup wrote in its post. “This approach keeps you at the cutting edge as we continuously refine and improve our models.”