The coding tool can now run multiple agents across applications on your computer.
Anthropic has started rolling out identity verification on Claude "for a few use cases." The company didn't list those use cases in its announcement, but we've asked it for details and will update this post when we hear back. Anthropic says you might see a verification prompt upon "accessing certain capabilities," asking you to verify your identity. You would have to show a valid, physical government-issued photo ID. You'd also have to take a selfie with your phone or computer camera, which the system will compare against the ID you present.
The news, as you'd expect, wasn't well-received. Many users are questioning the necessity of identity verification to be able to use an AI chatbot, especially if Anthropic already has their credit cards on file as paying subscribers. People are also criticizing Anthropic's decision to use Persona Identities, which also provides age verification services for OpenAI and Roblox. One of Persona's major investors is venture firm Founders Fund, which was co-founded by Peter Thiel, who's also the co-founder and chairman of surveillance company Palantir.
Palantir's customers are mostly federal agencies and government offices, including the FBI, the CIA and
Last month, following reporting from The Wall Street Journal, OpenAI confirmed it was working on a desktop super app that would combine ChatGPT, its Codex coding agent and Atlas web browser into one cohesive experience. OpenAI is not releasing that application today. Instead, it's pushing out a major update to Codex that significantly expands what that software can do. However, the new release offers a glimpse of what OpenAI hopes to build with its latest effort.
"We're building the super app out in the open," said Thibault Sottiaux, the head of Codex, during a press briefing held by OpenAI. "This release is about developers. In the future, we will broaden it up to a wider audience." Until then, the latest version of Codex offers developers multi-purpose AI agents that can work across a "larger surface area," while being more proactive. In practice, that translates to a host of new capabilities, starting with computer use.
The agents inside of Codex can interact with other apps on your PC. When prompting one of OpenAI's models, you can name a specific program or let it determine the best application for the job. Computer use is available in competing apps like Claude Cowork, but where OpenAI believes Codex offers an edge in that depart
A group of researchers from across the US and the UK have conducted a study on what AI does to our brains, and the results are, in a word, grim. These results were published in a paper called "AI assistance reduces persistence and hurts independent performance," which kind of tells you everything you need to know.
"We find that AI assistance improves immediate performance, but it comes at a heavy cognitive cost," the study declares. Researchers went on to state that just ten minutes of using AI made people dependent on the technology, which led to worsening performance and burnout once the tools were removed.
The study followed people who use AI for "reasoning-intensive" cognitive labor. This refers to stuff like writing, coding and brainstorming new ideas, which are some of the most common use cases.
The researchers recruited 350 Americans, who were asked to solve a set of fraction-based math problems. Half of the participants were randomly granted access to a specialized chatbot built on OpenAI's GPT-5 for help, while the others had to go it alone. Halfway through the exam, the AI group had their access cut off.
This led to a steep decline in correct answers from the AI group, and many participants simply gave up. The same result, with both performance and perseverance dropping, was repeated in a larger experiment with 670 people. Finally, the scientists ran a third experiment using reading comprehension questions instead of math. The results were much the same.
"Once the AI is taken away from people, it's not that people are just giving wrong answers. They're also not willing to try without AI," Rachit Dubey, an assistant professor at the University of California and coauthor of the study,
The maker of ChatGPT announced the limited release of GPT-5.4-Cyber, a technology designed to find security holes in software.