|
I loved the Oppo Find X9 Pro in my full review, but I was still surprised at how well its camera performed against the Apple iPhone 17 Pro.
|
|
Two new models of Meta Ray-Ban AI glasses are on the way, and they're geared toward people who use prescription lenses, according to a Bloomberg report. The glasses are expected to be announced next week, though Bloomberg noted they won't be a "new generation" of Meta's smart glasses.
You can already add prescription lenses to Meta's Ray-Ban AI glasses, but the upcoming models will come in rectangular and rounded styles and will be sold through traditional prescription eyewear channels. Bloomberg didn't specify how the new glasses will differ from existing options, but noted that it's the first time Meta and Ray-Ban are releasing a pair of AI glasses designed specifically for this demographic.
The two models are likely the codenamed products Scriber and Blazer, which were first spotted by The Verge in filings with the Federal Communications Commission. The filings described the devices as production units, meaning Meta could be close to an actual product launch. Based on the filings, it's unlikely these upcoming prescription AI glasses will have a display like the Meta Ray-Ban Display.
|
|
Anthropic has begun previewing "auto mode" inside Claude Code. The company describes the new feature as a middle path between the app's default behavior, which sees Claude request approval for every file write and bash command, and the "dangerously-skip-permissions" flag some coders use to make the chatbot function more autonomously.
With auto mode enabled, a classifier system guides Claude, giving it permission to carry out actions it deems safe, while redirecting the chatbot to take a different approach when it determines Claude might do something risky. In designing the system, Anthropic's goal was to reduce the likelihood of Claude carrying out mass file deletions, extracting sensitive data or executing malicious code.
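Anthropic hasn't published the classifier's internals, but the gating pattern described above can be sketched roughly. Everything below — the pattern list, the function names, and the "redirect" behavior — is a hypothetical illustration of an action-gating classifier, not Anthropic's actual implementation:

```python
# Hypothetical sketch of a permission-gating classifier for agent actions.
# None of these names or heuristics come from Anthropic; they only
# illustrate the allow-or-redirect pattern described in the article.
from dataclasses import dataclass

# Toy deny-list standing in for a real learned classifier.
RISKY_PATTERNS = ("rm -rf", "curl | sh", "DROP TABLE", "chmod 777")

@dataclass
class Verdict:
    allowed: bool
    reason: str

def classify_action(command: str) -> Verdict:
    """Allow actions that look safe; flag anything matching a risky pattern."""
    for pattern in RISKY_PATTERNS:
        if pattern in command:
            return Verdict(False, f"matched risky pattern: {pattern!r}")
    return Verdict(True, "no risky pattern matched")

def run_with_auto_mode(command: str) -> str:
    """Execute safe actions; redirect the agent instead of hard-failing."""
    verdict = classify_action(command)
    if verdict.allowed:
        return f"executing: {command}"
    # Rather than aborting, steer the agent toward a safer approach,
    # mirroring the "redirecting" behavior the article describes.
    return f"redirected, try a safer approach ({verdict.reason})"
```

The key design point mirrored here is that a blocked action produces a redirection rather than a hard failure, so the agent can retry with a different approach instead of stopping.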
Of course, no system is perfect, and Anthropic acknowledges as much. "The classifier may still allow some risky actions: for example, if user intent is ambiguous, or if Claude doesn't have enough context about your environment to know an action might create additional risk," the company writes.
Anthropic doesn't mention a specific incident as inspiration for auto mode, but the recent 13-hour AWS outage Amazon suffered after one of the company's AI tools reportedly deleted a hosting environment was probably front of mind. Amazon blamed that incident on human error, saying the staffer involved had "broader permissions than expected."
Team plan users can preview auto mode starting today, with the feature set to roll out to Enterprise and API users in the coming days.
This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-releases-safer-claude-code-auto-mode-to-avoid-mass
|
|