|
It's shaping up to be a packed year for Apple. Just over three months into 2026, Cupertino has already refreshed several Macs, introduced new devices, and launched updated iPads. But if industry reports are accurate, many more products are still on the way before the year wraps up. Below is a look at what Apple […]
|
|
The astronauts on NASA's Artemis II mission were allowed to take smartphones with them. Sadly, they can't connect to the internet.
|
|
The new feature makes it easier to exchange media across the two mobile ecosystems.
|
|
This week saw the launch of the AirPods Max 2, and Amazon has the first cash discount on the brand-new headphones for launch week. Below, you'll also find great deals on the M5 MacBook Air, 2026 Studio Display, and M4 iPad Air.
|
|
Repair site iFixit today shared a teardown of Apple's new AirPods Max 2 headphones, and as expected, there are few changes. iFixit says the AirPods Max 2 are "basically the same" as the original AirPods Max headphones that came out in 2020.
|
|
When Google released Gemini 3 Pro at the end of last year, it was a significant step forward for the company's proprietary large language models. Now, the company is bringing some of the same technology and research that made those models possible to the open-source community with the release of its new Gemma 4 family of open-weight models.
Google is offering four versions of Gemma 4, differentiated by parameter count. For edge devices, including smartphones, the company has the 2-billion and 4-billion-parameter "Effective" models. For more powerful machines, there are the 26-billion-parameter "Mixture of Experts" and 31-billion-parameter "Dense" systems. For the unfamiliar, parameters are the internal values a model learns during training and uses to generate its outputs. Typically, models with more parameters deliver better answers than ones with fewer, but running them also requires more powerful hardware.
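To make the hardware point concrete, here is a back-of-the-envelope Python sketch of how much memory just the weights of each Gemma 4 variant would need at common precisions. The parameter counts come from the announcement; the bytes-per-parameter figures are the standard rule of thumb, and real-world usage runs higher once activations and the KV cache are counted.

```python
# Rough floor on memory needed just to hold each model's weights:
# weights_bytes ≈ parameter_count × bytes_per_parameter

GIB = 1024 ** 3

models = {
    "Gemma 4 2B (edge)":    2e9,
    "Gemma 4 4B (edge)":    4e9,
    "Gemma 4 26B (MoE)":   26e9,
    "Gemma 4 31B (dense)": 31e9,
}

# Bytes per parameter at common precisions.
precisions = {"fp16/bf16": 2, "int8": 1, "int4": 0.5}

for name, params in models.items():
    sizes = ", ".join(
        f"{label}: {params * nbytes / GIB:.1f} GiB"
        for label, nbytes in precisions.items()
    )
    print(f"{name:>20} -> {sizes}")
```

By this math, the 4-billion-parameter model fits comfortably on a phone-class device once quantized to int4 (about 1.9 GiB of weights), while the 31-billion dense model wants workstation-class hardware even at int8 (roughly 29 GiB).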
With Gemma 4, Google claims it has managed to engineer systems with "an unprecedented level of intelligence-per-parameter." To back up that claim, the company points to the performance of Gemma 4's 31-billion and 26-billion-parameter variants, which took the third and sixth spots, respectively, on Arena AI's text leaderboard, beating out models 20 times their size.
All of the models can process video and images, making them well suited to tasks like optical character recognition. The two smaller models can also handle audio inputs and understand speech. Separately, Google says the Gemma 4 family can generate code offline, meaning you could vibe code without an internet connection; a rough sketch of what that could look like follows below. Google has also trained the models in more than 140 languages.
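If Gemma 4 ships as a standard Hugging Face checkpoint, as earlier Gemma releases did, local offline generation could look roughly like the sketch below. This is a sketch under assumptions, not a confirmed recipe: the checkpoint id google/gemma-4-4b-it is a guess at a name Google has not announced, while the surrounding calls are the stock transformers text-generation API.

```python
# Minimal local-inference sketch with Hugging Face transformers.
# Assumes the (hypothetical) checkpoint id "google/gemma-4-4b-it";
# swap in whatever id Google actually publishes.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-4-4b-it"  # hypothetical instruct-tuned 4B checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half the memory of fp32 weights
    device_map="auto",           # needs `accelerate`; places layers on GPU/CPU
)

prompt = "Write a Python function that reverses a singly linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights are cached on disk, nothing here requires a network connection; setting the environment variable TRANSFORMERS_OFFLINE=1 tells the library not to touch the Hugging Face Hub at all.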
|
|