|
When Google released Gemini 3 Pro at the end of last year, it was a significant step forward for the company's proprietary large language models. Now, the company is bringing some of the same technology and research that made those models possible to the open source community with the release of its new family of Gemma 4 open-weight models.
Google is offering four different versions of Gemma 4, differentiated by parameter count. For edge devices, including smartphones, the company has the 2-billion and 4-billion "Effective" models. For more powerful machines, there are the 26-billion "Mixture of Experts" and 31-billion "Dense" systems. For the unfamiliar, parameters are the internal values a model learns during training and uses to generate an output. Typically, models with more parameters deliver better answers than ones with fewer, but running them also requires more powerful hardware.
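As a back-of-the-envelope illustration of why parameter count drives hardware requirements, here's a rough sketch of the memory needed just to hold a model's weights. The function name and the 2-bytes-per-parameter figure (for 16-bit weights) are our assumptions for illustration, not anything Google has published:

```python
def estimate_model_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Rough lower bound on memory needed to hold the weights alone.

    Ignores activations, KV cache and runtime overhead, so real-world
    requirements are higher. bytes_per_param=2 assumes 16-bit weights.
    """
    return num_params * bytes_per_param / 1e9

# The four Gemma 4 sizes at 16-bit precision:
for label, n in [("2B", 2e9), ("4B", 4e9), ("26B", 26e9), ("31B", 31e9)]:
    print(f"{label}: ~{estimate_model_memory_gb(n):.0f} GB just for weights")
```

By this rough measure, the 2-billion model fits comfortably on a phone-class chip, while the 31-billion model wants a workstation GPU (quantizing to 8- or 4-bit weights shrinks these numbers proportionally).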
With Gemma 4, Google claims it's managed to engineer systems with "an unprecedented level of intelligence-per-parameter." To back up this claim, the company points to the performance of Gemma 4's 31-billion and 26-billion variants, which claimed the third and sixth spots respectively on Arena AI's text leaderboard, beating out models 20 times their size.
All of the models can process video and images, making them ideal for tasks like optical character recognition. The two smaller models are also capable of processing audio inputs and understanding speech. Separately, Google says the Gemma 4 family can generate code offline, meaning you could use the models for vibe coding without an internet connection. Google has also trained the models in more than 140 languages.
|
|
For its 50th anniversary celebration, Apple invited The Wall Street Journal's Ben Cohen to Apple Park to meet up with Apple CEO Tim Cook.
|
|
What should have been a routine release has revealed some of the features Anthropic has been working on for Claude Code. As reported by Ars Technica, The Verge and others, after the company released Claude Code's 2.1.88 update on Tuesday, users found it contained a file that exposed the app's source code. Before Anthropic took action to plug the leak, the codebase was uploaded to a public GitHub repository, where it was subsequently copied more than 50,000 times. All told, the entire internet (and Anthropic's competitors) got a chance to examine more than 512,000 lines of code and 2,000 TypeScript files.
In the aftermath, some people claim to have found evidence of upcoming features Anthropic is working to develop. Over on X, Alex Finn, the founder of AI startup Creator Buddy, says he found a flag for a feature called Proactive mode that would see Claude Code work even when the user hasn't prompted it to do something. Finn claims he also found evidence of a crypto-based payment system that could potentially allow AI agents to make autonomous payments. In a Reddit post spotted by The Verge, another person found evidence that Anthropic might have been working on a Tamagotchi-like virtual companion that "reacts to your coding" as a kind of April Fools' joke.
"A Claude Code release included some internal source c
|
|
The latest iteration of Meta's smart glasses has arrived and, as rumored, they are more customizable, particularly for people who need prescription lenses. Meta and Ray-Ban parent company EssilorLuxottica revealed two new styles of frames: the Ray-Ban Meta Blayzer Optics and Scriber Optics, which will start at $499 a pair.
The latest glasses are still considered part of the "Gen 2" Ray-Ban Meta line, but they do come with a few upgrades that make it easier to get a personalized fit. According to EssilorLuxottica, both styles have somewhat slimmer frames, swappable nose pads and adjustable temple tips so wearers can get a better fit. And, as the "optics" branding implies, the new frame styles are also compatible with a wider variety of prescription lenses, including progressive and transition lenses.
The Blayzer-style frames are more square, similar to the existing Wayfarer glasses, while the Scriber version is a little more rounded, like the "Headliner" style frames. Both come in a variety of colors, including some translucent styles; they're available for pre-order now on Meta's website and go on sale April 14. The "optics" lineup will also be sold at more physical retail stores, including LensCrafters, Sunglass Hut, Salmoiraghi & Viganò, Apollo, Grand Vision Optical, Vision Express and other locations that are part of EssilorLuxottica's distribution network.
The round, "Scriber" frames.
|
|
Amazon announced that it is adding new capabilities for ordering food delivery with its Alexa artificial intelligence assistant. Users will be able to place orders using natural language on Alexa through the Grubhub or Uber Eats platforms, provided they have an Amazon device with a large screen. First, you'll need to connect an account for one of those delivery services to use the feature. You can ask to see restaurants with a specific cuisine or tell the assistant to go right to a favorite spot. Once you start an order, Alexa will also support natural language requests and, if you ask for something generic, the assistant will match it to the most similar item on the menu. It should also support more detailed queries like "what are kid-friendly options?" and be able to submit special requests like "no onions."
To start, this ordering capability will be available for Alexa customers using the Echo Show 8 or larger devices. The screen should reflect your order, with any changes shown in real time. Amazon made the Alexa subscription available to all US customers earlier this year.
|
|
Apple accidentally started rolling out Apple Intelligence features in China before receiving regulatory approval, reports Bloomberg's Mark Gurman.
|
|