Meta’s Llama 4 Models Are Bad for Rivals but Good for Enterprises, Experts Say

April 9, 2025

Meta’s latest open-source AI models are a shot across the bow at the more expensive closed models from OpenAI, Google, Anthropic and others. But the release is good news for businesses, because the models could lower the cost of deploying artificial intelligence (AI), according to experts.

The social media giant has released two models from its Llama family: Llama 4 Scout and Llama 4 Maverick. They are Meta’s first natively multimodal models, meaning they were built from the ground up to handle text and images rather than having those capabilities bolted on.

Llama 4 Scout’s unique proposition is a context window of up to 10 million tokens, which translates to roughly 7.5 million words. The record holder to date is Google’s Gemini 2.5, at 1 million tokens and set to grow to 2 million. The bigger the context window, which determines how much text the model can take in at once, the more data and documents one can upload to the AI chatbot.
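The 10-million-token figure maps to about 7.5 million words via a common heuristic of roughly 0.75 English words per token. A minimal sketch of that arithmetic, noting that the ratio is only an approximation and actual tokenization varies by model and language:

```python
# Rule-of-thumb conversion: one English token is about 0.75 words.
# This ratio is a widely used approximation, not an exact property
# of any particular tokenizer.
WORDS_PER_TOKEN = 0.75

def tokens_to_words(tokens: int) -> int:
    """Estimate the word capacity of a context window from its token limit."""
    return int(tokens * WORDS_PER_TOKEN)

# Llama 4 Scout's advertised window vs. Gemini 2.5's current window.
print(tokens_to_words(10_000_000))  # 7500000 words (about 7.5 million)
print(tokens_to_words(1_000_000))   # 750000 words
```

The same multiplier reproduces the article’s figures: 10 million tokens comes out to about 7.5 million words.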

Ilia Badeev, head of data science at Trevolution Group, told PYMNTS that his team was still marveling at Gemini 2.5’s 1 million-token context window when Llama 4 Scout came along with 10 million. “This is an enormous number. With 17 billion active parameters, we get a ‘mini’ level model (super-fast and super-cheap) but with an astonishingly large context. And as we know, context is king,” Badeev said. “With enough context, Llama 4 Scout’s performance on specific applied tasks could be significantly better than many state-of-the-art models.”