The new Llama 3.3 70B model has just dropped — here’s why it’s a big deal
Meta's new free open model outperforms GPT-4o on some benchmarks
Meta has just dropped its Llama 3.3 70B model, providing further proof that open models continue to close the gap with proprietary rivals.
It has been released only in a 70B-parameter version, but its benchmark performance puts it not far off the much larger Llama 3.1 405B, and it even ranks above OpenAI’s GPT-4o and Google’s Gemini Pro 1.5 in some ratings.
The new model is available for download from Ollama, Hugging Face or Meta’s official Llama site.
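For readers who want to try the model locally, the quickest route is via Ollama. As a minimal sketch (assuming Ollama is already installed; exact model tags can change between releases):

```shell
# Download and start an interactive chat with Llama 3.3 70B.
# The first run pulls the quantized weights (roughly 40GB), so it may take a while.
ollama run llama3.3
```

The same tag can also be pulled ahead of time with `ollama pull llama3.3` if you'd rather separate the download from first use.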
Why is Llama 3.3 70B a big deal?
For developers, and for those who want to run AI models on their own computers instead of in the cloud, this is a big deal. Every new Llama release shows how smaller open models can compete with, and even beat, the best of the rest.
Meta has made no secret of the fact that it sees the open model paradigm as the best defense against potential abuse from proprietary products.
Smaller models mean that users can get decent performance from cheaper graphics cards with less VRAM. The key to the usability of AI on desktop computers is snappy responses; the best AI in the world is useless if it takes an hour to deliver an answer.
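The VRAM savings come down to simple arithmetic: the memory needed for the model weights is roughly the parameter count times the bytes per parameter, which is why quantized (4-bit or 8-bit) versions are what make a 70B model feasible on consumer hardware. A rough sketch (illustrative only; real memory use also depends on context length, the KV cache and runtime overhead):

```python
def weight_memory_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate memory needed just for the model weights, in GB."""
    bytes_total = params_billion * 1e9 * (bits_per_param / 8)
    return bytes_total / 1e9  # decimal GB for simplicity

# Llama 3.3 70B at common precision levels (weights only):
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gb(70, bits):.0f} GB")
# 16-bit: ~140 GB, 8-bit: ~70 GB, 4-bit: ~35 GB
```

Even at 4-bit, 70B parameters still want roughly 35GB of memory, which is why many home users instead reach for the smaller Llama variants or split the model across GPU and system RAM.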
And because users can customize and enhance the base Llama models, there’s every chance that open will keep pace with closed in the long run.
The plan seems to be working: Llama models were downloaded over 20 million times this August alone, a 10x increase over the same month last year. A big factor behind these numbers is that each new Llama release brings a significant decrease in cost alongside increases in performance and capability.
What's new in Llama 3.3 70B?
The new model supports eight languages, including Spanish, Hindi and Thai, and has been deliberately designed so developers can fine-tune and add on additional capabilities or languages as they need.
Two points stand out from the success of these open models. First, there is a cohort of companies, large and small, that prefer to retain a measure of control over their AI integrations. Second, there is a growing group of AI enthusiasts and specialists looking to run smaller models on more modest consumer-level hardware.
There are more than 60,000 derivative models on Hugging Face, showing the strength of demand for fine-tuning Llama models. In addition, large enterprise users like Goldman Sachs, Accenture and Shopify are using Llama internally.
Much of that enterprise use runs on cloud-hosted versions, but the more powerful Llama models are also building a sizable fan base. Companies like Zoom and DoorDash, for instance, use Llama as part of their AI mix across a wide variety of tasks, including customer support, software engineering and data analysis.
Final thoughts
This growing Llama ecosystem is a clever play by Meta. Not only does it establish the company’s strength in general-purpose AI, but it also provides some strong marketing juice for its in-house Meta AI product.
With over 350 million downloads of Llama models across the world to date, Meta has grabbed a firm spot as one of the world’s top AI companies. Its AI assistant, Meta AI, has just topped 600 million monthly users. This number is likely to explode once Llama 4 is released early next year as expected.
Nigel Powell is an author, columnist, and consultant with over 30 years of experience in the technology industry. He produced the weekly Don't Panic technology column in the Sunday Times newspaper for 16 years and is the author of the Sunday Times book of Computer Answers, published by Harper Collins. He has been a technology pundit on Sky Television's Global Village program and a regular contributor to BBC Radio Five's Men's Hour.
He has an Honours degree in law (LLB) and a Master's Degree in Business Administration (MBA), and his work has made him an expert in all things software, AI, security, privacy, mobile, and other tech innovations. Nigel currently lives in West London and enjoys spending time meditating and listening to music.










