In a recent blog post, Meta announced a major expansion of Meta AI. Meta AI, the company’s versatile assistant, is now available in 22 countries, with Argentina, Chile, Colombia, Ecuador, Mexico, Peru, and Cameroon the latest additions. Users in these regions can now access Meta AI across WhatsApp, Instagram, Messenger, and Facebook in new languages including French, German, Hindi, Hindi (Romanized script), Italian, Portuguese, and Spanish, with more languages expected soon.
How is Meta AI Enhancing User Creativity?
Meta AI isn’t just about providing answers and information; it’s also a creative partner. The new “Imagine me” feature, currently in beta in the U.S., allows users to generate personalized images. By simply typing “Imagine me” followed by a prompt like “Imagine me as royalty” or “Imagine me in a surrealist painting,” users can see themselves transformed in various imaginative scenarios. This feature uses state-of-the-art personalization models to create these unique images, which can be shared with friends and family to add a fun and creative twist to interactions.
What is Imagine me?
Meta’s “Imagine me” is a new WhatsApp feature that lets users produce AI-generated images of themselves. After uploading their own photos, users can receive AI-generated images of themselves with various backgrounds and effects. To use it, a user types “Imagine me…” followed by a prompt in the Meta AI chat window, and the resulting images can be shared in chats. “Imagine me” is still in beta and can only be tested by selected beta users for now; Meta is developing the feature to integrate with its other applications, including Facebook, Instagram, and Messenger.
What Advanced Capabilities Does Meta AI Offer for Complex Tasks?
For those tackling complex tasks such as math and coding, Meta AI now features Meta’s most advanced open-source model, Llama 3.1 405B. Available on WhatsApp and meta.ai, Llama 405B excels at reasoning, enabling it to handle intricate questions. It can assist with math homework by providing step-by-step solutions, offer coding support with debugging and optimization suggestions, and help users master sophisticated technical and scientific concepts.
This powerful combination of coding expertise and image generation allows users to create new games or put a fresh spin on classic ones, potentially even inserting themselves into the gameplay. This feature opens new avenues for creativity and problem-solving, making complex tasks more manageable and accessible.
What is Llama 3.1 405B?
Llama 405B, officially Llama 3.1 405B, is the largest open-source language model developed by Meta, with 405 billion parameters. Released on 23 July 2024, the model stands out for its reasoning and long-context capabilities, going well beyond Meta’s earlier, smaller releases. The availability of Llama 3.1 405B as an open-weight model greatly increases the accessibility and usability of frontier-scale AI, creating a significant opportunity for small developers and entrepreneurs. Unlike closed-source offerings such as OpenAI’s GPT-4, the model’s weights can be downloaded and used freely, a major advantage for developing custom AI models and accelerating innovative applications. In benchmark tests, Llama 3.1 405B achieved results comparable to leading models such as GPT-4 and Claude 3.5 Sonnet, performing strongly on tasks such as natural language processing, code generation, and logical reasoning. It also offers new capabilities such as a longer context window and improved multilingual support.
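To put the model’s scale in perspective, here is a back-of-envelope calculation (an illustrative sketch, not part of Meta’s announcement) of how much memory 405 billion parameters occupy at common serving precisions:

```python
# Rough memory footprint of a 405B-parameter model at common precisions.
# Illustrative arithmetic only: real deployments also need memory for
# activations, the KV cache, and framework overhead.

PARAMS = 405e9  # 405 billion parameters (from Meta's announcement)

BYTES_PER_PARAM = {
    "fp16/bf16 (half precision)": 2,
    "int8 (8-bit quantization)": 1,
    "int4 (4-bit quantization)": 0.5,
}

for precision, nbytes in BYTES_PER_PARAM.items():
    gigabytes = PARAMS * nbytes / 1e9  # decimal gigabytes
    print(f"{precision}: ~{gigabytes:,.1f} GB for the weights alone")
```

Even aggressively quantized to 4 bits, the weights alone exceed 200 GB, which is why a model this size is typically accessed through hosted services like WhatsApp and meta.ai rather than run locally.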
How Can Users Benefit from Meta AI on Meta Quest and Other Devices?
Meta AI is not limited to apps and websites; it is also available on Ray-Ban Meta smart glasses and will soon be integrated into Meta Quest in the U.S. and Canada. In its experimental mode, Meta AI will replace the current Voice Commands on Quest, providing hands-free control, real-time information, weather updates, and more. It will even offer vision capabilities in Passthrough mode, letting users ask questions about their physical surroundings. For example, while packing for a trip, users can ask Meta AI for outfit advice or local restaurant recommendations.