Meta's three major AI updates in detail
Meta Connect 2023 is a two-day event that will take place on September 27 and 28. The event will be kicked off by Meta's CEO, Mark Zuckerberg, with his keynote at 10:30 PM PST. The conference will focus on Meta's developments
in the augmented and virtual reality spaces. You can join the event virtually and watch the live stream on Facebook and the Meta Horizon Worlds app if you prefer the VR experience with a Quest headset. For the first time since 2019, it will be held over two days and with an in-person presence.
The venue is also new, as Meta Connect 2023 will be held for the first time at Meta's headquarters in Menlo Park, California. You can explore the full agenda of the event on their official website. Meta has been actively developing and expanding its AI offerings. Here are three of their recent developments:
1. Llama 2:
This is an open-source large language model that is now free and available for research and commercial use. It competes directly with OpenAI's ChatGPT and Google's Bard. Llama 2 is a family of pre-trained and fine-tuned large language models (LLMs) developed by Meta AI, the AI group at Meta, the parent company of Facebook. The models range in scale from 7 billion to 70 billion parameters. The fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. They outperform open-source chat models on most benchmarks tested, and in human evaluations for helpfulness and safety, they are on par with some popular closed-source models like ChatGPT and PaLM.
Llama 2 comes in pretrained and fine-tuned variants at each of the 7B, 13B, and 70B parameter sizes. The models take text as input and generate text as output. Llama 2 is an auto-regressive language model built on an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. Llama 2 was trained between January 2023 and July 2023. It is a static model trained on an offline dataset, and future versions of the tuned models will be released as improvements are made in model safety with community feedback. Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
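If you want to experiment with the chat-tuned weights yourself, a minimal sketch using the Hugging Face transformers library might look like the following. This is not an official Meta example: the meta-llama/Llama-2-7b-chat-hf checkpoint is gated, so you must accept Meta's license on Hugging Face before downloading, and the [INST] tags follow the Llama-2-Chat prompt convention.

```python
# Hedged sketch: querying Llama-2-7b-chat through Hugging Face transformers.
# Assumes you have accepted Meta's license for the gated meta-llama checkpoint
# and have the transformers + accelerate packages installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # smallest chat-tuned variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama-2-Chat expects single-turn prompts wrapped in [INST] ... [/INST] tags.
prompt = "[INST] Explain in one sentence what a large language model is. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The 13B and 70B chat checkpoints can be swapped in the same way if you have the hardware to run them.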
Code Llama:
Introduced on August 24, 2023, Code Llama is a state-of-the-art large language model (LLM) developed by Meta AI and designed specifically for coding. It is built on top of Llama 2 and is capable of generating code, and natural language about code, from both code and natural language prompts.
Code Llama is available in three models:
- Code Llama: the foundational code model.
- Code Llama - Python: specialized for Python.
- Code Llama - Instruct: fine-tuned for understanding natural language instructions.
In benchmark testing, Code Llama outperformed state-of-the-art publicly available LLMs on code tasks. It has the potential to make workflows faster and more efficient for current developers and to lower the barrier to entry for people who are learning to code. Code Llama can be used as a productivity and educational tool to help programmers write more robust, well-documented software.
Code Llama was created by further training Llama 2 on code-specific datasets, sampling more data from those datasets for longer. In addition to code generation, it can be used for code completion and debugging. Code Llama supports many of the most popular languages in use today, including Python, C++, Java, PHP, TypeScript (JavaScript), C#, and Bash. It is released in three sizes, with 7B, 13B, and 34B parameters respectively, and each model is trained with 500B tokens of code and code-related data.
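As a rough illustration of the instruction-tuned variant, here is a hedged sketch using the Hugging Face transformers integration. The codellama/CodeLlama-7b-Instruct-hf model ID refers to the Hub packaging of the weights (also gated behind the Llama 2 license), and the [INST] tags follow the same chat prompt convention as Llama-2-Chat.

```python
# Hedged sketch: asking the instruction-tuned Code Llama to write a function.
# codellama/CodeLlama-7b-Instruct-hf is the Hugging Face Hub packaging of the
# 7B Instruct weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "[INST] Write a Python function that checks whether a string is a palindrome. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```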
The 7B and 13B base and instruct models have also been trained with
fill-in-the-middle (FIM) capability, allowing them to insert code into
existing code, meaning they can support tasks like code completion right out
of the box. The three models address different serving and latency
requirements. The 7B model can be served on a single GPU. The 34B model
returns the best results and allows for better coding assistance, but the
smaller 7B and 13B models are faster and more suitable for tasks that require
low latency, like real-time code completion.
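The fill-in-the-middle capability can be exercised through the same Hugging Face integration. The sketch below assumes the <FILL_ME> placeholder that the Code Llama tokenizer in transformers uses to mark the insertion point; it is an illustration of the infilling workflow, not Meta's reference code.

```python
# Hedged sketch: fill-in-the-middle completion with the base Code Llama model.
# The <FILL_ME> placeholder marks where the model should insert code, given the
# surrounding prefix and suffix.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
generated_ids = model.generate(input_ids, max_new_tokens=128)

# Everything after the original prompt length is the generated infill.
filling = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:],
                                 skip_special_tokens=True)[0]
print(prompt.replace("<FILL_ME>", filling))
```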
2. Generative AI product group:
This new product group focuses on generative AI, which allows computers to generate text, draw pictures, and create other media that resemble human output. The new AI technologies will be applied to Instagram, WhatsApp, and Messenger.
3. SeamlessM4T:
This is a foundational speech/text translation and transcription model that overcomes the limitations of previous systems with state-of-the-art results. SeamlessM4T, developed by Meta AI, is the first all-in-one multimodal and multilingual AI translation and transcription model. It can perform multiple tasks across speech and text: speech-to-text, speech-to-speech, text-to-speech, text-to-text translation, and speech recognition. This single model can translate and transcribe languages simultaneously. It supports nearly 100 languages for input (speech + text), 100 languages for text output, and 35 languages (plus English) for speech output. SeamlessM4T is the first many-to-many direct speech-to-speech translation system. It implicitly recognizes the source language(s), without the need for a separate language identification model. As a unified model, it can reduce latency in comparison to cascaded systems. SeamlessM4T was thoroughly evaluated across all languages with both automatic metrics (ASR-BLEU, BLASER 2) and human evaluation.
It was also tested for robustness, bias, and added toxicity, where it significantly outperformed previous state-of-the-art models. SeamlessM4T draws on the findings and capabilities from Meta’s No Language Left Behind (NLLB), Universal Speech Translator, and Massively Multilingual Speech advances, all from a single model. This makes it a significant breakthrough in speech-to-speech and speech-to-text translation and transcription. The model is publicly released under a CC BY-NC 4.0 license, allowing researchers and developers to build on this work. It represents another step forward in removing language barriers, enabling more effortless communication between people of different linguistic backgrounds.
For instance, compared to strong cascaded models, SeamlessM4T improves the quality of into-English translation by 1.3 BLEU points in speech-to-text and by 2.6 ASR-BLEU points in speech-to-speech. When translating from English, SeamlessM4T-Large improves on the previous SOTA (XLS-R-2B-S2T [Babu et al., 2022]) by 2.8 BLEU points on CoVoST 2 [Wang et al., 2021c], and its performance is on par with cascaded systems on Fleurs. Tested for robustness, SeamlessM4T performs better against background noises and speaker variations in speech-to-text tasks compared to the current SOTA model. This makes it a highly effective tool for real-world applications where such variations are common.
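For developers, the model can also be driven from code. The sketch below uses the Hugging Face transformers packaging of SeamlessM4T for a simple text-to-text translation; the facebook/hf-seamless-m4t-medium checkpoint name and the generate_speech flag are assumptions tied to that integration rather than part of Meta's own seamless_communication library.

```python
# Hedged sketch: text-to-text translation with the SeamlessM4T checkpoint
# published on the Hugging Face Hub (facebook/hf-seamless-m4t-medium).
# generate_speech=False asks the model for translated text tokens instead
# of an audio waveform.
from transformers import AutoProcessor, SeamlessM4TModel

model_id = "facebook/hf-seamless-m4t-medium"
processor = AutoProcessor.from_pretrained(model_id)
model = SeamlessM4TModel.from_pretrained(model_id)

text_inputs = processor(text="Hello, how are you today?", src_lang="eng",
                        return_tensors="pt")
output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False)
translated = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)
print(translated)  # French translation of the input sentence
```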
To access SeamlessM4T, you can visit the Meta SeamlessM4T demo page. Here are the steps to use it:
- Click "Start Demo".
- Hit "Start Recording".
- Choose a translation language. You can select up to 3 languages.
- Click "Translate". That's it!
- You should now be able to use SeamlessM4T.
These offerings reflect Meta's commitment to advancing the state-of-the-art in Generative AI, Computer Vision, NLP, Infrastructure, and other areas of AI.
Meta is also bringing new AI features to WhatsApp, Messenger, and Instagram. Here are some of the recent announcements:
Meta AI: Now in beta, Meta AI is an assistant you can chat with 1-on-1 or message in group chats. It can make recommendations, make you laugh with a good joke, settle a debate in a group chat and generally be there to answer questions or teach you something new.
Restyle: This feature allows you to apply new visual styles to your photos by describing the effect you want applied. Just type in descriptors like "grunge" or "watercolor" and Restyle will apply the new look and feel to your image.
Backdrop: This feature allows you to change the scene or background of your photos. It's coming soon to Instagram.
AI Chatbots: Meta is launching artificial intelligence-driven persona chatbots across Instagram, Facebook, and WhatsApp.
This gives developers the power to create their own versions of AI assistants. These AI technologies are being applied to innovative, safe products, tools, and experiences across Meta's family of apps.
📢 Like this article or have something to say? Write to us in the comment section, or connect with us on Facebook, Threads, Twitter, or LinkedIn using #TechRecevent.