Smartphones to Genius-Phones with Snapdragon 8 Gen 3
Around ten years ago, when the first smartphones began using on-board sensors like the camera and microphone to interact with the outside world, I believed the industry was beginning to shift from smartphones to “genius-phones.”
With hindsight, it’s evident that those were only the first steps of that evolutionary journey. During the first day of the 2023 Snapdragon Summit last week, Qualcomm unveiled the next steps with the release of its latest mobile SoC, the Snapdragon 8 Gen 3.
The goal of this most recent mobile SoC is to enable generative AI to run directly on the device. Previously, Qualcomm used its earlier generation mobile SoC to demonstrate a Stable Diffusion text-to-image generative artificial intelligence (genAI) model with about 1 billion parameters. This latest version supports genAI models with over 10 billion parameters on phones and over 13 billion parameters on PCs.
The speed at which the device can return outputs from those models is another important metric to consider, in addition to model size. Depending on the model, the Snapdragon 8 Gen 3 delivers 15–30 tokens per second and produces images in less than a second.
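To get a rough sense of what those throughput figures mean in practice, a back-of-the-envelope calculation helps (the 200-token reply length is an illustrative assumption, not a Qualcomm figure):

```python
def generation_time_seconds(num_tokens: int, tokens_per_second: float) -> float:
    """Time to stream num_tokens at a steady decode rate."""
    return num_tokens / tokens_per_second

# A typical chat-style reply of ~200 tokens, at the quoted 15-30 tok/s range:
reply_tokens = 200
print(f"at 15 tok/s: {generation_time_seconds(reply_tokens, 15):.1f} s")  # 13.3 s
print(f"at 30 tok/s: {generation_time_seconds(reply_tokens, 30):.1f} s")  # 6.7 s
```

In other words, at the quoted rates a medium-length assistant reply streams out in roughly 7 to 13 seconds, fast enough to read along as it generates.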
Qualcomm’s most recent AI Engine and AI Stack leverage generational improvements to enable the aforementioned capabilities.
The newest Adreno GPU, Kryo CPU, Hexagon neural processing unit (NPU), Sensing Hub, and supporting memory architecture make up the Snapdragon 8 Gen 3’s AI Engine, which Qualcomm claims offers up to 98% faster performance and 40% more power efficiency than the previous generation.
The GPU has been upgraded to not only enable real-time hardware-accelerated ray tracing but now provides global illumination support, as well as various gaming and hardware-accelerated image and video encoding/decoding enhancements.
Based on a 64-bit Arm Cortex-X4 architecture, Qualcomm upgraded the CPU to one primary core, five performance cores and two efficiency cores, replacing the previous 1+4+3 configuration. The primary core can be clocked up to 3.3 GHz, the performance cores run at up to 3.2 GHz, and the efficiency cores support clock rates of up to 2.3 GHz.
The NPU consists of scalar, vector, and tensor accelerators and has been upgraded with an enhanced power delivery system and micro tile inferencing to help with performance versus power optimization. Micro tile inferencing is the technique Qualcomm uses in its Hexagon NPU’s scalar accelerator. Ignacio Contreras, Qualcomm’s senior director of product marketing, explained that with micro tile inferencing, they can “slice neural network layers up in even smaller micro tiles to speed up the inferencing process of deep and complex neural networks and achieve even better power savings.”
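Qualcomm has not published the details of its micro tile inferencing technique, but the general idea it describes, processing a computation in small tiles so the working set stays in fast local memory, can be loosely illustrated with an ordinary blocked matrix multiply (the NumPy implementation and tile size here are purely illustrative, not Qualcomm's kernel):

```python
import numpy as np

def tiled_matmul(a: np.ndarray, b: np.ndarray, tile: int = 4) -> np.ndarray:
    """Compute a @ b one small tile at a time.

    Each inner step touches only tile x tile blocks, which is the general
    principle behind tiled (blocked) inference kernels: a small working
    set fits in fast on-chip memory, saving bandwidth and power.
    """
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                out[i:i + tile, j:j + tile] += (
                    a[i:i + tile, p:p + tile] @ b[p:p + tile, j:j + tile]
                )
    return out

a = np.random.rand(8, 8).astype(np.float32)
b = np.random.rand(8, 8).astype(np.float32)
assert np.allclose(tiled_matmul(a, b), a @ b, atol=1e-5)
```

The result is identical to the untiled product; only the order of work changes, which is why tiling can save power without affecting model accuracy.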
Additionally, to enable multiple model modalities, as well as optimizing model size versus accuracy and latency, the NPU supports INT4, INT8, INT16 and FP16 data types.
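As a rough illustration of how lower-precision data types trade a little accuracy for a much smaller model, here is a minimal sketch of symmetric INT8 quantization (a generic textbook scheme, not necessarily what the Hexagon NPU implements):

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> (int8 values, scale)."""
    # Guard against an all-zero tensor to avoid division by zero.
    scale = max(float(np.abs(x).max()) / 127.0, 1e-12)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 1.27], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Reconstruction error is bounded by about half a quantization step.
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

Each INT8 weight occupies a quarter of the space of an FP32 weight, which is exactly the kind of saving that makes 10-billion-parameter models feasible in a phone's memory budget; INT4 halves it again at a further accuracy cost.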
Last but certainly not least, given the critical role the on-board sensors like the cameras and microphones play in personalizing the on-device generative AI experience, the Sensing Hub has also been updated to yield up to 3.5× the AI performance compared to the previous generation with two micro NPUs, 30% more memory and two always-sensing image sensor processors (ISPs). The Snapdragon 8 Gen 3 is also equipped with a 12-layer cognitive ISP.
Upgraded intelligence
Ultimately, it’s not just about performance or increased intelligence. It’s about what you can really do with it. Support for increased model parameter sizes, the Sensing Hub and ISP improvements, and upgraded memory architecture allow the SoC to support one of the most critical features for ensuring maximum AI capability and the most natural way of interacting with those capabilities: multi-modal model support.
Just as humans interact with each other and the world around them through a combination of speech, sight, feel and hearing, interactions with on-device AI should also support voice/audio, text, image, and physical sensor sampling, such as with infrared sensors and video. Multi-modal model capability allows a device to ingest all these types of input prompts, as well as output different types of content from the written or spoken word to pictures and video clips.
For example, increased performance and multi-modal model support enables on-device photo expansion. This feature comes in handy when the user wants to resize an image without distorting it or reducing the image resolution. Take, for instance, a user who shot a picture in portrait mode for use in a social media post.
Using the same image as a banner advertisement now requires nothing more than a finger tap or even a speech prompt to direct the device to expand the image with a new aspect ratio and fill the empty expanded space with new content that seamlessly matches the existing background through the use of generative AI.
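The generative fill itself requires a model, but the surrounding bookkeeping is simple. As a minimal sketch (the function name and dimensions are illustrative), the pre-processing step places the original image on a wider canvas and builds a mask marking the pixels the model must synthesize:

```python
import numpy as np

def expand_canvas(image: np.ndarray, new_width: int):
    """Center an image on a wider canvas and return (canvas, fill_mask).

    The mask marks the pixels a generative model would need to in-paint;
    the model call itself is deliberately out of scope here.
    """
    h, w, c = image.shape
    canvas = np.zeros((h, new_width, c), dtype=image.dtype)
    mask = np.ones((h, new_width), dtype=bool)
    left = (new_width - w) // 2
    canvas[:, left:left + w] = image
    mask[:, left:left + w] = False  # original pixels stay untouched
    return canvas, mask

# A 9:16 portrait shot expanded to a roughly 16:9 banner:
portrait = np.random.randint(0, 255, (1920, 1080, 3), dtype=np.uint8)
banner, mask = expand_canvas(portrait, 3413)
print(mask.mean())  # fraction of the banner the model must synthesize
```

Everything inside the mask is then handed to the generative model as an in-painting task, conditioned on the surviving pixels so the new content blends with the original background.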
The digital versions of speech, sight and hearing are relatively straightforward. However, how can a device use ‘feel’ as an input? The answer is through the use of different sensors, such as time-of-flight and infrared sensors.
One example Qualcomm demonstrated at the recent Snapdragon Summit was the use of a time-of-flight sensor to sample the number of particles present in the air and, using AI, determine the air quality to see if it’s safe to exercise outside. Another demonstration used infrared sensors with an AI model running on a phone that determined an individual’s hydration level with a touch of their skin or determined a cookie’s freshness after it had been left out.
Next steps
The first wave of devices based on the new Snapdragon 8 Gen 3 SoC will appear this quarter from smartphone OEMs. However, this is just the beginning of the transition from smartphones to genius-phones. Ongoing research and development, as well as the competitive landscape, will deliver even more performance, efficiency and, ultimately, AI capability over the next few years.
As the industry and use cases mature and become more sophisticated, the AI experience will also be refined. We will carry personalized AI models that we have fine-tuned through use from device to device. It’s not difficult to imagine a situation where the AI models on our devices learn our preferences and interests, and we won’t want to start from scratch when we move between phone, laptop and car.
As we continue down this path, it’s clear that the next generation of genius, AI-powered phones is an evolutionary step towards a whole new world of use cases and experiences. It will be exciting to see by this time next year how big the next step will be.
Remaker AI : Swap Your Face Online For Free
Introduction
We all know what today’s AI is capable of, but the question is where to find these AI tools to use or experiment with. Most of them come with both free and paid features.

In this blog post we will talk about Remaker AI, a free AI website that allows content creators to ease their content creation with face swapping.

Face swap is simply an AI doing all the work with your face, and even with your voice prompts. For example, say you create tutorials on YouTube or any other platform; we all know the stress involved.
So Remaker AI is here to do everything for you. Just write your script, and the AI will scan your face and create a full video with your voice and face without you doing anything. Just relax and watch it do the work.
How To Use Remaker AI
As the name Remaker suggests, it does everything as you wish, but we all know AI has its limits and not all the features are free. Just visit their website, explore the various features, and become more creative in this space.
Conclusion
Kindly share your thoughts and experience with this AI platform and let us know what you think about it.
Google Pixel Buds Are Now Just $69
Google’s reasonably priced Pixel Buds A-Series earbuds are now available to Android users for just $69.
While you’re on the phone, the buds can cut down on background noise, and the sound quality is pretty damn good. According to Google, you can get up to five hours of listening time and 2.5 hours of talk time before having to put the earbuds back in their case.

The charging case extends total listening time to up to 24 hours. With rapid charging, just 15 minutes in the case adds another three hours of listening time.
There isn’t true active noise cancellation, but an adaptive sound feature adjusts the volume automatically to your surroundings.
My Everyday Tech Essentials 2024 (EDC)
As a young, beginner tech content creator, I always want to keep my followers and friends updated on how I shoot and do my stuff online.
In this video, I review the various tech gadgets I carry along whenever I go out to create content, from my iPhone to my favorite oraimo BoomPop2 and many others.
Watch Video below: