

Intel Core Ultra PCs Get More Optimized AI Models, Including Stable Diffusion

Intel says there are now 500 generative AI models optimized for Core Ultra processors.

The generative AI revolution has mostly been focused on running large and complex AI models in server datacenters. Some AI models are optimized enough to run on typical computers, though, and Intel is making some progress there.

Intel announced today that there are now over 500 AI models optimized for its new Intel Core Ultra processors, which were revealed in December and have started to appear in new PC laptops. That list likely includes many experimental and testing models that don't serve a practical purpose for most applications, but there are a few big ones: Phi-2, Meta's Llama, Mistral, BERT, Whisper, and Stable Diffusion 1.5.

Intel said in a press release, “Models form the backbone of AI-enhanced software features like object removal, image super resolution or text summarization. There is a direct link between the number of enabled/optimized models and the breadth of user-facing AI features that can be brought to market. Without a model, the feature cannot be designed. Without runtime optimization, the feature cannot reach its best performance.”

Most (if not all) of those AI models can run on non-Intel hardware, but adding support for the newer hardware features specific to Intel's latest chips makes them more practical for real-world use. For example, Intel said the optimization process using its OpenVINO toolkit included "load-balancing across all the compute units, compressing the models to run efficiently in an AI PC, and optimizing the runtime to take advantage of memory bandwidth and core architecture within Intel Core Ultra."

Machine learning and AI models that run locally on computers are nothing new, but running newer generative AI models locally on PCs has a few interesting use cases. You could have something like ChatGPT or Microsoft Copilot running entirely on your own PC, potentially eliminating the privacy concerns and network connectivity requirements that come with sending prompt data to external servers. NVIDIA's ChatRTX local chatbot is a step in that direction, but it's still experimental and requires a PC with a powerful RTX 30- or 40-series graphics card.

Intel is hoping that software using these optimized models might push people to buy newer computers with Core Ultra processors. For now, though, cloud-based AI tools like ChatGPT and Copilot aren’t going anywhere.

Source: Intel

