Stable Diffusion Can Generate Images on Apple Silicon Macs in Under 18 Seconds


Apple announced its support for the Stable Diffusion project on its machine learning research blog. The announcement accompanies updates in the recently released macOS 13.1 beta 4 and iOS 16.2 beta 4 that improve performance on Apple silicon chips.

Apple also provided extensive documentation and sample code for converting source Stable Diffusion models into the native Core ML format. The announcement is Apple's most formal endorsement yet of the recent wave of AI image generators.
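Apple's sample code (the apple/ml-stable-diffusion repository) includes a Python conversion script. A typical invocation looks roughly like the following sketch; the Hugging Face model version and the output directory are illustrative values, and the exact flags may vary between releases:

```shell
# From a checkout of apple/ml-stable-diffusion, with its dependencies installed.
# Converts the text encoder, UNet, and VAE decoder to Core ML format.
python -m python_coreml_stable_diffusion.torch2coreml \
    --convert-unet --convert-text-encoder --convert-vae-decoder \
    --model-version runwayml/stable-diffusion-v1-5 \
    -o ./coreml-models
```

The converted `.mlpackage` files in the output directory can then be loaded by either the Python or Swift pipelines in the same repository.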

For background, machine learning-based image generation rose to prominence after the striking results of the DALL-E model. These AI image generators take a string of text as a prompt and attempt to create the image it describes.

Stable Diffusion, an open-source alternative, launched in August 2022 and has already attracted substantial community investment. Thanks to new hardware optimizations in Apple's OS releases, the Core ML Stable Diffusion models fully utilize the Neural Engine and Apple GPU architectures found in the M-series chips.

This results in impressively fast generation. Apple claims that a baseline M2 MacBook Air can generate an image in under 18 seconds using a 50-iteration Stable Diffusion run; even an M1 iPad Pro can complete the same task in under 30 seconds.

Apple hopes this work will encourage developers to incorporate Stable Diffusion into their client-side apps rather than relying on backend cloud services. In contrast to cloud implementations, on-device generation is "free" and preserves user privacy.
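On the app side, Apple's repository also ships a Swift package with a pipeline API. A minimal sketch of client-side usage follows, assuming the package's `StableDiffusionPipeline` interface; the resource path, prompt, and seed are illustrative, and the method signature may differ between package versions:

```swift
import CoreML
import StableDiffusion  // Swift package from apple/ml-stable-diffusion

// Illustrative path: a folder containing the converted Core ML models.
let resourceURL = URL(fileURLWithPath: "/path/to/coreml-models")

// Prefer the Neural Engine on M-series chips for best throughput.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine

let pipeline = try StableDiffusionPipeline(resourcesAt: resourceURL,
                                           configuration: config)

// 50 steps matches the benchmark Apple quotes; a fixed seed makes the
// output reproducible.
let images = try pipeline.generateImages(prompt: "an astronaut riding a horse",
                                         stepCount: 50,
                                         seed: 42)
```

Because everything runs locally through Core ML, no prompt or image data ever leaves the device.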
