Apple Intelligence’s Clean Up Tool Cannot Recognize Faces In The Background, Leading To The Stuff Of Nightmares When Removing Objects From The Foreground
Two major Apple Intelligence features arrived with iOS 18.1 beta 3, one of them being the company’s Clean Up tool, which behaves similarly to Google’s Magic Eraser. While we expect Apple’s generative AI features to improve with time, one area where the Clean Up tool is absolutely horrendous at its job is handling faces in the background. When it removes an object from the foreground, it needs to fill the vacated space with plausible image data. Unfortunately, the end result is something out of your worst nightmare, as you will soon find out.
Samsung’s Galaxy AI does a far superior job of filling in the missing pixels, but it is important to note that Apple Intelligence has not yet officially launched
While Apple Intelligence does an impressive job of proofreading and rewriting thousands of characters on-device, it struggles to reconstruct facial detail when an object is removed from the foreground. An example of this was posted by Mukul Sharma on X, with the caption reading ‘Peak Apple Intelligence moment.’ The content creator attempted to remove what appears to be Samsung’s Galaxy Z Fold 5 held in his right hand, with the Clean Up option initiating a crisp animation that shows the user exactly what is being removed from the image.
The tool successfully ‘cleans up’ the smartphone from Mukul’s hand, but what is left behind is a grotesque mess, with the right side of the person behind him looking as though it has completely melted. It is unfortunate that Apple Intelligence cannot fill in the missing area with accurate detail, but it is worth remembering that the feature is not entirely to blame. After all, these generative AI tools have yet to launch officially, which means there are millions of parameters still to be studied and tweaked before we see a more refined result in action.
It is also difficult for the model to read blurred image data in the background,