>Shortly after reports today that Apple will start scanning iPhones for child-abuse images, the company confirmed its plan and provided details in a news release and technical summary.
>
>"Apple's method of detecting known CSAM (child sexual abuse material) is designed with user privacy in mind," Apple's announcement said. "Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC (National Center for Missing and Exploited Children) and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users' devices."
>
>Apple provided more detail on the CSAM detection system in a technical summary and said its system uses a threshold "set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account."
>
>The changes will roll out "later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey," Apple said. Apple will also deploy software that can analyze images in the Messages application for a new system that will "warn children and their parents when receiving or sending sexually explicit photos."
>...
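The threshold idea in the quoted summary is simpler than it sounds: individual matches do nothing on their own, and an account is only flagged once enough matches accumulate. Below is a minimal Python sketch of that matching-plus-threshold logic. It is purely conceptual: the function names, hash strings, and `MATCH_THRESHOLD` value are made up for illustration, and it omits the cryptography (NeuralHash, private set intersection, threshold secret sharing) that Apple's technical summary layers on top.

```python
# Conceptual sketch only: "match against a known-hash database, flag only past a
# threshold." All names and the threshold value are illustrative; Apple's actual
# pipeline is cryptographic, not a plain set lookup on the device.
from typing import Iterable, Set

MATCH_THRESHOLD = 30  # illustrative value, not Apple's published parameter


def count_matches(device_image_hashes: Iterable[str], known_csam_hashes: Set[str]) -> int:
    """Count how many on-device image hashes appear in the known-hash database."""
    return sum(1 for h in device_image_hashes if h in known_csam_hashes)


def account_exceeds_threshold(device_image_hashes: Iterable[str],
                              known_csam_hashes: Set[str]) -> bool:
    """A single match does nothing; only crossing the threshold flags the account,
    which is what the quoted 'one in one trillion per year' claim is calibrated around."""
    return count_matches(device_image_hashes, known_csam_hashes) >= MATCH_THRESHOLD
```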
Related: Apple's Plan to "Think Different" About Encryption Opens a Backdoor to Your Private Life Electronic Frontier Foundation https://nu.federati.net/url/282292
Also, phone pics usually carry metadata you don't want attached, and that's especially true if you're taking illegal photos, because that info can lead police straight to you. It makes me suspect that only people who are brand new to such things would use a mobile computing device to take those photos.
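For anyone who hasn't looked at that metadata: a typical phone photo's EXIF block includes the camera model, timestamps, and often GPS coordinates, and reading it takes only a few lines. A quick sketch using Pillow (the file name `photo.jpg` is just a placeholder):

```python
# Read EXIF metadata, including the GPS sub-block, from a photo with Pillow.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS


def read_exif(path: str) -> dict:
    """Return the photo's main EXIF tags as a {tag_name: value} dict."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


def read_gps(path: str) -> dict:
    """Return the GPS sub-block (latitude, longitude, timestamp, ...) if present."""
    gps_ifd = Image.open(path).getexif().get_ifd(0x8825)  # 0x8825 is the GPSInfo IFD
    return {GPSTAGS.get(tag_id, tag_id): value for tag_id, value in gps_ifd.items()}


if __name__ == "__main__":
    print(read_exif("photo.jpg"))
    print(read_gps("photo.jpg"))
```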
>“Sometimes it comes up with a desert and it thinks it’s an indecent image or pornography,” says Met digital forensics head Mark Stokes. “For some reason, lots of people have screen-savers of deserts and it picks it up thinking it is skin color.”
snip
Have heard the same thing about AIs and some fruits.
Apple learns from its MAFIAA business partners how useful it can be to pretend to think of the children and fight abuse as cover for pushing abusive practices down customers' throats.
>Well, that didn’t take long. Online researchers say they have found flaws in Apple’s new child abuse detection tool that could allow bad actors to target iOS users. However, Apple has denied these claims, arguing that it has intentionally built in safeguards against such exploitation.
>
>It’s just the latest bump in the road for the rollout of the company’s new features, which have been roundly criticized by privacy and civil liberties advocates since they were initially announced two weeks ago. Many critics view the updates—which are built to scour iPhones and other iOS products for signs of child sexual abuse material (CSAM)—as a slippery slope towards broader surveillance.
>
>The most recent criticism centers around allegations that Apple’s “NeuralHash” technology—which scans for the bad images—can be exploited and tricked to potentially target users. This started because online researchers dug up and subsequently shared code for NeuralHash as a way to better understand it. One Github user, AsuharietYgvar, claims to have reverse-engineered the scanning tech’s algorithm and published the code to his page. Ygvar wrote in a Reddit post that the algorithm was basically available in iOS 14.3 as obfuscated code and that he had taken the code and rebuilt it in a Python script to assemble a clearer picture of how it worked.
>
>Problematically, within a couple of hours, another researcher said they were able to use the posted code to trick the system into misidentifying an image, creating what is called a “hash collision.”
>...
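For context on what a “hash collision” means here: NeuralHash is a perceptual hash, so it maps visually similar images to the same short fingerprint, and the demonstrated attack crafts a second, different-looking image that lands on the same fingerprint as a target. The toy sketch below uses a simple average hash (aHash) rather than NeuralHash, with placeholder file paths, just to show what “two different images, same hash” looks like in code.

```python
# Toy illustration of a perceptual-hash collision. This is a basic average hash
# (aHash), NOT Apple's NeuralHash; it only shows why such hashes can collide:
# very different images can reduce to the same short fingerprint.
from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink to hash_size x hash_size grayscale, then set one bit per pixel
    depending on whether it is brighter than the image's mean brightness."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def is_collision(path_a: str, path_b: str) -> bool:
    """Two different images producing the same hash is a collision — the situation
    the researchers say they engineered against the posted NeuralHash code."""
    return average_hash(path_a) == average_hash(path_b)
```

Against a real perceptual hash the same idea applies; the hard part is searching for an image that lands on the target fingerprint, which is what the posted research reportedly automated within hours.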