Privacy flaws in Apple’s data collection system revealed






Imperial researchers have demonstrated how Apple’s use of a widely adopted data protection model could expose individuals to privacy attacks.

Investigating Apple’s use of the model, called Local Differential Privacy (LDP), researchers found that individuals’ preferred emoji skin tone and political leanings could be inferred from the company’s data.

Companies collect behavioral data generated by users’ devices on a large scale to improve applications and services. However, this data contains precise records and may reveal sensitive information about individual users.

“Our findings underscore the need for further research into how to effectively apply these safeguards in practice to protect user data.” Dr Yves-Alexandre de Montjoye, Department of Computing

Companies such as Apple and Microsoft use LDP to collect user data without learning private information that could be traced back to individuals. However, the new paper, presented at the peer-reviewed USENIX Security Symposium, reveals how emoji and website usage data collected using LDP can be used to infer an individual’s preferred emoji skin tones and political affiliations.

The Imperial College London researchers say this violates the safeguards LDP is supposed to offer, and that more needs to be done to protect Apple customer data.

Senior author Dr Yves-Alexandre de Montjoye, from the Department of Computing, said: “Knowing users’ sensitive information, such as the skin tone of the emoji they use in chats or the political orientation of the news websites they visit most, would constitute a concrete violation of privacy.

“Our paper shows that Apple’s current LDP implementation leaves user data vulnerable to privacy attacks. Our findings underscore the need for further research into how to effectively apply these safeguards in practice to protect user data.”

Safeguard concerns

To protect user data, Apple uses LDP on its iOS and macOS devices when collecting certain types of data. The company says this helps it discover the usage patterns of large numbers of users without compromising individual privacy.

LDP works by adding “noise” to a user’s data locally, on the user’s device, to produce noisy records, which are then sent to Apple’s servers.
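As an illustration of this principle, here is a minimal Python sketch using basic binary randomized response, one of the simplest LDP mechanisms. This is not Apple’s actual mechanism (the paper studies Apple’s Count Mean Sketch), and the EPSILON value and function name here are hypothetical choices for the example.

```python
import math
import random

EPSILON = 1.0  # privacy parameter: smaller means more noise, more privacy

def randomize_bit(true_bit: int) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.

    This perturbation satisfies eps-local differential privacy: even the
    server that receives the report cannot be confident of the true value.
    """
    p_keep = math.exp(EPSILON) / (math.exp(EPSILON) + 1)
    return true_bit if random.random() < p_keep else 1 - true_bit

# Each device perturbs its record locally; only the noisy bit leaves the device.
noisy_report = randomize_bit(1)
```

Because the collector knows the flip probability, it can de-bias aggregate counts across millions of users, while each individual report on its own remains deniable.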

Dr de Montjoye said: “If implemented strictly, LDP ensures that anyone who collects the noisy records from users – including Apple itself – can never learn anything sensitive about individual users, no matter how hard they try.”

However, questions have already been raised about how the company chooses to implement LDP and whether it could be attacked in practice.

Now the researchers have discovered that even these noisy records can reveal sensitive information about individual users when targeted by a new type of attack called a pool inference attack.

“LDP is a powerful technology for collecting data while maintaining privacy, but it must be implemented carefully to provide strong privacy guarantees.” Andrea Gadotti, Department of Computing

The researchers modeled two types of attack, one targeting emoji usage and one targeting website visits, and found that users were vulnerable to both despite the current privacy safeguards.

The attack proved most effective against users whose phone usage is particularly revealing, such as those who visit news websites frequently and those with strong political views who tend to visit news websites of mostly the same political orientation.
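To make the mechanics concrete, here is a hypothetical toy version of a pool inference attack in Python; the pools, parameter values, and helper names are illustrative assumptions, not the paper’s actual model. The attacker groups the possible values into pools (for example, emoji grouped by skin tone), then scores a user’s accumulated noisy reports against each pool by likelihood.

```python
import math
import random

EPSILON = 2.0
VALUES = list(range(12))                    # e.g. 12 emoji variants
POOLS = {0: VALUES[:6], 1: VALUES[6:]}      # two pools of six variants each
K = len(VALUES)
P_TRUE = math.exp(EPSILON) / (math.exp(EPSILON) + K - 1)  # keep true value
P_LIE = (1 - P_TRUE) / (K - 1)                            # pick any other value

def ldp_report(value: int) -> int:
    """k-ary randomized response: report the true value with probability P_TRUE."""
    if random.random() < P_TRUE:
        return value
    return random.choice([v for v in VALUES if v != value])

def pool_log_likelihood(reports: list[int], pool: list[int]) -> float:
    """Log-likelihood of the observed reports if the user's true values are
    drawn uniformly from `pool`."""
    total = 0.0
    for r in reports:
        p = sum(P_TRUE if r == v else P_LIE for v in pool) / len(pool)
        total += math.log(p)
    return total

# Simulate a user whose true values always come from pool 1, seen 200 times.
reports = [ldp_report(random.choice(POOLS[1])) for _ in range(200)]

scores = {pid: pool_log_likelihood(reports, members)
          for pid, members in POOLS.items()}
print("Inferred pool:", max(scores, key=scores.get))  # almost always 1
```

Although no single randomized report is revealing on its own, a few hundred of them together usually point to the correct pool, and it is this accumulation effect that the attack exploits.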

Researchers say Apple needs to do more to ensure LDP is properly implemented.

First author Andrea Gadotti, also from the Department of Computing, said: “LDP is a powerful technology for collecting data while maintaining privacy, but it must be implemented carefully to provide strong privacy guarantees. Currently, Apple’s implementation is vulnerable to attacks that could be used to infer an individual’s political leanings and other sensitive information, with the potential for abuse and discrimination.

“While Apple puts in place certain measures that, in theory, would mitigate our attack, these measures rely entirely on trust that Apple will enforce them, which defeats the purpose of using LDP as a technical protection that is not based on trust.”

‘Pool Inference Attacks on Local Differential Privacy: Quantifying the Privacy Guarantees of Apple’s Count Mean Sketch in Practice’ by Gadotti et al., presented in August 2022 at the USENIX Security Symposium.

Main image credit: Shutterstock.
