Group-based anonymization is the most widely studied approach to privacy-preserving data publishing. Privacy models/definitions built on group-based anonymization include k-anonymity, l-diversity, and t-closeness, to name a few. The goal of this paper is to raise a fundamental issue, overlooked in the past, about the privacy exposure of approaches that use group-based anonymization. Group-based anonymization by bucketization essentially hides each individual record behind a group to preserve data privacy. If the data are not properly anonymized, patterns can be derived from the published data itself and used by an adversary to breach individual privacy. For example, if a pattern such as "people from certain countries rarely suffer from some disease" can be derived from released medical records, then that information can be used to link other people in an anonymized group to this disease with higher likelihood. We call the patterns derived from the published data the foreground knowledge, in contrast to the background knowledge that an adversary may obtain from other channels, as studied in some previous work. Finally, our experimental results show that the attack is realistic on a privacy benchmark dataset under the traditional group-based anonymization approach.
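The foreground-knowledge attack described above can be sketched in a few lines. The records, the mined pattern ("people from country X rarely suffer from this disease"), and all probabilities below are invented toy values for illustration, not taken from the paper:

```python
from collections import Counter

# Hypothetical toy records: (country, disease). Invented for illustration.
records = [
    ("X", "flu"), ("X", "flu"), ("X", "flu"),
    ("Y", "heart disease"), ("Y", "flu"),
]

# Bucketization: publish the quasi-identifiers and the sensitive values
# of a group together, but break the row-level linkage between them.
group = {
    "quasi_identifiers": [country for country, _ in records],
    "sensitive_values": [disease for _, disease in records],
}

# Naive per-group view: each of the 5 members is linked to
# "heart disease" with probability 1/5.
counts = Counter(group["sensitive_values"])
p_naive = counts["heart disease"] / len(records)

# Foreground knowledge mined from the published data itself, e.g.
# "people from country X rarely suffer from heart disease". Applying it
# rules out the three members from X, so the adversary links each of
# the two remaining members to "heart disease" with probability 1/2.
candidates = [c for c in group["quasi_identifiers"] if c != "X"]
p_foreground = counts["heart disease"] / len(candidates)

print(p_naive)       # 0.2
print(p_foreground)  # 0.5
```

The point of the sketch is that the adversary needs no external background knowledge: the pattern that sharpens the linkage probability is derived from the published table itself, which is exactly what distinguishes foreground from background knowledge.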
Comments
OriginalGriff 24-Oct-11 6:01am    
Reason for my vote of one: not a question.
Amir Mahfoozi 24-Oct-11 6:44am    
"The goal of this paper" -> paper? :o

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)