Re: st: identifying duplicate records
From: "Dimitriy V. Masterov" <[email protected]>
To: [email protected]
Subject: Re: st: identifying duplicate records
Date: Fri, 10 Feb 2012 13:33:05 -0500
Nick is right. Aggregating several pairwise uses of -duplicates- will
not always work, even for tagging the problematic cases. -strgroup-, a
user-written command available from SSC, is a better way.
Here's some sample code with comments:
#delimit ;
clear all;
set more off;
/* Fake Data */
input
dob nhs str10 surname;
1979 1234 "Cox";
1979 1234 "Coxx";
1997 1234 "Cox";
1997 1243 "Cox";
1979 5417 "Box";
1979 4517 "Box";
1822 1234 "Galton";
1822 1234 "Galton";
1979 5768 "Masterov";
1997 5786 "Masterob";
2011 9999 "Singleton";
end;
/* (1) Failed way: this only tags problematic observations, with no way
   to group them into clusters, and it misses the Masterov/Masterob pair
   entirely (none of the three fields match exactly between those two
   records) */
duplicates tag dob nhs surname, gen(dups123);
duplicates tag dob nhs, gen(dups12);
duplicates tag nhs surname, gen(dups23);
duplicates tag dob surname, gen(dups13);
egen possible_dups=rowmax(dups*);
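// A quick check (plain -list- on the variables created above): the
// Masterov/Masterob pair comes back with possible_dups==0, alongside the
// true singleton, even though the two records are near-duplicates;
list dob nhs surname possible_dups if possible_dups==0, noobs;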
/* (2) Better Way: adjust matching threshold based on the level of
mistakes in your data */
gen new_id=string(dob) + "-" + string(nhs) + "-" + surname;
strgroup new_id, gen(group) threshold(.35);
sort group;
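To eyeball the resulting clusters, something like the -list- call below
(plain -list- with -sepby()- on the variables already created) prints the
records grouped together, so you can check whether the Masterov/Masterob
pair now shares a group:
list group dob nhs surname, sepby(group) noobs;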