id:        work_nkxqva5efrcg7litapywmuo6fm
author:    Anestis Sitas
title:     Duplicate detection algorithms of bibliographic descriptions
date:      2008
pages:     15
extension: .pdf
mime:      application/pdf
words:     6278
sentences: 831
flesch:    69
summary (extracted sentence fragments, truncated as in the source):
  Purpose – The purpose of this paper is to focus on duplicate record detection algorithms used for
  The algorithms for detection of duplicate records use matching keys, which are strings
  In every effort of duplicate record detection the matching process may bring about the
  refer to the detection of duplicate bibliographic records of monographs, serials
  detected duplicate records (deletion or merging), as well as whether this process is done
  This refers to the final stage of the process of detecting duplicate records.
  For duplicate record detection the keys were sorted in many and various fields.
  after the absolute matching of all compared fields, duplicate records were deleted.
  In 1990, OCLC created a new algorithm for duplicate record detection.
  deals with the process of detection of duplicate records that is applied only in one part
  All processes of algorithm applications for duplicate record detection aim primarily at
cache:     ./cache/work_nkxqva5efrcg7litapywmuo6fm.pdf
txt:       ./txt/work_nkxqva5efrcg7litapywmuo6fm.txt
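The summary describes a matching-key approach: keys are built as strings from selected bibliographic fields, records are sorted and compared by key, and records whose compared fields match exactly are then deleted or merged. A minimal sketch of that idea, assuming hypothetical field names (title, author, year) and a simple normalization rule; the paper's actual key construction and comparison rules are not specified here.

```python
import re
from collections import defaultdict


def matching_key(record, fields=("title", "author", "year")):
    """Build a matching key by normalizing selected bibliographic fields.

    The field choice and the normalization (lowercase, strip punctuation,
    collapse whitespace) are illustrative assumptions, not the paper's rules.
    """
    parts = []
    for field in fields:
        value = str(record.get(field, ""))
        value = re.sub(r"[^a-z0-9 ]", "", value.lower())
        parts.append(" ".join(value.split()))
    return "|".join(parts)


def find_duplicate_candidates(records, fields=("title", "author", "year")):
    """Group records by matching key; groups with more than one member are
    duplicate candidates. The final decision (deletion or merging) would
    follow an exact comparison of all compared fields, as the summary notes."""
    groups = defaultdict(list)
    for record in records:
        groups[matching_key(record, fields)].append(record)
    return [group for group in groups.values() if len(group) > 1]


if __name__ == "__main__":
    catalog = [
        {"title": "Duplicate detection algorithms", "author": "Sitas, A.", "year": "2008"},
        {"title": "Duplicate Detection Algorithms!", "author": "sitas, a", "year": "2008"},
        {"title": "Another monograph", "author": "Doe, J.", "year": "1990"},
    ]
    for group in find_duplicate_candidates(catalog):
        print("candidate duplicates:", group)
```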