Many tests today are administered on computers, leveraging the benefits of information technology. Computer-based testing enables continuous administration, which is considerably more convenient for test takers than traditional paper-and-pencil testing. However, it also raises concerns about test security. For instance, examinees who take the test earlier may share the items they encountered with later test takers, leading to item bank leakage and endangering the test's validity and fairness. Although strategies for detecting and addressing compromised items have been proposed and investigated, most are computationally intensive and therefore difficult to implement for real-time monitoring. To address this challenge, we present two novel models for detecting compromised items in this dissertation: one based on response data and the other on response time data. Unlike many existing methods, which assume abrupt leakage once an item is compromised, our models incorporate a leakage rate and thus cover a broader range of scenarios. As a result, they not only flag potentially compromised items but also estimate when each item was compromised. We evaluate the detection models in a simulation study and on a real dataset from a large-scale operational computer-based test; the results indicate that our methods achieve high detection power while maintaining the nominal Type I error rate. In addition, we develop an application that allows test practitioners to use the estimated leakage rate to safeguard test integrity, substantially improving the estimation of test takers' abilities. Overall, the proposed models offer a more comprehensive and efficient approach to detecting compromised items, thereby improving test security and fairness.