We examine existing methods of measuring agreement/disagreement between instruments and propose a method using mutual information to quantitatively measure the amount of information shared by the results of several health survey instruments. We also provide worked examples to illustrate the approach.

Health researchers often use several health assessment tools to study a specific health syndrome. The use of multiple instruments raises a number of interesting research questions, for example, the agreement among the results of the different instruments. We address this problem using information theory, focusing on mutual information to compare the results of several health survey instruments. The layout of this paper is as follows. In the Methods section, we review existing measures for quantifying agreement/disagreement among the results of several instruments and propose mutual information as a possible measure for this purpose. To this end, we propose a procedure for comparing multiple instruments using mutual information. In the Results section, we use the proposed framework to compare and analyze a number of instruments used in a pilot study for the detection of delirium. In addition, we apply the proposed approach to other studies that compare several health survey instruments. Finally, we compare, through several illustrations, mutual information with other competing measures used in conventional studies.
The Discussion section examines the validity of the FAM-CAM used in the pilot study, discusses some of the benefits of mutual information, and considers the applicability and limitations of the proposed approach. In the comparisons, we used the proposed approach to examine separately the amount of information contributed by the agreement sections and by the disagreement sections. In some cases, we found only weak agreement among the compared instruments. A "weak" agreement can occur when there is a small amount of local mutual information from either the agreement or the disagreement sections. For example, CAM and FAM-CAM (pair 1) showed a high level of local mutual information from the agreement sections compared to that from the disagreement sections (0.629 for the agreement sections versus -0.137 for the disagreement sections). However, in the comparison between CAM and FAM-CAM for the "inattention" feature (pair 3), the local mutual information from the disagreement sections measures -0.148, while that from the agreement sections measures only 0.196, giving 0.048 as the mutual information, which represents a small degree of agreement compared to pair 1. Therefore, we conclude that pair 3 shows relatively weak agreement because of the low level of local mutual information from its agreement sections. In other words, FAM-CAM did not capture enough information to explain its agreement with CAM with respect to the "inattention" feature, which suggests a need for further clarification of the FAM-CAM questions related to this feature. Although pair 8 (comparing CAM and FAM-CAM) and pair 10 (comparing CAM and DRS) have similar levels of local mutual information from their disagreement sections (0.173 and 0.176), pair 8 shows greater agreement because of the greater amount of local mutual information from its agreement sections (0.397) compared to that of pair 10 (0.275).
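The decomposition discussed above can be sketched as follows: for two dichotomous instruments cross-tabulated over the same subjects, the mutual information is the sum of weighted local (pointwise) mutual information terms, where diagonal cells correspond to agreement sections and off-diagonal cells to disagreement sections. This is a minimal illustration with a hypothetical contingency table, not the pilot-study data; the base-2 logarithm and the function name are assumptions for the sketch.

```python
import math

def local_mi_decomposition(table):
    """Split the mutual information of a 2x2 contingency table into
    contributions from agreement cells (diagonal) and disagreement
    cells (off-diagonal).

    table[i][j] = number of subjects rated i by instrument A and j by
    instrument B (0 = negative, 1 = positive). Hypothetical counts.
    """
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [table[0][j] + table[1][j] for j in range(2)]

    agree, disagree = 0.0, 0.0
    for i in range(2):
        for j in range(2):
            if table[i][j] == 0:
                continue  # empty cells contribute nothing to MI
            p_xy = table[i][j] / n
            p_x, p_y = row_tot[i] / n, col_tot[j] / n
            # weighted local (pointwise) mutual information of cell (i, j)
            term = p_xy * math.log2(p_xy / (p_x * p_y))
            if i == j:
                agree += term      # agreement sections
            else:
                disagree += term   # disagreement sections
    return agree, disagree, agree + disagree

# Mostly concordant hypothetical ratings: agreement term is large and
# positive, disagreement term is negative, and their sum is the MI.
agree, disagree, mi = local_mi_decomposition([[40, 5], [7, 48]])
```

A comparison like pair 1 above corresponds to a large positive agreement term alongside a negative disagreement term, while a weak agreement such as pair 3 corresponds to both terms being small in magnitude.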