In the process of conducting a meta-analysis regarding the effectiveness of field instruction in social work (Holden, Barker, Rosenberg, Kuppens, & Ferrel, 2011), our research group encountered an issue that has not been widely discussed in the social work literature. This issue has interesting implications for the use of kappa (κ) in many areas of research beyond systematic reviews and meta-analyses. Our struggle with this issue resulted in the writing of this research note.

Early in the meta-analytic process, after searching 25 databases, we compiled a set of 1,680 records to review (Holden et al., 2011). Two team members independently reviewed this set of records, each making an exclude/include decision. Next, the level of agreement was determined (even though a vote for inclusion of a record by either team member resulted in the record going into the final set for full review). The results of this process are displayed in Table 1. Although the percentage of agreement was 95.5 (94.8 + 0.7), the associated κ value was .22. Recall that a value for κ "is one when perfect agreement occurs, zero when observed agreement equals chance agreement, and less than zero when observed agreement is less than chance agreement. Therefore, κ is interpreted as agreement in excess of chance agreement" (Orme & Gillespie, 1986, p. 166).
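The arithmetic behind this seemingly paradoxical pairing of high agreement with low κ can be reproduced from a 2 × 2 agreement table. The sketch below is illustrative only: the both-exclude and both-include counts approximate the reported 94.8% and 0.7% of the 1,680 records, but the split of the disagreements between the two off-diagonal cells is an assumption chosen to be consistent with the reported 95.5% agreement and κ of .22.

```python
# Cohen's kappa for two raters making a binary exclude/include decision.
# Cell counts are illustrative: the diagonal cells approximate the reported
# 94.8% and 0.7%; the 38/37 disagreement split is an assumption.
both_exclude = 1593   # ~94.8% of 1,680 records
both_include = 12     # ~0.7% of 1,680 records
a_only = 38           # rater A includes, rater B excludes (assumed)
b_only = 37           # rater B includes, rater A excludes (assumed)

n = both_exclude + both_include + a_only + b_only  # 1,680 records in total

# Observed agreement: proportion of records on which the raters agree.
p_o = (both_exclude + both_include) / n

# Chance agreement: computed from each rater's marginal include/exclude rates.
a_include = (both_include + a_only) / n
b_include = (both_include + b_only) / n
p_e = a_include * b_include + (1 - a_include) * (1 - b_include)

# Kappa: agreement in excess of chance, scaled by its maximum possible excess.
kappa = (p_o - p_e) / (1 - p_e)
print(f"agreement = {p_o:.1%}, kappa = {kappa:.2f}")
# → agreement = 95.5%, kappa = 0.22
```

Because almost every record falls in the both-exclude cell, the chance agreement p_e is itself above .94, which leaves very little room for observed agreement to exceed it; this is why a 95.5% raw agreement can correspond to a κ of only .22.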