In the context of statistics and machine learning, the term "class kappa" usually refers to Cohen's kappa coefficient, a statistical measure of agreement, or reliability, between two raters or classification systems. The kappa statistic accounts for the agreement that could occur by chance, making it a more robust measure of inter-rater reliability than simple percentage agreement.
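To make the chance-correction concrete, below is a minimal sketch (not from the original text) that computes kappa as (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the agreement expected by chance from the raters' label frequencies. The rater labels are hypothetical and used only for illustration.

```python
# Minimal sketch of Cohen's kappa for two raters labeling the same items.
# The example labels are hypothetical, chosen only to illustrate the formula.
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Compute Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    labels = np.union1d(a, b)

    # Observed agreement: fraction of items both raters label identically.
    p_o = np.mean(a == b)

    # Chance agreement: for each label, the product of the two raters'
    # marginal probabilities of using that label, summed over all labels.
    p_e = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)

    return (p_o - p_e) / (1 - p_e)

if __name__ == "__main__":
    rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
    rater_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
    # Observed agreement is 0.75, chance agreement is 0.50, so kappa is 0.50.
    print(f"kappa = {cohens_kappa(rater_a, rater_b):.3f}")
```

In practice, scikit-learn's `sklearn.metrics.cohen_kappa_score` computes the same quantity and also supports weighted variants for ordinal labels.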