Show simple item record

dc.contributor.author: Were, Mildred
dc.date.accessioned: 2021-02-02T11:12:33Z
dc.date.available: 2021-02-02T11:12:33Z
dc.date.issued: 2020
dc.identifier.uri: http://erepository.uonbi.ac.ke/handle/11295/154574
dc.description.abstract: A number of study findings have indicated teacher bias in performance-based assessment. This causes inconsistencies in the scores teachers assign in such assessments, posing serious reliability concerns. The current study sought to establish the relationship between the type of assessment procedure for the agriculture project (assessment by the subject teacher or by inter-raters) and the reliability of students' scores in theory examinations in Matungu Sub-county. A correlational design was employed and a survey was conducted to collect quantitative data from a cluster sample of 12 schools, giving 12 agriculture subject teachers, together with 2 purposively sampled inter-raters. A total of 380 agriculture project work samples, covering all registered students across the sampled schools, were awarded one score by the subject teacher and a second set of scores by the two inter-raters. The teachers and inter-raters completed a self-administered questionnaire that collected data on their demographic factors. The subject teacher, inter-rater and theory examination scores for each student were entered into a students' score sheet. Data on teachers' demographic factors were analyzed using descriptive statistics, and measures of central tendency and dispersion for the continuous confounding variables were also presented. The score data were analyzed using descriptive statistics and Stata version 16 software for paired t-tests, regression analysis, Fleiss' kappa inter-rater concordance, Pearson's product-moment correlation coefficients and chi-square tests. The study revealed a statistically significant relationship between inter-rater scores and theory examination scores (β=.5237, t=4.14, p=0.000), while the relationship between subject teacher scores and theory examination scores was β=.4280, t=3.18, p=0.002. The regression analysis of the influence of teacher and inter-rater scores yielded R²=0.3271. The concordance of the scores generated by the two inter-raters gave a Fleiss' kappa of .40004, implying moderate agreement between their scores. The Pearson's product-moment coefficient between the subject teacher and inter-rater scores was .8535, indicating a strong positive correlation between the variables. Chi-square tests and Cramér's V statistics on the strength of association showed that the association between inter-rater and theory examination scores was statistically significant (χ²=1300, p=0.007, Φ=0.3577), while that between subject teacher and theory examination scores was not (χ²=1540, p=0.285, Φ=0.3437). The statistically significant association between the inter-rater and theory scores indicated that the inter-rater scores were highly reliable. [en_US]
dc.language.iso: en [en_US]
dc.publisher: University of Nairobi [en_US]
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.subject: Relationship between Type of Assessment Procedure of Agriculture Project and the Reliability of Student Scores in Agriculture in Matungu Sub-County [en_US]
dc.title: Relationship between Type of Assessment Procedure of Agriculture Project and the Reliability of Student Scores in Agriculture in Matungu Sub-County [en_US]
dc.type: Thesis [en_US]
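
The abstract above reports its reliability statistics as computed in Stata 16. For readers without Stata, the short Python sketch below shows how comparable quantities (Pearson's product-moment correlation, Fleiss' kappa for the two inter-raters, a paired t-test, and an OLS regression of theory scores on the project scores) could be reproduced from a score sheet. The file name scores.csv, its column names, and the grade bands used to categorise scores for Fleiss' kappa are illustrative assumptions, not taken from the thesis.

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Illustrative sketch only -- not the thesis' Stata code. Assumes a hypothetical
# scores.csv with one row per student and the columns teacher_score,
# rater1_score, rater2_score and theory_score.
df = pd.read_csv("scores.csv")
rater_mean = df[["rater1_score", "rater2_score"]].mean(axis=1)

# Pearson product-moment correlation between teacher and inter-rater scores.
r, p = stats.pearsonr(df["teacher_score"], rater_mean)
print(f"Pearson r = {r:.4f} (p = {p:.4f})")

# Paired t-test: do teacher and inter-rater scores differ systematically?
t, p = stats.ttest_rel(df["teacher_score"], rater_mean)
print(f"paired t = {t:.2f} (p = {p:.4f})")

# Fleiss' kappa between the two inter-raters. Kappa needs categorical ratings,
# so the percentage scores are first binned into grade bands (assumed cut-offs).
bands = [0, 40, 60, 80, 101]
r1 = pd.cut(df["rater1_score"], bins=bands, labels=False, right=False)
r2 = pd.cut(df["rater2_score"], bins=bands, labels=False, right=False)
table, _ = aggregate_raters(np.column_stack([r1, r2]))
print(f"Fleiss' kappa = {fleiss_kappa(table):.4f}")

# OLS regression of theory examination scores on teacher and inter-rater scores.
X = sm.add_constant(pd.DataFrame({"teacher": df["teacher_score"],
                                  "inter_rater": rater_mean}))
ols = sm.OLS(df["theory_score"], X).fit()
print(ols.summary())

The chi-square and Cramér's V association tests mentioned in the abstract could be added in the same way by cross-tabulating the banded scores and passing the table to scipy.stats.chi2_contingency.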


