Show simple item record

dc.contributor.author: Wambui, Mukunga Catherine
dc.date.accessioned: 2014-11-13T06:09:01Z
dc.date.available: 2014-11-13T06:09:01Z
dc.date.issued: 2014
dc.identifier.citation: Masters in Computer Science
dc.identifier.uri: http://hdl.handle.net/11295/74716
dc.description.abstract: Many Kenyan learning institutions offer ICT training, and computer programming is one of the key courses. The programming course with the highest enrolment in this institution is Visual Basic.NET. Currently, the institution's instructors are forced to set questions in multiple-choice format to ease marking. This applies to programming examinations as well, and it has affected students' performance negatively: multiple-choice questions do not test a student's coding skills, nor do they improve programming skill, because most students guess the answers. The main objective of this project was to develop an online code assessment system capable of assessing the correctness of Visual Basic.NET programs and providing instant feedback. The system was deployed at the learning institution and used by programming students taking the VB.NET course at basic, intermediate and expert levels. The software development life cycle (SDLC) methodology was used to develop the proposed system, and a case study research design was used to conduct the research. The online code assessment system was tested using various testing strategies to ensure that it worked correctly. System effectiveness testing showed that over 80% of students and instructors found the system effective for exam marking, score computation and feedback. Usability testing was conducted, and 93.1% of students, 100% of instructors and 66% of administrators agreed to use the system. Exam marking was carried out using a character matching strategy, one of the assessment methods under static analysis. Students' answers were also marked manually and the results compared with those generated by the system; the difference was less than 0.5%. The conclusion was that the system was reliable and had acceptable accuracy in code assessment, since the difference between the manual results and the system results was minimal.
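The abstract's character matching strategy can be pictured with a minimal sketch. This is not the thesis's actual implementation (which marked VB.NET answers); the function name, normalization choices and scoring formula below are illustrative assumptions only, shown in Python for brevity:

```python
def character_match_score(model_answer: str, student_answer: str) -> float:
    """Score a student's answer against the model answer by character matching.

    Whitespace and letter case are normalized first, so purely cosmetic
    formatting differences are not counted as errors.
    """
    def normalize(s: str) -> str:
        return "".join(s.lower().split())

    a, b = normalize(model_answer), normalize(student_answer)
    if not a and not b:
        return 1.0  # both empty: trivially a full match
    # Count positions where the characters agree, relative to the longer string,
    # so extra or missing characters lower the score.
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))


# A VB.NET statement differing from the model only in case and spacing
print(character_match_score("Dim total As Integer", "dim  total as integer"))  # → 1.0
```

Static analysis of this kind never runs the student's program; it only compares the submitted text against a reference, which is what makes instant automated feedback cheap to compute.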
dc.language.iso: en
dc.publisher: University of Nairobi
dc.title: An online code assessment system for Visual Basic.NET programs: case study of a learning institution in Kenya
dc.type: Thesis
dc.type.material: en_US

