
dc.contributor.author    Angano, Walter
dc.date.accessioned    2021-11-30T06:32:42Z
dc.date.available    2021-11-30T06:32:42Z
dc.date.issued    2021
dc.identifier.uri    http://erepository.uonbi.ac.ke/handle/11295/155698
dc.description.abstract    Growth in energy demand stimulates a need to meet it, either through wired solutions involving infrastructural investment in generation, transmission and distribution systems, or through non-wired solutions such as demand response (DR). DR is a grid load reduction measure in response to supply constraints, in which consumers voluntarily shift their energy usage away from peak periods in response to a time- or price-based incentive. In Kenya, residential consumers constitute approximately 33 percent of demand, compared with 30-40 percent globally, which makes their participation in DR significant. This research reviewed smart home energy management systems and reinforcement learning (RL) techniques such as Q-learning, then designed and tested a single-agent Q-learning algorithm to objectively determine an optimal policy from a set of load management strategies. The study sought to improve the performance of the algorithm by increasing the learning speed of the agent. This was achieved by introducing a continuous knowledge base that updated fuzzy logic rules and by defining a finite state-action space. The algorithm was implemented in MATLAB and interfaced with the physical environment through an Arduino Uno kit, using serial communication between the simulation and the physical environment. A graphical user interface, developed with the App Designer tool in MATLAB, provided for integrating consumer feedback, which was critical for communicating with the knowledge base to update the fuzzy rules. The time of use (TOU) tariff plan comprised three major segments, namely off-peak, mid-peak and peak tariffs, developed by benchmarking public historical residential tariff data against TOU trends in other countries. Load profiles generated from appliance and TOU data were used to test the algorithm. The designed algorithm showed an improvement in learning within 500 episodes and net energy savings ranging between 8 and 11 percent.    en_US
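The single-agent Q-learning approach described in the abstract can be illustrated with a minimal sketch. The thesis implemented the algorithm in MATLAB with fuzzy-rule updates and a consumer-feedback GUI; the simplified Python sketch below keeps only the core idea: tabular Q-learning over TOU tariff segments (off-peak, mid-peak, peak), where the agent learns whether to run a flexible load immediately or defer it to off-peak hours. All tariff values, segment boundaries, load sizes and hyperparameters here are illustrative assumptions, not figures from the thesis.

```python
import random

# Illustrative TOU tariff in currency units per kWh; the thesis benchmarked
# public historical tariff data, but these numbers are assumptions.
TARIFF = {"off_peak": 8.0, "mid_peak": 12.0, "peak": 20.0}
STATES = ["off_peak", "mid_peak", "peak"]
ACTIONS = ["run", "defer"]   # run the flexible load now, or defer it to off-peak

LOAD_KWH = 1.5               # assumed energy per use of the flexible appliance
ALPHA, EPSILON = 0.1, 0.1    # learning rate and exploration rate

def segment(hour):
    """Map an hour of day to an assumed tariff segment."""
    if 18 <= hour < 22:
        return "peak"
    if 6 <= hour < 18:
        return "mid_peak"
    return "off_peak"

def reward(state, action):
    """Negative electricity cost: a deferred load is billed at the off-peak rate."""
    price = TARIFF["off_peak"] if action == "defer" else TARIFF[state]
    return -price * LOAD_KWH

def train(episodes=500, seed=0):
    """One-step Q-learning: each episode is a single run/defer decision."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = segment(rng.randrange(24))
        if rng.random() < EPSILON:               # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:                                    # greedy exploitation
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        # Terminal one-step update: Q <- Q + alpha * (r - Q)
        Q[(s, a)] += ALPHA * (reward(s, a) - Q[(s, a)])
    return Q

Q = train(episodes=2000)
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)  # the learned policy defers the load during peak and mid-peak
```

Because deferring during peak or mid-peak yields a less negative reward (lower cost) than running immediately, the Q-values for "defer" dominate in those states and the greedy policy shifts the load, which is the load-shifting behaviour DR relies on.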
dc.language.iso    en    en_US
dc.publisher    University of Nairobi    en_US
dc.rights    Attribution-NonCommercial-NoDerivs 3.0 United States    *
dc.rights.uri    http://creativecommons.org/licenses/by-nc-nd/3.0/us/    *
dc.subject    smart home energy management system    en_US
dc.title    Design and testing of a demand response Q-learning algorithm for a smart home energy management system    en_US
dc.type    Thesis    en_US




Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivs 3.0 United States