Approved Research Projects

Automatic Load Balancing for Smart Grids

Project PI: Prof. Yasser Gaber Dessouky
Project Team: Dr. Hossam ElDin Mostafa, Dr. Ahmed Abou ElFarag, and Eng. Ahmed Gouda
Research area: Power Systems / Smart Grids


Load balancing across distribution phases has recently attracted considerable interest in smart electrical grids. When the loads connected to the three phases are unbalanced, reconnecting loads to other phases is performed manually, which is a tedious and expensive task: it requires qualified personnel and involves load disconnection, and hence interruption of the electricity supply.

The solution proposed in this project addresses phase unbalance and the resulting large neutral-conductor current; as a consequence, over- and under-voltage problems are greatly reduced. A power switching unit is to be designed to transfer a load from one phase to another automatically and instantly, with no delay, based on the actual loading of all neighboring loads.
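To make the phase-transfer idea concrete, the following minimal sketch shows one way a switching controller could decide which phase each single-phase load should be connected to. This is an illustrative greedy heuristic under assumed inputs, not the project's actual control algorithm; all names are hypothetical.

```python
# Illustrative sketch (not the project's actual controller): a greedy
# phase-assignment pass that a switching unit could use to rebalance
# single-phase loads across the three phases.

def rebalance(loads):
    """loads: list of (load_id, current_amps) pairs.

    Returns (assignment, phase_totals) where assignment maps each load
    to a phase and phase_totals gives the resulting current per phase.
    """
    phase_totals = {"A": 0.0, "B": 0.0, "C": 0.0}
    assignment = {}
    # Assign the largest loads first, each to the currently least-loaded
    # phase (greedy longest-processing-time heuristic for balancing).
    for load_id, amps in sorted(loads, key=lambda x: -x[1]):
        phase = min(phase_totals, key=phase_totals.get)
        assignment[load_id] = phase
        phase_totals[phase] += amps
    return assignment, phase_totals
```

In a real installation the controller would also have to honor switching constraints (e.g. not interrupting a load mid-cycle), which this sketch ignores.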

The advantages of the proposed automatic smart load-balancing system include, but are not limited to:

  1. Providing up-to-date balanced operation of three-phase systems.
  2. Continuously avoiding unbalance problems and the associated losses.
  3. Easy installation on any system, with almost no changes to the distribution network.
  4. Allowing the user to monitor the load current and its phase angle at any time.
  5. Allowing the central computer to monitor all individual single-phase loads at any time and to generate an alarm when a fault occurs in any single-phase load.
  6. Providing a database of historical values for each load at the central PC.
  7. Providing remote monitoring of system status and history through a web interface.
  8. Enabling power factor correction.


Personalized Microblogs Corpus Recommendation

PI: Dr. Hesham El Mongy
Funding Agency: Microsoft ATLC
Duration: 12 Months

Project Abstract

Microblogs are web-based social network applications in which users post relatively short messages (corpuses) compared to regular blogs. This has encouraged many users to become more active, since the effort required to post a message is very small. On the other hand, following microblogs is becoming more challenging, as users can receive thousands of corpus updates every day. Going through all these updates is time-consuming and affects the user's real-life productivity, especially for users who have many followees and thousands of tweets arriving on their timelines each day. In this project, we propose a personalized recommendation system that aims to give the user a summary of all received corpuses. Because user interests change over time (and location), this summary should be based on the user's level of interest in a corpus's topic at the time of reception. Our method considers three major elements: the user's dynamic level of interest in a topic; the user's social relationships, such as the number of followers and the real geographical neighborhood; and other explicit features related to the publisher's authority and the tweet's content.
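The combination of the three elements above can be sketched as a simple scoring function: each incoming corpus is scored by a time-decayed topic interest, a social weight, and explicit content features. The weights, feature names, and decay scheme below are illustrative assumptions, not the project's actual model.

```python
# Hypothetical sketch of the proposed ranking idea: score each incoming
# corpus (short message) by combining the user's time-decayed topic
# interest, the publisher's social weight, and explicit content features.
import math

def score_corpus(topic_interest, hours_since_interest_update,
                 follower_count, is_geo_neighbor, content_quality,
                 half_life_hours=24.0):
    # Interest decays over time, reflecting that user interests drift.
    decay = 0.5 ** (hours_since_interest_update / half_life_hours)
    dynamic_interest = topic_interest * decay
    # Social weight: log-scaled follower count plus a neighborhood bonus.
    social = math.log1p(follower_count) + (1.0 if is_geo_neighbor else 0.0)
    # Linear combination; in practice the weights would be learned.
    return 0.6 * dynamic_interest + 0.25 * social + 0.15 * content_quality
```

Ranking the day's corpuses by this score and keeping the top few would yield the kind of personalized summary the abstract describes.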

Sign-Language Recognition from RGBD Data

PI: Dr. Mohamed Elsayed
Team members: Dr. Marwan Torky
Funding Agency: Microsoft ATLC
Duration: 12 months


Project Abstract

One of the most challenging problems in computer vision research is visual recognition and its related tasks, such as object classification, localization, and activity, scene, and event classification. These problems have benefited greatly from recent advances in sensing technology, such as cheap RGBD sensors (e.g. the Microsoft Kinect). The merit of using depth sensors is straightforward: while ordinary image capture projects the 3-D world onto a 2-D image plane (which introduces ambiguities), RGBD data reduces this ambiguity by attaching easily calibrated depth values to the pixels of the 2-D image.

Currently, many indoor applications, such as 3-D reconstruction of indoor scenes, robot navigation, and activity recognition, have started using Kinect-like sensory data. In this research project, we address the problem of action recognition (sign language, in particular) using RGBD data. The particular aims are the following: first, we collect a dataset for isolated-word sign language using a Kinect sensor; second, we develop and test machine-learning algorithms on the collected dataset to recognize sign language from the user's skeleton movement and hand and face shapes.
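As a concrete baseline for the recognition task described above, the following sketch classifies an isolated-word sign from a Kinect-style skeleton stream: each sample is a sequence of 3-D joint positions, resampled to a fixed length and classified with nearest-neighbour matching. This is an assumed baseline for illustration, not the algorithm the project develops.

```python
# Illustrative sketch only: a minimal nearest-neighbour recognizer for
# isolated sign-language words from skeleton joint sequences.
import numpy as np

def resample(seq, n_frames=16):
    """seq: (T, n_joints, 3) array of joint positions over T frames.
    Index-resample to n_frames and flatten to a fixed-length vector."""
    seq = np.asarray(seq, dtype=float)
    idx = np.linspace(0, len(seq) - 1, n_frames).round().astype(int)
    return seq[idx].reshape(-1)

def classify(query, train_seqs, train_labels, n_frames=16):
    """1-NN classification by Euclidean distance in resampled space."""
    q = resample(query, n_frames)
    feats = np.stack([resample(s, n_frames) for s in train_seqs])
    dists = np.linalg.norm(feats - q, axis=1)
    return train_labels[int(np.argmin(dists))]
```

A stronger system would add temporal alignment (e.g. dynamic time warping) and hand/face shape features alongside the skeleton, as the abstract indicates.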