<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Smart Critical Infrastructure &#187; Omnia Balbaa</title>
	<atom:link href="http://smartci.alexu.edu.eg/?author=2&#038;feed=rss2" rel="self" type="application/rss+xml" />
	<link>http://smartci.alexu.edu.eg</link>
	<description></description>
	<lastBuildDate>Thu, 26 Sep 2019 12:45:28 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.7.1</generator>
	<item>
		<title>Sign-Language Recognition from RGBD Data</title>
		<link>http://smartci.alexu.edu.eg/?p=320</link>
		<comments>http://smartci.alexu.edu.eg/?p=320#comments</comments>
		<pubDate>Thu, 19 Dec 2013 09:05:58 +0000</pubDate>
		<dc:creator><![CDATA[Omnia Balbaa]]></dc:creator>
				<category><![CDATA[Previous Funded Research Projects]]></category>

		<guid isPermaLink="false">http://smartci.alexu.edu.eg/?p=320</guid>
		<description><![CDATA[PI: Dr. Mohamed Elsayed Co-PI: Dr. Marwan Torky Funding Agency: Microsoft ATLc Duration: 12 months &#160; Project Abstract One of the most challenging problems in computer vision research is visual recognition and its related tasks, such as object classification, localization, activity, scene and event classification, etc. However, such challenging problems have benefited a lot from [&#8230;]]]></description>
		<content:encoded><![CDATA[<p><b><span style="color: #ff0000">PI:</span></b> Dr. Mohamed Elsayed<br />
<span style="color: #ff0000"><b>Co-PI:</b></span> Dr. Marwan Torky</p>
<p><strong><span style="color: #ff0000">Funding Agency:</span></strong> Microsoft ATLc<br />
<span style="color: #ff0000"><strong>Duration:</strong></span> 12 months</p>
<p style="text-align: center"><span style="color: #ff0000"><strong>Project Abstract</strong></span></p>
<p>One of the most challenging problems in computer vision research is visual recognition and its related tasks, such as object classification, localization, and activity, scene, and event classification. These problems have benefited greatly from recent advances in sensing technology, such as cheap RGBD sensors (e.g. the Microsoft Kinect). The merit of using depth sensors is straightforward: while conventional image capture projects the 3-D world onto a 2-D image plane (which introduces ambiguities), RGBD data reduces this ambiguity by attaching easily calibrated depth values to the captured pixels of the 2-D image.</p>
<p>Currently, many indoor applications, such as 3-D reconstruction of indoor scenes, robot navigation, and activity recognition, have started using Kinect-like sensory data. In this research project, we address the problem of action recognition (sign language in particular) using RGBD data. The particular aims are the following: first, we collect a dataset of isolated-word sign language using a Kinect sensor; second, we develop and test machine-learning algorithms on the collected dataset to recognize sign language from the user&#8217;s skeleton movement and hand and face shapes.</p>
]]></content:encoded>
			<wfw:commentRss>http://smartci.alexu.edu.eg/?feed=rss2&#038;p=320</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Nano-enriched, Autonomous, and Trustworthy Sensing Framework for Water-pollution Detection</title>
		<link>http://smartci.alexu.edu.eg/?p=227</link>
		<comments>http://smartci.alexu.edu.eg/?p=227#comments</comments>
		<pubDate>Thu, 21 Nov 2013 08:09:18 +0000</pubDate>
		<dc:creator><![CDATA[Omnia Balbaa]]></dc:creator>
				<category><![CDATA[Approved research projects]]></category>

		<guid isPermaLink="false">http://smartci.alexu.edu.eg/?p=227</guid>
		<description><![CDATA[PI: Prof. Mohamad Rizk Team members: Dr. Mohamed Azab and Dr. Nader Shehata Industrial partner: PULSE Funding Agency: Information Technology Industry Development Agency (ITIDA &#8211; ITAC program) Duration: 30 months]]></description>
				<content:encoded><![CDATA[<p><b><span style="color: #ff0000;">PI:</span></b> Prof. Mohamad Rizk<br />
<span style="color: #ff0000;"><b>Team members:</b></span> Dr. Mohamed Azab and Dr. Nader Shehata<br />
<span style="color: #ff0000;"><b>Industrial partner:</b></span> PULSE<br />
<span style="color: #ff0000;"><strong>Funding Agency:</strong></span> Information Technology Industry Development Agency (ITIDA &#8211; ITAC program)<br />
<span style="color: #ff0000;"><strong>Duration:</strong></span> 30 months</p>
]]></content:encoded>
			<wfw:commentRss>http://smartci.alexu.edu.eg/?feed=rss2&#038;p=227</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>DecoAR: An Indoor-Based Augmented Reality Solution for Interior Design and Real Estate Marketing</title>
		<link>http://smartci.alexu.edu.eg/?p=226</link>
		<comments>http://smartci.alexu.edu.eg/?p=226#comments</comments>
		<pubDate>Thu, 21 Nov 2013 08:06:53 +0000</pubDate>
		<dc:creator><![CDATA[Omnia Balbaa]]></dc:creator>
				<category><![CDATA[Approved research projects]]></category>

		<guid isPermaLink="false">http://smartci.alexu.edu.eg/?p=226</guid>
		<description><![CDATA[PI: Dr. Dina Sameh Taha Co-PI: Dr. Mustafa Y. ElNainay Industrial partner: ITQAN for Smart Solutions Funding Agency: Information Technology Industry Development Agency (ITIDA &#8211; ITAC Program) Duration: 12 months]]></description>
				<content:encoded><![CDATA[<p><span style="color: #ff0000;"><b>PI:</b></span> Dr. Dina Sameh Taha<br />
<span style="color: #ff0000;"><b>Co-PI:</b></span> Dr. Mustafa Y. ElNainay<br />
<span style="color: #ff0000;"><strong>Industrial partner:</strong></span> ITQAN for Smart Solutions<br />
<span style="color: #ff0000;"><strong>Funding Agency:</strong></span> Information Technology Industry Development Agency (ITIDA &#8211; ITAC Program)<br />
<strong><span style="color: #ff0000;">Duration:</span></strong> 12 months</p>
]]></content:encoded>
			<wfw:commentRss>http://smartci.alexu.edu.eg/?feed=rss2&#038;p=226</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Sign-Language Recognition from RGBD Data</title>
		<link>http://smartci.alexu.edu.eg/?p=219</link>
		<comments>http://smartci.alexu.edu.eg/?p=219#comments</comments>
		<pubDate>Wed, 20 Nov 2013 11:21:19 +0000</pubDate>
		<dc:creator><![CDATA[Omnia Balbaa]]></dc:creator>
				<category><![CDATA[Approved research projects]]></category>

		<guid isPermaLink="false">http://smartci.alexu.edu.eg/?p=219</guid>
		<description><![CDATA[PI: Dr. Mohamed Elsayed Team members: Dr. Marwan Torky Funding Agency: Microsoft ATLc Duration: 12 months &#160; Project Abstract One of the most challenging problems in computer vision research is visual recognition and its related tasks, such as object classification, localization, activity, scene and event classification, etc. However, such challenging problems have benefited a lot [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><b><span style="color: #ff0000;">PI:</span></b> Dr. Mohamed Elsayed<br />
<span style="color: #ff0000;"><b>Team members:</b></span> Dr. Marwan Torky<br />
<span style="color: #ff0000;"><strong>Funding Agency:</strong></span> Microsoft ATLc<br />
<span style="color: #ff0000;"><strong>Duration:</strong></span> 12 months</p>
<p style="text-align: center;"><strong><span style="color: #ff0000;">Project Abstract</span></strong></p>
<p style="text-align: justify;">One of the most challenging problems in computer vision research is visual recognition and its related tasks, such as object classification, localization, activity, scene and event classification, etc. However, such challenging problems have benefited a lot from recent advances in sensing technologies, such as cheap RGBD sensors (e.g. Microsoft Kinect). The merit of using depth sensors is straight forward. While the original image capturing is a projection of the 3-D world into a 2-D image plane (which results in ambiguities), the RGBD data aims to reduce ambiguity by giving an easily-calibrated depth data to the captured pixels in the 2-D image.</p>
<p style="text-align: justify;">Currently, many indoor applications such as 3-D reconstruction of indoor scenes, robot navigation, and activity recognition have started using Kinect-like sensory data. In this research project, we address the problems of action recognition (sign language, in particular) using RGBD data. The particular aims are the following: First, we collect a dataset for isolated-word sign language using a Kinect sensor. Second, we develop and test algorithms that apply machine learning on the collected dataset to recognize sign language from the user&#8217;s skeleton movement and hand and face shapes.</p>
]]></content:encoded>
			<wfw:commentRss>http://smartci.alexu.edu.eg/?feed=rss2&#038;p=219</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
