<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Smart Critical Infrastructure &#187; Previous Funded Research Projects</title>
	<atom:link href="http://smartci.alexu.edu.eg/?cat=11&#038;feed=rss2" rel="self" type="application/rss+xml" />
	<link>http://smartci.alexu.edu.eg</link>
	<description></description>
	<lastBuildDate>Thu, 26 Sep 2019 12:45:28 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.7.1</generator>
	<item>
		<title>Sign-Language Recognition from RGBD Data</title>
		<link>http://smartci.alexu.edu.eg/?p=320</link>
		<comments>http://smartci.alexu.edu.eg/?p=320#comments</comments>
		<pubDate>Thu, 19 Dec 2013 09:05:58 +0000</pubDate>
		<dc:creator><![CDATA[Omnia Balbaa]]></dc:creator>
				<category><![CDATA[Previous Funded Research Projects]]></category>

		<guid isPermaLink="false">http://smartci.alexu.edu.eg/?p=320</guid>
		<description><![CDATA[PI: Dr. Mohamed Elsayed Co-PI: Dr. Marwan Torky Funding Agency: Microsoft ATLc Duration: 12 months &#160; Project Abstract One of the most challenging problems in computer vision research is visual recognition and its related tasks, such as object classification, localization, and activity, scene, and event classification. Such challenging problems have benefited greatly from [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><b><span style="color: #ff0000">PI:</span> </b>Dr. Mohamed Elsayed<br />
<span style="color: #ff0000"><b>Co-PI: </b></span>Dr. Marwan Torky</p>
<p><strong><span style="color: #ff0000">Funding Agency:</span> </strong>Microsoft ATLc<br />
<span style="color: #ff0000"><strong>Duration:</strong> </span>12 months<strong><br />
</strong></p>
<p>&nbsp;</p>
<p style="text-align: center"><span style="color: #ff0000"><strong>Project Abstract</strong></span></p>
<p>One of the most challenging problems in computer vision research is visual recognition and its related tasks, such as object classification, localization, and activity, scene, and event classification. Such challenging problems have benefited greatly from recent advances in sensing technology, such as cheap RGBD sensors (e.g., the Microsoft Kinect). The merit of using depth sensors is straightforward: whereas conventional image capture projects the 3-D world onto a 2-D image plane (which introduces ambiguities), RGBD data reduces this ambiguity by attaching easily calibrated depth measurements to the pixels of the 2-D image.</p>
<p>Many indoor applications, such as 3-D reconstruction of indoor scenes, robot navigation, and activity recognition, have begun to use Kinect-like sensory data. In this research project, we address the problem of action recognition (sign language, in particular) using RGBD data. The aims are as follows: first, to collect an isolated-word sign-language dataset using a Kinect sensor; second, to develop and test machine-learning algorithms on the collected dataset that recognize sign language from the user&#8217;s skeleton movement and hand and face shapes.</p>
]]></content:encoded>
			<wfw:commentRss>http://smartci.alexu.edu.eg/?feed=rss2&#038;p=320</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
