Introduction to Programming (IUT dpt Info, Polytech GE3)
Databases (IUT dpt STID)
OMGL (IUT dpt Info)
ASR3 (IUT dpt Info)
Tutored project, first year of DUT
Recognition of sign language
PFE, fifth year, Polytech'Nice
The goal of the project is to develop a search engine for large image databases, in which the interaction between the user and the engine is controlled with a Microsoft Kinect. Inspired by what has been done for the manipulation of new flat screens, recent developments have made it possible to reproduce these features with a Kinect (http://www.youtube.com/watch?v=bCuetTWdaJQ). In this project we will adopt the same type of human-machine interface, so that a user can search an image database with simple gestures and find all the pictures of cats, buses, or anything else that the database contains.
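As a rough illustration of how such a gesture-driven interface could be wired, the sketch below maps gesture events to search-engine actions. Everything here is hypothetical: the gesture names, the paging actions, and the `ImageSearch` class are illustrative assumptions, not part of any real Kinect API, which would deliver the recognized gestures as events.

```python
# Minimal sketch of a gesture -> search-action dispatcher.
# Gesture names ("swipe_left", ...) and actions are hypothetical;
# a real implementation would receive events from a Kinect SDK.

class ImageSearch:
    def __init__(self, images):
        self.images = images     # the whole image database (toy: names)
        self.page = 0            # current results page
        self.page_size = 2

    def current_page(self):
        start = self.page * self.page_size
        return self.images[start:start + self.page_size]

    def next_page(self):
        # advance only if another page of results exists
        if (self.page + 1) * self.page_size < len(self.images):
            self.page += 1
        return self.current_page()

    def prev_page(self):
        if self.page > 0:
            self.page -= 1
        return self.current_page()

# One table maps each recognized gesture to an engine action.
GESTURE_ACTIONS = {
    "swipe_left": ImageSearch.next_page,   # next results page
    "swipe_right": ImageSearch.prev_page,  # previous results page
}

def on_gesture(engine, gesture):
    """Dispatch a recognized gesture to the matching search action."""
    action = GESTURE_ACTIONS.get(gesture)
    return action(engine) if action else engine.current_page()

engine = ImageSearch(["cat1", "cat2", "bus1", "bus2"])
page = on_gesture(engine, "swipe_left")   # -> ["bus1", "bus2"]
```

The point of the dispatch table is that the gesture-recognition side and the search side stay decoupled: new gestures or new actions only add entries to the table.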
To do this, we will implement machine learning algorithms to classify the image database (we will rely on the study of SVMs, but we can of course also consider Random Forests). We will extract visual features (color, texture, edges, ...) to serve as input to the SVMs or Random Forests.
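A minimal sketch of this feature-extraction-plus-classification pipeline, assuming scikit-learn is available; the tiny pixel lists and the coarse 4-bin intensity histogram are illustrative stand-ins for real images and real visual features (color, texture, edges, ...):

```python
from sklearn.ensemble import RandomForestClassifier

def color_histogram(pixels, bins=4):
    """Toy visual feature: normalized histogram of grey-level pixel
    values in [0, 256). Real features would add color, texture, edges."""
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // 256] += 1
    return [h / len(pixels) for h in hist]

# Synthetic "images": lists of dark pixels (class 0), bright (class 1).
dark = [[10, 20, 30, 40, 50], [5, 15, 25, 60, 35]]
bright = [[200, 210, 220, 230, 240], [250, 245, 205, 215, 225]]

X = [color_histogram(img) for img in dark + bright]
y = [0, 0, 1, 1]

# The same X, y could feed an sklearn.svm.SVC instead.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

query = color_histogram([12, 22, 33, 44, 48])   # a new dark image
print(clf.predict([query])[0])                  # -> 0
```

The feature extractor and the classifier are deliberately independent: swapping the Random Forest for an SVM only changes the `clf` line.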
Finally, we will implement different strategies for interactively optimizing the search based on the user's interactions with the engine. We hope to obtain a result close to http://retin.ensea.fr/, but driven by a Kinect.
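One simple way such an interactive loop can work is relevance feedback: the user marks returned images as relevant or not, and the engine re-ranks the database accordingly. The sketch below is pure Python with toy feature vectors, and the scoring rule (similarity to accepted examples minus similarity to rejected ones) is an illustrative choice, not the actual RETIN method:

```python
def dot(a, b):
    """Inner-product similarity between two feature vectors."""
    return sum(x * y for x, y in zip(a, b))

def rerank(database, positives, negatives):
    """Score each image by mean similarity to user-approved examples
    minus mean similarity to rejected ones, then sort best-first."""
    def score(feat):
        pos = sum(dot(feat, p) for p in positives) / len(positives)
        neg = (sum(dot(feat, n) for n in negatives) / len(negatives)
               if negatives else 0.0)
        return pos - neg
    return sorted(database, key=lambda item: score(item[1]), reverse=True)

# Toy database: (name, feature vector) pairs.
db = [("cat1", [0.9, 0.1]), ("bus1", [0.1, 0.9]),
      ("cat2", [0.8, 0.2]), ("bus2", [0.2, 0.8])]

# The user marked cat1 as relevant and bus1 as irrelevant.
ranking = rerank(db, positives=[[0.9, 0.1]], negatives=[[0.1, 0.9]])
print([name for name, _ in ranking])   # cats ranked first
```

Each round of user feedback grows the positive and negative sets, so the ranking sharpens as the interaction proceeds.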
Extensions are possible, such as selecting a region of an image with your fingers to find all the images that contain similar regions, or drawing a shape to be found in the images...
The goal of the project is to develop a system for recognizing human activity. Whether for home care of the elderly, assistance to people with disabilities, or for the analysis and indexing of video data, this area of research and industrial development is booming. This is evidenced by the "Human Activity and Vision Summer School" held in Sophia Antipolis in early October, which brought together international researchers in the field (http://www.multitel.be/events/human-activity-and-vision-summer-school/home.php),
or by the Multimedia Grand Challenge, running for the last 4 years. Over that period, some very interesting work has been done by experienced researchers (http://www.di.ens.fr/~Laptev/actions/), with often impressive results on media data; the problem becomes harder when we tackle "live" action recognition.
By exploiting the information potential of the Kinect, it is certainly possible to improve on the current results, but we must still be able to learn from the heterogeneous data provided by the Kinect (video, 3D skeleton, ...) in order to extract relevant global information. This is the goal of this project.
To do this, we will implement machine learning algorithms to classify the acquired video (we will rather rely on Random Forest approaches, which make it possible to merge intermediate decisions and to combine decisions made on heterogeneous data). We will extract visual features (motion, body parts, 3D information, ...) to serve as input to the Random Forests.
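A minimal sketch of this kind of fusion, assuming scikit-learn is available; the tiny "video" and "skeleton" feature vectors are illustrative stand-ins for real motion and 3D-skeleton descriptors. One forest is trained per modality, and their intermediate decisions (class probabilities) are averaged into a combined decision, a scheme usually called late fusion:

```python
from sklearn.ensemble import RandomForestClassifier

# Toy per-sample features for two heterogeneous modalities
# (stand-ins for real video descriptors and 3D-skeleton descriptors).
video_feats    = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
skeleton_feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
labels = [0, 0, 1, 1]   # e.g. 0 = "waving", 1 = "walking"

# Late fusion: one forest per modality ...
video_rf = RandomForestClassifier(n_estimators=30, random_state=0)
video_rf.fit(video_feats, labels)
skel_rf = RandomForestClassifier(n_estimators=30, random_state=0)
skel_rf.fit(skeleton_feats, labels)

def fused_predict(video_x, skel_x):
    """... then average the per-modality class probabilities."""
    p = (video_rf.predict_proba([video_x])[0]
         + skel_rf.predict_proba([skel_x])[0]) / 2
    return int(p.argmax())

print(fused_predict([0.85, 0.15], [0.95, 0.05]))   # -> 0
```

Averaging probabilities rather than hard labels lets a confident modality outvote an uncertain one, which is why the intermediate decisions, not the final ones, are merged.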
Extensions are possible, such as creating software for dance classes that can recognize whether the sequences have been correctly observed or performed...
Then you can give free rein to your imagination.