You can find here links to the modules I am currently teaching.
Here are some ideas for CS39440 projects that I would be
interested in supervising:
-
Mobile robot automatic road following:
-
Having a robot automatically follow a road is an attractive
prospect. In this project, I want to use a panoramic camera
that provides front and back views of the road in a single
image, allowing precise positioning of the robot on the road.
I also want to use a colour-based model of the road that
adapts to a changing road surface (e.g. when going from tarmac
to dirt). Finally, by the end of the project the system should
run on one of our outdoor robots.
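As a rough illustration (not a prescribed design), the adaptive colour model could be as simple as a per-channel Gaussian updated from a region assumed to be road; the sketch below, in Python with NumPy, uses illustrative names and parameters throughout:

```python
import numpy as np

def update_road_model(mean, var, samples, alpha=0.1):
    """Exponentially update a per-channel Gaussian colour model of the
    road from `samples`, an Nx3 array of pixels taken from a region
    assumed to be road (e.g. just ahead of the robot).  `alpha` sets
    how quickly the model adapts, e.g. from tarmac to dirt."""
    mean = (1 - alpha) * mean + alpha * samples.mean(axis=0)
    var = (1 - alpha) * var + alpha * samples.var(axis=0)
    return mean, var

def classify_road(image, mean, var, threshold=3.0):
    """Label pixels as road if they lie within `threshold` standard
    deviations of the current colour model (per channel)."""
    d2 = ((image - mean) ** 2 / np.maximum(var, 1e-6)).sum(axis=-1)
    return d2 < threshold ** 2
```

Classifying each frame and then refreshing the model from the pixels just classified as road is what lets the model drift with the road surface.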
-
Vision-based tracking of moving robots:
-
Tracking moving objects is used in many applications (security
springs to mind, but sport is another). Here, I am interested
in tracking a mobile robot using its appearance (the way it
looks) as seen from a camera. The camera can be static or
moving (pan-tilt-zoom and/or mounted on a mobile robot). I
have one application in mind (but there could be many more):
making a robot follow an object (another robot, a car, a
person) at a distance using a pan-tilt-zoom camera.
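One simple appearance model, sketched below in Python with NumPy, is a colour histogram of the target compared against candidate windows using the Bhattacharyya coefficient; a real tracker would search locally around the last position rather than exhaustively, and all names and parameters here are illustrative:

```python
import numpy as np

def colour_histogram(patch, bins=8):
    """Quantise an HxWx3 colour patch into a normalised joint histogram."""
    idx = (patch // (256 // bins)).astype(int)
    flat = idx[..., 0] * bins * bins + idx[..., 1] * bins + idx[..., 2]
    hist = np.bincount(flat.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def track(image, template_hist, win, bins=8):
    """Return the (row, col) of the window whose colour histogram best
    matches the template, scored by the Bhattacharyya coefficient."""
    h, w = win
    best, best_pos = -1.0, (0, 0)
    for r in range(image.shape[0] - h + 1):
        for c in range(image.shape[1] - w + 1):
            cand = colour_histogram(image[r:r + h, c:c + w], bins)
            score = np.sqrt(cand * template_hist).sum()
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```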
-
Stabilisation of a Pan-and-Tilt Unit holding a camera:
-
Our new large all-terrain rover is equipped with a panoramic
camera mounted on a Pan-and-Tilt Unit (PTU) that is stabilised
using gyroscopes. One problem with gyroscopes is that they
tend to drift and therefore need regular correction to ensure
the camera stays vertical. The project would involve building
a small hardware module, and the software to drive it, that
interfaces tilt sensors (accelerometers) with the PTU to
maintain that "verticality". Having taken CS25710: Mobile,
Embedded and Wearable Technology beforehand would be of great
help for this project.
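One classic way to combine a drifting gyroscope with a noisy but drift-free accelerometer is a complementary filter; below is a minimal sketch (illustrative names and gain, assuming the tilt angle is computed from the accelerometer elsewhere, e.g. as atan2(ax, az)):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, k=0.98):
    """Fuse the gyro integration (smooth but drifting) with the
    accelerometer tilt estimate (noisy but drift-free).

    angle: current tilt estimate (rad); gyro_rate: angular rate from
    the gyro (rad/s); accel_angle: tilt computed from the
    accelerometer (rad); dt: time step (s); k: how much to trust
    the gyro on each step."""
    return k * (angle + gyro_rate * dt) + (1 - k) * accel_angle
```

The accelerometer term continually pulls the estimate back towards true vertical, so a constant gyro bias produces only a small bounded error instead of an ever-growing one.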
-
Remote teleoperation of a robot with live display of its
panoramic vision:
-
Most of our robots are equipped with panoramic cameras giving
a 360-degree view of the robot's surroundings in a single
image. This view is very useful for teleoperation (amongst
other things). The live image could be displayed in the
hemispherium of the visualisation centre along with an
integrated teleoperation interface, allowing efficient remote
teleoperation of the robot. This could be used to teleoperate
some of our big outdoor robots (such as Idris).
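For display, the ring-shaped image produced by such a camera typically has to be unwrapped into a rectangular strip first; here is a minimal nearest-neighbour sketch in Python with NumPy (the centre, radii and output size are all placeholders):

```python
import numpy as np

def unwrap_panoramic(img, cx, cy, r_in, r_out, out_w, out_h):
    """Unwrap the ring between radii r_in and r_out of an
    omnidirectional image (centred on cx, cy) into an out_h x out_w
    strip.  Column 0 corresponds to angle 0; the top row samples the
    outer ring.  Nearest-neighbour sampling for simplicity."""
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_out, r_in, out_h)
    tt, rr = np.meshgrid(thetas, radii)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]
```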
-
Body pose estimation:
-
Estimating the pose of a body from a single image is a tricky
but important problem with a huge number of applications. One
approach would be to iteratively fit a 3D model of a body to
the image of a body using a combination of graphics and vision
techniques.
-
Virtual robot lab:
-
This project is on building and displaying a particular virtual
environment: the Intelligent Systems Lab (ISL), or Robot Lab, with
its mobile robots. As such, a model of the ISL will need to be
built and the final system will display the model. On top of that,
the position of the mobile robots will be obtained in real time
using the VICON system and the robots will be displayed in real
time in the virtual Lab. Technologies such as OpenGL, Java3D or
XNA (possibly linked to an Xbox 360) can be used.
-
Virtual Computer Science department:
-
This project is on building and displaying virtual
environments. Different aspects of the environment can be
included in such a project; here we will only consider its
visual and audible aspects. The application that you build
must be interactive. This includes being able to navigate in
the environment, but also to interact with it (direct
interaction, like opening doors, or indirect interaction, like
the perceived sound depending on your position in the
environment and/or its state). Technologies such as OpenGL,
Java3D or XNA (possibly linked to an Xbox 360) can be used.
-
Dancing robots:
-
Country dancing involves dancers (often paired as couples)
positioning themselves relative to other dancers, first to
form static figures and second to change those figures in some
orderly fashion. One way to realise this kind of dance is to
use absolute geometric positioning of the dancers. However,
despite its apparent simplicity, this is neither an elegant
nor a very interesting approach. It happens that such dances
are usually specified for each dancer relative to the others,
the specification containing different kinds of "rules" with
different levels of priority. The project is about
implementing such dances using (at least virtual) robots. The
robots must use sensor data about the other "dancers" to
position themselves relative to the others and thus perform
the dance.
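To give a flavour of rule-based relative positioning (this is only a sketch under my own simplifying assumptions, not a specification of the project), each dancer could move towards a priority-weighted average of the positions its rules imply:

```python
import numpy as np

def step_dancer(pos, rules, sensed, gain=0.2):
    """Move one dancer a small step towards the position implied by
    its rules.

    rules: list of (priority, other_id, offset), where offset is this
    dancer's desired position relative to dancer `other_id`.
    sensed: dict mapping dancer ids to their sensed (x, y) positions.
    Higher-priority rules dominate via priority-weighted averaging."""
    targets, weights = [], []
    for priority, other, offset in rules:
        if other in sensed:  # ignore dancers the sensors cannot see
            targets.append(np.asarray(sensed[other]) + np.asarray(offset))
            weights.append(priority)
    if not targets:
        return pos
    goal = np.average(targets, axis=0, weights=weights)
    return pos + gain * (goal - pos)  # smooth motion, not teleporting
```

Running one such step per dancer per sensing cycle lets the figure emerge from purely relative rules, with no absolute coordinates involved.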
This list is not exhaustive and will probably grow, and I
might also be interested in your own ideas!