My main research stems from an interest in spatial and temporal modelling in computer vision. This work spans many domains: I'm an interdisciplinary researcher, interested in all sorts of application areas. A broad indication of these research domains, and the questions relevant to each, is given below. If you'd like to see some of my outputs, the publications page has links to peer-reviewed journal and conference papers.
Questions... How can we model change in plants? How can we measure development (growth, spatial organisation)? What kinds of imaging platforms are appropriate for this? Are there low-cost solutions to the phenotyping problem? (aka "how far can we get with a £50 webcam?") How can we register images in different modalities (infra-red, spectroscopy, visible spectrum) when objects grow? How can we deal with occlusion?
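To give a flavour of the "£50 webcam" question: even a crude colour threshold on an RGB frame gives a usable proxy for plant size (projected leaf area). This is an illustrative sketch only, not a method from my papers; the function name, the margin parameter, and the synthetic test frame are all my own invention for the example.

```python
import numpy as np

def leaf_area_fraction(frame, green_margin=20):
    """Fraction of pixels classed as plant, using a crude
    'green dominates red and blue' rule on an RGB uint8 frame."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    mask = (g > r + green_margin) & (g > b + green_margin)
    return mask.mean()

# Synthetic 100x100 frame: grey "soil" with a 20x30 green "rosette".
frame = np.full((100, 100, 3), 120, dtype=np.uint8)
frame[40:60, 30:60] = (40, 180, 50)

print(leaf_area_fraction(frame))  # 0.06 = 600 plant pixels / 10000
```

Tracking this fraction across a timelapse gives a growth curve for free; the hard parts the questions above point at (occlusion, changing illumination, registration as the plant grows) are exactly where a fixed threshold like this breaks down.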
On this research strand I'm collaborating closely with the UK's National Plant Phenomics Centre (in Aberystwyth), hold an EPSRC First Grant looking at 2.5D modelling of plant structure (employing Jon Bell as an RA), and have a PhD student, Shishen Wang, looking in particular at imaging of Arabidopsis plants.
Questions... How do we navigate through space? How do we perceive space? How does geography constrain the way we move around? What cues do we use when navigating? How can we incorporate this into artificial systems (either static video-analysis systems or active perceivers like robots)? How can we deal with occlusions and shadows?
On this research strand I'm working with robots, including Idris (a 400kg autonomous wheeled vehicle) and a robotic boat platform with an aerial blimp-mounted camera. There are two PhD students in this domain: I'm first supervisor for Max Walker (robot boats, Raspberry Pis, and blimps!), and second supervisor for Juan Cao (who has submitted a thesis on visual robot navigation). I've also got an ongoing project looking at spatial reasoning and shadows, working with Paulo Santos at FEI in São Bernardo do Campo, Brazil; this was supported by a British Council Research Exchange in 2006 and continues to this day.
Questions... How can we model change over time? How can we register images in different modalities? Can we use groupwise methods? How do we deal with 2D and 3D datasets at the same time?
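Registration across modalities usually can't rely on matching intensities directly (bone that is bright in one modality may be dark in another), which is why information-theoretic similarity measures are popular. As a toy illustration of the idea, not of any specific method of mine, the sketch below estimates mutual information from a joint grey-level histogram and shows that it rewards alignment even when the two "modalities" have completely different intensity mappings; all names and the synthetic images are assumptions for the example.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images of the same scene,
    estimated from their joint grey-level histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                     # joint distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal over a
    py = pxy.sum(axis=0, keepdims=True)         # marginal over b
    nz = pxy > 0                                # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Toy "visible" image, and an "infra-red" view that is a nonlinear
# remap of the same scene; a shifted copy stands in for misregistration.
rng = np.random.default_rng(0)
vis = rng.random((64, 64))
ir = 1.0 - vis ** 2                 # different intensities, same scene
shifted = np.roll(ir, 8, axis=1)    # misregistered version

print(mutual_information(vis, ir) > mutual_information(vis, shifted))  # True
```

A registration algorithm would wrap a measure like this in an optimiser over a transformation; groupwise methods extend the same idea from pairs of images to whole collections.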
Applications in this area are currently in prostate cancer research (I am second supervisor for Jonathan Roscoe, looking at 2D/3D imaging for prostates) and in skin quality assessment (I am second supervisor for Alasanne Seck, who's using a lightstage to determine skin quality from high-definition 3D images).
Questions... Can we work out how an artist's style changes over time? Can we locate a painting in geographical space? What can we tell about location and style from a photograph of an artwork?
This work is in collaboration with people at the National Library of Wales, namely Lloyd Roderick, a PhD student taking a digital humanities approach to the landscape painter Sir John "Kyffin" Williams. This work also involves Lorna Hughes, at the University of London's School of Advanced Study.
I'm also interested in general computer vision problems and applications, particularly ones which involve the analysis of difficult video (low frame rate, low resolution, underwater, too close to the object, multiple occlusions...), such as data from webcams, camera phones and so on. If you're interested in collaborating on this sort of thing, get in touch at email@example.com.