Wearable camera technology has evolved to the point where small, unobtrusive cameras are now readily available, e.g. the Vicon Revue (formerly Microsoft Research's SenseCam). This has allowed research effort to shift to the analysis and interpretation of the data that such devices provide.
Even in the absence of platforms such as the Vicon Revue, any smartphone can be turned into a wearable camera. The Campaignr configurable micro-publishing platform has demonstrated the capability of mobile platforms to act as WLAN (and more general sensor) data-gathering hubs.
Researchers at Dublin City University are developing a device based on an Android smartphone worn on a lanyard around the neck that, in addition to image capture, also senses a variety of other modalities (e.g. motion, GPS, Bluetooth, WLAN).
The motivation is to use this platform in a variety of ambient assisted living applications, as well as in assistive technology for people with memory or visual impairments. Although such a platform does not exist commercially at the moment, it is likely that such devices will start to appear in the near future.
Novel technologies such as robust indoor localization will help drive this. These platforms allow users to regularly collect data at many (indoor) locations, and such a large collection of data needs to be structured to make it understandable and searchable.
In this work, the problem of structuring the data is addressed by examining the automatic identification of indoor locations. The indoor localization problem is complicated by a number of factors. Although GPS has become synonymous with user localization, its signals are weak or non-existent indoors.
Using WLAN as a solution has given promising results, but its performance is subject to change due to multipath propagation and changes in the environment, such as the number of people present at a given location, variable orientation, temporary changes to the building layout, etc.
To address these issues, the contributions made in this work include the following:
1) A novel image-based localization method is proposed, based on fine-tuning the cluster centers of a hierarchical vocabulary tree built from SURF feature descriptors. Cluster centers are calculated recursively using the previously calculated cluster centers. This shows greater robustness than the simpler approach in which the centers are calculated only once.
2) A novel fusion function is presented which takes localization results from both sensing modalities simultaneously and creates a new ranking of the locations. It uses a weighted linear combination of the confidences of both modalities and, together with adaptively calculated thresholds, obtains better accuracy and precision than any single modality. The proposed fusion approach is very general and thus potentially applicable to other sensing modalities.
3) A novel tracking method is employed on top of the image-based, WLAN-based, or fusion-based localization. The method is a simple Viterbi-based multiple-state model using Hidden Markov Model (HMM) states. An approach is proposed for converting the travel times between two consecutive locations into probabilities, in order to construct the most likely route traversed by the user.
4) A novel approach for localizing the user anywhere within a space defined by a rectangular grid of known locations of size 3 × 5 m². The novel interpolation algorithm is based on robust range- and angle-dependent likelihood functions that describe the probability of the user being in the vicinity of known, pre-selected locations in space. This approximation showed the best trade-off between complexity and accuracy and, moreover, introduced flexibility into the system: it reduces the number of calibration points (CPs) needed.
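The vocabulary-tree construction of contribution 1 can be sketched as follows. The branching factor, depth, and the warm-start refinement step are illustrative assumptions, not the paper's parameters; the descriptors here are random stand-ins for real 64-D SURF vectors.

```python
import numpy as np

def kmeans(X, centers, iters=10):
    """Lloyd's iterations, warm-started from the given centers."""
    for _ in range(iters):
        # assign each descriptor to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            pts = X[labels == k]
            if len(pts):
                centers[k] = pts.mean(axis=0)
    return centers, labels

def build_tree(X, branch=4, depth=2, rng=np.random.default_rng(0)):
    """Hierarchical vocabulary tree: cluster descriptors at each node and
    recurse into each cluster.  The second kmeans call re-runs clustering
    from the previously calculated centers, a sketch of the 'fine-tuned
    cluster centers' idea."""
    if depth == 0 or len(X) < branch:
        return None
    init = X[rng.choice(len(X), branch, replace=False)].copy()
    centers, labels = kmeans(X, init)       # coarse centers
    centers, labels = kmeans(X, centers)    # fine-tune from previous centers
    children = [build_tree(X[labels == k], branch, depth - 1, rng)
                for k in range(branch)]
    return {"centers": centers, "children": children}

# toy 64-D SURF-like descriptors
X = np.random.default_rng(1).normal(size=(200, 64))
tree = build_tree(X)
print(tree["centers"].shape)  # → (4, 64): branching factor × descriptor dim
```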
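The fusion of contribution 2 amounts to re-ranking locations by a weighted linear combination of per-modality confidences. A minimal sketch follows; the weight value and location names are illustrative, and the adaptively calculated thresholds are omitted for brevity.

```python
def fuse_rankings(img_conf, wlan_conf, w_img=0.6):
    """Re-rank locations by a weighted linear combination of the image-based
    and WLAN-based confidences (w_img is an illustrative weight)."""
    fused = {loc: w_img * img_conf.get(loc, 0.0)
                  + (1 - w_img) * wlan_conf.get(loc, 0.0)
             for loc in set(img_conf) | set(wlan_conf)}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

img_conf = {"lab": 0.7, "office": 0.2, "corridor": 0.1}
wlan_conf = {"lab": 0.3, "office": 0.5, "corridor": 0.2}
ranked = fuse_rankings(img_conf, wlan_conf)
print(ranked[0])  # → ('lab', 0.54): top location after fusion
```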
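Contribution 3 can be illustrated with a generic Viterbi decoder over HMM states (one per location). The exponential mapping of inter-location travel times to transition probabilities is an assumption for illustration, not the paper's exact formulation.

```python
import math

def time_to_prob(travel_times, elapsed, scale=10.0):
    """Illustrative conversion of expected inter-location travel times into
    transition probabilities: transitions whose travel time is far from the
    observed elapsed time are penalized exponentially, then normalized."""
    raw = {loc: math.exp(-abs(t - elapsed) / scale)
           for loc, t in travel_times.items()}
    z = sum(raw.values())
    return {loc: p / z for loc, p in raw.items()}

def viterbi(obs_conf, trans, locations):
    """Most likely location sequence given per-step observation confidences
    and transition probabilities (uniform prior over the first step)."""
    V = [{loc: obs_conf[0][loc] for loc in locations}]
    back = []
    for t in range(1, len(obs_conf)):
        V.append({}); back.append({})
        for loc in locations:
            best = max(locations, key=lambda p: V[t - 1][p] * trans[p][loc])
            back[-1][loc] = best
            V[t][loc] = V[t - 1][best] * trans[best][loc] * obs_conf[t][loc]
    path = [max(V[-1], key=V[-1].get)]
    for bp in reversed(back):        # backtrack through stored pointers
        path.append(bp[path[-1]])
    return path[::-1]

locations = ["A", "B"]
trans = {"A": {"A": 0.8, "B": 0.2}, "B": {"A": 0.2, "B": 0.8}}
obs = [{"A": 0.9, "B": 0.1}, {"A": 0.4, "B": 0.6}, {"A": 0.9, "B": 0.1}]
print(viterbi(obs, trans, locations))  # → ['A', 'A', 'A']
```

Note how the decoded route suppresses the noisy middle observation: the sticky transition probabilities make a brief A→B→A excursion less likely than staying at A.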
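The interpolation idea of contribution 4 can be sketched as a confidence-weighted combination of calibration-point coordinates, which places the user between grid points. This is a simplified stand-in for the paper's range- and angle-dependent likelihood functions; the CP layout and confidences below are invented for illustration.

```python
def interpolate_position(cps):
    """cps: list of ((x, y), confidence) pairs for calibration points on the
    3 m x 5 m grid.  Returns the confidence-weighted centroid, localizing
    the user anywhere within the grid rather than snapping to the nearest CP."""
    z = sum(c for _, c in cps)
    x = sum(p[0] * c for p, c in cps) / z
    y = sum(p[1] * c for p, c in cps) / z
    return x, y

# one grid cell: four CPs with hypothetical localization confidences
cps = [((0, 0), 0.1), ((3, 0), 0.6), ((0, 5), 0.1), ((3, 5), 0.2)]
print(interpolate_position(cps))  # → (2.4, 1.5), pulled toward the strongest CP
```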