Abstract:
Autonomous navigation in a stochastic
environment using monocular vision algorithms is a
challenging task. It requires generating depth
information about the various obstacles in a changing
environment. Since each such algorithm depends on specific
environmental constraints, it is necessary to employ
several of them and select the best algorithm
for the present environment. Consequently, modeling
monocular vision-based navigation algorithms for
stochastic environments on low-end smart computing
devices turns out to be a research challenge. This paper
discusses a novel approach to integrating several
monocular vision algorithms and selecting the best
algorithm among them according to the current
environmental conditions, based on environment-sensitive
Software Agents. The system was implemented on an
Android-based mobile phone; in a sample
scenario, it achieved a 66.6% improvement in
obstacle detection compared with using a single monocular vision
algorithm. CPU load was reduced by 10% when
the depth perception algorithms were implemented as
environment-sensitive agents rather than run
as separate algorithms in different threads.