Technological advancements in mobile and edge computing now provide handheld commodity devices with the performance required to carry out the computationally intensive tasks needed for effective human-computer interaction. This enables advanced Augmented Reality (AR) and Computer Vision approaches to operate on edge and mobile devices, offering users an immersive experience by augmenting the foreground scene without the need for additional, expensive hardware. In this work we demonstrate the capabilities of AR and Computer Vision technologies for object and scene identification when deployed in a hybrid cloud-mobile environment. The prototype addresses the requirements of a real-world usage scenario: improving accessibility for hearing-impaired and mildly vision-impaired visitors in museums and exhibition areas. This work is part of the implementation of the SignGuide project, an interactive museum guide system for deaf visitors that can automatically recognize an exhibit and create an interactive experience, including the provision of content in sign language using an avatar or video. The proposed system introduces a novel Multitenant Cloud AR-based platform and a client library for mobile apps capable of effectively identifying points of interest and creating new opportunities for interactivity between the real and digital worlds.
*** Title, author list and abstract as they appear in the camera-ready version of the paper provided to the Conference Committee. Small changes that may have occurred during processing by Springer may not appear here.