Part 1 of this post covered some examples of VR and AR from the social services/human services sector and suggested that we need to broaden our thinking beyond the “usual suspects” of industry, acute medicine and management training simulations.
In Part 2, I want to look at some potential applications of AR/VR tech in human services. I’m a bit of an ideas magpie, constantly on the lookout for inspiration from any source, so all of these ideas are inspired by existing applications in unrelated fields. I’ll also share some thoughts on the rapidly changing development tools landscape and on how we might see AR/VR tech going mainstream.
Ideas and inspiration can, and in my case usually do, come from the most unexpected places. People who know me well know that a trip around a furniture store is never, ever top of my ‘top ten things to do this weekend’ list. But Ikea produced one of the first AR applications using Apple’s ARKit, and it’s proved to have lasting appeal because (a) it meets a customer need (ie what will **** look like in my home?) and (b) it’s straightforward to use.
My first thought when I saw it was: wouldn’t it be great if someone made a version of this to help services when they’re planning to install assistive technology (like stair lifts, bath/bed hoists etc) in people’s homes? Helping the professional and the citizen to visualise the impact of adaptations to their home would not only save time and money on installation and adaptation, but also help the user decide on the best option to meet their needs.
I then began to think about other scenarios where social service workers carry out assessments of people’s living environments. We have a series of eBooks focused on dementia, and one of them includes a section on the impact of the living environment on dementia. One of its elements is an HTML-based interactive kitchen (click here to try it) and it occurred to me that workers could benefit from an AR application which would do the same job, straight from their mobile phone – or, even better, use AR to scan a room and receive dynamic feedback on aspects of the decor/layout which could be altered to make the room more “dementia-friendly”. From there it’s easy to see how similar functionality and design patterns could be used to leverage AR in a range of related use cases (eg assessments of living environments for physical or visual impairment etc).
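Under the hood, the “dynamic feedback” part of such an app could be surprisingly simple: a set of rules run over whatever features the AR scan detects. Here’s a minimal sketch in Python – the feature names and thresholds are entirely hypothetical, and the advice text reflects widely cited dementia-friendly design principles (plain matt flooring, strong colour contrast), not any specific assessment framework:

```python
# Sketch of a rule layer a "dementia-friendly room" AR app might run
# over features detected in a scan. All feature names, thresholds and
# the example scan are hypothetical, for illustration only.

DEMENTIA_FRIENDLY_RULES = [
    (lambda r: r["floor_pattern"] == "busy",
     "Busy floor patterns can be misread as obstacles - prefer plain flooring."),
    (lambda r: r["floor_finish"] == "shiny",
     "Shiny floors can look wet or slippery - prefer a matt finish."),
    (lambda r: r["door_wall_contrast"] < 0.3,
     "Low contrast between doors and walls makes doors hard to find - "
     "consider a contrasting door colour."),
]

def assess_room(scan: dict) -> list[str]:
    """Return feedback strings for every rule the scanned room triggers."""
    return [advice for check, advice in DEMENTIA_FRIENDLY_RULES if check(scan)]

# A hypothetical scan result: patterned floor, matt finish, good contrast.
example_scan = {"floor_pattern": "busy", "floor_finish": "matt",
                "door_wall_contrast": 0.6}
for msg in assess_room(example_scan):
    print(msg)
```

The same pattern – a swappable rule set over detected room features – is what would make it easy to repurpose the app for physical or visual impairment assessments.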
Navigation is often seen as a killer application for AR and numerous examples exist (eg Google Maps AR). But with the trend towards adding Ultra-Wideband (UWB) radio capability to smartphones, indoor navigation becomes a viable application of AR. Imagine arriving at a large service facility (such as a hospital) and being able to easily find your way to where you need to go – without the stress!
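The reason UWB matters here is that it lets a phone measure its distance to fixed anchor beacons very precisely (via radio time-of-flight), and from a few such distances the phone’s indoor position can be solved geometrically. A minimal 2D trilateration sketch in Python – the anchor coordinates and distances are invented for illustration:

```python
# Minimal 2D trilateration: recover a position from distances to three
# fixed anchors at known coordinates (the core maths behind UWB indoor
# positioning). Anchor layout and distances here are illustrative.

def trilaterate(anchors, dists):
    """Solve for (x, y) given three (ax, ay) anchors and measured distances.

    Subtracting the first anchor's circle equation from the other two
    cancels the x^2/y^2 terms, leaving a 2x2 linear system.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero if the anchors are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Three anchors in a room (metres), and the distances a tag would
# measure if it were standing at the point (2, 3).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
true_pos = (2.0, 3.0)
dists = [((true_pos[0] - ax)**2 + (true_pos[1] - ay)**2) ** 0.5
         for ax, ay in anchors]

print(trilaterate(anchors, dists))  # -> (2.0, 3.0)
```

A real hospital wayfinding system would fuse many noisy ranges (and a floor plan) rather than solve three exact ones, but the geometry is the same.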
The theme I’m rather clumsily trying to highlight here is that the thoughtful application of these technologies can not only extend their reach, but can also remove much of the friction from common activities in human services.
An approach with real potential for training practical care tasks is already in use in the sporting world. Both of my sons played ice hockey growing up and, like most athletes, at times found it difficult to interpret feedback on the improvements they could make to their technique as they were learning. Enter Coach’s Eye and other similar apps.
These apps allow athletes to video themselves performing a physical task and have their trainer (who may not be present) provide feedback on their performance, annotating the video to illustrate it. The result is usable, clearly illustrated feedback which the athlete can then more easily apply.
In the world of social care, staff in residential and home care services have to be trained to safely lift and/or move service users with impaired mobility – commonly referred to as moving and handling training. But with time and service pressures (never mind the current restrictions on in-person training events due to the COVID-19 pandemic), attending such training is challenging to say the least – as is attending refresher events.
However, these sport-focused apps provide us with a functionality template and design pattern which could easily be applied to the care scenario described above.
Moving away from specialist, domain-focused AR applications: at the time I was planning this presentation in 2019, I came across this AR application from iTRA in Western Australia – a sort of augmented reality sticky notes:
Although this is focused on heavy industry/engineering, it made me reflect again on the CareStory app referenced in Part 1 of this post, and it struck me that this basic framework and design could be applied to any environment. I could see it being useful in a range of care settings where making sure information is passed along can be vital for the safety and well-being of staff and service users.
Finally, in the VR space, I wanted to highlight a project which has been over a year in planning and is just being launched, led by Barrie Wilson with the support of Alicia Rogers from the Scottish Social Services Council, working alongside Dr Matthew Poyade and his team at the Glasgow School of Art’s School of Simulation and Visualisation (SimVis) – “BeMe”.
BeMe builds on the research findings outlined in Part 1 of this post regarding the power of VR to boost empathy in learners, and will give learners the ability to step into the shoes of people receiving services in a range of situations (eg older person’s care; physical disability etc). The immersive VR experiences will be delivered via a smartphone app (housed in a passive headset) and the goal is to build empathy in staff and students new to the care sector and enhance their understanding of, and commitment to, user-centred practice.
Going mainstream – development tools
Let’s turn our attention to moving AR & VR tech into the mainstream. As with all emerging technologies, there are several elements to moving something from the new/niche to the mainstream. A key first step is being able to create usable applications of the technology at an affordable cost.
In common with most emerging technologies, VR and AR have gone through the growing pains of moving from expensive tools requiring highly specialised skills to more openly accessible tools (in terms of both cost and ease of use). Many of these services appear, shine brightly, and disappear quickly (anyone remember Aurasma?).
Keeping up to date can be difficult, but lists like this one can help. There are also some development tools that have been around for a while (if you are creating marker-based AR) – like Zappar and Blippar. If VR is more your thing, then services like Viar360 may fit the bill. So experimenting with creating AR & VR applications is now much easier than it was a few years ago.
However, if you want to dive into something more serious, it’s always worthwhile paying attention to the big tech companies. When they commit significant resources to a new or emerging technology, it’s often a sign of building momentum.
If you are a macOS/iOS user, Apple has developed a free AR development tool called Reality Composer. Reality Composer is intended to make it easy for you to create interactive augmented reality experiences with no prior 3D experience, and it works in conjunction with Apple’s developer tools and on all AR-enabled iPhone and iPad devices.
The other big player in this area is Adobe. They have ramped up their interest in AR and have developed the Aero development tools (on iOS and desktop PCs) which, when used alongside Photoshop and Adobe Dimension, can create sophisticated AR experiences.
Adobe are also working on exploiting the LiDAR scanners being introduced to smartphones to allow creatives to scan any object for use in Aero:
Adobe hasn’t neglected VR either, with Captivate 2019 including the capability to develop limited VR experiences (such as guided tours etc).
All of this investment provides an affordable and accessible entry point for L&D professionals and learning technologists to engage with and create AR- and VR-based learning experiences, and should provide a platform for increasing the use of these technologies.
Going mainstream – delivery tech
But there is another key requirement for the tech to go mainstream – the delivery platforms. In the past few years, there’s no doubt that the AR capabilities built into smartphones at OS level via ARKit and ARCore have been a significant driver for the adoption of AR. But there are situations (especially in a learning and development environment) where having to use a handheld device can be a hindrance.
So I’d suggest we need to see a shift from handheld delivery to hands-free. The obvious solution is smart glasses – but something needs to be done about the design! (I can’t see myself out in public wearing a HoloLens, can you?)
Perhaps TV fiction will help us get to where we need to go? After all, there are precedents.
Back in 2019, I was fascinated by the Hulu programme “The First” – not because of the plot (which was painfully dull), but because of its depiction of mainstream use of AR.
If the tech companies can pull off a design solution like that (even with a wireless tether to a smartphone), I think we might just see AR reach its tipping point for mass adoption.
A couple of final thoughts
Advances in mobile technology continue apace, and we’ve seen previously expensive and cumbersome processes become easier and easier to access at a hugely reduced cost. Even in highly specialised areas like motion capture, we’re seeing smartphone technology beginning to have an impact (eg Reallusion’s iPhone Face Mocap).
But finally, I just wanted to acknowledge that every example I’ve given across these two posts, has been visual in nature. What about people with visual impairments?
Thankfully this has not been overlooked. Augmented Reality is not solely a visual technology. AR audio is alive and well and has actually been with us for a long time (eg audio guided museum tours etc).
LookTel has made a range of apps available over the past 10+ years to address the needs of people with visual impairments, and in 2017 Microsoft released its Seeing AI app. According to Microsoft, the app was developed with, and tested by, people with visual impairments around the world, and uses AI technology plus smartphone cameras to provide audio descriptions of elements of the world around the user.
But I think this is just the beginning. With the development of spatial audio in wireless earbuds, and the inclusion of Ultra-Wideband radio and LiDAR sensors in newer smartphones, it seems to me we are not far away from a situation where people with visual impairments will be able to scan any novel environment (with their AR smart glasses?) and have an audio description of that environment delivered via wireless earbuds in real time.
And that’s it for this post. I’ll be interested to see how quickly this dates …