The Future of Accessibility

Craig Smith
16 min read · Mar 17, 2018

Last week I gave a talk at a Newcastle Futurist meet-up on the ‘Future of Accessibility’. The brief for the talk was to review the current state of the field of accessibility, to then consider where accessibility might be heading in the next few years, and then to make a few predictive leaps into future decades of progress. I was excited by the potentials of the brief — aware of course of the folly of predicting the future, but still — and I enjoyed sharing some of the work I’ve been doing in this space over the past couple of years. While I did record the talk, the positioning of the camera and the sound did not turn out very well. Hence I thought the best way to share the talk would be through the following recount. Think of it as an index of the key concepts that were discussed on the night, with links to further read up on any ideas that take your fancy. I hope it is of interest.

Photo above is of me, a man in a business style shirt, looking at a digital projection of a slide containing Apple iOS icons for Photos, Health Kit, Calendar and CoreML. An equation reads Photos = Health Kit (Sleep, Exercise, Emotional Health) + Calendar (Exams, Events, Etc). Photo taken at the Newcastle Futurist event.

In the afternoon, while I was putting the finishing touches on the presentation, I came across the photo below of Angel Guiffria at SXSW by chance on Twitter. She had posted the image with a note about charging her bionic arm at a power outlet at the back of the room she was in. It is a very powerful, cheeky image of where we stand today regarding the future of accessibility, and it provided a terrific talking point to open the event with.

Photo of Angel Guiffria, a young woman, standing in the corner of a room with a bionic arm with bright LED lights shining from it. The arm is plugged into an outlet in the wall to charge up. From https://twitter.com/aannggeellll/status/972610483495886848

I next shared three stories — the work of Christopher Hills, talking about Switch Control accessibility across the Apple ecosystem; of David Woodbridge, talking about VoiceOver across the same; and of Owen Suskind, a young man on the spectrum who, through his love of Disney movies, was able to develop his social communication skills.

You can see a terrific and very funny video that Chris put together using Switch Control and a drone operated from his iPhone here (https://www.youtube.com/watch?v=Pw6eVF3ilVA) and a good CNET piece on David and his use of VoiceOver, as well as his thoughts on accessibility and universal design, here (https://www.youtube.com/watch?v=AlbIF7IOb_Q). The trailer for a movie about Owen, named ‘Life, Animated’, is available here: https://www.youtube.com/watch?v=4n7fosK9UyY

Book cover for ‘The End of Average’ with subtitle text ‘How we succeed in a world that values sameness’ by Todd Rose. From https://www.harpercollins.com/9780062358363/the-end-of-average

To set a theoretical foundation for the talk, I then moved on to the work of Todd Rose and his book ‘The End of Average’. The book focuses on the conceptual move beyond seeing society as a set of averages, towards authentically seeing and understanding individuals in our society for what they are — completely unique beings that cannot, and should not, have measures of an expected average applied to them. There is a terrific quote in the book by Peter Molenaar, distinguished professor of Human Development and Family Studies at Penn State, in which he describes an individual as “a high dimensional system evolving over space and time”. That is such a beautiful way of describing the many complicated factors that compose our ever-shifting selves as we grow and develop in all manner of non-linear ways.

This is what is at the heart of accessibility, of universal design and of personalisation — the non-repeatable, never-before-observed versions of ourselves that, as a consequence, require a mode of design thinking which supports who we are as individuals and all the ways we will continue to change. We want to have the right supports and tools at the ready to help us achieve what we desire, through kind, inclusive designs and clever innovations.

A keynote slide with text ‘The Future of Accessibility’ at the top and an icon of a calendar reading 2018 followed by a photo of an iPhone, a graphic of Pokémon Go and a photo of a lady smiling and showing her Apple Watch. Text beneath reads ‘Accessibility Now’.

In talking about the accessibility of today, I discussed three broad areas: lifestyle, through iOS accessibility options; learning, with examples of how we use the strengths and interests of our students to achieve big results; and locality, sharing examples of inclusion across our cities. I started by giving a live iOS demonstration on an iPhone X, showing the powerful accessibility tools that are in our pockets. One of my favourite activities for demonstrating the power of features like VoiceOver is to show how to take a selfie and then tweet it out, all with your iPhone screen turned off. It is a powerful way to get a sense of the clever accessibility workflows built into iOS. I have typed up instructions so you can do it yourself here: http://www.autismpedagogy.com/voiceover
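
For anyone curious about what sits underneath a workflow like this on the developer side, here is a minimal Swift sketch of my own, not something shown on the night: it checks whether VoiceOver is running and posts a spoken announcement when a photo is captured, so the flow still makes sense with the screen curtained off. The class and method names are hypothetical.

```swift
import UIKit

// A minimal sketch (my own illustration) of how an app can cooperate with
// VoiceOver, the screen reader used in the demonstration above.
final class CaptureViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // React when the user turns VoiceOver on or off while the app is open.
        NotificationCenter.default.addObserver(
            forName: UIAccessibility.voiceOverStatusDidChangeNotification,
            object: nil,
            queue: .main
        ) { _ in
            print("VoiceOver running: \(UIAccessibility.isVoiceOverRunning)")
        }
    }

    // Hypothetical handler for a "photo taken" event in a camera flow.
    func didCapturePhoto() {
        if UIAccessibility.isVoiceOverRunning {
            // Speak a confirmation so the flow works with the screen off
            // (Screen Curtain), as in the selfie-and-tweet activity.
            UIAccessibility.post(notification: .announcement,
                                 argument: "Photo captured and ready to share.")
        }
    }
}
```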

Graphic of four coloured boxes, Blue, Green, Yellow and Red, with a Pokémon under each. Psyduck is under the Blue, Pikachu under the Yellow, Oshawott under the Green, and Charizard under the Red.

I then moved into the learning territory, sharing some of the lesson plans and ideas from my special interest work in autism education over the years. My continuing thesis is that by recognising and empowering the strengths and interests that our children have, we can better understand what drives them and how to connect this drive with goals that will take them further into the future. Minecraft was the focus for a number of the examples, using lessons developed by my colleague Heath Wild and me in our ‘Minecraft in your Classroom’ textbook. I also shared similar work that we accomplished with ‘Pokémon Go’: in particular, the way in which we were able to take the fascination that many children had with the game and connect it with other areas of academic and personal development, from exploring local history through to better understanding personal emotional regulation. You can read specific examples of this Pokémon Go work here: ‘Explore Everything with Pokémon Go’.

Image above details three steps in a diagram. Title text reads GOCASI Model, Smith and Wild, 2017. Text within the diagram reads: Phase 1 — Gaming. Virtual reality, open world gaming that presents opportunities for social skill practice within. Phase 2 — Online Community. Online communities that are formed out of the gaming experience, allowing for evolving social relationships and offline content creation. Phase 3 — Augmented Social Implementation. After the game experience has been established and enjoyed within virtual parameters, an augmented reality version of the game opens up for real world social skill practice.

This discussion of Pokémon Go led into an exploration of the potentials of augmented reality in this space. Heath and I developed the GOCASI Model, above, in 2017 to help explore the potentials of augmented reality and gaming in assisting with social inclusion strategies. In summary, it is a three-phase model. In the first phase, the potential of virtual reality, open world online gaming is explored. Think Roblox, Minecraft, VRChat and similar environments that can be cultivated appropriately to model and practice appropriate social skills within. Phase two focuses on the development of community within these games: the forums and discussion boards that spring out of these gaming experiences, and the content that is created and shared between fans online, moving the social experience out of the game and into other online spaces. The final phase connects these social opportunities with real world experience, be they at conventions where games can be played with peers or art can be shared and expressed, or in social spaces like those created by Pokémon Go, where every park and town square becomes a blank canvas for social experiences that, when scaffolded and supported appropriately, can allow for authentically positive personal outcomes.

Photo of a young lady showing her Apple Watch to the camera. From http://www.mollywatt.com/blog/entry/my-apple-watch-after-5-days

Moving on to locality, I covered ideas such as universally designed playgrounds and cities, showing examples of playgrounds designed for elderly citizen participation, and big cities designed specifically for children to play in. Locality is of particular relevance to accessibility in Newcastle at the moment, as our city is in the midst of major construction works, with half the city shut down for the installation of rail works and the other half managing redevelopment projects that have disrupted access to almost every street.

To discuss another angle on how accessibility connects with locality, I shared the story told in a blog post by Molly Watt, a UK based accessibility advocate who was born deaf and registered blind at age 14, with Usher Syndrome Type 2a. She wrote a powerful blog post a couple of days after she started wearing an Apple Watch when they first came out. In the post she discusses the many ways she benefited from the accessibility features of the Watch, and with regards to locality she mentions the particular benefit of using Maps to help navigate her town. Through its haptic feedback, the Watch was able to guide her around the city using different patterns of physical taps — she mentions how three pairs of two taps meant turn left at a street junction, while a straight run of twelve taps meant turn right, and so on. I know that David Woodbridge, who I mentioned earlier, uses his Watch to give him haptic taps in a covert manner, silently telling him the time when he's counting down the minutes in a dull meeting!
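
As a small aside for developers, the Taptic Engine that makes this possible is also open to third party watchOS apps. Below is a rough Swift sketch of my own that approximates the left and right turn cues described in Molly's post using the standard WatchKit haptic types; Apple Maps uses its own patterns, so treat the tap counts and timings here as illustrative assumptions only.

```swift
import WatchKit

// A rough sketch (my own, not Apple's implementation) of signalling turns
// through the Taptic Engine on Apple Watch.
enum TurnCue {
    case left   // e.g. three pairs of two taps
    case right  // e.g. a straight run of twelve taps
}

func play(_ cue: TurnCue) {
    let device = WKInterfaceDevice.current()
    switch cue {
    case .left:
        // Three pairs of taps, with a pause between each pair.
        for pair in 0..<3 {
            for tap in 0..<2 {
                let delay = Double(pair) * 0.8 + Double(tap) * 0.2
                DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
                    device.play(.click)
                }
            }
        }
    case .right:
        // An even run of twelve taps.
        for tap in 0..<12 {
            DispatchQueue.main.asyncAfter(deadline: .now() + Double(tap) * 0.2) {
                device.play(.click)
            }
        }
    }
}
```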

A keynote slide with text ‘The Future of Accessibility’ at the top and an icon of a calendar reading 2025 followed by a photo of an iPhone scanning a lady’s face, a graphic of ARKit and a photo of a hallway with graphics overlaid on top. Text beneath reads ‘Accessibility Next’.

Stepping a few years into the future for Accessibility Next, I discussed how there are accessibility technologies available right now that are absolutely incredible, but they sit mostly in the domain of early adopters rather than fluent mainstream use. Take augmented reality, for example. The potential of Apple’s ARKit in allowing developers to create mind-blowing augmented reality experiences within the accessibility space is so great, and yet we know the most important apps here are still a short reach in front of us. I wrote about Autism and ARKit when the framework had just been released, with my predictions as to what could be achieved through its implementation. The soil here is so fertile with potential, and I cannot wait to see what grows from it, particularly as it builds on what I discussed earlier with the GOCASI model — the transition of virtual experiences into real-world implementation. Take the simple example below: through an augmented reality window we could look at a kitchen bench and be shown visual guides and a sequence of prompts to help us make all manner of meals. In the link above I show around twelve more prototype ideas for this kind of app development within the ARKit space.

Photo of a kitchen bench with text ‘Cheese Sandwich’ underneath and photos with captions overlaid on the bench as if seen in augmented reality. The photos and captions are of ‘Two Slices of Bread’, ‘Butter’ and ‘Cheese Slices’, displayed as if reading a sequence in a recipe.
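
To make the kitchen bench idea a little more concrete, here is a hedged prototype sketch in Swift using ARKit and SceneKit: it detects a horizontal surface and pins the first step of the sandwich sequence above it as floating text. The class name, step list and sizing values are my own assumptions rather than anything from the talk or from a shipping app.

```swift
import UIKit
import ARKit
import SceneKit

// A prototype sketch: when ARKit finds a horizontal surface such as a bench,
// pin the first step of a visual recipe to it.
final class RecipeGuideViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()
    let steps = ["Two slices of bread", "Butter", "Cheese slices"]

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        // Look for horizontal planes such as a bench or table top.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)
    }

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor, let step = steps.first else { return }

        // Float the first prompt a few centimetres above the detected surface.
        let text = SCNText(string: step, extrusionDepth: 0.5)
        let textNode = SCNNode(geometry: text)
        textNode.scale = SCNVector3(0.003, 0.003, 0.003)
        textNode.position = SCNVector3(0, 0.05, 0)
        node.addChildNode(textNode)
    }
}
```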

We then moved into machine learning territory with the potentials of CoreML, Apple’s framework for integrating trained machine learning models into apps. My favourite consideration for the potentials here at the moment is within the Photos app and the way it presents you with Memories. You know how every so often you will be presented with a Memory from your Photos app? It could be a collection of photos from a particular date, like New Year’s Day, or from a location, like when you last went to Sydney, or a memory collection of your Fluffy Friends, in which a selection of your dog photos has been curated. I think there could be some very interesting potentials here.

Say you have an exam this Friday. You have it in your calendar. You get a bit stressed when you have an exam coming up. And you know what you like to do when you start feeling a bit stressed? You go through your Photos to look at cat photos. It makes you feel good. Now, say your iPhone knows this, because it picks up on your emotional health patterns, your sleep and exercise patterns, in relation to what you do when you are feeling good and in control compared to what you do when you are feeling out of sorts. With machine learning whirring away in the background, what if your Photos app reads your calendar, sees that you have an exam on Friday, and in preparation, to help you feel as good as possible for it, presents you early Friday morning with a new memory full of cat photos? It anticipates what will help make you feel good, and prompts you with a helpful curated selection of photos that help you regulate before your big exam. And maybe after the exam you get a reminder to go for a run with your friends, because you did this once before and it made you feel really good. Like a private coach sitting in the background, cheering you on and giving you little prompts and reminders when you need them most. This is the potential of what good machine learning could bring to our lives.

A keynote slide containing Apple iOS icons for Photos, Health Kit, Calendar and CoreML. An equation reads Photos = Health Kit (Sleep, Exercise, Emotional Health) + Calendar (Exams, Events, Etc).
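
As a sketch of how the equation on that slide might begin to look in code, here is a speculative Swift snippet. Only the EventKit calls are real API; the stress scoring and the memory suggestion are hypothetical stand-ins for the HealthKit patterns and trained model the idea would actually need.

```swift
import EventKit

// A speculative sketch of the "Photos = Health Kit + Calendar" idea above.
let store = EKEventStore()

func checkUpcomingWeek() {
    store.requestAccess(to: .event) { granted, _ in
        guard granted else { return }

        // Pull the next seven days of calendar events.
        let start = Date()
        let end = Calendar.current.date(byAdding: .day, value: 7, to: start)!
        let predicate = store.predicateForEvents(withStart: start, end: end, calendars: nil)
        let events = store.events(matching: predicate)

        // Hypothetical: a model trained on sleep, exercise and mood data
        // scores how stressful each event tends to be for this person.
        for event in events where stressScore(for: event) > 0.7 {
            // Hypothetical: queue a curated "cat photos" style memory
            // for the morning of the stressful event.
            suggestCalmingMemory(before: event.startDate)
        }
    }
}

func stressScore(for event: EKEvent) -> Double {
    // Placeholder: imagine HealthKit patterns combined with keywords
    // like "exam" in the event title.
    return (event.title ?? "").lowercased().contains("exam") ? 0.9 : 0.1
}

func suggestCalmingMemory(before date: Date) {
    print("Surface a calming memory before \(date)")
}
```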

Discussion then turned to the inclusive developments we hope to see across our local environments in the next couple of years, particularly with regards to the double-edged sword of technologies like touch screens being used in commercial spaces such as fast food restaurants and EFTPOS machines. From one perspective, the use of touch screens in restaurants has been a powerful inclusive tool for many — I can think of many individuals who are non-verbal, and others who have significant communication anxieties, who have noted that they like selecting their food from a touch screen in the same manner as everybody else at the restaurant, rather than facing the different challenges associated with the counter experience. However, we also know that from a vision perspective many of these touch screens do not have the accessibility features required for inclusive use if you cannot see the screen, and from a physical access perspective they are often out of reach or range for many with these associated needs. This is where a design thinking perspective sourced straight out of classic universal design principles is needed from the very start — don’t build and install a touch screen device that only serves as a quick hit commercial benefit; instead, build a device that renders the commercial process accessible to absolutely everybody and witness what sort of experiences will follow. This is why personal technologies like iPhones are leading the way in accessibility, but as amazing as it is to be able to order food from an app with all the best accessibility options in place, we don’t want to take the impetus away from the public and private spaces we access across society to continue designing and implementing forward thinking, inclusive solutions for everybody.

A keynote slide with text ‘The Future of Accessibility’ at the top and an icon of a calendar reading 2050 followed by a photo of a lady wearing an EEG hairnet, a graphic of a fictional Babel Fish, and a photo of a white transport pod. Text beneath reads ‘Accessibility Future’.

This brings us to the big leap forward: what we might hope to see in accessibility in future decades. The scope for this kind of thinking is, and should be, absolutely immense. I covered a few ideas that built on previous talking points throughout the event. The one thing we don’t want to do is put the brakes on imagination in this area — we want completely unfettered and unparalleled ways of understanding inclusivity and the potentials of universal design across all aspects of our lives. For my part, I focused on two ideas: devices that automatically recognise your accessibility needs, and cognitive translation devices.

Photo of a lady with another version of her face appearing in front of her with scanned dots composing its form.

A colleague mentioned a few weeks ago that, after some recent eye surgery, she felt lucky to already understand all the accessibility features she could turn on within her iPad to be able to work and operate effectively across the day. And it’s true, there is nobody I know who understands more about the different accessibility features of the device, so she was able to turn on Speak Screen, increase the text size, and adjust colour filters and white balance, all the handy tools that made life more accessible for her. However, I thought at the time: what would have happened if she didn’t know about these accessibility features, or if she didn’t think to ask others about them or research them online? This got me thinking about the future potentials of Face ID.

We all know Face ID as the technology in the iPhone X that scans your face and grants you access to your device. It is composed of a combination of clever cameras and sensors that can map your face and recognise your individual physical features, even as they change subtly over time. Now, what if this technology was able to scan your face for the way you were receptively processing the screen in front of you — that is, what if the cameras and sensors behind Face ID were able to watch your eyes and recognise how they were trying to read text on the screen? And what if your device was then able to adjust its settings automatically based on what it recognises your eyes need — like a behavioural optometrist in your phone, tracking how you were focusing on the screen and adjusting itself accordingly, much as ambient light settings currently do for the warmth and brightness of the display? This is just one example of what automatic accessibility could do: respond to the needs of the user without the user needing to find or adjust any accessibility settings. That feels like a very good goal to work towards.
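
Here is a speculative Swift sketch of that idea, using the face tracking that the same TrueDepth camera exposes through ARKit. It watches for sustained squinting and responds by adjusting the app's own presentation; the threshold value and the adjustLegibility step are assumptions of mine, and Face ID itself exposes no such hooks today.

```swift
import ARKit

// A speculative sketch of "automatic accessibility": face tracking on the
// TrueDepth camera (the hardware Face ID uses) noticing strain and nudging
// the app's legibility in response.
final class AutoLegibilityController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }

        // Blend shapes report how strongly each facial movement is expressed, 0...1.
        let squintLeft = face.blendShapes[.eyeSquintLeft]?.doubleValue ?? 0
        let squintRight = face.blendShapes[.eyeSquintRight]?.doubleValue ?? 0

        // Assumed threshold: a sustained squint suggests the text is hard to read.
        if (squintLeft + squintRight) / 2 > 0.6 {
            adjustLegibility()
        }
    }

    func adjustLegibility() {
        // Hypothetical response: a real app might raise its own preferred
        // content size or contrast, rather than system-wide settings.
        print("Reader appears to be straining; increasing in-app text size.")
    }
}
```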

Keynote slide with text ‘Babel Fish Prototype #3’, with a graphic beneath of a head thinking of a bike, followed by a graphic showing the letter A connected by a wiggly line to the letter B, followed by a graphic of a Babel Fish, followed by a graphic showing the letter A connected by a straight line to the letter B, followed by a graphic of another head thinking of a bike.

I then moved into cognitive translation territory, using our old friend the Babel Fish from Douglas Adams’ ‘The Hitchhiker’s Guide to the Galaxy’. The Babel Fish is described as an organic universal translator that, when you stick it in your ear, allows you to understand anything that anybody else is saying in any form of language: “the speech you hear decodes the brain wave matrix” of your communication partner. Now, there is much being done in this translation field at the moment, to develop headphones and technologies that can pick up another language and translate it into your own in a natural and automatic manner, but let’s think bigger than this, and a little further into the future.

Prototype 1: What if a Babel Fish technology, which I’ll refer to from now on as Cognitive Translation Technology (CTT), could pick up the thoughts that someone was having and translate them purposefully into communicable language? For example, an individual who is unable to verbally communicate and who currently uses other means of communication could choose to use CTT, which may be an electroencephalography (EEG) device worn on the head, like current devices that read and translate brain waves into functions for operating prosthetic limbs and similar. The CTT would recognise the thoughts and the symbolic language the individual desires to share with their communication partner and would translate this into expressive language.

Prototype 2: But then, does this language need to be expressed and received audibly? What if the communication partner was able to receive this language through their own CTT device, so that the ideas from one mind don’t need to be translated into verbal language before they reach the other mind? They could instead remain as some form of symbolic internal language that does not need to go through the process of verbalisation to be expressed and received successfully.

Prototype 3: This is the one I’m really interested in. Let’s say the communication process is not about verbal capacity, but rather about cognitive capacity. One day soon I am going to write up a whole piece just on cognitive spectrums and the ways I hope this spectrum will be further understood and interpreted in the future, but for now, let’s say we are observing two individuals wanting to communicate with each other. Let’s say it is in a work environment — one of the individuals has a moderate intellectual disability and they are being asked a question that contains multiple steps, abstract concepts and a vocabulary of terms that they have not encountered previously in this situation. Now, let’s say a CTT could take those questions, take the complicated language and symbolic understandings within them, and smooth it all out into a more readily understood version that is then received by the individual with the moderate intellectual disability.

This is really what I am envisioning when I think of a cognitive translator: something that is able to make all concepts understood in the most accessible ways possible. It could do the same with social concepts; it could do the same with absolutely any concept. Take two individuals who are discussing environmental policy, one who is well versed in policy and the other who is not. Where the individual who understands the policy well does not recognise the gap in understanding that their communication partner has on the subject matter, a CTT could take the vocabulary, the concepts and the context and history of the subject matter and translate it into a simpler and more accessible version that is readily received and understood.

Keynote slide with a graphic of a head thinking of a bike, followed by a graphic showing the letter A connected by a wiggly line to the letter B, followed by a graphic of a Babel Fish. Text above and beneath the wiggly line reads ‘Multi-Step Instructions’, ‘Abstract Concepts’, ‘Advanced Vocabulary’ and ‘Social Understanding’.
Keynote slide with a graphic of a Babel Fish, followed by a graphic showing the letter A connected by a straight line to the letter B, followed by a graphic of a head thinking of a bike. Text above and beneath the straight line reads ‘Simple Directions’, ‘Concrete Visualisations’, ‘Foundational Language’ and ‘Emotional Cues’.
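
Purely to make the two slides above concrete, here is a toy Swift sketch: nothing more than the feature-by-feature mapping a CTT would need to perform, written out as data. There is no real CTT, and every name in the snippet is my own illustration.

```swift
// A toy model of the cognitive translation shown on the two slides above:
// each feature of a complex message maps to a more accessible counterpart.
enum ComplexFeature: String, CaseIterable {
    case multiStepInstructions = "Multi-Step Instructions"
    case abstractConcepts = "Abstract Concepts"
    case advancedVocabulary = "Advanced Vocabulary"
    case socialUnderstanding = "Social Understanding"
}

enum AccessibleFeature: String {
    case simpleDirections = "Simple Directions"
    case concreteVisualisations = "Concrete Visualisations"
    case foundationalLanguage = "Foundational Language"
    case emotionalCues = "Emotional Cues"
}

// The translation a CTT would perform, feature by feature.
let cognitiveTranslation: [ComplexFeature: AccessibleFeature] = [
    .multiStepInstructions: .simpleDirections,
    .abstractConcepts: .concreteVisualisations,
    .advancedVocabulary: .foundationalLanguage,
    .socialUnderstanding: .emotionalCues
]

for feature in ComplexFeature.allCases {
    print("\(feature.rawValue) -> \(cognitiveTranslation[feature]!.rawValue)")
}
```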

It is fascinating to consider the potentials of this kind of thinking. Think about online forums like ‘Change My View’ and similar, which encourage participants to share a view on a topic, such as a political or ideological position, and then request that others help them see the issue from such a different perspective that they are able to change their view and appreciate the alternative. What sort of philosophical architecture would a CTT require to be able to do this: to have somebody communicate a concept that is completely foreign to the communication partner, a concept that they could not otherwise empathise with or see from the perspective of their partner, yet through CTT translate enough of the ideas, the context, the experiences and the understanding so as to be properly and empathically understood by the receiving partner? And for this process to then go back and forth between them in ever evolving and recognisable ways? I get the feeling this is the kind of thing Jaron Lanier was thinking about with the potentials of virtual reality to connect people in an empathy machine environment. It feels like this might be a critical tool not just for accessibility but for the survival of civilisation, as we navigate the challenges that differing viewpoints and global connectivity present us with on a daily basis.

Slide that says ‘The Future of Accessibility’, Craig Smith, with a graphic of a rectangle with many smaller squares inside, spiralling into each other in a Fibonacci sequence.

Thank you for reading this recount of my talk on this topic. I hope there has been something interesting for you to consider above. As I noted earlier in the piece, the accessibility field is so huge that a talk like this must in some ways be framed by the experiences and focus areas of the person giving it, hence my own interests in autism, cognition, strengths based education and technology guided a lot of the concepts I covered. I’m always eager to connect with others who share interests in these areas and beyond. You can link up with me on Twitter at @wrenasmir, or find out more about my work at http://about.me/craig-smith


Craig Smith

Project Manager, Autism Educator, Learning Designer, Sound Artist, Author + Creator.