With increasing talk of human assistants, shop staff, and manual workers soon being made obsolete by sleek, omnipresent applications of artificial intelligence, on March 12 an expert panel convened to bring a fresh perspective to the debate. Put simply: what will this augmentation of everyday life mean when 15% of the population do not think and perceive in the ways that AI is taught they should?
Hosted by FND founder Lucy Hobbs, neurodivergent panellists from the worlds of data science, entrepreneurship, tech and neurodiversity each shared their thoughts and forecasts, and took questions from an engaged audience.
“As we head into the fourth industrial revolution, AI, and new unknown territories we will need people who are born to think differently, to solve the most difficult problems” – Lucy Hobbs, FND founder
Data analyst Michael Barton began by describing himself as a ‘stereotypical autistic’ – a middle-class white man who was non-verbal in his early childhood. After a brief history of how this image was created by the narrow scope of research into autism, he explained how past failures in AI programs have been caused by their creators’ unintentional biases: a facial-recognition algorithm that failed to recognize black faces, for example, struggled because it had been ‘fed’ very few non-white faces in the data used to create it. Increased use of AI therefore risks disadvantaging the neurodivergent community in areas such as the job market: because only a small percentage of neurodivergent people are currently in paid employment, a hiring or screening AI trained on existing data would discriminate against them and search for candidates who fit the current data set.
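The mechanism Barton describes can be made concrete with a toy example. The sketch below (pure Python; all features, names and numbers are hypothetical, invented for illustration) scores candidates by how closely they resemble a company’s past hires. Because the ‘training data’ contains only one style of person, anyone who differs from that pattern is ranked lower – the same failure mode as the facial-recognition case.

```python
# Minimal sketch of training-data bias (hypothetical data, standard library only).
# A screening model scores candidates by similarity to past hires; since the
# past hires are homogeneous, a divergent candidate scores poorly regardless
# of merit.
from statistics import mean

# Toy feature vectors for past hires. Feature order (all invented):
# [eye_contact, small_talk, focus_depth]
past_hires = [
    [0.9, 0.8, 0.5],
    [0.8, 0.9, 0.4],
    [0.9, 0.7, 0.6],
]

def similarity_score(candidate):
    """Score = negative mean squared distance to the 'typical' past hire."""
    centroid = [mean(col) for col in zip(*past_hires)]
    return -mean((c - m) ** 2 for c, m in zip(candidate, centroid))

typical_candidate   = [0.85, 0.80, 0.50]  # resembles the training data
divergent_candidate = [0.30, 0.20, 0.95]  # different style, deeper focus

# The model prefers whoever resembles the data it was 'fed',
# not whoever might actually be the best fit.
assert similarity_score(typical_candidate) > similarity_score(divergent_candidate)
```

Nothing in the score measures job performance; it only measures resemblance to the historical data, which is exactly the risk Barton raises.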
“It’s easy to hire someone that you want to go to the pub with… rather than someone who may be the best fit” – Michael Barton, Data Analyst and Autism Speaker
Marton Gasper, a product manager and enthusiast of the AI chatbot scene, stressed the importance of not letting a solution’s ‘coolness’ outweigh its usefulness. Too often, he said, companies want a chatbot and get a functional but not truly helpful result, because there is no human at the center of the project. Rather than relying purely on machine learning, a chatbot can be taught effectively by having a human staff member communicate with some of the company’s users through a chat interface; the bot then learns how people actually want to communicate and adapts to that, rather than to how they ‘should’ communicate. When it comes to meeting a client’s needs, especially in forward-thinking tech and creative spaces, the panel agreed that neurodivergent people’s knack for breaking big pictures down into smaller, actionable tasks is a valuable asset.
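The human-in-the-loop idea Gasper describes can be sketched in a few lines. In this toy version (all messages and function names are hypothetical), a human agent’s real replies are logged as training examples, so the bot learns from how users actually phrase things; anything it has not seen is handed back to a human rather than answered badly.

```python
# Minimal sketch of human-in-the-loop chatbot training (hypothetical data,
# standard library only). Real systems would generalise across phrasings;
# here an exact-match lookup keeps the idea visible.
training_pairs = {}

def log_human_reply(user_message, human_reply):
    # Each real human-handled conversation becomes a training example,
    # keyed by the user's own wording (normalised for case/whitespace).
    training_pairs[user_message.lower().strip()] = human_reply

def bot_reply(user_message):
    # The bot answers only from what it has seen humans handle;
    # unknown phrasings are escalated back to a person (and then logged).
    return training_pairs.get(
        user_message.lower().strip(),
        "Let me hand you over to a colleague.",
    )

# A human agent handles a real query once...
log_human_reply("Where's my order??", "It shipped this morning, tracking is on its way.")

# ...and the bot can now handle that phrasing, but escalates anything new.
print(bot_reply("where's my order??"))
print(bot_reply("cancel my account"))
```

The design choice matters: the bot’s vocabulary grows from genuine user messages, not from a script of how users ‘should’ talk, which is the point Gasper was making.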
“I want to inspire people to own whoever they are and whatever they are with pride. People should feel safe and secure in their work and they are not more or less, they just are” – Marton Gasper, Product Management Consultant and Neurodiversity Advocate
Also at the ‘bleeding edge’ of data technology sits Alex Loveless, who by his own admission “spend[s] a lot of time thinking about thinking”. Echoing the earlier discussion, Alex highlighted that an AI program (and the average person has hundreds of them on their smartphone) can ultimately only be as good as the data used to develop it. Since most AIs are developed for specialised tasks, and neurodivergent people are characteristically individuals of intense focus and specialism, it would make sense for future AIs to be modeled on an appropriate type of human brain: an ADHD brain, say, for programs that must react quickly to environmental stimuli. He also reminded the audience that, given the likely higher representation of neurodivergent people in IT and computing careers, there is a “reasonable” chance that a future, advanced ‘general’ AI might just end up as a reflection of the ND mind anyway.
“We’re a very long way from having to worry about [AI programs] taking over our lives and our brains. In the meantime, they’re bloody useful tools and we need to put them in the right environment, in the same way that we need to be put in the right environment” – Alex Loveless, Lead Data Scientist and Artist
Closing the webinar was the renowned Professor Amanda Kirby, a neurodiversity expert, doctor, and all-round positive thinker. With waiting lists for the diagnosis and support of ND conditions sometimes stretching into the thousands, she said that while technology might bring opportunities for progress, it is vital that organisations avoid over-engineering their practices and thereby excluding people with lower levels of technical literacy. At the same time, she called for more scrutiny and flexibility around established workplace practices: does a dyslexic person applying for promotion really need to write and give a presentation when the job doesn’t call for it?
“Challenge ‘facts’. Challenge the ‘reality’ of the data” – Professor Amanda Kirby, CEO DO-IT Solutions and Neurodiversity Campaigner
Words by Bonny Hazelwood