Brought to you by BCS Hampshire, this talk will articulate the challenges of testing AI as it reaches or surpasses human capability, especially as systems gain self-managing and self-adapting capabilities (e.g. self-configuring, self-healing, self-optimizing and self-protecting capabilities). The talk draws upon current practices and examples of Self* activity, such as driverless vehicles, to bring out the challenges.
Speaker: Dr Carl Adams, Mobi Publishing Ltd, Chichester, UK
Programme of events
6pm Hampshire Branch AGM, then presentation
7.30pm Estimated end
AI offers huge potential across all areas of human activity: from personal support, to helping businesses run efficiently and deliver products and services, to helping governments and societies manage key resources and infrastructure. The last decade or so has seen many increases in AI capabilities across the breadth and depth of human activity. The ‘AI-Human singularities’, that of AI reaching or surpassing human capability, have been reached or are on the near horizon for many attributes of human capability.
One interesting aspect of AI activity is the emergence and development of sophisticated autonomic systems which have the ability to operate autonomously in remote, dynamic environments with limited intervention from human operators: effectively, systems doing more of the activity that is usually associated with the abilities of humans. Ganek and Corbi discussed the ‘dawning of the autonomic computing era’, describing the main attributes of autonomic computing systems as being self-managing systems with self-configuring, self-healing, self-optimizing and self-protecting capabilities, collectively the Self* capabilities of emerging AI. These Self* capabilities are pushing AI into areas beyond the capabilities, knowledge and comprehension of humans (a poignant example being two Facebook AI chatbots talking to each other in a language they had created themselves, back in mid-2017).
An area of AI activity that has been lagging behind in research and practice is testing. The focus of AI testing has predominantly been on getting a specific AI function ‘to work’, such as showing a newly generated algorithm to be better than previous versions. When stacked up against general software testing principles and standards (such as ISO/IEC/IEEE 29119 Software Testing), AI testing seems a little myopic, lacking the rigour and depth usually associated with complex and mission-critical software development. This talk will articulate the challenges of testing AI with Self* and self-adaptation capabilities. The talk draws upon current practices and examples of Self* activity, such as driverless vehicles, to bring out the challenges.
Dr Carl Adams is currently the CEO, Lead Researcher and Editor for Mobi Publishing Ltd, a research and digital innovation SME. Significant current projects include work on DataCubes (part of the multi-million pound CommonSensing project) to support risk reduction in remote island states in the Pacific, a related project applying SPRISM to help articulate and capture perceptions of risk, and a project applying SPRISM back to the health care sector. His expertise covers digital innovation and the impact of digital and intelligent technologies. Before joining Mobi Publishing Ltd, Carl worked in higher education for over 20 years, where he had an active teaching and research career. He has over 130 peer-reviewed publications, several book chapters and a few books. His current research interests cover the ‘AI-Human singularities’, that of AI reaching or surpassing human capability, and how to bring such developments within a robust testing frame.