Artificial intelligence system uses transparent, human-like reasoning to solve problems

Author: Kylie Foy

This article describes the Transparency by Design Network (TbD-Net), developed at MIT Lincoln Laboratory, which recognises objects in images using human-like, step-by-step reasoning. The researchers claim the algorithm can outperform other visual recognition software because its reasoning process is visible, allowing humans to see where and how it makes mistakes and to correct them. They argue that other deep neural networks have grown so complex that it is practically impossible to trace how they transform an input into an output: they have become “black boxes”. By making the inner workings of TbD-Net transparent, humans can interpret the AI’s results and understand its reasoning process. This feature is particularly valuable if AI is to be employed alongside humans to help solve complex real-world problems.
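To illustrate the general idea of transparent, modular reasoning (not the authors' actual code), the sketch below composes a question into a chain of small modules and keeps every intermediate attention mask so a human can inspect where the model "looked" at each step. The module names, grid size, and toy feature map are illustrative assumptions.

```python
"""Conceptual sketch of modular, inspectable visual reasoning.

This is a toy illustration in the spirit of TbD-Net, not the published
implementation: each module refines an attention mask over a small grid,
and every intermediate mask is recorded for human inspection.
"""
import numpy as np


class AttendColor:
    """Toy module: keep attention only on cells whose 'colour' matches a target channel."""
    def __init__(self, color_channel):
        self.color_channel = color_channel

    def __call__(self, features, attention):
        match = (features.argmax(axis=-1) == self.color_channel).astype(float)
        return attention * match  # refine the incoming attention mask


class AttendLeftOf:
    """Toy module: shift attention one cell to the left of the attended regions."""
    def __call__(self, features, attention):
        shifted = np.zeros_like(attention)
        shifted[:, :-1] = attention[:, 1:]
        return shifted


def run_program(program, features):
    """Run a chain of modules, recording each intermediate attention mask."""
    attention = np.ones(features.shape[:2])  # start by attending everywhere
    trace = [("input", attention.copy())]
    for name, module in program:
        attention = module(features, attention)
        trace.append((name, attention.copy()))  # step-by-step record for inspection
    return attention, trace


if __name__ == "__main__":
    # 4x4 grid of cells, each with a one-hot "colour" over 3 channels (toy data).
    rng = np.random.default_rng(0)
    features = np.eye(3)[rng.integers(0, 3, size=(4, 4))]

    # "Find the cell left of the red object" expressed as a module chain.
    program = [("attend_red", AttendColor(color_channel=0)),
               ("left_of", AttendLeftOf())]

    _, trace = run_program(program, features)
    for name, mask in trace:
        print(name)
        print(mask, "\n")  # each mask could be rendered as a heat map for inspection
```

Printing (or plotting) each recorded mask is what makes the reasoning inspectable: a human can see which cells were attended after every step and spot exactly where a wrong answer went off track.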

Read article here.