Artificial Intelligence (AI) is the driving force behind most of today’s technology. Once considered science fiction, it is now a remarkable reality. Although computer science is not new, today’s revolutionary Deep Learning, built on artificial neural networks, is the result of achievements scientists have accumulated over the years.
As far back as 1637, scientists and philosophers such as Descartes foresaw machines that would be able to think and make decisions; machines that could one day learn to carry out specific tasks, even adapting to all sorts of jobs. However, they could not envisage the possibility that such machines would communicate and talk like humans. By all means, these thinkers made an enormous contribution and set the pace for later generations to take this fundamental research further.
The term Artificial Intelligence was coined by John McCarthy in 1956, when he initiated the academic exploration and development of thinking machines. Natural language processing, neural networks and computer vision all emerged in those early days.
The experimental first steps of chatbots
As the years went by, AI evolved. Much later, personal assistants such as Siri (launched in 2011) and Alexa (2014) introduced new ways of communicating. Alexa is capable of voice interaction and can connect with a range of smart devices, acting as a home-automation hub, whereas Siri responds to a wide range of user commands. The software adapts to users’ language, searches and preferences with continued use.
The real landmark, however, was ELIZA, which arrived in 1966: an experimental computer program demonstrating simple natural language processing. ELIZA used a pattern-matching and substitution methodology that gave users the impression the program understood their conversation. Could that be possible?
What ELIZA did was listen to what you had to say, break your sentence down in a rudimentary way, and then ask a question somehow related to it. She was not capable of learning from her conversations with humans, but she paved the way for further efforts to break down the communication barrier between people and machines.
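The pattern-matching-and-substitution idea can be sketched in a few lines. This is a minimal, illustrative responder, not the original ELIZA script; the rules and replies below are invented for the example:

```python
import re

# Each rule pairs a regex pattern with a reply template; text captured by
# the pattern is substituted into the reply, ELIZA-style.
RULES = [
    (re.compile(r".*\bI need (.*)", re.IGNORECASE),
     "Why do you need {0}?"),
    (re.compile(r".*\bI am (.*)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r".*\bmy (\w+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]

def respond(sentence: str) -> str:
    """Return the reply for the first matching rule, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I need a holiday"))  # -> Why do you need a holiday?
print(respond("It rained today"))   # -> Please go on.
```

Notice that nothing here understands language: the program only reflects the user’s own words back, which is precisely why ELIZA felt convincing without learning anything.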
Machines also learn through repetition, much as humans do. Most devices today use algorithms to solve problems, and Machine Learning (ML) began to take an essential place in technology, although it is more about data processing than programming. Machines have not been as successful at learning as we are; however, faster computers that enable smarter algorithms now point to a promising future for ML.
In 1988 came the statistical approach to automated translation between languages, initially French and English. The target now was designing programs that determine the probability of various outcomes from data, mimicking the cognitive processes of the human brain. These programs form the basis of current ML.
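The core idea of that statistical approach can be shown in miniature: among several candidate translations, pick the one the data says is most probable. The candidates and probabilities below are made up purely for illustration:

```python
# Toy sketch of probability-driven translation: for a French phrase,
# the model has learned (from data) how likely each English rendering is.
# These numbers are invented for the example, not real model output.
candidates = {
    "the house": 0.6,
    "the home": 0.3,
    "a house": 0.1,
}

# Choose the candidate with the highest estimated probability.
best = max(candidates, key=candidates.get)
print(best)  # -> the house
```

Real systems score millions of candidates with probabilities estimated from parallel text, but the decision rule is the same: let the data, not hand-written grammar rules, pick the output.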
The beginning of the Internet
In 1991, researcher Tim Berners-Lee placed the world's first website online and made the Hypertext Transfer Protocol (HTTP) public. HTTP is an application protocol for distributed, collaborative hypermedia information systems: the basis of data communication for the World Wide Web, where hypertext documents include hyperlinks that allow users to jump to other resources easily.
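An HTTP exchange is, at bottom, just structured text sent over a connection. Here is a sketch of what an early-style request and response look like on the wire (the path and document body are invented for illustration; no network access is needed):

```python
# What a client sends to fetch a hypertext document over HTTP/1.0.
request = (
    "GET /index.html HTTP/1.0\r\n"   # method, resource, protocol version
    "Host: info.cern.ch\r\n"         # which site we want
    "\r\n"                           # blank line ends the headers
)

# The kind of reply a server returns: status line, headers, blank line,
# then the hypertext document itself, complete with a hyperlink.
response = (
    "HTTP/1.0 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    '<a href="page2.html">A hyperlink to another document</a>'
)

# The status line tells the client whether the request succeeded.
status_line = response.split("\r\n", 1)[0]
print(status_line)  # -> HTTP/1.0 200 OK
```

The hyperlink in the body is the crucial piece: each document points at others, and following those links is what turned isolated servers into a web.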
No doubt the World Wide Web was the trigger for society to plug into the online world. Soon, masses of people around the globe were connected, producing and sharing data at an incredible velocity, which plays to one of AI's strengths: calculating every possible option at high speed. Computers were quickly evolving into highly competent machines in domains humans had long dominated.
Autonomous vehicles started to appear in 2005, and by 2007, mock urban environments were being built to test whether they could manage traffic regulations and other moving vehicles.
In 2011, IBM's cognitive-computing engine Watson took part in the quiz show Jeopardy!, where, to everyone's surprise, the computer defeated its human opponents.
Deep Learning (DL) came next, from researchers at Stanford and Google in 2012: large-scale unsupervised learning of high-level features, built on multilayer neural nets, which removes the expensive and time-consuming task of manually labelling data. It would speed up the tempo of AI's maturation and open the door to building machines that could do work which, until then, only humans could do. The researchers considered their system highly competent at identifying photos.
The model enabled an artificial network of around one billion connections to identify an object in visual data; on that task, machines began to rival humans. Although it was a significant step towards building an "artificial brain," there was still some way to go: the human brain's neural network is thought to comprise on the order of 100 trillion synaptic connections.
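Each of those billion connections is a weighted link between simple units. A single artificial neuron, the building block of the multilayer nets described above, can be sketched in a few lines (the weights here are arbitrary illustration values, not trained ones):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Two inputs, two connection weights, one bias -- illustrative values only.
output = neuron([0.5, 0.8], [0.4, -0.2], bias=0.1)
print(round(output, 3))  # -> 0.535
```

A deep network stacks layers of these units, and "learning" means adjusting the billions of weights so that useful high-level features emerge, which is exactly what the unsupervised approach automated.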
No doubt 2018 marked a critical breakthrough, with the launch of Waymo's (Google's) self-driving taxi service in Phoenix, Arizona: the first commercial autonomous-vehicle-hire service. Many members of the community pay to be transported to their schools or workplaces, as long as they stay within a roughly 100-square-mile zone.
A further note: the capacity to store and process data is what drives the most significant improvements in ML, namely visibility, speed and accuracy.
For now, a human operator must still ride in every autonomous vehicle, whether to take control in an emergency or to monitor its performance. Even so, self-driving cars will soon be an everyday reality for ordinary people. The future is here!
Dave Food
Prophetic Technology