A brief history of chatbots
The origins of modern chatbots can be traced back to 1964, when Joseph Weizenbaum at the Massachusetts Institute of Technology (MIT) developed a chatbot called ELIZA. It used simple conversational rules and rephrased most of what users said in order to simulate a Rogerian therapist. While it showed that naive users could be fooled into believing they were talking to an actual therapist, the system itself did not understand the user's problem. Following this, in 1991, the Loebner Prize was instituted to encourage AI researchers to build chatbots that could beat the Turing test and thereby advance the state of AI. Although no chatbot beat the test until 2014, many notable chatbots, including ALICE, JabberWacky, Rose, and Mitsuku, won prizes in its constrained challenges. Then, in 2014, in a Turing test competition held to mark the 60th anniversary of Alan Turing's death, a chatbot called Eugene Goostman, portraying a 13-year-old boy, managed to fool 33% of the judges and thereby beat the test.

Artificial Intelligence Markup Language (AIML) and ChatScript were developed as ways to script the knowledge and conversational content of most of these chatbots. Scripts written in these scripting languages can then be fed into interpreters to create conversational behavior. Chatbots developed to beat the Turing test were largely chatty, with just one objective: to beat the test. Many did not consider this an advancement in AI, or a step toward building useful conversational assistants.
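To give a flavor of this style of scripting, here is a minimal sketch of an AIML file (the patterns and responses are illustrative, not taken from any particular bot). Each category pairs an input pattern with a template response, and an AIML interpreter matches the user's utterance against the patterns to produce a reply:

```xml
<aiml version="2.0">
  <!-- A single category: match the user's input pattern
       and reply with the template text. -->
  <category>
    <pattern>HELLO</pattern>
    <template>Hi there! How can I help you?</template>
  </category>
  <!-- Wildcards let one rule cover many inputs; <star/>
       echoes back whatever matched the * wildcard. -->
  <category>
    <pattern>MY NAME IS *</pattern>
    <template>Nice to meet you, <star/>.</template>
  </category>
</aiml>
```

A bot's entire conversational repertoire is built up from thousands of such hand-written categories, which is why these systems could appear fluent without any real understanding.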
On the other hand, research in artificial intelligence, specifically in machine learning and natural language processing, gave rise to various conversational interfaces, such as question answering systems, natural language interfaces to databases, and spoken dialogue systems. Unlike chatbots built to beat the Turing test, these systems had very clear objectives. Question answering systems processed natural language questions and found answers in unstructured text datasets. Natural Language Interfaces to Database Systems (NLIDBS) were interfaces to large SQL databases: they interpreted database queries posed in a natural language such as English, converted them into SQL, and returned the results as a response. Spoken Dialogue Systems (SDS) could maintain contextual conversations with users to handle conversational tasks such as booking tickets, controlling other systems, and tutoring learners. These were the precursors of modern chatbots and conversational interfaces.