Fools readily declare themselves by opening their mouths
There has been a lot of discussion of chatbots recently, thanks to technological advances in Natural Language Processing and Deep Learning. However, there are still a great many limitations and caveats, and the experience of interacting with bots outside of a narrow context can often be underwhelming.
Chatbots can use a repository of predefined responses, attempting to choose an appropriate response based upon analysis of the input and its context. This pattern-matching retrieval method is limited to the contents of its corpus of interactions.
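As a rough illustration of the idea, a retrieval-based bot can be sketched in a few lines of Python. The corpus, similarity measure and threshold below are illustrative assumptions rather than any particular product's implementation.

```python
# A toy retrieval-based bot: match the input against a small corpus of
# predefined patterns and return the associated canned response. The corpus,
# similarity measure and threshold are illustrative assumptions only.
from difflib import SequenceMatcher

CORPUS = {
    "what are your opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "how do i reset my password": "You can reset it from the login page via 'Forgot password'.",
    "where is my order": "Please give me your order number and I will check its status.",
}

def retrieve_response(user_input: str, threshold: float = 0.5) -> str:
    """Pick the canned response whose pattern is most similar to the input."""
    text = user_input.lower().strip("?!. ")
    best_pattern = max(CORPUS, key=lambda p: SequenceMatcher(None, text, p).ratio())
    if SequenceMatcher(None, text, best_pattern).ratio() < threshold:
        # Anything outside the corpus can only trigger a generic fallback.
        return "Sorry, I didn't understand that. Could you rephrase?"
    return CORPUS[best_pattern]

print(retrieve_response("What are your opening hours?"))
print(retrieve_response("Tell me a joke"))
```

Anything that falls outside the corpus, like the second query above, can only ever produce a generic fallback reply.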
Generative conversational models, however, are capable of creating original content organically. They are based upon technology developed for machine translation, in order to better cope with situations where the meaning of a colloquial phrase may be figurative rather than literal; the translation, though, is between an input and an output rather than between languages. Such systems are generally better able to recognise the semantic similarity between questions such as 'what age are you?' and 'how old are you?'.
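To make that last point concrete: the two questions share almost no surface tokens, so pattern matching scores them poorly, whereas a learned sentence encoder places them close together in vector space. The sketch below assumes the sentence-transformers library and its small pretrained all-MiniLM-L6-v2 model; it is an illustration of semantic similarity, not a generative model itself.

```python
# Surface overlap vs. learned semantic similarity for two paraphrased questions.
# Assumes the sentence-transformers library is installed; the model name is an
# illustrative choice, not a requirement.
from sentence_transformers import SentenceTransformer, util

a, b = "what age are you?", "how old are you?"

# Only "are" and "you" are shared at the token level.
shared = set(a.lower().strip("?").split()) & set(b.lower().strip("?").split())
print("shared tokens:", shared)

# A pretrained sentence encoder maps the two questions close together.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode([a, b], convert_to_tensor=True)
print("cosine similarity:", float(util.cos_sim(embeddings[0], embeddings[1])))
```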
Generative techniques are much more challenging and are not yet in general commercial deployment, though they are the main focus of current research. They also require huge amounts of training data.
Retrieval methods are easier to develop and, though less sophisticated in the range of potential responses, they are less likely to make grammatical errors. However, retrieval methods cannot easily follow the context of a conversation, refer back to previously mentioned information, or recall the physical and demographic circumstances of the conversation partner.
Chatbots are also challenged differently by the length and scope of a conversation. Generally, the longer a conversation continues, the more likely it is to break down. Longer conversations also require more information to be remembered from earlier turns, or even from previous sessions. Customer support typically involves long, branching conversations, whereas quick commercial interactions may require only one or two messages.
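One simple way to picture the memory problem is a rolling context window: each new turn is appended to a short buffer of recent turns, and anything older than the buffer is forgotten. The sketch below is a deliberately naive illustration of that trade-off, not how any particular chatbot actually manages state; the buffer size and turn format are assumptions.

```python
# A naive rolling context window: only the last few turns survive, so anything
# mentioned earlier in a long conversation is silently forgotten. Buffer size
# and turn format are illustrative assumptions.
from collections import deque

class ConversationMemory:
    def __init__(self, max_turns: int = 4):
        self.turns = deque(maxlen=max_turns)  # older turns fall off the front

    def add(self, speaker: str, text: str) -> None:
        self.turns.append(f"{speaker}: {text}")

    def context(self) -> str:
        """The context a model would actually see when forming its next reply."""
        return "\n".join(self.turns)

memory = ConversationMemory(max_turns=4)
memory.add("user", "My name is Alice and my order number is 1234.")
memory.add("bot", "Thanks Alice, let me look that up.")
memory.add("user", "Also, can you change the delivery address?")
memory.add("bot", "Sure, what is the new address?")
memory.add("user", "Actually, what was my order number again?")
print(memory.context())  # the turn containing "1234" has already been dropped
```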
The scope or domain of a conversation may be limited to a particular task or topic, or may be broader and open-ended, with no particular goal or intention. Taking a conversation in potentially any direction (open domain) is naturally very challenging. Generally, commercial chatbots are focussed exclusively on providing assistive interactions in a narrowly defined area.
Personality is another consideration: conversational choices may be weighted according to virtual personality vectors, such as agreeableness or openness to experience. This is still a very experimental area, and there is not much sophistication in these processes at present, particularly as training data is normally compiled from a wide range of human personalities.
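As a rough sketch of what such weighting might look like, imagine each candidate reply carrying a hypothetical trait profile, with the bot's own personality vector biasing the final choice. The traits, scores and dot-product scheme below are invented purely for illustration.

```python
# A toy personality-weighted chooser: candidate replies carry hypothetical
# trait scores, and the bot's personality vector biases which one is picked.
# Traits, scores and the dot-product weighting are illustrative assumptions.
PERSONALITY = {"agreeableness": 0.9, "openness": 0.3}

CANDIDATES = [
    ("Absolutely, I'd be happy to help with that.",
     {"agreeableness": 0.9, "openness": 0.2}),
    ("Have you considered trying something completely different instead?",
     {"agreeableness": 0.3, "openness": 0.9}),
]

def personality_score(traits: dict) -> float:
    """Dot product of a candidate's trait profile with the bot's personality."""
    return sum(PERSONALITY.get(trait, 0.0) * value for trait, value in traits.items())

reply, _ = max(CANDIDATES, key=lambda candidate: personality_score(candidate[1]))
print(reply)  # the highly agreeable bot picks the accommodating reply
```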
Replika takes a somewhat different approach, attempting to collate a variety of information from a single user in order to generate a virtual personality based upon their conversation style and typical vocabulary. The same approach makes it possible to compile a virtual personality from information left behind by a person who is now deceased, creating a digital echo of them that one may yet converse with. Whether such use of the technology does more good than harm is open to interpretation and quite controversial: on one hand, having a virtual avatar of a loved one could assist with the grieving process as one learns to let go; on the other, it could lead to complications or obsessions for some people.
In conclusion, chatbots are useful tools when applied to short conversations of limited scope. Truly open-domain conversations of any length, however, would require something akin to Artificial General Intelligence to handle all of the possible scenarios. Nevertheless, recent progress in areas such as Generative Conversational Models may lead to exciting steps in that direction in the medium term.