This is part II of our previous post, where we covered the methods and the theory behind the comparison of learning algorithms. In this post, we dive into our experiment results and conclusions. As most of the models are built and optimised for English, we started our testing with English data. In our English experiment, we tested BERT, DistilBERT, RoBERTa, XLM, XLNet, ULMFiT and an MLP. When we increased the amount of training data (i.e. a 90:10 vs a 10:90 training:test ratio), the prediction accuracy improved by 20% on average. The most accurate models RoBERTa
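To illustrate the training:test ratio effect, here is a minimal sketch of the kind of comparison we ran. It is not our actual benchmark: it stands in a logistic-regression baseline on a synthetic dataset for the transformer models, and the dataset and numbers are purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def accuracy_for_split(train_frac, seed=0):
    """Train a baseline classifier with a given train fraction and return test accuracy."""
    # Synthetic stand-in data; in the real experiment this was labelled text.
    X, y = make_classification(n_samples=2000, n_features=40,
                               n_informative=10, random_state=seed)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_frac, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# Same model, two split ratios: more training data generally helps.
print("10:90 split accuracy:", accuracy_for_split(0.1))
print("90:10 split accuracy:", accuracy_for_split(0.9))
```

The same loop, swapping in each of the models above, is all it takes to produce the per-model comparison.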
The last decades have been generous to the development of AI, machine learning and all things computer-related. We know that hardware capabilities roughly double every two years, and recent findings suggest the pace is even quicker and continues to accelerate. But what about the software side? We have been in the conversational AI business and research for five years now. With the rise of AI adoption, the field of machine learning algorithms has grown more diverse. To see how the algorithms side of AI has evolved, we decided to test out all of
Excited to share the news that we’re now one of the first to have integrated AI virtual assistants and live chat seamlessly into one coherent solution. We now have two products for our enterprise customers. First, there is AlphaAI, our own NLP engine and virtual assistant building product. The second product, which we just launched, is AlphaChat. AlphaChat is a live chat platform (akin to Intercom, LiveChat, Zopim) where customer service teams can answer incoming chats from customers. Why is this exciting? AlphaAI and AlphaChat work in tandem. They fully automate the customer experience and journey into one flow – from
One of the recent product features we’re proud of at AlphaBlues is the ability to authenticate users and provide them with detailed answers. It is a new and exciting feature that allows our customers to offer personalized support to their users through virtual assistants, much like human customer support agents can. In the previous post we highlighted how the lifecycle of virtual assistants is evolving over time. Companies with high monthly chat volumes and the capacity to innovate are increasingly seeking virtual assistants that can actually conduct activities out on
One of the toughest problems in building bots is the scarcity of available training data. Like all machine learning systems, chatbots and virtual customer assistants perform best when they have been trained on good-quality data. In the beginning, every bot has to be taught what things mean. In the context of understanding what customers want when they contact customer service, this means teaching bots which topics (i.e. intents) certain questions refer to. As a simple example, people ask a bank “I have not received my salary” or “Where is my transfer”. These are seemingly two different
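A toy sketch of what such intent training data can look like, and how even a naive word-overlap matcher groups different phrasings under one intent. The intent names, example sentences and matcher are all hypothetical, for illustration only — this is not our production NLP engine:

```python
# Hypothetical intent training data: each intent has a few example phrasings.
TRAINING_DATA = {
    "missing_transfer": [
        "I have not received my salary",
        "Where is my transfer",
        "My incoming payment has not arrived",
    ],
    "card_blocked": [
        "My card was declined",
        "Why is my card blocked",
    ],
}

def predict_intent(question):
    """Return the intent whose examples share the most words with the question."""
    q_words = set(question.lower().split())
    def best_overlap(intent):
        return max(len(q_words & set(ex.lower().split()))
                   for ex in TRAINING_DATA[intent])
    return max(TRAINING_DATA, key=best_overlap)

# Two different phrasings land on the same intent:
print(predict_intent("where is my salary transfer"))
print(predict_intent("my card got declined"))
```

Real systems replace the word-overlap score with learned semantic similarity, but the shape of the problem — mapping many phrasings to one intent — is the same.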
Over the summer months we were hard at work advancing our AI capabilities. We worked with NVIDIA’s latest technology to develop semantic enrichment for chatbots, and our work was recently covered on their official blog: https://blogs.nvidia.com/blog/2017/08/29/ai-chatbot/ Typically, a computer processes free text as a sequence of symbols with no apparent relationships apart from the order in which they appear in a sentence. A human, however, understands the semantics. For example, a person knows that in the sentence “my older brother rides the bike,” the brother is a human being, the bike is an inanimate object, the bike
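One common way to give a computer a notion of such semantics is to represent words as vectors, so that related words end up close together. A toy illustration with hand-made three-dimensional vectors follows — real systems learn these embeddings from large corpora (e.g. with word2vec or GloVe), and the vectors and dimensions below are invented for clarity:

```python
import math

# Invented vectors along rough dimensions (animate, vehicle, human).
VECTORS = {
    "brother": [1.0, 0.0, 1.0],
    "sister":  [1.0, 0.1, 1.0],
    "bike":    [0.0, 1.0, 0.0],
    "car":     [0.1, 1.0, 0.0],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Semantically related words score higher than unrelated ones:
print("brother ~ sister:", cosine(VECTORS["brother"], VECTORS["sister"]))
print("brother ~ bike:  ", cosine(VECTORS["brother"], VECTORS["bike"]))
```

With learned embeddings, this is exactly the signal that lets a system know the brother is animate and the bike is not.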
Excited to announce that AlphaBlues is joining the NVIDIA Inception Program. The program nurtures dedicated and exceptional startups who are revolutionizing industries with advances in Artificial Intelligence and data science. This virtual accelerator program helps startups during critical stages of product development, prototyping, and deployment. NVIDIA is a major player in the advancement of AI, and we’re excited to take part in that journey. Over the course of the program, we’ll be working with advanced text-ontology-based intent detection algorithms to enhance our Virtual Customer Assistants. Expect an update on this progress in the near future!
Last week we presented at the largest gathering of Machine Learning professionals in Estonia, the ML Estonia meetup. The event gathered more than 200 people, and Hendrik and I talked about what we are doing on the tech front with AI/ML as applied to text, images and videos. The slides are available here.
Last week we took part in Slush, “the gathering of 15,000 tech-heads”, as the slogan goes, in the far North – Helsinki, Finland, to be exact. As always for this time of year, the weather was lovely, with sufficient amounts of snow, rain, wind and occasional sunshine to sweeten the deal. Having attended the first Slush many years back (a cozy gathering of 300 people), we have watched the event grow to 17,000 visitors, making it one of the largest tech conferences in Europe. What happened and what did we learn at Slush this year? Here’s the rundown: CartSkill reached
One of the things our tech team is always fascinated by is the way in which neural networks can be deployed to solve a variety of use cases. And even though we work with conversational text and chatbots to improve customer service for our customers, we are inherently curious and tackle various challenges through our experimental work. So it is no wonder that a few months ago, during one of our casual Friday afternoon brainstorming sessions, our talk drifted to the poor quality of video calls on an iPhone. It is well known that the popularity of various entertainment and