Excited to share that we are now among the first to integrate AI virtual assistants and live chat seamlessly into one coherent solution. We now offer two products for our enterprise customers. The first is AlphaAI, our own NLP engine and virtual assistant building product. The second, which we just launched, is AlphaChat: a live chat platform (akin to Intercom, LiveChat, Zopim) where customer service teams can answer incoming chats from customers. Why is this exciting? AlphaAI and AlphaChat work in tandem, combining the entire customer experience and journey into one flow – from
One of the recent product features we’re proud of at AlphaBlues is the ability to authenticate users and provide them with detailed answers. It is a new and exciting feature that allows our customers to offer a personalized level of support to their users through virtual assistants, much like human customer support agents can. In a previous post we highlighted how the lifecycle of virtual assistants is evolving over time. Companies with high monthly chat volumes and the capacity to innovate are increasingly looking for virtual assistants that can actually carry out activities on
One of the toughest problems in building bots is the scarcity of available training data. Like all machine learning systems, chatbots and virtual customer assistants function best when they have been trained on good-quality training data. In the beginning, all bots have to be taught what things mean. In the context of understanding what customers want when they contact customer service, this means teaching bots which topics (i.e. intents) certain questions refer to. As a simple example, people ask a bank “I have not received my salary” or “Where is my transfer”. These are seemingly two different
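The idea of mapping differently phrased questions to a single intent can be sketched with a tiny nearest-utterance classifier. The intent names and training utterances below are invented for illustration and are not AlphaBlues’ actual training data or method:

```python
from collections import Counter
import math

# Illustrative training data: several phrasings mapped to one intent each.
TRAINING = {
    "missing_incoming_payment": [
        "I have not received my salary",
        "where is my transfer",
        "my incoming payment has not arrived",
    ],
    "card_blocked": [
        "my card was declined",
        "why is my card blocked",
    ],
}

def bag(text):
    """Lowercased bag-of-words representation of a sentence."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(question):
    """Return the intent whose example utterances best match the question."""
    scores = {
        intent: max(cosine(bag(question), bag(u)) for u in utterances)
        for intent, utterances in TRAINING.items()
    }
    return max(scores, key=scores.get)

# Two seemingly different questions resolve to the same intent.
print(classify("I have not received my salary"))  # missing_incoming_payment
print(classify("Where is my transfer"))           # missing_incoming_payment
```

A production system would of course use a trained statistical model rather than word overlap, but the core task is the same: many surface phrasings, one intent label.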
Over the summer months we were hard at work advancing our AI capabilities. We worked with NVIDIA’s latest technology to develop semantic enrichment for chatbots, and our work was recently covered on their official blog: https://blogs.nvidia.com/blog/2017/08/29/ai-chatbot/ Typically, a computer processes free text as a sequence of symbols with no apparent relationships apart from the order in which they appear in a sentence. A human, however, understands the semantics. For example, a person knows that in the sentence “my older brother rides the bike,” the brother is a human being, the bike is an inanimate object, the bike
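Semantic enrichment of this kind can be illustrated as attaching type labels from an ontology to the tokens of a sentence. The toy ontology and type names below are invented for the sketch and are not our production ontology:

```python
# A tiny illustrative ontology mapping words to semantic types.
ONTOLOGY = {
    "brother": "PERSON",
    "sister": "PERSON",
    "bike": "VEHICLE",
    "car": "VEHICLE",
    "rides": "MOTION_ACTION",
    "drives": "MOTION_ACTION",
}

def enrich(sentence):
    """Attach a semantic type to each token the ontology knows about."""
    return [
        (token, ONTOLOGY.get(token.strip(".,").lower(), "UNKNOWN"))
        for token in sentence.split()
    ]

for token, sem_type in enrich("my older brother rides the bike"):
    print(f"{token:<8} {sem_type}")
# brother -> PERSON, rides -> MOTION_ACTION, bike -> VEHICLE
```

With this extra layer, the symbol sequence the machine sees now carries some of the world knowledge a human reader brings to the sentence, which is what the enrichment work aims at (at much larger scale, and learned rather than hand-written).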
Excited to announce that AlphaBlues is joining the NVIDIA Inception Program. The program nurtures dedicated and exceptional startups that are revolutionizing industries with advances in artificial intelligence and data science. This virtual accelerator helps startups during critical stages of product development, prototyping, and deployment. NVIDIA is a major player in the advancement of AI, and we’re excited to take part in that journey. During the program we’ll be working on advanced text-ontology-based intent detection algorithms to enhance our Virtual Customer Assistants. Expect an update on this progress in the near future!
Last week we presented at the largest gathering of machine learning professionals in Estonia, the ML Estonia meetup. The event gathered more than 200 people, and Hendrik and I talked about what we are doing on the tech front with AI/ML as applied to text, images, and video. The slides are available here.
Last week we took part in Slush, “the gathering of 15,000 tech-heads”, as the slogan goes, in the far North – Helsinki, Finland, to be more exact. As always at this time of year, the weather was lovely, with sufficient amounts of snow, rain, wind, and occasional sunshine to sweeten the deal. Having attended the first Slush many years back (a cozy gathering of 300 people), we’ve watched the event grow to 17,000 visitors, making it one of the largest tech conferences in Europe. What happened and what did we learn at Slush this year? Here’s the rundown: CartSkill reached
One of the things our tech team is always fascinated by is the variety of use cases neural networks can be deployed to solve. Even though we work with conversational text and chatbots to improve customer service for our customers, we are inherently curious and tackle various challenges through our experimental work. So it is no wonder that a few months ago, during one of our casual Friday afternoon brainstorming sessions, our talk drifted to the poor quality of video calls on an iPhone. It is well known that the popularity of various entertainment and
The brave new world of bots marches onward with each passing day, to the steady drumbeat of growth in messaging apps worldwide. People love to text, and messaging has emerged as the preferred communication mode for many. With the growth of Facebook Messenger and its platform, talking to companies and brands through messaging is slowly starting to gain ground. As with any breakout trend, opinions span the spectrum from dismissing it as hype to believing it is something tangible that will stick around. Whatever the case may be (I’ll leave my crystal ball post
Having made a good number of improvements to our visual recommendation technology, we decided to benchmark it against other popular deep learning networks. The aim was to compare our recommendations against AlexNet (Caffe implementation) and ResNet (Torch implementation) and to outline errors in product similarity identification. For the comparison, we fed each network three product categories and identified visually similar products based on a single thumbnail image per product. The errors are outlined with red circles in the images below. For the product categories we used 3260 different bags, 7028 different shoes
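At its core, finding visually similar products from a thumbnail is a nearest-neighbor search over image feature vectors (e.g. activations from a network’s penultimate layer). The sketch below illustrates only that retrieval step; the feature extractor is faked with deterministic random vectors, and the catalog names are invented for the example:

```python
import math
import random

def embed(image_id, dim=8):
    """Stand-in for a CNN feature extractor (e.g. an AlexNet/ResNet
    penultimate layer); here we fabricate a deterministic vector per id."""
    rng = random.Random(image_id)  # seeding with the id makes it repeatable
    return [rng.gauss(0, 1) for _ in range(dim)]

def cosine(a, b):
    """Cosine similarity between two dense feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def most_similar(query_id, catalog_ids, top_k=3):
    """Rank catalog items by visual similarity to the query thumbnail."""
    q = embed(query_id)
    ranked = sorted(catalog_ids, key=lambda cid: cosine(q, embed(cid)),
                    reverse=True)
    return ranked[:top_k]

catalog = [f"bag_{i}" for i in range(100)]
print(most_similar("bag_0", catalog))  # the query item itself ranks first
```

A benchmark like the one described then boils down to running this retrieval with each network’s embeddings over the same catalog and manually flagging results that are not actually similar.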