Deep Learning Summit San Francisco recap

We’ve been travelling quite a bit at the beginning of this year. One of the best events thus far has been the Re-Work Deep Learning and AI Assistant Summit that took place in San Francisco in January 2018. It hosted a variety of exhibitors (including AlphaBlues 👍) and close to 1,000 participants. The talks were of high quality and there were several interesting exhibiting companies. I have put down some thoughts from the whole trip to Silicon Valley, as they reflect the state of AI in the NLP & intent understanding space that we are working in.

  • There were plenty of tech talks from highly technical people, but the underlying impression was that the majority of the tech is still in the research phase when it comes to understanding conversational dialogue. It is a tough problem mainly because of the limitations we currently have in context understanding. There are approaches that focus on previous steps in the dialogue, but a common theme from the talks was that the lack of labelled training data is a major bottleneck.


  • Several large tech companies are now in the voice assistant game with their home control devices. Voice is an upcoming trend, and as Google, Amazon and Apple fight for the prime spot in the living room, understanding meaning from voice becomes important. Large companies receive billions of queries each month (that is some sizeable training data), so with more and more use they come into possession of the largest training sets and, as such, can improve their systems the most.


  • NLU is not limited to understanding real-time human-machine communication but is also used for searching through logs of messaging content in team collaboration tools. This is an interesting angle, as there is a ton of message-based content left behind in various workplace tools (e.g. Slack). Making those searches intelligent requires understanding the meaning in messages and can help with information retrieval.

    At the expo with a member of the always helpful Re-Work team.

AlphaBlues partners with PwC

PwC Startup Collider has chosen 12 technology startups from the European region, and we are happy to announce that AlphaBlues is one of them. The companies were selected out of 300 applicants to partner with PwC in Poland to expand their product offering in Central Europe. “As part of the Startup Collider, we are looking for innovative startups that will become our business partners with whom we will really develop a new joint product,” says Beata Cichocka-Tylman, PwC’s Director responsible for innovation and R&D. During the programme, AlphaBlues will receive support in business development and an opportunity for international expansion alongside one of the strongest global brands. More details are available at this link.

NVIDIA highlights AlphaBlues as one of 8 startups globally to check out at Deep Learning Summit


The upcoming Deep Learning Summit in San Francisco is already picking up steam. Not only does it feature an excellent lineup of speakers and industry professionals, it also gathers a great selection of exhibitors. On their official blog, NVIDIA has highlighted AlphaBlues as one of 8 startups globally to check out at the summit. If you’re attending the event, stop by our booth and have a chat 😊

You can read the full post on NVIDIA’s blog here.

New customer – Telia

We are glad to announce another new customer: we have started working with Telia. Telia is the leading telecom company in Estonia and is part of the Telia Company Group. In Estonia, Telia provides a variety of services, including mobile, home internet and TV, an online store and business services. The Responsible Business Forum in Estonia awarded Telia Eesti the highest quality label – the golden level – recognising the company’s fair, informed and environmentally friendly activity. In 2016, Telia Estonia’s revenues reached 289 million euros and it served close to 900,000 mobile customers in Estonia. Telia Company Group is one of Europe’s largest telecommunications companies and is increasingly operating as a single company for the benefit of its customers.

Meet us at these events in January/February

The beginning of 2018 will see us continue where we left off in 2017. After rapid advances on the tech side of our product and several new customers, we’re off to a number of events in January and February to meet new customers, partners and people to share ideas with. Here is the schedule for our travels next month:

  • January 25-26, San Francisco, US. Exhibiting at the AI Assistant and Deep Learning Summit.
  • February 5, Helsinki, Finland. Speaking at the Machine Learning event by HEL Tech and Aalto Entrepreneurship Society.
  • February 7-8, Riga, Latvia. Exhibiting at TechChill tech conference that draws more than 2,500 people in the region.

If you’re at any of these events, stop by and say hi 👋

Our talk at StartupDay 2017 [VIDEO]


StartupDay this year was a great success, with more than 2,500 people attending. We gave a talk on the main stage about how we have built our company and how we approach AI. The title of the talk was “Applied AI – are we there yet?”. For those who are impatient, the answer is yes, we’re already there in certain fields we’re tackling 😃

New customer – Tallink

We are glad to announce that we have started working with Tallink on a customer service chat automation project. Tallink (listed on the Nasdaq Tallinn exchange as TAL1T) is the leading provider of high-quality mini-cruise and passenger transport services in the northern Baltic Sea region, as well as the leading provider of cargo services on selected routes.

In 2016 the company had over 9 million passengers, achieved revenue of €938 million and employed more than 7,000 people.

As a result of their recent investment and fleet renewal program, Tallink currently deploys some of the most advanced cruise ferries on the Baltic Sea. The company’s vision is to be the market pioneer in Europe by offering excellence in leisure and business travel and sea transportation services.

Lack of labeled training data for chatbots – our solution for the problem

One of the toughest problems in building bots is the scarcity of available training data. Like all machine learning systems, chatbots and virtual customer assistants function best when they have been trained on good-quality training data. In the beginning, all bots have to be taught what things mean. In the context of understanding what customers want when they contact customer service, this means teaching bots which topics (i.e. intents) certain questions refer to.

As a simple example, people ask a bank “I have not received my salary” or “Where is my transfer”. These are seemingly two different types of questions, but they refer to the same intent, i.e. how long it takes for a transfer to travel from bank A to bank B. To understand those sentences, a chatbot needs to learn from data that has been labeled, meaning a human has to label some questions with the correct intent (“How long does it take …”) and feed them to the bot. With that, the bot can learn to generalize across different questions and match each question with the correct intent.
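As a toy illustration of this label-and-match idea (not our production system – the intent names and example sentences below are made up, and real systems use learned sentence representations rather than raw word overlap), a minimal bag-of-words matcher in Python could look like this:

```python
# Toy intent matcher: each intent has a few human-labeled example questions,
# and a new question is assigned the intent of its most similar example.
import math
from collections import Counter

# Hypothetical labeled training data (intent -> example questions).
LABELED = {
    "transfer_duration": [
        "I have not received my salary",
        "Where is my transfer",
        "How long does a payment take to arrive",
    ],
    "card_blocked": [
        "My card was declined at the store",
        "Why is my debit card blocked",
    ],
}

def vectorize(text):
    """Lowercased bag-of-words counts for a sentence."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_intent(question):
    """Return the intent whose labeled example is most similar to the question."""
    q = vectorize(question)
    best_intent, best_score = None, 0.0
    for intent, examples in LABELED.items():
        for example in examples:
            score = cosine(q, vectorize(example))
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

print(predict_intent("where is my salary transfer"))  # transfer_duration
```

The limitation is exactly the one described above: word overlap only generalizes to questions phrased with similar words, which is why the amount and variety of labeled examples matters so much.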

The challenge is that there are countless ways to ask about the same intent. Language is highly nuanced and personal. To build chatbots with high accuracy, one would need tens of thousands of labeled questions to train the bot. This is time-consuming and expensive.

To overcome the lack of labeled training data, we have created deep-learning-based networks that are capable of achieving satisfactory levels of question-to-intent matching accuracy even with a few sentences of training data per intent. It is a way forward for deploying accurate chatbots more efficiently by reducing the need for labeled training data. The video above shows how it works. If you have further interest or questions about the solution, reach out to hello [ at ] alphablues [ dot ] com

Nordic Data Science and Machine Learning Summit Recap

Yesterday our CTO Hendrik Luuk had the honour of presenting at the Nordic Data Science and Machine Learning Summit in Stockholm 🇸🇪 The conference was sold out, which gives a hint of the level of interest in machine learning in this region. Our talk was titled “Automating customer service chat via AI-based natural language understanding”; in a nutshell, we gave an overview and key lessons from building a customer service chat automation platform with a built-in natural language understanding system from the ground up. The key takeaways of the talk were:

  • Human-level language understanding is much more complicated than matching token patterns.
  • Training data is scarce.
  • Customization is the key to good user experience.
  • Automation pays off when volumes are large.

The conference had good technical depth and there were several great presentations. King, the makers of the mobile game Candy Crush, gave a very interesting presentation on applying AI to game testing. The approach they have taken is to use neural networks to model human behaviour in gameplay and then run new levels through the trained networks, which are essentially able to approximate human performance on new levels.

It is an interesting approach, as most networks, like AlphaGo Zero, are built towards beating humans, but with mobile games the goal is to make the game challenging yet enjoyable for humans. In addition, Peltarion gave an insightful talk about segmenting brain tumours even with small datasets. It offered insight into how to create algorithms that are able to perform on par with humans.

What I Learned From One Of The Most Interesting Panels On AI I Have Been To

Last week I enjoyed an excellent panel discussion at the Nordic Digital Business Summit with Alexander Khaytin from Yandex Data Factory, Ville Hulkko from Silo.AI and Olli Ventä from VTT. I have been on numerous panels, but this was one of the most enjoyable, as the discussion was about where we stand with Artificial Intelligence and where we are headed. A couple of points are worth sharing:

  • There is a lot of AI hype. Everyone agreed on that. In some ways it is good, as the topic rises into the consciousness of a lot of people. But mostly it just raises expectations that are difficult to meet. It was agreed that there needs to be a realistic representation of what is and is not possible with AI, especially considering that we have had several AI winters in the past.


  • Who is buying AI products? In large companies today, this is mostly the Business Development Manager, Innovation Officer or a strategic Product Manager. These people are the visionaries within their companies and are looking for new and beneficial solutions. Target them.


  • AI apocalypse. We ran a poll in the audience, and no one thought there would be an AI-dominated world within the next 15 years. So we’re good 🙂 The common understanding was that instead of worrying about what-ifs over the next 30 years, we need to focus on current problems and how we tackle those.


  • AND MY FAVOURITE TAKEAWAY – there are three important elements needed for a successful AI product offering. 1) There needs to be data available. Without data there is not much AI can do. 2) There needs to be enough money to be made or saved with the solution. If you cannot attach a monetary value to your solution, then it is just a nice-to-have. 3) The problem needs to be complex enough that simple solutions already on the market today cannot solve it.