By Laurie Sullivan (@LaurieSullivan on Twitter)
Amazon has made three of its artificial intelligence tools available to developers within its Web Services group, marking the first time the company has allowed outside developers to build apps and services on its AI technology.
Building AI capabilities into apps requires access to large amounts of data and expertise in machine learning and neural networks. These deep-learning and machine-learning algorithms rely on automatic speech recognition, natural-language understanding, and classification to collect data and train networks to recognize phrases, speech inflections, objects, and keywords.
The group, Amazon AI, features three services: Amazon Lex, Amazon Polly, and Amazon Rekognition. Lex, built on the same deep-learning technology that powers Amazon Alexa, lets any developer build conversational experiences for web, mobile, and connected devices. Capital One, OhioHealth, HubSpot, and Twilio have used Amazon Lex to build chatbots for their respective companies.
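A chatbot turn against Lex goes through its runtime PostText API. The sketch below only assembles the request parameters so it runs offline; the bot name, alias, and user ID are hypothetical examples, and the actual AWS call (shown in comments) would require the boto3 SDK and credentials.

```python
# Sketch of one text turn against Amazon Lex's runtime PostText API.
# "OrderFlowers" and "prod" are hypothetical placeholders. A real call
# would use the AWS SDK for Python (boto3):
#   import boto3
#   lex = boto3.client("lex-runtime")
#   reply = lex.post_text(**params)["message"]

def build_lex_request(user_id, text, bot_name="OrderFlowers", bot_alias="prod"):
    """Assemble the parameters PostText expects for one chat turn."""
    return {
        "botName": bot_name,    # name of the deployed bot (hypothetical)
        "botAlias": bot_alias,  # deployment alias, e.g. a prod stage
        "userId": user_id,      # keys the per-user conversation state
        "inputText": text,      # the user's utterance
    }

params = build_lex_request("user-42", "I'd like to order roses")
print(params["inputText"])
```

Because Lex tracks dialogue state per `userId`, each end user of a chatbot would supply a stable identifier across turns.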
The Washington Post and GoAnimate used Amazon Polly to turn text into speech, enabling their respective apps to speak in any of 47 lifelike voices across multiple languages.
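Text-to-speech with Polly is a single SynthesizeSpeech request. This sketch builds the request parameters only, so it runs without an AWS account; "Joanna" is used as an example voice, and the commented-out lines show how the call would look with boto3.

```python
# Sketch of a SynthesizeSpeech request for Amazon Polly. A real call
# needs boto3 and AWS credentials:
#   import boto3
#   polly = boto3.client("polly")
#   audio = polly.synthesize_speech(**params)["AudioStream"].read()

def build_polly_request(text, voice_id="Joanna", output_format="mp3"):
    """Assemble text-to-speech parameters for one of Polly's voices."""
    return {
        "Text": text,                   # the text to read aloud
        "VoiceId": voice_id,            # any of Polly's lifelike voices
        "OutputFormat": output_format,  # e.g. mp3, ogg_vorbis, or pcm
    }

params = build_polly_request("Today's top story, read aloud.")
print(params["VoiceId"])
```

An app like a news reader would stream the returned audio bytes straight to the user rather than writing them to disk.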
The third tool, Rekognition -- which Redfin and SmugMug have applied in their apps -- lets users search and sort through images. The service uses deep learning-based image and facial recognition.
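Sorting photos with Rekognition typically starts with its DetectLabels API, which returns tags for objects and scenes in an image. The sketch below only builds the request; the image bytes and thresholds are placeholders, and the real boto3 call appears in comments.

```python
# Sketch of a DetectLabels request for Amazon Rekognition, the kind of
# call a photo app might use to tag images. A real call needs boto3:
#   import boto3
#   rek = boto3.client("rekognition")
#   labels = rek.detect_labels(**params)["Labels"]

def build_rekognition_request(image_bytes, max_labels=10, min_confidence=75.0):
    """Assemble parameters to label objects and scenes in an image."""
    return {
        "Image": {"Bytes": image_bytes},  # raw JPEG/PNG bytes
        "MaxLabels": max_labels,          # cap on the number of labels
        "MinConfidence": min_confidence,  # drop low-confidence guesses
    }

params = build_rekognition_request(b"placeholder-image-bytes")
print(params["MaxLabels"])
```

Raising `MinConfidence` trades recall for precision, which matters when labels drive user-facing search results.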
Google has also been pushing self-serve tools, working to convince developers to adopt technology that lets them create bots for Google Assistant through Actions on Google, which the company released earlier this month. VentureBeat reported in October that a software development kit (SDK) bringing Google Assistant to devices not made by Google should become available next year.
Source: MediaPost Communications