It's a different breed of audio-visual recognition
Can our machine learning model identify an animal from just a few snippets of audio or a single image? IBM Watson has a go at predicting whether it's hearing or seeing a cat or a dog.
If you play a bird sound, Watson will have a go at telling you what kind of bird it is, and where it comes from.
Come and try it yourself: pick audio clips and images the models haven't been trained on, run them through, and see the results. It showcases how you can combine machine learning with visual recognition to make better decisions. It's built on Node-RED, and uses machine learning models created in IBM Watson Studio alongside IBM Watson Visual Recognition.
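The "better decisions" idea boils down to blending the confidences of the two models. Here is a minimal illustrative sketch (not the exhibit's actual code) of how per-class scores from an audio model and an image model might be combined, assuming each model returns a dict of class labels to confidence scores:

```python
def combine_predictions(audio_scores, image_scores, audio_weight=0.5):
    """Blend per-class confidences from an audio model and an image model.

    Both inputs map class labels (e.g. 'cat', 'dog') to confidence
    scores in [0, 1]. Returns the best combined label and its score.
    """
    labels = set(audio_scores) | set(image_scores)
    combined = {
        label: audio_weight * audio_scores.get(label, 0.0)
        + (1 - audio_weight) * image_scores.get(label, 0.0)
        for label in labels
    }
    best = max(combined, key=combined.get)
    return best, combined[best]


# The audio model is fairly sure it heard a dog; the image leans cat.
label, score = combine_predictions(
    {"cat": 0.2, "dog": 0.8},
    {"cat": 0.55, "dog": 0.45},
)
# With equal weights, the audio model's stronger confidence wins: 'dog'.
```

A weighted average is the simplest choice; the weight lets you trust one modality more than the other when, say, the audio clip is very short.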
Watson Studio is an integrated environment designed to make it easy to develop, train, and manage machine learning models, and to deploy AI-powered applications. It is delivered as a Software as a Service (SaaS) solution on the IBM Cloud.
Visual Recognition uses deep learning algorithms to quickly and accurately tag and classify visual content, giving you insights into your images. You can organise image libraries, understand an individual image, recognise food, detect faces, and train custom classifiers for specific results tailored to your needs.
Developed in Hursley, Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways.
Next step: sort the sheep from the goats.