We've recently been experimenting with Artificial Intelligence (AI) on a couple of exciting new projects and hope to grow this area of our business over the coming months. It's too early to share the details yet, but I can share some impressive work we did late last year on a client project called Elefriends.
For those who don't know, Elefriends is an online community platform we co-designed and built with Mind to provide peer support for people experiencing mental health problems. Now the UK's most successful online peer support community, Elefriends has an in-house team of trained moderators and has grown to over 60,000 members, with around 60 new people joining every day.
In a recent survey, 86% of people who use Elefriends said they now felt better able to manage their mental health. Many said they felt more confident talking about their experiences and getting access to the right support.
Scaling up such a large community comes with significant risks. Sadly, last year the Elefriends community was targeted by a member intent on causing widespread damage and upset by posting a stream of deeply disturbing images, including images of dead bodies and graphic violence. In a community of vulnerable people supporting each other with mental health issues the effect was almost catastrophic, as there weren't enough moderators to keep up with the rate of inappropriate posts.
So Yoomee was tasked with finding a technical solution to the problem. We researched and experimented with various approaches to ensure the community remained a safe place for users to continue sharing images, which are normally such a welcome part of the community's activity. Our solution came via the Google Vision API, which uses artificial intelligence and machine learning to automatically recognise the contents of images. (Anyone who uses Google Photos to store their personal photos will be familiar with the clever search function, which uses the same technology, letting you search with phrases such as "show me photos of the beach".)
Google Vision API
Using the Google Vision API we can understand the content of each image submitted before allowing it to be displayed to the community. For example, the API reports how likely it is that an image contains violent or sexual content, and based on that likelihood we can temporarily remove the image until moderators deal with it.
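As a rough illustration (this is a sketch, not the production Elefriends code), the gatekeeping step can be expressed in a few lines of Python. It assumes the image has already been sent to the Vision API's SafeSearch feature, which returns a likelihood label (from VERY_UNLIKELY to VERY_LIKELY) for categories such as violence and adult content; the category thresholds below are hypothetical.

```python
# Hypothetical moderation gate driven by Vision API SafeSearch-style labels.
# The API returns a likelihood label per category rather than a raw number,
# so we map each label onto an ordinal score and compare against a threshold.

LIKELIHOOD_SCORES = {
    "VERY_UNLIKELY": 1,
    "UNLIKELY": 2,
    "POSSIBLE": 3,
    "LIKELY": 4,
    "VERY_LIKELY": 5,
}

# Categories we screen, and the label at or above which an image is
# hidden pending human review (threshold choices are illustrative).
THRESHOLDS = {
    "violence": "POSSIBLE",
    "adult": "LIKELY",
}

def should_hold_for_moderation(safe_search: dict) -> bool:
    """Return True if the image should be hidden until a moderator reviews it."""
    for category, threshold in THRESHOLDS.items():
        label = safe_search.get(category, "VERY_UNLIKELY")
        if LIKELIHOOD_SCORES[label] >= LIKELIHOOD_SCORES[threshold]:
            return True
    return False
```

For example, `should_hold_for_moderation({"violence": "VERY_LIKELY"})` would hold the image back, while an image with every category at `VERY_UNLIKELY` would be published straight away.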
You can have some fun and try the Google Vision API yourself on its website. It returns a set of keywords identifying what's in the image. For example, here's a photo of our studio at Park Hill: according to the Google Vision API, it's 84% window, 71% glass, and recognised as the facade of a building with a door. So it's pretty accurate!
Here's another example that demonstrates the power of the Google Vision API: a photo of Donald Trump comes back with 86% confidence that anger is probable, and so, apparently, is headwear (!)
Respecting the user experience
Usually a problem like the one Elefriends experienced is solved by implementing pre-moderation, that is, requiring all posted content to be approved by a human moderator before it appears. This simply isn't practical when Elefriends handles over 150,000 posts per month, with members posting new content every few seconds. Pre-moderation also fragments the user experience and reduces engagement with the platform. So, by using artificial intelligence to automatically reject violent and sexual content, we allowed the majority of users to continue happily posting images instantly, without interrupting their experience.
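To make the trade-off concrete, here is a minimal sketch of how such a posting pipeline differs from blanket pre-moderation. The names (`Post`, `Community`, the `flagged` field) are hypothetical, not the Elefriends codebase: posts pass straight through unless the automated image check flags them, and only flagged posts wait in the human moderation queue.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    id: int
    flagged: bool  # result of the automated image check (assumed precomputed)

@dataclass
class Community:
    published: List[int] = field(default_factory=list)
    moderation_queue: List[int] = field(default_factory=list)

    def submit(self, post: Post) -> str:
        # Unlike pre-moderation, where every post waits for a human,
        # only posts the classifier flags are held back.
        if post.flagged:
            self.moderation_queue.append(post.id)
            return "held"
        self.published.append(post.id)
        return "published"
```

The point of the design is that the moderators' workload scales with the number of *flagged* posts rather than with the total volume of posts, which is what makes 150,000 posts a month manageable for a small team.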
Artificial Intelligence can seem scary and has the potential to be abused by big business. But we believe it presents a new opportunity for Yoomee to harness these futuristic technologies for social good. We're looking forward to using AI in future projects to support our mission of using digital to help the vulnerable and those most in need in society. If you'd like to know more, follow our new AI project on Twitter at @yoomee_ai. Watch this space!
Here are some links to pieces that explain more about the platform and how it was built.