DeepLens: Recognising 1,780 faces in seven days
For the AWS Summit Sydney 2019 the CMD Solutions team developed an AWS DeepLens showcase using Amazon Rekognition to predict age range and gender. How well did it work, and what did we learn?
In the month prior to the AWS Summit, a core team of CMD consultants spent several evenings and weekends planning, building and testing the AWS DeepLens showcase so it would be as accurate as possible.
The idea was to surprise people with the technology’s capability to provide an accurate estimation (because misjudging either age or gender always ends with awkward silence). We also realised that although we excuse each other’s mistakes because the human brain isn’t perfect, people expect machines to be spot on.
We’re happy to report that there were fewer than a handful of errors where Amazon Rekognition wrongly assigned gender or slightly missed the age range. Thankfully, those who were misjudged were good-humoured and gracious about it (in a real-world application, though, we’re not sure people would be so forgiving).
How it worked
The showcase worked like this: AWS Summit delegates stood in front of our booth and hit the big red IoT button. This sent a signal over wifi to trigger the AWS DeepLens camera to capture an image and send it to Rekognition via Amazon S3.
We had Amazon Polly hooked up to provide some witty one-liners (“Handsome crowd, or at least one of you are – not telling you which one” and “Is anyone there, I can’t see anyone. Reminds me of my 21st birthday”) while we waited for the image to be processed. For privacy, software pixelated the faces for display on screen.
Rekognition then analysed the facial elements in the image and displayed its gender and age-range estimates on screen.
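The S3-to-Rekognition step can be sketched roughly as below. This is a minimal illustration, not our production code: the function names and bucket/key parameters are hypothetical, and the sample response is trimmed to just the fields we displayed (the real `FaceDetails` records carry many more attributes).

```python
def detect_face_attributes(bucket, key):
    """Call Amazon Rekognition's DetectFaces on an image stored in S3."""
    import boto3  # deferred import so the rest of this sketch runs without AWS credentials
    client = boto3.client("rekognition")
    response = client.detect_faces(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        Attributes=["ALL"],  # request age range, gender, moustache, smile, etc.
    )
    return response["FaceDetails"]

def summarise_face(face):
    """Reduce one FaceDetail record to the fields shown on screen."""
    return {
        "age_range": (face["AgeRange"]["Low"], face["AgeRange"]["High"]),
        "gender": face["Gender"]["Value"],
        "gender_confidence": face["Gender"]["Confidence"],
    }

# Trimmed sample of the shape DetectFaces returns per face:
sample_face = {
    "AgeRange": {"Low": 26, "High": 40},
    "Gender": {"Value": "Male", "Confidence": 99.1},
}
print(summarise_face(sample_face))
```

Requesting `Attributes=["ALL"]` is what makes Rekognition return age range, gender and the facial-feature flags rather than just bounding boxes.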
Then the data was sent to a blockchain (why? Because we could: it was an AWS Summit, we’re a deeply AWS-focused consultancy, and we wanted to use as many AWS services as possible).
All of the results were also aggregated into a Sumo Logic dashboard, which tallied in graphical format the number of faces recognised, female versus male breakdown, and the number of moustaches, among other attributes.
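That kind of tally can be sketched as a small aggregation over Rekognition’s per-face results. This is a hedged sketch only: in the showcase the aggregation actually happened in Sumo Logic, and the sample faces below are made up to show the shape of the data.

```python
from collections import Counter

# Boolean attributes Rekognition reports per face, each as {"Value": bool, ...}.
FEATURES = ["Beard", "EyesOpen", "Sunglasses", "MouthOpen", "Mustache", "Smile"]

def tally_faces(face_details):
    """Aggregate per-face Rekognition results into dashboard-style counts."""
    stats = Counter()
    for face in face_details:
        stats["faces"] += 1
        stats[face["Gender"]["Value"].lower()] += 1  # "male" / "female" buckets
        for feature in FEATURES:
            if face.get(feature, {}).get("Value"):
                stats[feature.lower()] += 1
    return stats

# Made-up sample input mimicking two FaceDetail records:
faces = [
    {"Gender": {"Value": "Male"}, "Mustache": {"Value": True}, "Smile": {"Value": True}},
    {"Gender": {"Value": "Female"}, "Mustache": {"Value": False}, "Smile": {"Value": True}},
]
stats = tally_faces(faces)
print(stats["faces"], stats["male"], stats["female"], stats["mustache"])
```

Each press of the button would append its faces to the running totals, which is all the dashboard’s graphs needed.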
The entire process, from hitting the red button to displaying results, took around 30 seconds due to the mediocre network link available in the event exhibitors’ hall. This was frustrating, but it did give us a bit of time to explain what was happening and which technologies were in play while people waited for the results.
Despite a few technical hurdles, like the slow network and those few instances where Rekognition got it wrong, it was worth the effort of pulling it together on weekends and late nights, because we had some great conversations about AWS technologies, machine learning and AI more broadly.
We built this AWS DeepLens showcase for a single use: to engage people at the AWS Summit and to demonstrate the breadth and depth of AWS products and how they can be used.
But potential applications in the real world are everywhere and already emerging. In conversations with delegates at the Summit we talked about how it might be used in public transport, identity management for border control, and safety applications in manufacturing, mining and environmental fieldwork.
As we discovered from our little test case, though, it can be wrong. It’s not yet sensitive enough to recognise non-binary gender, for example, nor overly specific about age. Cases of mistaken identity could be problematic for wrongly identified individuals, because a machine cannot exercise discretion.
And there are ethical considerations around privacy, permission and self-determination, which are well documented but yet to be resolved. Who or what should set the appropriate moral and ethical boundaries? Opinions were divided.
More on: Modern machines and ethics
What we learnt
An obvious observation is the non-binary nature of our community when it comes to gender. Our programming of the showcase was binary and we know that assigning a male or female identity may not be accurate or appropriate.
Also, where our showcase got it wrong, there was no way to feed that correction back to Rekognition. This is an obvious drawback of the showcase as we built it.
What about age? The age ranges provided were deliberately kept broad, and in only one instance was it incorrect, missing the lower end of the person’s age range by one year.
We also had people ask about recognition across the spectrum of ethnicity and how the showcase adapted its age and gender predictions to different appearances. Our overall observation was that regardless of skin colour or facial structure, its predictions were accurate. This would be an interesting angle to explore with larger datasets in future iterations.
More on: The stats
- Number of faces identified: 1,780
- Male vs female breakdown: 75 per cent male, 25 per cent female
- Facial features recorded: Beard, Open eyes, Sunglasses, Open Mouth, Moustache, Smile