Welcome to the workshop about image recognition using Google's all new shiny ✨Vision✨ API!
- In your terminal do: `git clone https://github.com/TheCodePub/vision-api-workshop.git`
- Set the `API_KEY` value in `src/api.js` (we will email the key to you before the workshop)
- Open `index.html` in your favourite browser and start playing with the code!
The sample code we provide gives you a head start on sending requests to the Vision API. You can read more about the different types of API calls that Vision supports in the docs linked at the bottom of this page.
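For reference, a bare-bones request to the Vision API's `images:annotate` endpoint looks roughly like the sketch below. The endpoint and request shape come from the Vision API docs; the function name and the way the key is passed are just illustrative, and the repo's own `src/api.js` may do this differently.

```js
// A minimal sketch of an annotate request (illustrative, not the repo's exact code).
// Assumes API_KEY is the key you pasted into src/api.js and `base64Image` is a
// base64-encoded image (without any data: prefix).
function annotateImage(base64Image) {
  return fetch('https://vision.googleapis.com/v1/images:annotate?key=' + API_KEY, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      requests: [{
        image: { content: base64Image },
        features: [
          { type: 'FACE_DETECTION', maxResults: 10 },
          { type: 'LABEL_DETECTION', maxResults: 10 }
        ]
      }]
    })
  }).then(function (response) { return response.json(); });
}
```

Note that the raw JSON response wraps the annotations in a `responses` array, so `faceAnnotations` lives at `responses[0].faceAnnotations`.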
Now it's time to play with the code! If you don't know what to do, maybe try building one of these ideas:
Try to "translate" a face into an emoji. If you look at faceAnnotations you could do something like this to begin with:
```js
if (result.faceAnnotations && result.faceAnnotations.length > 0 &&
    result.faceAnnotations[0].joyLikelihood === 'VERY_LIKELY') {
  resultDiv.html('<span class="emoji">😊</span>');
}
```
Here we only check the first face (`result.faceAnnotations[0]`). The Vision API can actually return data for multiple faces in the picture, so maybe generate emojis for all of the faces? A rough sketch of that is below.
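A minimal sketch, assuming the same `result` and `resultDiv` as in the snippet above. The likelihood fields are real Vision API fields; the emoji mapping is just one possible choice.

```js
// Turn every detected face into an emoji instead of only the first one.
var faces = result.faceAnnotations || [];
var emojis = faces.map(function (face) {
  if (face.joyLikelihood === 'VERY_LIKELY' || face.joyLikelihood === 'LIKELY') {
    return '😊';
  }
  if (face.sorrowLikelihood === 'VERY_LIKELY' || face.sorrowLikelihood === 'LIKELY') {
    return '😢';
  }
  if (face.angerLikelihood === 'VERY_LIKELY') {
    return '😠';
  }
  return '😐'; // fall back to a neutral face
});
resultDiv.html('<span class="emoji">' + emojis.join(' ') + '</span>');
```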
Use face detection and `boundingPoly` to draw squares around people's faces.
RaphaëlJS can be useful for the rendering; it's included in the repo. All the data you need is in the API response, so the challenge is to make the connection between that data and Raphaël. There's also data that shows the locations of all the facial features (eyes, nose, etc.), which you could draw squares around too.
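A minimal sketch of that connection, assuming the page has a `<div id="canvas">` to draw into and `result` is the Vision API response. The element id and paper size are made up; the `boundingPoly.vertices` shape is from the face detection response docs.

```js
// Draw a rectangle around each detected face with RaphaëlJS.
var paper = Raphael(document.getElementById('canvas'), 640, 480);

(result.faceAnnotations || []).forEach(function (face) {
  var vertices = face.boundingPoly.vertices;
  // The API omits x or y when the value is 0, so default them.
  var xs = vertices.map(function (v) { return v.x || 0; });
  var ys = vertices.map(function (v) { return v.y || 0; });
  var x = Math.min.apply(null, xs);
  var y = Math.min.apply(null, ys);
  paper.rect(x, y, Math.max.apply(null, xs) - x, Math.max.apply(null, ys) - y)
       .attr({ stroke: '#ffcc00', 'stroke-width': 3 });
});
```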
Try to recreate/draw a picture of a face using canvas or another graphics library. The RaphaëlJS library is included in this repo, so you can use that straight away if you want to. This can get as complex as you want (there's a small sketch after the list below)!
- You can look at `joyLikelihood` and include a happy/sad mouth
- You can look at `headwearLikelihood` to include some headwear or not
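A minimal sketch of that idea with RaphaëlJS, reusing a `paper` like the one in the previous sketch. The landmark types (`LEFT_EYE`, `RIGHT_EYE`, `MOUTH_CENTER`) are real Vision API landmark types, but the drawing itself is just one possible take.

```js
// Draw a very rough face from the first detected face's landmarks.
var face = result.faceAnnotations[0];

function landmark(type) {
  // Each landmark is { type: 'LEFT_EYE', position: { x, y, z } }.
  return face.landmarks.filter(function (l) { return l.type === type; })[0].position;
}

var leftEye = landmark('LEFT_EYE');
var rightEye = landmark('RIGHT_EYE');
var mouth = landmark('MOUTH_CENTER');

paper.circle(leftEye.x, leftEye.y, 5).attr({ fill: '#000' });
paper.circle(rightEye.x, rightEye.y, 5).attr({ fill: '#000' });

// Happy or sad mouth depending on joyLikelihood: a quadratic curve bending
// down (smile) or up (frown), 20px either side of the mouth centre.
var smiling = face.joyLikelihood === 'VERY_LIKELY' || face.joyLikelihood === 'LIKELY';
paper.path('M' + (mouth.x - 20) + ',' + mouth.y +
           'q20,' + (smiling ? 20 : -20) + ' 40,0')
     .attr({ 'stroke-width': 3 });
```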
- Vision API docs: https://cloud.google.com/vision/docs/
- Face detection response: https://cloud.google.com/vision/docs/concepts#face_detection_responses
- Raphaël: http://dmitrybaranovskiy.github.io/raphael/