Add new video samples #382

Merged
merged 4 commits into from
May 10, 2017
Changes from 3 commits
18 changes: 10 additions & 8 deletions video/README.md
@@ -2,9 +2,9 @@

# Google Cloud Video Intelligence API Node.js Samples
Member

Ping me after approving and merging #384 to talk about auto-generating the video-intelligence samples README file.


The [Cloud Video Intellience API][video_docs] allows developers to easily
integrate video analysis within applications, including video labeling, face
detection, and shot change detection.
The [Cloud Video Intelligence API][video_docs] allows developers to easily
integrate video analysis within applications, including video labeling, safe search
, and shot change detection.

[video_docs]: https://cloud.google.com/video-intelligence/docs/

@@ -41,17 +41,19 @@ __Usage:__ `node analyze.js --help`

```
Commands:
faces <gcsUri> Analyzes faces in a video using the Cloud Video Intelligence API.
shots <gcsUri> Analyzes shot angles in a video using the Cloud Video Intelligence API.
labels <gcsUri> Labels objects in a video using the Cloud Video Intelligence API.
shots <gcsUri> Analyzes shot angles in a video stored in Google Cloud Storage using the Cloud Video
Intelligence API.
labels-gcs <gcsUri> Labels objects in a video stored in Google Cloud Storage using the Cloud Video Intelligence API.
labels-file <gcsUri> Labels objects in a video stored locally using the Cloud Video Intelligence API.

Options:
--help Show help [boolean]

Examples:
node analyze.js faces gs://my-bucket/my-video.mp4
node analyze.js shots gs://my-bucket/my-video.mp4
node analyze.js labels gs://my-bucket/my-video.mp4
node analyze.js labels-gcs gs://my-bucket/my-video.mp4
node analyze.js labels-file my-video.mp4
node analyze.js safe-search gs://my-bucket/my-video.mp4

For more information, see https://cloud.google.com/video-intelligence/docs
```
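
The CLI commands above are thin wrappers around `annotateVideo`. As a rough sketch (condensed from the `analyzeLabelsGCS` sample added in this PR, with a placeholder bucket path), the underlying flow for label detection looks like this:

```js
// Condensed from video/analyze.js (analyzeLabelsGCS) in this PR.
const Video = require('@google-cloud/videointelligence');

// Instantiates a client
const video = Video().videoIntelligenceServiceClient();

// Placeholder GCS path for the video to analyze
const gcsUri = 'gs://my-bucket/my-video.mp4';

const request = {
  inputUri: gcsUri,
  features: ['LABEL_DETECTION']
};

// annotateVideo returns a long-running operation; wait for it to finish,
// then read the label annotations from the first annotation result.
video.annotateVideo(request)
  .then((results) => results[0].promise())
  .then((results) => {
    const labels = results[0].annotationResults[0].labelAnnotations;
    labels.forEach((label) => console.log('Label description:', label.description));
  })
  .catch((err) => console.error('ERROR:', err));
```

The other commands reuse the same request shape; only the `features` value changes (and local files are passed as base64 `inputContent` instead of `inputUri`).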
158 changes: 132 additions & 26 deletions video/analyze.js
@@ -18,33 +18,34 @@
function analyzeFaces (gcsUri) {
// [START analyze_faces]
// Imports the Google Cloud Video Intelligence library
const Video = require('@google-cloud/videointelligence').v1beta1();
const Video = require('@google-cloud/videointelligence');

// Instantiates a client
const video = Video.videoIntelligenceServiceClient();
const video = Video().videoIntelligenceServiceClient();

// The GCS filepath of the video to analyze
// const gcsUri = 'gs://my-bucket/my-video.mp4'
// const gcsUri = 'gs://my-bucket/my-video.mp4';

const request = {
inputUri: gcsUri,
features: ['FACE_DETECTION']
};

// Detect faces in a video
// Detects faces in a video
video.annotateVideo(request)
.then((results) => {
const operation = results[0];
console.log('Waiting for operation to complete...');
return operation.promise();
})
.then((results) => {
// Get faces for first video
// Gets faces
const faces = results[0].annotationResults[0].faceAnnotations;
console.log('Faces:');
faces.forEach((face, faceIdx) => {
console.log('Thumbnail size:', face.thumbnail.buffer.length);
console.log('\tThumbnail size:', face.thumbnail.length);
face.segments.forEach((segment, segmentIdx) => {
console.log(`Track ${segmentIdx} of face ${faceIdx}: frames ${segment.startTimeOffset} to ${segment.endTimeOffset}`);
console.log(`\tTrack ${segmentIdx} of face ${faceIdx}: frames ${segment.startTimeOffset} to ${segment.endTimeOffset}`);
});
});
})
@@ -54,32 +54,33 @@ function analyzeFaces (gcsUri) {
// [END analyze_faces]
}

function analyzeLabels (gcsUri) {
// [START analyze_labels]
function analyzeLabelsGCS (gcsUri) {
// [START analyze_labels_gcs]
// Imports the Google Cloud Video Intelligence library
const Video = require('@google-cloud/videointelligence').v1beta1();
const Video = require('@google-cloud/videointelligence');

// Instantiates a client
const video = Video.videoIntelligenceServiceClient();
const video = Video().videoIntelligenceServiceClient();

// The GCS filepath of the video to analyze
// const gcsUri = 'gs://my-bucket/my-video.mp4'
// const gcsUri = 'gs://my-bucket/my-video.mp4';

const request = {
inputUri: gcsUri,
features: ['LABEL_DETECTION']
};

// Detect labels in a video
// Detects labels in a video
video.annotateVideo(request)
.then((results) => {
const operation = results[0];
console.log('Waiting for operation to complete...');
return operation.promise();
})
.then((results) => {
// Get labels for first video
// Gets labels
const labels = results[0].annotationResults[0].labelAnnotations;
console.log('Labels:');
labels.forEach((label) => {
console.log('Label description:', label.description);
console.log('Locations:');
@@ -91,35 +93,83 @@ function analyzeLabels (gcsUri) {
.catch((err) => {
console.error('ERROR:', err);
});
// [END analyze_labels]
// [END analyze_labels_gcs]
}

function analyzeLabelsLocal (path) {
// [START analyze_labels_local]
// Imports the Google Cloud Video Intelligence library + Node's fs library
const Video = require('@google-cloud/videointelligence');
const fs = require('fs');

// Instantiates a client
const video = Video().videoIntelligenceServiceClient();

// The local filepath of the video to analyze
// const path = 'my-file.mp4';

// Reads a local video file and converts it to base64
const file = fs.readFileSync(path);
const inputContent = file.toString('base64');

// Constructs request
const request = {
inputContent: inputContent,
features: ['LABEL_DETECTION']
};

// Detects labels in a video
video.annotateVideo(request)
.then((results) => {
const operation = results[0];
console.log('Waiting for operation to complete...');
return operation.promise();
})
.then((results) => {
// Gets labels for first video
const labels = results[0].annotationResults[0].labelAnnotations;
console.log('Labels:');
labels.forEach((label) => {
console.log('Label description:', label.description);
console.log('Locations:');
label.locations.forEach((location) => {
console.log(`\tFrames ${location.segment.startTimeOffset} to ${location.segment.endTimeOffset}`);
});
});
})
.catch((err) => {
console.error('ERROR:', err);
});
// [END analyze_labels_local]
}

function analyzeShots (gcsUri) {
// [START analyze_shots]
// Imports the Google Cloud Video Intelligence library
const Video = require('@google-cloud/videointelligence').v1beta1();
const Video = require('@google-cloud/videointelligence');

// Instantiates a client
const video = Video.videoIntelligenceServiceClient();
const video = Video().videoIntelligenceServiceClient();

// The GCS filepath of the video to analyze
// const gcsUri = 'gs://my-bucket/my-video.mp4'
// const gcsUri = 'gs://my-bucket/my-video.mp4';

const request = {
inputUri: gcsUri,
features: ['SHOT_CHANGE_DETECTION']
};

// Detect camera shot changes
// Detects camera shot changes
video.annotateVideo(request)
.then((results) => {
const operation = results[0];
console.log('Waiting for operation to complete...');
return operation.promise();
})
.then((results) => {
// Get shot changes for first video
// Gets shot changes
const shotChanges = results[0].annotationResults[0].shotAnnotations;
console.log('Shot changes:');
shotChanges.forEach((shot, shotIdx) => {
console.log(`Scene ${shotIdx}:`);
console.log(`\tStart: ${shot.startTimeOffset}`);
@@ -132,29 +182,85 @@ function analyzeShots (gcsUri) {
// [END analyze_shots]
}

function analyzeSafeSearch (gcsUri) {
// [START analyze_safe_search]
// Imports the Google Cloud Video Intelligence library
const Video = require('@google-cloud/videointelligence');

// Instantiates a client
const video = Video().videoIntelligenceServiceClient();

// The GCS filepath of the video to analyze
// const gcsUri = 'gs://my-bucket/my-video.mp4';

const request = {
inputUri: gcsUri,
features: ['SAFE_SEARCH_DETECTION']
};

// Detects unsafe content
video.annotateVideo(request)
.then((results) => {
const operation = results[0];
console.log('Waiting for operation to complete...');
return operation.promise();
})
.then((results) => {
// Gets unsafe content
const safeSearchResults = results[0].annotationResults[0].safeSearchAnnotations;
console.log('Safe search results:');
safeSearchResults.forEach((result) => {
console.log(`Frame ${result.timeOffset}:`);
console.log(`\tAdult: ${result.adult}`);
console.log(`\tSpoof: ${result.spoof}`);
console.log(`\tMedical: ${result.medical}`);
console.log(`\tViolent: ${result.violent}`);
console.log(`\tRacy: ${result.racy}`);
});
})
.catch((err) => {
console.error('ERROR:', err);
});
// [END analyze_safe_search]
}

require(`yargs`) // eslint-disable-line
.demand(1)
.command(
`faces <gcsUri>`,
`Analyzes faces in a video using the Cloud Video Intelligence API.`,
`Analyzes faces in a video stored in Google Cloud Storage using the Cloud Video Intelligence API.`,
{},
(opts) => analyzeFaces(opts.gcsUri)
)
.command(
`shots <gcsUri>`,
`Analyzes shot angles in a video using the Cloud Video Intelligence API.`,
`Analyzes shot angles in a video stored in Google Cloud Storage using the Cloud Video Intelligence API.`,
{},
(opts) => analyzeShots(opts.gcsUri)
)
.command(
`labels <gcsUri>`,
`Labels objects in a video using the Cloud Video Intelligence API.`,
`labels-gcs <gcsUri>`,
`Labels objects in a video stored in Google Cloud Storage using the Cloud Video Intelligence API.`,
{},
(opts) => analyzeLabelsGCS(opts.gcsUri)
)
.command(
`labels-file <gcsUri>`,
`Labels objects in a video stored locally using the Cloud Video Intelligence API.`,
{},
(opts) => analyzeLabelsLocal(opts.gcsUri)
)
.command(
`safe-search <gcsUri>`,
`Detects adult content in a video stored in Google Cloud Storage.`,
{},
(opts) => analyzeLabels(opts.gcsUri)
(opts) => analyzeSafeSearch(opts.gcsUri)
)
.example(`node $0 faces gs://my-bucket/my-video.mp4`)
.example(`node $0 shots gs://my-bucket/my-video.mp4`)
.example(`node $0 labels gs://my-bucket/my-video.mp4`)
.example(`node $0 labels-gcs gs://my-bucket/my-video.mp4`)
.example(`node $0 labels-file my-video.mp4`)
.example(`node $0 safe-search gs://my-bucket/my-video.mp4`)
.wrap(120)
.recommendCommands()
.epilogue(`For more information, see https://cloud.google.com/video-intelligence/docs`)
2 changes: 1 addition & 1 deletion video/quickstart.js
@@ -47,7 +47,7 @@ video.annotateVideo(request)
// Gets faces for video from its annotations
const faces = annotations.faceAnnotations;
faces.forEach((face, faceIdx) => {
console.log('Thumbnail size:', face.thumbnail.buffer.length);
console.log('Thumbnail size:', face.thumbnail.length);
face.segments.forEach((segment, segmentIdx) => {
console.log(`Track ${segmentIdx} of face ${faceIdx}: frames ${segment.startTimeOffset} to ${segment.endTimeOffset}`);
});
Binary file added video/resources/cat.mp4
Binary file not shown.
23 changes: 18 additions & 5 deletions video/system-test/analyze.test.js
@@ -25,19 +25,32 @@ const cmd = `node analyze.js`;
const cwd = path.join(__dirname, `..`);

// analyze_faces
test(`should analyze faces`, async (t) => {
test(`should analyze faces in a GCS file`, async (t) => {
const output = await tools.runAsync(`${cmd} faces gs://nodejs-docs-samples/video/google_gmail.mp4`, cwd);
t.regex(output, /Thumbnail size: \d+/);
});

// analyze_labels
test(`should analyze labels`, async (t) => {
const output = await tools.runAsync(`${cmd} labels gs://nodejs-docs-samples/video/cat.mp4`, cwd);
// analyze_labels_gcs
test(`should analyze labels in a GCS file`, async (t) => {
const output = await tools.runAsync(`${cmd} labels-gcs gs://nodejs-docs-samples/video/cat.mp4`, cwd);
t.regex(output, /Label description: Whiskers/);
});

// analyze_labels_local
test(`should analyze labels in a local file`, async (t) => {
const output = await tools.runAsync(`${cmd} labels-file resources/cat.mp4`, cwd);
t.regex(output, /Label description: Whiskers/);
});

// analyze_shots
test(`should analyze shots`, async (t) => {
test(`should analyze shots in a GCS file`, async (t) => {
const output = await tools.runAsync(`${cmd} shots gs://nodejs-docs-samples/video/gbike_dinosaur.mp4`, cwd);
t.regex(output, /Scene 0:/);
});

// analyze_safe_search
test(`should analyze safe search results in a GCS file`, async (t) => {
const output = await tools.runAsync(`${cmd} safe-search gs://nodejs-docs-samples/video/google_gmail.mp4`, cwd);
t.regex(output, /Frame \d+/);
t.regex(output, /Spoof: \d+/);
});