From fa3fe2eede0c784bd94b71614755aa4604768b63 Mon Sep 17 00:00:00 2001 From: Jean-Yves Perrier Date: Mon, 8 Aug 2022 14:24:34 +0200 Subject: [PATCH] Use arrow functions for anonymous functions where possible (part #8) (#19191) * Use arrow functions * const * Spaces * Spaces * Spaces * spaces * spaces * Space * Add comment * spaces * syntax + comments * indent * indent + this * indent + this * const + for...of * Revert to innerHTML * clean * remove Hungarian notation + indent * let -> const * Indent * fix indent * shorten string * indent * indent * Fix indent * Fix indent * Fix indent * Fix indent * Fix indent * Fix indent * Fix indent * Fix indent * Fix indent * Fix indent + template strings * Fix indent * Fix indent * Fix indent * indent - Hungarian notation * Fix indentation * Fix spaces * Update index.md * Update files/en-us/web/api/web_audio_api/simple_synth/index.md Co-authored-by: Joshua Chen * Update files/en-us/web/api/web_audio_api/simple_synth/index.md Co-authored-by: Joshua Chen * Update files/en-us/web/api/web_audio_api/simple_synth/index.md Co-authored-by: Joshua Chen * Update files/en-us/web/api/web_audio_api/using_iir_filters/index.md Co-authored-by: Joshua Chen * Update files/en-us/web/api/web_audio_api/using_iir_filters/index.md Co-authored-by: Joshua Chen * Update files/en-us/web/api/web_audio_api/web_audio_spatialization_basics/index.md Co-authored-by: Joshua Chen * Update index.md * Update files/en-us/web/api/web_workers_api/using_web_workers/index.md Co-authored-by: Joshua Chen * Update files/en-us/web/api/web_speech_api/using_the_web_speech_api/index.md Co-authored-by: Joshua Chen * Update files/en-us/web/api/web_workers_api/using_web_workers/index.md Co-authored-by: Joshua Chen * Update files/en-us/web/api/web_workers_api/using_web_workers/index.md Co-authored-by: Joshua Chen * Update files/en-us/web/api/web_workers_api/using_web_workers/index.md Co-authored-by: Joshua Chen * Update files/en-us/web/api/webgl_api/by_example/hello_glsl/index.md Co-authored-by: Joshua Chen * Update files/en-us/web/api/websockets_api/writing_websocket_server/index.md Co-authored-by: Joshua Chen * Revert over-correction * Update files/en-us/web/api/webrtc_api/signaling_and_video_calling/index.md Co-authored-by: Joshua Chen * Remove extraneous brace * Update files/en-us/web/api/webrtc_api/signaling_and_video_calling/index.md Co-authored-by: Joshua Chen * Update files/en-us/web/api/webrtc_api/signaling_and_video_calling/index.md Co-authored-by: Joshua Chen * small improvement * Update files/en-us/web/api/window/location/index.md Co-authored-by: Joshua Chen * Remove Hungarian left-over * Update files/en-us/web/api/window/location/index.md Co-authored-by: Joshua Chen * Fix logs * add parentheses * Update index.md * Update index.md * Changes Co-authored-by: Joshua Chen --- files/en-us/web/api/touch/pagey/index.md | 2 +- files/en-us/web/api/touch/screenx/index.md | 2 +- files/en-us/web/api/touch/target/index.md | 2 +- .../touch_events/using_touch_events/index.md | 2 +- .../en-us/web/api/touchevent/altkey/index.md | 2 +- .../api/touchevent/changedtouches/index.md | 2 +- .../en-us/web/api/touchevent/touches/index.md | 2 +- files/en-us/web/api/touchlist/item/index.md | 2 +- files/en-us/web/api/touchlist/length/index.md | 2 +- .../en-us/web/api/userproximityevent/index.md | 2 +- files/en-us/web/api/vibration_api/index.md | 2 +- files/en-us/web/api/videotrack/label/index.md | 4 +- .../api/visualviewport/resize_event/index.md | 4 +- .../api/visualviewport/scroll_event/index.md | 4 +- 
.../vrdisplay/cancelanimationframe/index.md | 6 +- .../en-us/web/api/vrdisplay/depthfar/index.md | 2 +- .../web/api/vrdisplay/depthnear/index.md | 2 +- .../web/api/vrdisplay/exitpresent/index.md | 6 +- .../web/api/vrdisplay/getframedata/index.md | 6 +- .../en-us/web/api/vrdisplay/getpose/index.md | 6 +- files/en-us/web/api/vrdisplay/index.md | 6 +- .../web/api/vrdisplay/isconnected/index.md | 10 +- .../web/api/vrdisplay/ispresenting/index.md | 4 +- .../vrdisplay/requestanimationframe/index.md | 6 +- .../web/api/vrdisplay/requestpresent/index.md | 12 +- .../web/api/vrdisplay/resetpose/index.md | 2 +- .../web/api/vrdisplay/submitframe/index.md | 6 +- .../web/api/vrdisplaycapabilities/index.md | 2 +- .../web/api/vrdisplayevent/display/index.md | 2 +- files/en-us/web/api/vrdisplayevent/index.md | 2 +- .../web/api/vrdisplayevent/reason/index.md | 2 +- files/en-us/web/api/vreyeparameters/index.md | 6 +- files/en-us/web/api/vrfieldofview/index.md | 2 +- .../web/api/vrframedata/timestamp/index.md | 6 +- files/en-us/web/api/vrlayerinit/index.md | 8 +- .../en-us/web/api/vrstageparameters/index.md | 4 +- .../using_the_web_animations_api/index.md | 16 +- .../api/web_audio_api/best_practices/index.md | 15 +- .../api/web_audio_api/simple_synth/index.md | 46 +- .../web_audio_api/using_iir_filters/index.md | 82 +-- .../using_web_audio_api/index.md | 42 +- .../web_audio_spatialization_basics/index.md | 326 ++++++----- files/en-us/web/api/web_midi_api/index.md | 2 +- .../using_the_web_speech_api/index.md | 36 +- .../using_the_web_storage_api/index.md | 2 +- .../using_web_workers/index.md | 533 +++++++++--------- .../basic_2d_animation_example/index.md | 21 +- .../by_example/canvas_size_and_webgl/index.md | 15 +- .../by_example/detect_webgl/index.md | 18 +- .../webgl_api/by_example/hello_glsl/index.md | 37 +- .../hello_vertex_attributes/index.md | 56 +- .../by_example/raining_rectangles/index.md | 25 +- .../by_example/scissor_animation/index.md | 32 +- .../by_example/textures_from_code/index.md | 47 +- .../animating_textures_in_webgl/index.md | 4 +- .../tutorial/using_textures_in_webgl/index.md | 2 +- .../webgl_api/webgl_best_practices/index.md | 4 +- .../webgl_model_view_projection/index.md | 12 +- .../webgl_lose_context/losecontext/index.md | 2 +- .../restorecontext/index.md | 2 +- .../en-us/web/api/webglcontextevent/index.md | 2 +- .../webglcontextevent/statusmessage/index.md | 2 +- .../connect_peers/answer_a_call/index.md | 34 +- .../create_a_peer_connection/index.md | 2 +- .../connect_peers/creating_a_call/index.md | 24 +- .../connect_peers/end_a_call/index.md | 12 +- .../connect_peers/show_hide_html/index.md | 30 +- .../signaling_and_video_calling/index.md | 80 ++- .../web/api/webrtc_api/using_dtmf/index.md | 20 +- .../web/api/websocket/binarytype/index.md | 19 +- .../web/api/websocket/close_event/index.md | 2 +- .../web/api/websocket/error_event/index.md | 2 +- files/en-us/web/api/websocket/index.md | 4 +- .../web/api/websocket/message_event/index.md | 2 +- .../index.md | 6 +- .../writing_websocket_server/index.md | 69 ++- .../webvr_api/using_the_webvr_api/index.md | 102 ++-- .../using_vr_controllers_with_webvr/index.md | 59 +- .../web/api/webxr_device_api/inputs/index.md | 2 +- .../api/window/appinstalled_event/index.md | 4 +- .../window/beforeinstallprompt_event/index.md | 4 +- .../window/deviceorientation_event/index.md | 21 +- .../web/api/window/hashchange_event/index.md | 2 +- .../api/window/languagechange_event/index.md | 4 +- files/en-us/web/api/window/location/index.md | 121 ++-- 
.../web/api/window/popstate_event/index.md | 14 +- .../web/api/window/sessionstorage/index.md | 2 +- .../web/api/window/speechsynthesis/index.md | 2 +- .../window/unhandledrejection_event/index.md | 2 +- .../web/api/window/unload_event/index.md | 8 +- .../window/vrdisplayactivate_event/index.md | 4 +- .../api/window/vrdisplayblur_event/index.md | 4 +- .../window/vrdisplayconnect_event/index.md | 4 +- .../window/vrdisplaydeactivate_event/index.md | 4 +- .../window/vrdisplaydisconnect_event/index.md | 4 +- .../api/window/vrdisplayfocus_event/index.md | 4 +- .../vrdisplaypointerrestricted_event/index.md | 4 +- .../index.md | 4 +- .../vrdisplaypresentchange_event/index.md | 8 +- files/en-us/web/api/windowclient/index.md | 4 +- 100 files changed, 1096 insertions(+), 1124 deletions(-) diff --git a/files/en-us/web/api/touch/pagey/index.md b/files/en-us/web/api/touch/pagey/index.md index 9c4399ab99756d2..002505469c59022 100644 --- a/files/en-us/web/api/touch/pagey/index.md +++ b/files/en-us/web/api/touch/pagey/index.md @@ -39,7 +39,7 @@ are accessed via the event's {{domxref("TouchEvent.changedTouches")}} list. // Register a touchmove listeners for the 'source' element const src = document.getElementById("source"); -src.addEventListener('touchmove', function(e) { +src.addEventListener('touchmove', (e) => { // Iterate through the touch points that have moved and log each // of the pageX/Y coordinates. The unit of each coordinate is CSS pixels. for (let i = 0; i < e.changedTouches.length; i++) { diff --git a/files/en-us/web/api/touch/screenx/index.md b/files/en-us/web/api/touch/screenx/index.md index 2755fff3e56247d..21977ee005f2605 100644 --- a/files/en-us/web/api/touch/screenx/index.md +++ b/files/en-us/web/api/touch/screenx/index.md @@ -30,7 +30,7 @@ In following simple code snippet, we assume the user initiates multiple touch co // Register a touchstart listeners for the 'source' element const src = document.getElementById("source"); -src.addEventListener('touchstart', function(e) { +src.addEventListener('touchstart', (e) => { // Iterate through the touch points and log each screenX/Y coordinate. // The unit of each coordinate is CSS pixels. for (let i = 0; i < e.touches.length; i++) { diff --git a/files/en-us/web/api/touch/target/index.md b/files/en-us/web/api/touch/target/index.md index 0481ba436b90a6f..a3d52aa0ebe3e1c 100644 --- a/files/en-us/web/api/touch/target/index.md +++ b/files/en-us/web/api/touch/target/index.md @@ -29,7 +29,7 @@ In following simple code snippet, we assume the user initiates one or more touch // Register a touchmove listener for the 'source' element const src = document.getElementById("source"); -src.addEventListener('touchstart', function(e) { +src.addEventListener('touchstart', (e) => { // Iterate through the touch points that were activated // for this element. for (let i = 0; i < e.targetTouches.length; i++) { diff --git a/files/en-us/web/api/touch_events/using_touch_events/index.md b/files/en-us/web/api/touch_events/using_touch_events/index.md index eb866a6f97bc79a..993dba713db2e3d 100644 --- a/files/en-us/web/api/touch_events/using_touch_events/index.md +++ b/files/en-us/web/api/touch_events/using_touch_events/index.md @@ -79,7 +79,7 @@ Access the attributes of a touch point. 
```js // Create touchstart handler -someElement.addEventListener('touchstart', function(ev) { +someElement.addEventListener('touchstart', (ev) => { // Iterate through the touch points that were activated // for this element and process each event 'target' for (let i = 0; i < ev.targetTouches.length; i++) { diff --git a/files/en-us/web/api/touchevent/altkey/index.md b/files/en-us/web/api/touchevent/altkey/index.md index 4a2aec0b53a3f75..b8a0dc7ac8d26e6 100644 --- a/files/en-us/web/api/touchevent/altkey/index.md +++ b/files/en-us/web/api/touchevent/altkey/index.md @@ -30,7 +30,7 @@ This example illustrates how to access the {{domxref("TouchEvent")}} key modifie In following code snippet, the {{domxref("Element/touchstart_event", "touchstart")}} event handler logs the state of the event's modifier keys. ```js -someElement.addEventListener('touchstart', function(e) { +someElement.addEventListener('touchstart', (e) => { // Log the state of this event's modifier keys console.log(`altKey = ${e.altKey}`); console.log(`ctrlKey = ${e.ctrlKey}`); diff --git a/files/en-us/web/api/touchevent/changedtouches/index.md b/files/en-us/web/api/touchevent/changedtouches/index.md index 5ff2c80754db8f6..2ef69b2261583cc 100644 --- a/files/en-us/web/api/touchevent/changedtouches/index.md +++ b/files/en-us/web/api/touchevent/changedtouches/index.md @@ -33,7 +33,7 @@ This example illustrates the {{domxref("TouchEvent")}} object's {{domxref("Touch In following code snippet, the {{domxref("Element/touchmove_event", "touchmove")}} event handler iterates through the `changedTouches` list and prints the identifier of each touch point that changed since the last event. ```js -someElement.addEventListener('touchmove', function(e) { +someElement.addEventListener('touchmove', (e) => { // Iterate through the list of touch points that changed // since the last event and print each touch point's identifier. for (let i = 0; i < e.changedTouches.length; i++) { diff --git a/files/en-us/web/api/touchevent/touches/index.md b/files/en-us/web/api/touchevent/touches/index.md index ea8ce564b9d9fb0..8138e2c54cf67ca 100644 --- a/files/en-us/web/api/touchevent/touches/index.md +++ b/files/en-us/web/api/touchevent/touches/index.md @@ -42,7 +42,7 @@ that were activated and then invokes different handlers depending on the number points. ```js -someElement.addEventListener('touchstart', function(e) { +someElement.addEventListener('touchstart', (e) => { // Invoke the appropriate handler depending on the // number of touch points. 
switch (e.touches.length) { diff --git a/files/en-us/web/api/touchlist/item/index.md b/files/en-us/web/api/touchlist/item/index.md index 0e6124045e76c9a..32498e06f5ae1f5 100644 --- a/files/en-us/web/api/touchlist/item/index.md +++ b/files/en-us/web/api/touchlist/item/index.md @@ -45,7 +45,7 @@ This code example illustrates the use of the {{domxref("TouchList")}} interface' ```js const target = document.getElementById("target"); -target.addEventListener('touchstart', function(ev) { +target.addEventListener('touchstart', (ev) => { // If this touchstart event started on element target, // set touch to the first item in the targetTouches list; diff --git a/files/en-us/web/api/touchlist/length/index.md b/files/en-us/web/api/touchlist/length/index.md index 2a674030db5e60a..3580c91ab5e9acc 100644 --- a/files/en-us/web/api/touchlist/length/index.md +++ b/files/en-us/web/api/touchlist/length/index.md @@ -31,7 +31,7 @@ This code example illustrates the use of the {{domxref("TouchList")}} interface' ```js const target = document.getElementById("target"); -target.addEventListener('touchstart', function(ev) { +target.addEventListener('touchstart', (ev) => { // If this touchstart event started on element target, // set touch to the first item in the targetTouches list; diff --git a/files/en-us/web/api/userproximityevent/index.md b/files/en-us/web/api/userproximityevent/index.md index ba23f73fbbaf4ce..a87081b8e66bee7 100644 --- a/files/en-us/web/api/userproximityevent/index.md +++ b/files/en-us/web/api/userproximityevent/index.md @@ -25,7 +25,7 @@ The **`UserProximityEvent`** indicates whether a nearby physical object is prese ## Examples ```js -window.addEventListener('userproximity', function(event) { +window.addEventListener('userproximity', (event) => { if (event.near) { // let's power off the screen navigator.mozPower.screenEnabled = false; diff --git a/files/en-us/web/api/vibration_api/index.md b/files/en-us/web/api/vibration_api/index.md index 4f87410ce0d9468..e792b9f0c19f965 100644 --- a/files/en-us/web/api/vibration_api/index.md +++ b/files/en-us/web/api/vibration_api/index.md @@ -66,7 +66,7 @@ function stopVibrate() { // Start persistent vibration at given duration and interval // Assumes a number value is given function startPersistentVibrate(duration, interval) { - vibrateInterval = setInterval(function() { + vibrateInterval = setInterval(() => { startVibrate(duration); }, interval); } diff --git a/files/en-us/web/api/videotrack/label/index.md b/files/en-us/web/api/videotrack/label/index.md index 4180ff90b92e891..b0310e82d17960d 100644 --- a/files/en-us/web/api/videotrack/label/index.md +++ b/files/en-us/web/api/videotrack/label/index.md @@ -42,12 +42,12 @@ only allow certain track kinds through. ```js function getTrackList(el) { - let trackList = []; + const trackList = []; const wantedKinds = [ "main", "alternative", "commentary" ]; - el.videoTracks.forEach(function(track) { + el.videoTracks.forEach((track) => { if (wantedKinds.includes(track.kind)) { trackList.push({ id: track.id, diff --git a/files/en-us/web/api/visualviewport/resize_event/index.md b/files/en-us/web/api/visualviewport/resize_event/index.md index d92f640bf419a29..b4372e2caabc68b 100644 --- a/files/en-us/web/api/visualviewport/resize_event/index.md +++ b/files/en-us/web/api/visualviewport/resize_event/index.md @@ -33,7 +33,7 @@ A generic {{domxref("Event")}}.
You can use the `resize` event in an [`addEventListener`](/en-US/docs/Web/API/EventTarget/addEventListener) method: ```js -visualViewport.addEventListener('resize', function() { +visualViewport.addEventListener('resize', () => { // … }); ``` @@ -41,7 +41,7 @@ visualViewport.addEventListener('resize', function() { Or use the `onresize` event handler property: ```js -visualViewport.onresize = function() { +visualViewport.onresize = () => { // … }; ``` diff --git a/files/en-us/web/api/visualviewport/scroll_event/index.md b/files/en-us/web/api/visualviewport/scroll_event/index.md index 616c13ce2381c0e..abb05f7db129ec1 100644 --- a/files/en-us/web/api/visualviewport/scroll_event/index.md +++ b/files/en-us/web/api/visualviewport/scroll_event/index.md @@ -33,7 +33,7 @@ A generic {{domxref("Event")}}. You can use the `scroll` event in an [`addEventListener`](/en-US/docs/Web/API/EventTarget/addEventListener) method: ```js -visualViewport.addEventListener('scroll', function() { +visualViewport.addEventListener('scroll', () => { // … }); ``` @@ -41,7 +41,7 @@ visualViewport.addEventListener('scroll', function() { Or use the `onscroll` event handler property: ```js -visualViewport.onscroll = function() { +visualViewport.onscroll = () => { // … }; ``` diff --git a/files/en-us/web/api/vrdisplay/cancelanimationframe/index.md b/files/en-us/web/api/vrdisplay/cancelanimationframe/index.md index 9c2e3b2a46592a6..d3fb697dce5fc40 100644 --- a/files/en-us/web/api/vrdisplay/cancelanimationframe/index.md +++ b/files/en-us/web/api/vrdisplay/cancelanimationframe/index.md @@ -46,15 +46,15 @@ drawScene(); if(navigator.getVRDisplays) { console.log('WebVR 1.1 supported'); // Then get the displays attached to the computer - navigator.getVRDisplays().then(function(displays) { + navigator.getVRDisplays().then((displays) => { // If a display is available, use it to present the scene if(displays.length > 0) { vrDisplay = displays[0]; console.log('Display found'); // Starting the presentation when the button is clicked: It can only be called in response to a user gesture - btn.addEventListener('click', function() { + btn.addEventListener('click', () => { if(btn.textContent === 'Start VR display') { - vrDisplay.requestPresent([{ source: canvas }]).then(function() { + vrDisplay.requestPresent([{ source: canvas }]).then(() => { console.log('Presenting to WebVR display'); // Set the canvas size to the size of the vrDisplay viewport diff --git a/files/en-us/web/api/vrdisplay/depthfar/index.md b/files/en-us/web/api/vrdisplay/depthfar/index.md index 4508c75f07a8623..18f489eef8e4a05 100644 --- a/files/en-us/web/api/vrdisplay/depthfar/index.md +++ b/files/en-us/web/api/vrdisplay/depthfar/index.md @@ -32,7 +32,7 @@ It initial value is `10000.0`. ```js let vrDisplay; -navigator.getVRDisplays().then(function(displays) { +navigator.getVRDisplays().then((displays) => { vrDisplay = displays[0]; vrDisplay.depthNear = 1.0; vrDisplay.depthFar = 7500.0; diff --git a/files/en-us/web/api/vrdisplay/depthnear/index.md b/files/en-us/web/api/vrdisplay/depthnear/index.md index e0a70ec3fb34ad7..bff3259730dffa4 100644 --- a/files/en-us/web/api/vrdisplay/depthnear/index.md +++ b/files/en-us/web/api/vrdisplay/depthnear/index.md @@ -31,7 +31,7 @@ A double, representing the z-depth in meters; its initial value is `0.01`. 
```js let vrDisplay; -navigator.getVRDisplays().then(function(displays) { +navigator.getVRDisplays().then((displays) => { vrDisplay = displays[0]; vrDisplay.depthNear = 1.0; vrDisplay.depthFar = 7500.0; diff --git a/files/en-us/web/api/vrdisplay/exitpresent/index.md b/files/en-us/web/api/vrdisplay/exitpresent/index.md index 5358b54d0326e8b..bd57ebf782d8cc4 100644 --- a/files/en-us/web/api/vrdisplay/exitpresent/index.md +++ b/files/en-us/web/api/vrdisplay/exitpresent/index.md @@ -40,15 +40,15 @@ A promise that resolves once the presentation has ended. If the `VRDisplay` is n if(navigator.getVRDisplays) { console.log('WebVR 1.1 supported'); // Then get the displays attached to the computer - navigator.getVRDisplays().then(function(displays) { + navigator.getVRDisplays().then((displays) => { // If a display is available, use it to present the scene if(displays.length > 0) { vrDisplay = displays[0]; console.log('Display found'); // Starting the presentation when the button is clicked: It can only be called in response to a user gesture - btn.addEventListener('click', function() { + btn.addEventListener('click', () => { if(btn.textContent === 'Start VR display') { - vrDisplay.requestPresent([{ source: canvas }]).then(function() { + vrDisplay.requestPresent([{ source: canvas }]).then(() => { console.log('Presenting to WebVR display'); // Set the canvas size to the size of the vrDisplay viewport diff --git a/files/en-us/web/api/vrdisplay/getframedata/index.md b/files/en-us/web/api/vrdisplay/getframedata/index.md index 36285fc842f208a..d6a86feeea4c6f6 100644 --- a/files/en-us/web/api/vrdisplay/getframedata/index.md +++ b/files/en-us/web/api/vrdisplay/getframedata/index.md @@ -43,12 +43,12 @@ A boolean value — a value of `true` is returned if the {{domxref("VRFrameData" const frameData = new VRFrameData(); let vrDisplay; -navigator.getVRDisplays().then(function(displays) { +navigator.getVRDisplays().then((displays) => { vrDisplay = displays[0]; console.log('Display found'); // Starting the presentation when the button is clicked: It can only be called in response to a user gesture - btn.addEventListener('click', function() { - vrDisplay.requestPresent([{ source: canvas }]).then(function() { + btn.addEventListener('click', () => { + vrDisplay.requestPresent([{ source: canvas }]).then(() => { drawVRScene(); }); }); diff --git a/files/en-us/web/api/vrdisplay/getpose/index.md b/files/en-us/web/api/vrdisplay/getpose/index.md index b0b6470897097f9..44ae1a8643533a4 100644 --- a/files/en-us/web/api/vrdisplay/getpose/index.md +++ b/files/en-us/web/api/vrdisplay/getpose/index.md @@ -41,12 +41,12 @@ A {{domxref("VRPose")}} object. Once we have a reference to a {{domxref("VRDisplay")}} object, we can retrieve the {{domxref("VRPose")}} representing the current pose of the display. 
```js -if(navigator.getVRDisplays) { +if (navigator.getVRDisplays) { console.log('WebVR 1.1 supported'); // Then get the displays attached to the computer - navigator.getVRDisplays().then(function(displays) { + navigator.getVRDisplays().then((displays) => { // If a display is available, use it to present the scene - if(displays.length > 0) { + if (displays.length > 0) { vrDisplay = displays[0]; console.log('Display found'); diff --git a/files/en-us/web/api/vrdisplay/index.md b/files/en-us/web/api/vrdisplay/index.md index 6627c385718bb52..77e10b057fe4637 100644 --- a/files/en-us/web/api/vrdisplay/index.md +++ b/files/en-us/web/api/vrdisplay/index.md @@ -70,12 +70,12 @@ An array of all connected VR Devices can be returned by invoking the {{domxref(" ## Examples ```js -if(navigator.getVRDisplays) { +if (navigator.getVRDisplays) { console.log('WebVR 1.1 supported'); // Then get the displays attached to the computer - navigator.getVRDisplays().then(function(displays) { + navigator.getVRDisplays().then((displays) => { // If a display is available, use it to present the scene - if(displays.length > 0) { + if (displays.length > 0) { vrDisplay = displays[0]; // Now we have our VRDisplay object and can do what we want with it } diff --git a/files/en-us/web/api/vrdisplay/isconnected/index.md b/files/en-us/web/api/vrdisplay/isconnected/index.md index 28d5f44f1df4a2a..e41c1016e9ecb40 100644 --- a/files/en-us/web/api/vrdisplay/isconnected/index.md +++ b/files/en-us/web/api/vrdisplay/isconnected/index.md @@ -27,16 +27,16 @@ A boolean value; `true` means the display is connected; `false` means it isn't. ## Examples ```js -navigator.getVRDisplays().then(function(displays) { +navigator.getVRDisplays().then((displays) => { // If a display is available, use it to present the scene - if(displays.length > 0) { + if (displays.length > 0) { vrDisplay = displays[0]; // Starting the presentation when the button is clicked: It can only be called in response to a user gesture - btn.addEventListener('click', function() { + btn.addEventListener('click', () => { // Only request presentation if the display is still connected. - if(vrDisplay.isConnected) { - vrDisplay.requestPresent([{ source: canvas }]).then(function() { + if (vrDisplay.isConnected) { + vrDisplay.requestPresent([{ source: canvas }]).then(() => { // start rendering the app, etc. }); } else { diff --git a/files/en-us/web/api/vrdisplay/ispresenting/index.md b/files/en-us/web/api/vrdisplay/ispresenting/index.md index 0eccdc8b022a472..92b7d57b629b172 100644 --- a/files/en-us/web/api/vrdisplay/ispresenting/index.md +++ b/files/en-us/web/api/vrdisplay/ispresenting/index.md @@ -33,9 +33,9 @@ function onVRExitPresent () { // we weren't presenting.) if (!vrDisplay.isPresenting) return; - vrDisplay.exitPresent().then(function () { + vrDisplay.exitPresent().then(() => { // Nothing to do because we're handling things in onVRPresentChange. - }, function (err) { + }, (err) => { let errMsg = "exitPresent failed."; if (err && err.message) { errMsg += `
${err.message}`; diff --git a/files/en-us/web/api/vrdisplay/requestanimationframe/index.md b/files/en-us/web/api/vrdisplay/requestanimationframe/index.md index 0151234f1c3a298..3fff86eb5f98ac5 100644 --- a/files/en-us/web/api/vrdisplay/requestanimationframe/index.md +++ b/files/en-us/web/api/vrdisplay/requestanimationframe/index.md @@ -44,12 +44,12 @@ A long representing the handle of the `requestAnimationFrame()` call. This can t const frameData = new VRFrameData(); let vrDisplay; -navigator.getVRDisplays().then(function(displays) { +navigator.getVRDisplays().then((displays) => { vrDisplay = displays[0]; console.log('Display found'); // Starting the presentation when the button is clicked: It can only be called in response to a user gesture - btn.addEventListener('click', function() { - vrDisplay.requestPresent([{ source: canvas }]).then(function() { + btn.addEventListener('click', () => { + vrDisplay.requestPresent([{ source: canvas }]).then(() => { drawVRScene(); }); }); diff --git a/files/en-us/web/api/vrdisplay/requestpresent/index.md b/files/en-us/web/api/vrdisplay/requestpresent/index.md index 2ebb0ac5b983cf9..2c7663f2b3e9988 100644 --- a/files/en-us/web/api/vrdisplay/requestpresent/index.md +++ b/files/en-us/web/api/vrdisplay/requestpresent/index.md @@ -43,18 +43,18 @@ A promise that resolves once the presentation has begun. there are a number of r ## Examples ```js -if(navigator.getVRDisplays) { +if (navigator.getVRDisplays) { console.log('WebVR 1.1 supported'); // Then get the displays attached to the computer - navigator.getVRDisplays().then(function(displays) { + navigator.getVRDisplays().then((displays) => { // If a display is available, use it to present the scene - if(displays.length > 0) { + if (displays.length > 0) { vrDisplay = displays[0]; console.log('Display found'); // Starting the presentation when the button is clicked: It can only be called in response to a user gesture - btn.addEventListener('click', function() { - if(btn.textContent === 'Start VR display') { - vrDisplay.requestPresent([{ source: canvas }]).then(function() { + btn.addEventListener('click', () => { + if (btn.textContent === 'Start VR display') { + vrDisplay.requestPresent([{ source: canvas }]).then(() => { console.log('Presenting to WebVR display'); // Set the canvas size to the size of the vrDisplay viewport diff --git a/files/en-us/web/api/vrdisplay/resetpose/index.md b/files/en-us/web/api/vrdisplay/resetpose/index.md index 09e0f15de3b5e8e..845fa052b78448e 100644 --- a/files/en-us/web/api/vrdisplay/resetpose/index.md +++ b/files/en-us/web/api/vrdisplay/resetpose/index.md @@ -43,7 +43,7 @@ None ({{jsxref("undefined")}}). ```js // Assuming vrDisplay already contains a VRDisplay object, // and we have a + Press here to [verb goes here] the animation. + ``` ```html hidden -Your browser does not seem to support - HTML5 canvas. +Your browser does not seem to support canvases. ``` ```css hidden @@ -60,11 +57,11 @@ button { ``` ```js hidden -;(function(){ +;(() => { + "use strict"; ``` ```js -"use strict" window.addEventListener("load", setupAnimation, false); // Variables to hold the WebGL context, and the color and // position of animated squares. 
@@ -74,8 +71,7 @@ let position; function setupAnimation (evt) { window.removeEventListener(evt.type, setupAnimation, false); - if (!(gl = getRenderingContext())) - return; + if (!(gl = getRenderingContext())) return; gl.enable(gl.SCISSOR_TEST); gl.clearColor(color[0], color[1], color[2], 1.0); @@ -87,6 +83,7 @@ function setupAnimation (evt) { const button = document.querySelector("button"); let timer; + function startAnimation(evt) { button.removeEventListener(evt.type, startAnimation, false); button.addEventListener("click", stopAnimation, false); @@ -94,17 +91,19 @@ function setupAnimation (evt) { timer = setInterval(drawAnimation, 17); drawAnimation(); } + function stopAnimation(evt) { button.removeEventListener(evt.type, stopAnimation, false); button.addEventListener("click", startAnimation, false); document.querySelector("strong").textContent = "start"; clearInterval(timer); } + stopAnimation({type: "click"}); } // Variables to hold the size and velocity of the square. -let size = [60, 60]; +const size = [60, 60]; let velocity = 3.0; function drawAnimation () { gl.scissor(position[0], position[1], size[0] , size[1]); @@ -139,16 +138,13 @@ function getRenderingContext() { const canvas = document.querySelector("canvas"); canvas.width = canvas.clientWidth; canvas.height = canvas.clientHeight; - const gl = canvas.getContext("webgl") - || canvas.getContext("experimental-webgl"); + const gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl"); if (!gl) { const paragraph = document.querySelector("p"); - paragraph.textContent = "Failed to get WebGL context." - + "Your browser or device may not support WebGL."; + paragraph.textContent = "Failed. Your browser or device may not support WebGL."; return null; } - gl.viewport(0, 0, - gl.drawingBufferWidth, gl.drawingBufferHeight); + gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); return gl; diff --git a/files/en-us/web/api/webgl_api/by_example/textures_from_code/index.md b/files/en-us/web/api/webgl_api/by_example/textures_from_code/index.md index 846599d94feddee..17ec28ed1eec76e 100644 --- a/files/en-us/web/api/webgl_api/by_example/textures_from_code/index.md +++ b/files/en-us/web/api/webgl_api/by_example/textures_from_code/index.md @@ -20,18 +20,16 @@ This WebGL example provides a simple demonstration of procedural texturing with ## Drawing textures with code -{{EmbedLiveSample("Drawing_textures_with_code",660,425)}} +{{EmbedLiveSample("Drawing_textures_with_code", 660, 425)}} Texturing a point sprite with calculations done per-pixel in the fragment shader. ```html hidden -

-Texture from code. Simple demonstration
-  of procedural texturing
+Texture from code. Simple demonstration of procedural texturing
``` ```html hidden -Your browser does not seem to support - HTML5 canvas. +Your browser does not seem to support canvases. ``` ```css hidden @@ -85,27 +83,30 @@ void main() { ``` ```js hidden -;(function(){ +;(() => { + "use strict"; ``` ```js -"use strict" window.addEventListener("load", setupWebGL, false); + let gl; let program; + function setupWebGL (evt) { window.removeEventListener(evt.type, setupWebGL, false); - if (!(gl = getRenderingContext())) - return; + if (!(gl = getRenderingContext())) return; let source = document.querySelector("#vertex-shader").innerHTML; const vertexShader = gl.createShader(gl.VERTEX_SHADER); - gl.shaderSource(vertexShader,source); + gl.shaderSource(vertexShader, source); gl.compileShader(vertexShader); + source = document.querySelector("#fragment-shader").innerHTML const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER); - gl.shaderSource(fragmentShader,source); + gl.shaderSource(fragmentShader, source); gl.compileShader(fragmentShader); + program = gl.createProgram(); gl.attachShader(program, vertexShader); gl.attachShader(program, fragmentShader); @@ -117,8 +118,7 @@ function setupWebGL (evt) { if (!gl.getProgramParameter(program, gl.LINK_STATUS)) { const linkErrLog = gl.getProgramInfoLog(program); cleanup(); - document.querySelector("p").innerHTML = - `Shader program did not link successfully. Error log: ${linkErrLog}`; + document.querySelector("p").textContent = `Shader program did not link successfully. Error log: ${linkErrLog}`; return; } initializeAttributes(); @@ -137,11 +137,13 @@ function initializeAttributes() { } function cleanup() { -gl.useProgram(null); -if (buffer) - gl.deleteBuffer(buffer); -if (program) - gl.deleteProgram(program); + gl.useProgram(null); + if (buffer) { + gl.deleteBuffer(buffer); + } + if (program) { + gl.deleteProgram(program); + } } ``` @@ -150,16 +152,13 @@ function getRenderingContext() { const canvas = document.querySelector("canvas"); canvas.width = canvas.clientWidth; canvas.height = canvas.clientHeight; - const gl = canvas.getContext("webgl") - || canvas.getContext("experimental-webgl"); + const gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl"); if (!gl) { const paragraph = document.querySelector("p"); - paragraph.innerHTML = "Failed to get WebGL context." - + "Your browser or device may not support WebGL."; + paragraph.textContent = "Failed. 
Your browser or device may not support WebGL."; return null; } - gl.viewport(0, 0, - gl.drawingBufferWidth, gl.drawingBufferHeight); + gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); return gl; diff --git a/files/en-us/web/api/webgl_api/tutorial/animating_textures_in_webgl/index.md b/files/en-us/web/api/webgl_api/tutorial/animating_textures_in_webgl/index.md index 0189a75943748c4..f9b486dab068b16 100644 --- a/files/en-us/web/api/webgl_api/tutorial/animating_textures_in_webgl/index.md +++ b/files/en-us/web/api/webgl_api/tutorial/animating_textures_in_webgl/index.md @@ -35,12 +35,12 @@ function setupVideo(url) { // Waiting for these 2 events ensures // there is data in the video - video.addEventListener('playing', function() { + video.addEventListener('playing', () => { playing = true; checkReady(); }, true); - video.addEventListener('timeupdate', function() { + video.addEventListener('timeupdate', () => { timeupdate = true; checkReady(); }, true); diff --git a/files/en-us/web/api/webgl_api/tutorial/using_textures_in_webgl/index.md b/files/en-us/web/api/webgl_api/tutorial/using_textures_in_webgl/index.md index 051e0d99d62cf2a..cc7e9801bffa197 100644 --- a/files/en-us/web/api/webgl_api/tutorial/using_textures_in_webgl/index.md +++ b/files/en-us/web/api/webgl_api/tutorial/using_textures_in_webgl/index.md @@ -47,7 +47,7 @@ function loadTexture(gl, url) { pixel); const image = new Image(); - image.onload = function() { + image.onload = () => { gl.bindTexture(gl.TEXTURE_2D, texture); gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, srcFormat, srcType, image); diff --git a/files/en-us/web/api/webgl_api/webgl_best_practices/index.md b/files/en-us/web/api/webgl_api/webgl_best_practices/index.md index 4ecba06e884f816..53b60df9bb77e64 100644 --- a/files/en-us/web/api/webgl_api/webgl_best_practices/index.md +++ b/files/en-us/web/api/webgl_api/webgl_best_practices/index.md @@ -594,7 +594,7 @@ Demo: [Device pixel presnap](https://kdashg.github.io/misc/webgl/device-pixel-pr On supporting browsers (Chromium?), `ResizeObserver` can be used with `'device-pixel-content-box'` to request a callback that includes the true device pixel size of an element. This can be used to build an async-but-accurate function: ```js -window.getDevicePixelSize = window.getDevicePixelSize || async function(elem) { +window.getDevicePixelSize = window.getDevicePixelSize || (async (elem) => { await new Promise((fn_resolve) => { const observer = new ResizeObserver((entries) => { for (const cur of entries) { @@ -611,7 +611,7 @@ window.getDevicePixelSize = window.getDevicePixelSize || async function(elem) { }); observer.observe(elem, {box: 'device-pixel-content-box'}); }); -}; +}); ``` Please refer to [the specification](https://www.w3.org/TR/resize-observer/#resize-observer-interface) for more details. diff --git a/files/en-us/web/api/webgl_api/webgl_model_view_projection/index.md b/files/en-us/web/api/webgl_api/webgl_model_view_projection/index.md index f658abdeeae32a9..850e6356776e3a5 100644 --- a/files/en-us/web/api/webgl_api/webgl_model_view_projection/index.md +++ b/files/en-us/web/api/webgl_api/webgl_model_view_projection/index.md @@ -80,7 +80,7 @@ function WebGLBox() { Now we'll create a method to draw a box on the screen. ```js -WebGLBox.prototype.draw = function(settings) { +WebGLBox.prototype.draw = function (settings) { // Create some attribute data; these are the triangles that will end being // drawn to the screen. 
There are two that form a square. @@ -345,7 +345,7 @@ In this case, for every frame of the animation a series of scale, rotation, and The following code sample defines a method on the `CubeDemo` object that will create the model matrix. It uses custom functions to create and multiply matrices as defined in the [MDN WebGL](https://github.com/gregtatum/mdn-webgl) shared code. The new function looks like this: ```js -CubeDemo.prototype.computeModelMatrix = function(now) { +CubeDemo.prototype.computeModelMatrix = function (now) { //Scale down by 50% const scale = MDN.scaleMatrix(0.5, 0.5, 0.5); @@ -510,7 +510,7 @@ Which is exactly the same as the `(z + 1) * scaleFactor` that we used in the pre In the box demo, an additional `computeSimpleProjectionMatrix()` method is added. This is called in the `draw()` method and has the scale factor passed to it. The result should be identical to the last example: ```js -CubeDemo.prototype.computeSimpleProjectionMatrix = function(scaleFactor) { +CubeDemo.prototype.computeSimpleProjectionMatrix = function (scaleFactor) { this.transforms.projection = [ 1, 0, 0, 0, 0, 1, 0, 0, @@ -572,7 +572,7 @@ The reason to flip the z axis is that the clip space coordinate system is a left Let's take a look at a `perspectiveMatrix()` function, which computes the perspective projection matrix. ```js -MDN.perspectiveMatrix = function(fieldOfViewInRadians, aspectRatio, near, far) { +MDN.perspectiveMatrix = function (fieldOfViewInRadians, aspectRatio, near, far) { const f = 1.0 / Math.tan(fieldOfViewInRadians / 2); const rangeInv = 1 / (near - far); @@ -599,7 +599,7 @@ The four parameters into this function are: In the latest version of the box demo, the `computeSimpleProjectionMatrix()` method has been replaced with the `computePerspectiveMatrix()` method. ```js -CubeDemo.prototype.computePerspectiveMatrix = function() { +CubeDemo.prototype.computePerspectiveMatrix = function () { const fieldOfViewInRadians = Math.PI * 0.5; const aspectRatio = window.innerWidth / window.innerHeight; const nearClippingPlaneDistance = 1; @@ -658,7 +658,7 @@ Unlike the model matrix, which directly transforms the model vertices, the view The following `computeViewMatrix()` method animates the view matrix by moving it in and out, and left and right. 
```js -CubeDemo.prototype.computeViewMatrix = function(now) { +CubeDemo.prototype.computeViewMatrix = function (now) { const moveInAndOut = 20 * Math.sin(now * 0.002); const moveLeftAndRight = 15 * Math.sin(now * 0.0017); diff --git a/files/en-us/web/api/webgl_lose_context/losecontext/index.md b/files/en-us/web/api/webgl_lose_context/losecontext/index.md index 510b7df159dd899..77eb08c38312950 100644 --- a/files/en-us/web/api/webgl_lose_context/losecontext/index.md +++ b/files/en-us/web/api/webgl_lose_context/losecontext/index.md @@ -43,7 +43,7 @@ event: const canvas = document.getElementById('canvas'); const gl = canvas.getContext('webgl'); -canvas.addEventListener('webglcontextlost', function(e) { +canvas.addEventListener('webglcontextlost', (e) => { console.log(e); }, false); diff --git a/files/en-us/web/api/webgl_lose_context/restorecontext/index.md b/files/en-us/web/api/webgl_lose_context/restorecontext/index.md index f0c4f2258310e57..999a9e4c5d69798 100644 --- a/files/en-us/web/api/webgl_lose_context/restorecontext/index.md +++ b/files/en-us/web/api/webgl_lose_context/restorecontext/index.md @@ -46,7 +46,7 @@ event: const canvas = document.getElementById('canvas'); const gl = canvas.getContext('webgl'); -canvas.addEventListener('webglcontextrestored', function(e) { +canvas.addEventListener('webglcontextrestored', (e) => { console.log(e); }, false); diff --git a/files/en-us/web/api/webglcontextevent/index.md b/files/en-us/web/api/webglcontextevent/index.md index dae03840d8029d5..13c24cc5ee8d3c0 100644 --- a/files/en-us/web/api/webglcontextevent/index.md +++ b/files/en-us/web/api/webglcontextevent/index.md @@ -35,7 +35,7 @@ With the help of the {{domxref("WEBGL_lose_context")}} extension, you can simula const canvas = document.getElementById('canvas'); const gl = canvas.getContext('webgl'); -canvas.addEventListener('webglcontextlost', function(e) { +canvas.addEventListener('webglcontextlost', (e) => { console.log(e); }, false); diff --git a/files/en-us/web/api/webglcontextevent/statusmessage/index.md b/files/en-us/web/api/webglcontextevent/statusmessage/index.md index 8915fcf076d02c3..151a0d18b0698bf 100644 --- a/files/en-us/web/api/webglcontextevent/statusmessage/index.md +++ b/files/en-us/web/api/webglcontextevent/statusmessage/index.md @@ -23,7 +23,7 @@ The `statusMessage` property can contain a platform dependent string with detail const canvas = document.getElementById('canvas'); const gl = canvas.getContext('webgl'); -canvas.addEventListener('webglcontextcreationerror', function(e) { +canvas.addEventListener('webglcontextcreationerror', (e) => { console.log(`WebGL context creation failed: ${e.statusMessage || 'Unknown error'}`); }, false); ``` diff --git a/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/answer_a_call/index.md b/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/answer_a_call/index.md index b4d5e0c5a6aa588..605ef9c8a8dde66 100644 --- a/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/answer_a_call/index.md +++ b/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/answer_a_call/index.md @@ -12,8 +12,8 @@ Now our users can make a call, but they can't answer one. Let's add the next pie 1. The peerJS framework makes the `.on('call')` event available to use so let's use it here. 
Add this to the bottom of `script.js`: ```js - peer.on('call', function(call) { - const answerCall = confirm("Do you want to answer?") + peer.on('call', (call) => { + const answerCall = confirm("Do you want to answer?") }); ``` @@ -26,20 +26,20 @@ Now our users can make a call, but they can't answer one. Let's add the next pie 2. Let's flesh out this event listener. Update it as follows: ```js - peer.on('call', function(call) { - const answerCall = confirm("Do you want to answer?") - - if(answerCall){ - call.answer(window.localStream) // A - showConnectedContent(); // B - call.on('stream', function(stream) { // C - window.remoteAudio.srcObject = stream; - window.remoteAudio.autoplay = true; - window.peerStream = stream; - }); - } else { - console.log("call denied"); // D - } + peer.on('call', (call) => { + const answerCall = confirm("Do you want to answer?") + + if (answerCall){ + call.answer(window.localStream) // A + showConnectedContent(); // B + call.on('stream', (stream) => { // C + window.remoteAudio.srcObject = stream; + window.remoteAudio.autoplay = true; + window.peerStream = stream; + }); + } else { + console.log("call denied"); // D + } }); ``` @@ -47,7 +47,7 @@ Now our users can make a call, but they can't answer one. Let's add the next pie - `call.answer(window.localStream)`: if `answerCall` is `true`, you'll want to call peerJS's `answer()` function on the call to create an answer, passing it the local stream. - `showCallContent`: Similar to what you did in the call button event listener, you want to ensure the person being called sees the correct HTML content. - - Everything in the `call.on('stream', function(){ }` block is exactly the same as it is in call button's event listener. The reason you need to add it here too is so that the browser is also updated for the person answering the call. + - Everything in the `call.on('stream', () => { }` block is exactly the same as it is in call button's event listener. The reason you need to add it here too is so that the browser is also updated for the person answering the call. - If the person denies the call, we're just going to log a message to the console. 3. The code you have now is enough for you to create a call and answer it. Refresh your browsers and test it out. You'll want to make sure that both browsers have the console open or else you won't get the prompt to answer the call. Click call, submit the peer ID for the other browser and then answer the call. The final page should look like this: diff --git a/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/create_a_peer_connection/index.md b/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/create_a_peer_connection/index.md index 4e9b647ea3de691..e4d118c28888942 100644 --- a/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/create_a_peer_connection/index.md +++ b/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/create_a_peer_connection/index.md @@ -33,7 +33,7 @@ Next, you want to ensure your users have a way of connecting with their peers. I 3. When a connection is created, let's use the PeerJS framework's `on('connection')` to set the remote peer's ID and open the connection. The function for this listener accepts a `connection` object which is an instance of the `DataConnection` object (which is a wrapper around WebRTC's [`RTCDataChannel`](/en-US/docs/Web/API/RTCDataChannel)); within this function you'll want to assign it to a variable. 
Again you'll want to create the variable outside of the function so that you can assign it later. Add the following below your previous code: ```js - peer.on('connection', function(connection){ + peer.on('connection', (connection) => { conn = connection; }); ``` diff --git a/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/creating_a_call/index.md b/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/creating_a_call/index.md index 068062fde5da399..6a60e13c1114ab2 100644 --- a/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/creating_a_call/index.md +++ b/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/creating_a_call/index.md @@ -18,24 +18,24 @@ Exciting times — now you're going to give your users the ability to create cal 2. When a caller clicks "Call" you'll want to ask them for the peer ID of the peer they want to call (which we will store in the `code` variable in `getStreamCode()`) and then you'll want to create a connection with that code. Add the following below your previous code: ```js - callBtn.addEventListener('click', function(){ - getStreamCode(); - connectPeers(); - const call = peer.call(code, window.localStream); // A - - call.on('stream', function(stream) { // B - window.remoteAudio.srcObject = stream; // C - window.remoteAudio.autoplay = true; // D - window.peerStream = stream; //E - showConnectedContent(); //F }); - }) + callBtn.addEventListener('click', () => { + getStreamCode(); + connectPeers(); + const call = peer.call(code, window.localStream); // A + + call.on('stream', (stream) => { // B + window.remoteAudio.srcObject = stream; // C + window.remoteAudio.autoplay = true; // D + window.peerStream = stream; //E + showConnectedContent(); //F }); + }) }) ``` Let's walk through this code: - `const call = peer.call(code, window.localStream)`: This will create a call with the `code` and `window.localStream` we've previously assigned. Note that the `localStream` will be the user's `localStream`. So for caller A it'll be their stream & for B, their own stream. - - `call.on('stream', function(stream) {` : peerJS gives us a `stream` event which you can use on the `call` that you've created. When a call starts streaming, you need to ensure that the remote stream coming from the call is assigned to the correct HTML elements and window, this is where you'll do that. + - `call.on('stream', (stream) => {` : peerJS gives us a `stream` event which you can use on the `call` that you've created. When a call starts streaming, you need to ensure that the remote stream coming from the call is assigned to the correct HTML elements and window, this is where you'll do that. - The anonymous function takes a `MediaStream` object as an argument, which you then have to set to your window's HTML like you've done before. Here we get your remote `