[discovery] "GO" Channels have been discontinued #14954
Comments
Thanks @remitamine! Free videos are working great. The *GO channels all redirect to the "regular" sites now. |
Would it be possible to use a --ap-mso login instead of cookies for the Discovery sites? I was constantly having issues keeping my cookies up to date. |
Can I provide someone with a cookies file to work on getting the non-free episodes? |
@StevenDTX I have found that the name of the cookie varies, even when logged in. So far I have only seen

```js
let cookies = document.cookie.split(';').map(function(x) {
    return x.trim().split(/(=)/);
}).reduce(function(a, b) {
    a[b[0]] = a[b[0]] ? a[b[0]] + ', ' + b.slice(2).join('') : b.slice(2).join('');
    return a;
}, {});
let token;
for(let i in cookies) {
    if(cookies[i].substr(0, 1) == '%') {
        let temp = JSON.parse(decodeURIComponent(decodeURIComponent(cookies[i])));
        let longest = 0;
        for(let j in temp) {
            if(temp[j].length > longest) {
                token = temp[j];
                longest = temp[j].length;
            }
        }
        break;
    }
}
```

From there I fetch the m3u8 link:

```js
fetch('https://api.discovery.com/v1/streaming/video/' + video.id, {
    headers: {
        'authorization': 'Bearer ' + token,
    },
}).then(function(result) {
    return result.json();
}).then(function(json) {
    // json.streamUrl is the episode's master m3u8
});
```

This method works fine even on the free episodes. I would say the current method (grabbing an anonymous token on every download) from cb0c231 is equally suited to the free videos. |
I apologize, as a lot of what you said is a bit over my head. Is the first section of code the userscript you run? In, I assume, Tampermonkey or something? |
@StevenDTX Exactly |
Thanks a lot @cookieguru !! I was able to get the script installed and in the Firefox console I get a link that points to https://content-ausc4.uplynk.com/444fe784b93347829dce878e052b952d/i.m3u8. If I expand that, I get the full link with authorization and stuff. I am actually able to download the video, in 1080p!! The free videos are only getting downloaded in 720p. It's not an automated process, but I only need a few shows a week. |
@StevenDTX We have the same process ;) I'm hoping someone more fluent in Python can help integrate that methodology into the existing extractor. AFAIK the extractor as it is will work fine; it just doesn't know how to get to the playlist file containing all the formats. |
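For anyone looking at the Python side, here is a minimal sketch of that same API call using only the standard library. The endpoint and the streamUrl field come from the snippet above; the function name and the assumption that you already have the bearer token and video id in hand are illustrative, not part of the existing extractor.

```python
# Rough Python equivalent of the userscript's API request (illustrative only).
# Assumes `token` was pulled from the browser cookies as described above and
# `video_id` comes from the page data; neither step is shown here.
import json
import urllib.request

def get_master_m3u8(video_id, token):
    req = urllib.request.Request(
        'https://api.discovery.com/v1/streaming/video/%s' % video_id,
        headers={'Authorization': 'Bearer %s' % token})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # streamUrl points at the master m3u8 that lists every format
    return data['streamUrl']
```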
@cookieguru I understand your first code snippet is a userscript. I put that into a script with @grant none. However, I don't understand what to do with your second code snippet; do you mind expanding? Pasting that second snippet into the script along with the first snippet causes an execution error. I noticed @StevenDTX mention getting a link in the console, but I see no console output code, and pasting the second snippet of code into the console directly yields a similar execution error about video not being defined. I see there's some sort of working method here but it's just a little beyond my understanding. Hoping you'll be willing to help. I miss being able to grab shows from discovery! I used to use the cookies/directv command but they stopped working a while ago. |
@Allmight3 The first script iterates over the browser's cookies and extracts the authorization token that is needed to perform a request to Discovery's API to get the link to the video playlist (with multiple formats). The second script is missing context and wasn't meant to be copy/pastable; it was just an example of how to use the authorization token. Since there's obviously a desire for others to use this before the changes can be worked into youtube-dl, I'll post the full user script here:

```js
// ==UserScript==
// @name        Science Channel Go/Discovery Go
// @namespace   https://github.com/violentmonkey/violentmonkey
// @version     1.0
// @author      https://github.com/cookieguru
// @match       https://www.discovery.com/*
// @match       https://www.sciencechannel.com/*
// @grant       none
// ==/UserScript==
(function() {
    'use strict';
    let video;
    __reactTransmitPacket.layout[window.location.pathname].contentBlocks.forEach((block) => {
        if(block.type === 'video') {
            video = block.content.items[0];
        }
    });
    let cookies = document.cookie.split(';').map(function(x) {
        return x.trim().split(/(=)/);
    }).reduce(function(a, b) {
        a[b[0]] = a[b[0]] ? a[b[0]] + ', ' + b.slice(2).join('') : b.slice(2).join('');
        return a;
    }, {});
    let token;
    for(let i in cookies) {
        if(cookies[i].substr(0, 1) == '%') {
            let temp = JSON.parse(decodeURIComponent(decodeURIComponent(cookies[i])));
            let longest = 0;
            for(let j in temp) {
                if(temp[j].length > longest) {
                    token = temp[j];
                    longest = temp[j].length;
                }
            }
            break;
        }
    }
    let style = document.createElement('style');
    style.innerHTML = '#react-tooltip-lite-instace-3, #react-tooltip-lite-instace-4, #react-tooltip-lite-instace-5 { display:none; }';
    document.head.appendChild(style);
    fetch('https://api.discovery.com/v1/streaming/video/' + video.id, {
        headers: {
            'authorization': 'Bearer ' + token,
        },
    }).then(function(result) {
        return result.json();
    }).then(function(json) {
        document.body.innerHTML = "'S" + ('0' + video.season.number).slice(-2) + 'E' + ('0' + video.episodeNumber).slice(-2) + ' ' + video.name.replace(/'/g, '') + "' => '" + json.streamUrl + "',";
    });
})();
```

Note that this just sends you to an m3u8 which contains links to the other m3u8s of different formats. You'll have to visit the file linked in the browser and figure out which format you want to download. I paste this line into another (non-browser based) script that does it in batches. The TL;DR of that script is to grab the m3u8 link to the resolution you want and pass that to ffmpeg: `ffmpeg -i "http://www.example.com/1920x1080.m3u8" -acodec copy -bsf:a aac_adtstoasc -vcodec copy "filename.mkv"` |
@cookieguru Thank you! That worked perfectly and was easy to follow. I have successfully downloaded my show in 1080p. My family will enjoy this. I appreciate your time and effort. |
Thanks so much for working on this. The userscript you posted works with Greasemonkey and I am able to get the video. I compiled youtube-dl with your discovery.py commit. But when I try to get a video from the Discovery site I get an error, both for free videos and videos that require a login cookie.
I compiled it multiple times to make sure I didn't make a mistake, but still no luck. I will wait for the commit to be committed and perhaps the precompiled youtube-dl will work for me. Until then I will use the userscript. Thanks again. |
@Mr-Jake That indicates that something from |
@cookieguru The reason I use an older version is because I had a conflict getting py2exe working with 3.x. I didn't think it would be an issue since the youtube-dl documentation says 2.6, 2.7, or 3.2+ can be used. |
Semi-related question: does youtube-dl generate the requisite json that gets output by the --write-info-json option, or is that json info transmitted as-is by the player interface? I ask because, while the browser userscript @cookieguru posted works swimmingly to get the m3u8 link, it's obviously missing both the metadata (which can be reconstructed via the page dump, thankfully) and the link to the SCC and XML/TTML subtitles (which can't, unfortunately; those get served by a completely different URL). If the contents of the file output by --write-info-json are transmitted by the website, all the right data is there and the subtitles would still be grabbable with only minimal tweaking to the userscript, right? |
@Mr-Jake I just pushed a new commit that should work with 2.6+. Could you try it again? It seems to (still) work OK for free videos, but I'm seeing some HTTP 403 errors when I log in with ap-mso.

@Nii-90 The video metadata comes from the page itself, that is, the URL that you pass to youtube-dl to initiate the download. The subtitles come from the stream metadata; IIRC they will be near the top of the m3u8 file of your chosen format. |
They aren't. Neither the 6 KB preplay playlist nor the large segment playlist for a particular resolution has the link to the vtt or xml/ttml file (I thought it was SCC because I was confusing Science Channel with Fox, since they also use Uplynk and I have to use similar script manipulation to restore the chapter marks there too). grep didn't find it, checking it visually in a text editor I couldn't see it, and the fusionddmcdn domain that they come from is not contained in the m3u8. The m3u8 only has the uplynk urls the video data is served from. Using --write-info-json on one of the free videos on Science Channel, and then parsing the result (the actual URLs redacted here for paranoia):
Running a grep for fusionddmcdn (or vtt or ttml) on either the preplay or segment/resolution-specific m3u8 yields nothing. |
@cookieguru But when I include --cookies for a login video, I get:
EDIT: In your commit description I see you mentioned eosAf and eosAn. Not sure exactly what that is, but when I looked at my cookie file I have eosAd and eosAf. |
@Nii-90 According to discoverygo.py#L69 that's where they come from. I don't use subs so I can't speak to when that last worked. Maybe things have changed since the switchover to Uplynk and/or the switch to OAuth for getting the stream URLs. If you paste the six lines starting with and including

@Mr-Jake The point of the commit was to eliminate the need for

Regardless, though, I can't get this to work on authenticated videos. I think what is happening is that it's not logging in before getting the token. If anyone can point me in the direction of an extractor that won't even run |
youtube-dl can get the subs from the free videos, so I think it's just that the authentication is getting in the way. Speaking of, shouldn't discovery.py (or discoverygo.py) be importing the adobepass module to streamline handling the auth stuff? I didn't think ap-mso/ap-username/ap-password parameters would work for a particular site without the extractor for that site using AdobePassIE.
The chapter marks for Uplynk-based sites don't actually exist in a way that youtube-dl is set up to parse, but they can be re-derived from scratch by parsing the m3u8. Every single time #UPLYNK-SEGMENT or #EXT-X-DISCONTINUITY appears in the resolution-specific m3u8 file, that's a break in the video stream, usually for the insertion of advertisements, which occurs at the same boundaries as the natural chapter segments. I simply whipped up a bash script that automates splitting the big m3u8 apart into child m3u8s; I then download the individual segments using a for loop, and mkvtoolnix can generate chapters at append boundaries (for speed/size purposes I only append the audio track back together in mkvtoolnix, and then dump the chapter info from it using ffmpeg). The regular metadata and the chapter info can then be merged into a single ffmetadata file and used when the individual segments get concatenated by ffmpeg (in two steps, as opposed to youtube-dl taking three steps to do the same things*).

*Youtube-DL currently:
vs.
|
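Here is a minimal sketch of the playlist-splitting idea from the comment above, in Python rather than bash for readability. It only derives chapter start times by treating the discontinuity tags as break points; the playlist path and the use of #EXTINF durations for the timing are assumptions, and the actual download/merge steps are not shown.

```python
# Sketch: derive chapter start times from a resolution-specific Uplynk m3u8
# by treating #UPLYNK-SEGMENT / #EXT-X-DISCONTINUITY lines as break points.
import re

def chapter_starts(playlist_path):
    starts, elapsed = [0.0], 0.0
    with open(playlist_path) as f:
        for line in f:
            line = line.strip()
            if line.startswith('#EXTINF:'):
                # "#EXTINF:<duration>," -> accumulate running playback time
                elapsed += float(re.match(r'#EXTINF:([\d.]+)', line).group(1))
            elif line.startswith(('#UPLYNK-SEGMENT', '#EXT-X-DISCONTINUITY')):
                if elapsed not in starts:  # skip a marker at the very start
                    starts.append(elapsed)
    return starts
```

The resulting list could then be written into an ffmetadata chapters file before the concatenation step described above.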
@Nii-90 This is just what I was looking for. I thought youtube-dl would automatically do the login stuff when the various ap switches were passed. If you have some additions to my PR to make this happen, I'm all ears; otherwise I'll look into it when I have some free time in the next few days. |
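For context, extractors normally opt in to those ap-* switches by inheriting from the Adobe Pass base class. Below is a hedged sketch of what that wiring usually looks like; the class and method names are recalled from youtube-dl's adobepass.py and other extractors, and the requestor id, resource fields, and extractor name are placeholders, so treat the exact signatures as something to verify against the source rather than a finished patch.

```python
# Hedged sketch of an extractor that uses Adobe Pass authentication.
# _get_mvpd_resource/_extract_mvpd_auth are from youtube_dl/extractor/adobepass.py
# (names from memory); the 'XXX' placeholders must be replaced with real values.
from .adobepass import AdobePassIE


class DiscoveryLikeIE(AdobePassIE):  # hypothetical extractor name
    def _real_extract(self, url):
        video_id = self._match_id(url)
        # Build the MVPD resource string and fetch the short-lived auth token
        resource = self._get_mvpd_resource('XXX', 'XXX title', video_id, None)
        auth = self._extract_mvpd_auth(url, video_id, 'XXX', resource)
        # ...then include `auth` when requesting the stream URL from the API...
```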
Has there been any update on this? I too am passing cookies.txt but it just complains with "you should use --cookies". Thanks |
@halolordkiller3 Cookies won't work as they aren't used to grab videos any more. #15455 still needs the |
@Mr-Jake All tabs work for me on Investigation Discovery - Official Site. I tried it on both Google Chrome 65.0.3325.181 (Official Build) (64-bit) and Firefox 59.0.1 (64-bit). @StevenDTX - Did you download from Investigation Discovery - Official Site? |
@cookieguru Both Discovery and History appear to be using the same cable provider login authentication for Comcast. From the code changes I compiled, I got youtube-dl to authenticate and download videos from History. My plan is to debug the authentication of both sites to determine why Discovery fails and why History succeeds. The Pull Request that I compiled that fixed authentication with History is #11080 by |
I was just wondering how youtube-dl is coming along with support for the Discovery sites? |
I tried to fix the authentication for Discovery, but in the end did not have any luck. While I was attempting to fix it, I found a solution to authenticate A&E networks (History Channel, etc.) for the Comcast login. I was also successful at restoring HTTP 720p downloads for A&E networks. I'm not sure why the extractor was recently changed to HLS since HTTP is still available; HTTP is better since no remux is needed. --cookies does not work for any site I try, while I used to use it all the time in the past. Since it is broken for all sites I tried, I wonder if a commit broke it. When I have time I will look back to see if a commit is to blame. |
@Mr-Jake Thank you for your effort. --cookies doesn't work for me either; however, it did in the past. I use the DirecTV login, so I don't know if youtube-dl will work for me with A&E networks (History Channel, etc.) or TLC. |
I can only speak for the Discovery sites but can say with 100% certainty that the reason it no longer works has nothing to do with youtube-dl and everything to do with Discovery's site. Previously, Discovery sent the URL to the playlist file with the webpage on all authenticated requests. Now the URL to the playlist is hidden behind a separate endpoint that the browser requests when the page is first loaded in the browser. This requires a separate set of cookies that is unrelated to the Adobe Pass cookies, and you have to be authenticated to get the cookies that the endpoint needs. |
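To make that flow concrete, here is a tiny hypothetical sketch of the two-step lookup being described. The token endpoint path is a placeholder (the actual URL is not given in this thread); only the overall shape, cookies from an authenticated browser session first, then a bearer-token request to the streaming API, reflects the comment above.

```python
# Hypothetical illustration: the playlist URL is no longer in the page HTML,
# so an extra, cookie-authenticated request is needed before the streaming API
# will reveal the m3u8. 'TOKEN_ENDPOINT_PLACEHOLDER' is not a real path.
import http.cookiejar
import urllib.request

def fetch_stream_info(video_id, cookies_txt):
    jar = http.cookiejar.MozillaCookieJar(cookies_txt)
    jar.load()  # the same Netscape-format cookies.txt people pass to --cookies
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    # Step 1: an authenticated session is required just to obtain the token.
    token = opener.open('https://www.discovery.com/TOKEN_ENDPOINT_PLACEHOLDER').read().decode()
    # Step 2: only with that token does the API hand back the stream URL.
    req = urllib.request.Request(
        'https://api.discovery.com/v1/streaming/video/' + video_id,
        headers={'Authorization': 'Bearer ' + token})
    return opener.open(req).read()
```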
Anyone know how to get ffmpeg to download from links stored in a txt file? |
@hemps37 http://lmgtfy.com/?q=ffmpeg+download+from+links+stored+in+a+txt+file This issue is only for discussion related to fixing youtube-dl. It is not an ffmpeg support forum. |
Bumping this. I wish I could help. I'm having the exact same problem with some TLC shows, since TLC is a Discovery site. What's odd is that some older episodes work fine, but it seems to be newer episodes of shows that have this issue where youtube-dl asks for cookies and Adobe Pass isn't working. |
Thanks, @remitamine. It works great with cookies. |
@cookieguru Hmm, I think Discovery changed the site again; I can't seem to get the playlist... Maybe it's me doing it wrong? |
@beren12 I checked a couple of episodes and they still load fine for me |
I used the Tampermonkey script to download the full URL for the main manifest M3U8. I then replaced the beginning portion with i.m3u8 (1080p). I then took the full i.m3u8 URL, and the download fails in both FFmpeg and Streamlink due to the lack of authorization (403) for check2, just like #14954 (comment). Has Discovery defeated us? |
@besweeet youtube-dl has been updated; why are you still using the userscript? |
@cookieguru youtube-dl, when just providing a URL (example: https://www.sciencechannel.com/tv-shows/outrageous-acts-of-science/full-episodes/savage-skills), throws the following error:

It was said here that the |
@cookieguru All good now. Was using an incognito tab when exporting cookies with the cookies.txt Chrome extension which may not have included everything. |
@cookieguru the script is broken, kindly fix it. I sent you an e-mail of the script. Thanks. |
You shouldn't be using @cookieguru's userscript. All of the Discovery channels work just fine with youtube-dl if you use cookies. |
@StevenDTX, how do I use the cookies with youtube-dl? Do I use the youtube-dl GUI or the command line? I'm very raw at these techno-savvy things. |
@StevenDTX and @cookieguru, below is where I'm now stuck; how do I choose the best quality (1080p) to download?

```
C:\Users\user>C:\youtube-dl\youtube-dl.exe "https://www.sciencechannel.com/tv-shows/monster-black-hole-the-first-image/full-episodes/monster-black-hole-the-first-image" --cookies C:\youtube-dl\cookies.txt -F -v

C:\Users\user>
```
|
@nairobi1982: Replace "-F -v" with "-f best". To choose a specific quality, replace "-f best" with "-f hls-449". In that example, it will download the 448x252 version. So, the format is "-f format" where you replace "format" with the code that you see at the start of each line. Hope that makes sense. |
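The same selection can also be scripted instead of typed on the command line. Here is a small sketch using youtube-dl's embedding API with its documented format and cookiefile options; the URL is the one from the comment above, and the specific format string is only an example of the pattern just described.

```python
# Sketch: choose a format programmatically with youtube-dl's Python API.
# 'format' and 'cookiefile' are standard YoutubeDL options; 'best' can be
# swapped for a specific id such as 'hls-449' taken from the -F listing.
import youtube_dl

opts = {
    'format': 'best',
    'cookiefile': 'cookies.txt',
}
with youtube_dl.YoutubeDL(opts) as ydl:
    ydl.download(['https://www.sciencechannel.com/tv-shows/monster-black-hole-the-first-image/full-episodes/monster-black-hole-the-first-image'])
```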
I use this Chrome extension: cookies.txt, offered by Genuinous: https://chrome.google.com/webstore/detail/cookiestxt/njabckikapfpffapmjgojcnbfjonfjfg?hl=en

Copy the cookies from the extension into a text file (say, cookies.txt) and then pass that file name to the yt-dl script with --cookies cookies.txt.

...on a completely unrelated note, the latest episode of BattleBots from the Discovery site does not download; instead it downloads a 2-minute ad for a TLC show. Playing the episode on the site works fine. Is anyone else seeing this issue? https://go.discovery.com/tv-shows/battlebots/full-episodes/eyes-on-the-prize

EDIT: oops, I'm a little late to the party... but could someone check on my BattleBots issue? |
@besweeet , Thank you very much. It has worked. |
Never mind. Problem solved by just updating to the latest version (2019.08.13) |
Bombarding me with demands is not a way to get a resolution. And as previously mentioned, this issue is for fixing youtube-dl, not the script I previously used to triage the issue. Glad someone else was able to spoonfeed you answers. |
@cookieguru , it is all well mate. Everything is now sorted. Thanks. |
Description of your issue, suggested solution and other information
All of the Discovery "GO" channels (discoverygo.com, tlcgo.com, animalplanetgo.com, etc) are being discontinued. They have moved all of the Full Episodes to the non-GO channels (discovery.com, tlc.com, animalplanet.com, etc).
The current [discovery] extractor does not work on these sites.
They also have lowered the quality of the videos on the GO channels to 720p. It appears that the 1080p videos are available on the non-GO channels.
Thanks!