Merge pull request #8804 from pymedusa/release/release-0.5.0
Release/release 0.5.0
p0psicles authored Nov 30, 2020
2 parents e6025de + 26f28ab commit ce68da5
Showing 431 changed files with 5,402 additions and 69,991 deletions.
31 changes: 31 additions & 0 deletions .github/workflows/api-tests.yml
@@ -0,0 +1,31 @@
name: API tests

on: [push]

jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7]
node-version: [14.x]

steps:
- uses: actions/checkout@v2
- name: Set up Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install Python dependencies
run: |
pip install dredd_hooks
pip install 'PyYAML>=5.1'
pip install 'six>=1.13.0'
- name: Install Node.js dependencies
run: yarn install --ignore-scripts
- name: Test with Dredd
run: yarn test-api
34 changes: 34 additions & 0 deletions .github/workflows/node-frontend.yml
@@ -0,0 +1,34 @@
name: Frontend tests

on: [push]

jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
node-version: [14.x]

steps:
- uses: actions/checkout@v2
- name: Set up Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v1
with:
node-version: ${{ matrix.node-version }}
- name: Install themes dependencies
working-directory: ./themes-default/slim
run: yarn install --ignore-scripts
- name: Check themes builds
run: |
mv ./.github/build-themes-check.sh ./themes-default/slim
cd ./themes-default/slim
bash build-themes-check.sh
- name: Test lint
working-directory: ./themes-default/slim
run: yarn lint && yarn lint-css
- name: Test build
working-directory: ./themes-default/slim
run: yarn test
- name: Upload coverage to Codecov
working-directory: ./themes-default/slim
run: yarn coverage
26 changes: 26 additions & 0 deletions .github/workflows/python-backend.yml
@@ -0,0 +1,26 @@
name: Backend tests

on: [push]

jobs:
build:
runs-on: ubuntu-latest
strategy:
max-parallel: 4
matrix:
python-version: [3.6, 3.7, 3.8, 3.9]

steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install tox tox-gh-actions
- name: Test with tox
run: tox
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
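The backend workflow delegates version selection to `tox-gh-actions`, which reads a `[gh-actions]` section from tox.ini to map each matrix Python version onto a tox environment. A minimal sketch of the section such a setup assumes (Medusa's actual tox.ini may differ):

```ini
; [gh-actions] maps the interpreter provided by actions/setup-python
; onto the tox env(s) to run for that job.
[gh-actions]
python =
    3.6: py36
    3.7: py37
    3.8: py38
    3.9: py39
```

With this mapping, plain `tox` in the workflow runs only the env matching the job's interpreter instead of every env in `envlist`.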
12 changes: 6 additions & 6 deletions .travis.yml
@@ -56,12 +56,6 @@ jobs:
- yarn lint-css
- yarn test
- yarn coverage
# backend tests (py2.7) start here
- name: 'Backend tests (py2.7)'
python: '2.7'
env:
- TOXENV=py27
<<: *_backend_tests
# backend tests (py3.6) start here
- name: 'Backend tests (py3.6)'
python: '3.6'
@@ -80,6 +74,12 @@ jobs:
env:
- TOXENV=py38
<<: *_backend_tests
# backend tests (py3.9) start here
- name: 'Backend tests (py3.9)'
python: '3.9'
env:
- TOXENV=py39
<<: *_backend_tests
# dredd tests (py3.7) start here
- name: 'Dredd tests (py3.7)'
python: '3.7'
16 changes: 16 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,19 @@
## 0.5.0 (30-11-2020)

First Python 3.x version

#### New Features
- Separate proxy configs for Providers, Indexers, Clients (torrent/nzb) and others ([8605](https://github.com/pymedusa/Medusa/pull/8605))

#### Improvements
- Add absolute numbering to indexers tvmaze and tmdb, making them suitable for anime ([8777](https://github.com/pymedusa/Medusa/pull/8777))

#### Fixes
- Provider Nyaa.si: Correct the category that is used for anime searches ([8777](https://github.com/pymedusa/Medusa/pull/8777))
- Indexer TMDB: Fix adding show using an alternative language ([8784](https://github.com/pymedusa/Medusa/pull/8784))

-----

## 0.4.6 (25-11-2020)

Last version that runs on Python 2.7!
14 changes: 6 additions & 8 deletions ext/feedparser/__init__.py
@@ -1,4 +1,4 @@
# Copyright 2010-2015 Kurt McKee <[email protected]>
# Copyright 2010-2020 Kurt McKee <[email protected]>
# Copyright 2002-2008 Mark Pilgrim
# All rights reserved.
#
@@ -25,22 +25,20 @@
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE."""

from __future__ import absolute_import, unicode_literals
from .api import parse
from .datetimes import registerDateHandler
from .exceptions import *
from .util import FeedParserDict

__author__ = 'Kurt McKee <[email protected]>'
__license__ = 'BSD 2-clause'
__version__ = '5.2.1'
__version__ = '6.0.2'

# HTTP "User-Agent" header to send to servers when downloading feeds.
# If you are embedding feedparser in a larger application, you should
# change this to your application name and URL.
USER_AGENT = "feedparser/%s +https://github.com/kurtmckee/feedparser/" % __version__

from . import api
from .api import parse
from .datetimes import registerDateHandler
from .exceptions import *

# If you want feedparser to automatically resolve all relative URIs, set this
# to 1.
RESOLVE_RELATIVE_URIS = 1
84 changes: 34 additions & 50 deletions ext/feedparser/api.py
@@ -1,5 +1,5 @@
# The public API for feedparser
# Copyright 2010-2015 Kurt McKee <[email protected]>
# Copyright 2010-2020 Kurt McKee <[email protected]>
# Copyright 2002-2008 Mark Pilgrim
# All rights reserved.
#
@@ -26,27 +26,10 @@
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.

from __future__ import absolute_import, unicode_literals

import io
import urllib.parse
import xml.sax

try:
from io import BytesIO as _StringIO
except ImportError:
try:
from cStringIO import StringIO as _StringIO
except ImportError:
from StringIO import StringIO as _StringIO

try:
import urllib.parse
except ImportError:
from urlparse import urlparse

class urllib(object):
class parse(object):
urlparse = staticmethod(urlparse)

from .datetimes import registerDateHandler, _parse_date
from .encodings import convert_to_utf8
from .exceptions import *
@@ -58,17 +41,9 @@ class parse(object):
from .parsers.strict import _StrictFeedParser
from .sanitizer import replace_doctype
from .sgml import *
from .urls import _convert_to_idn, _makeSafeAbsoluteURI
from .urls import convert_to_idn, make_safe_absolute_uri
from .util import FeedParserDict

bytes_ = type(b'')
unicode_ = type('')
try:
unichr
basestring
except NameError:
unichr = chr
basestring = str

# List of preferred XML parsers, by SAX driver name. These will be tried first,
# but if they're not installed, Python will keep searching through its own list
@@ -96,6 +71,7 @@ class parse(object):
'cdf': 'CDF',
}


def _open_resource(url_file_stream_or_string, etag, modified, agent, referrer, handlers, request_headers, result):
"""URL, filename, or string --> stream
@@ -127,13 +103,13 @@ def _open_resource(url_file_stream_or_string, etag, modified, agent, referrer, h
if request_headers is supplied it is a dictionary of HTTP request headers
that will override the values generated by FeedParser.
:return: A :class:`StringIO.StringIO` or :class:`io.BytesIO`.
:return: A bytes object.
"""

if hasattr(url_file_stream_or_string, 'read'):
return url_file_stream_or_string.read()

if isinstance(url_file_stream_or_string, basestring) \
if isinstance(url_file_stream_or_string, str) \
and urllib.parse.urlparse(url_file_stream_or_string)[0] in ('http', 'https', 'ftp', 'file', 'feed'):
return http.get(url_file_stream_or_string, etag, modified, agent, referrer, handlers, request_headers, result)

@@ -142,7 +118,7 @@ def _open_resource(url_file_stream_or_string, etag, modified, agent, referrer, h
with open(url_file_stream_or_string, 'rb') as f:
data = f.read()
except (IOError, UnicodeEncodeError, TypeError, ValueError):
# if url_file_stream_or_string is a unicode object that
# if url_file_stream_or_string is a str object that
# cannot be converted to the encoding returned by
# sys.getfilesystemencoding(), a UnicodeEncodeError
# will be thrown
@@ -154,19 +130,26 @@ def _open_resource(url_file_stream_or_string, etag, modified, agent, referrer, h
return data

# treat url_file_stream_or_string as string
if not isinstance(url_file_stream_or_string, bytes_):
if not isinstance(url_file_stream_or_string, bytes):
return url_file_stream_or_string.encode('utf-8')
return url_file_stream_or_string
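The tail of `_open_resource` normalizes whatever is left to bytes: text is encoded to UTF-8, bytes pass through unchanged. As a standalone sketch (the helper name `to_bytes` is illustrative, not from the source):

```python
def to_bytes(data):
    """Mirror the normalization at the end of _open_resource:
    str is encoded to UTF-8; bytes pass through unchanged."""
    if not isinstance(data, bytes):
        return data.encode('utf-8')
    return data

print(to_bytes('<rss/>'))   # b'<rss/>'
print(to_bytes(b'<rss/>'))  # b'<rss/>'
```

Under Python 3 this replaces the old `bytes_`/`basestring` type aliases the diff removes, since `str` and `bytes` are now distinct builtin types.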

LooseFeedParser = type(str('LooseFeedParser'), (
_LooseFeedParser, _FeedParserMixin, _BaseHTMLProcessor, object
), {})
StrictFeedParser = type(str('StrictFeedParser'), (
_StrictFeedParser, _FeedParserMixin, xml.sax.handler.ContentHandler, object
), {})

LooseFeedParser = type(
'LooseFeedParser',
(_LooseFeedParser, _FeedParserMixin, _BaseHTMLProcessor, object),
{},
)

StrictFeedParser = type(
'StrictFeedParser',
(_StrictFeedParser, _FeedParserMixin, xml.sax.handler.ContentHandler, object),
{},
)


def parse(url_file_stream_or_string, etag=None, modified=None, agent=None, referrer=None, handlers=None, request_headers=None, response_headers=None, resolve_relative_uris=None, sanitize_html=None):
'''Parse a feed from a URL, file, stream, or string.
"""Parse a feed from a URL, file, stream, or string.
:param url_file_stream_or_string:
File-like object, URL, file path, or string. Both byte and text strings
@@ -210,7 +193,8 @@ def parse(url_file_stream_or_string, etag=None, modified=None, agent=None, refer
:data:`feedparser.SANITIZE_HTML`, which is ``True``.
:return: A :class:`FeedParserDict`.
'''
"""

if not agent or sanitize_html is None or resolve_relative_uris is None:
import feedparser
if not agent:
@@ -221,10 +205,10 @@ def parse(url_file_stream_or_string, etag=None, modified=None, agent=None, refer
resolve_relative_uris = feedparser.RESOLVE_RELATIVE_URIS

result = FeedParserDict(
bozo = False,
entries = [],
feed = FeedParserDict(),
headers = {},
bozo=False,
entries=[],
feed=FeedParserDict(),
headers={},
)

data = _open_resource(url_file_stream_or_string, etag, modified, agent, referrer, handlers, request_headers, result)
@@ -243,10 +227,10 @@ def parse(url_file_stream_or_string, etag=None, modified=None, agent=None, refer
# Ensure that baseuri is an absolute URI using an acceptable URI scheme.
contentloc = result['headers'].get('content-location', '')
href = result.get('href', '')
baseuri = _makeSafeAbsoluteURI(href, contentloc) or _makeSafeAbsoluteURI(contentloc) or href
baseuri = make_safe_absolute_uri(href, contentloc) or make_safe_absolute_uri(contentloc) or href

baselang = result['headers'].get('content-language', None)
if isinstance(baselang, bytes_) and baselang is not None:
if isinstance(baselang, bytes) and baselang is not None:
baselang = baselang.decode('utf-8', 'ignore')

if not _XML_AVAILABLE:
@@ -266,20 +250,20 @@ def parse(url_file_stream_or_string, etag=None, modified=None, agent=None, refer
saxparser.setContentHandler(feedparser)
saxparser.setErrorHandler(feedparser)
source = xml.sax.xmlreader.InputSource()
source.setByteStream(_StringIO(data))
source.setByteStream(io.BytesIO(data))
try:
saxparser.parse(source)
except xml.sax.SAXException as e:
result['bozo'] = 1
result['bozo_exception'] = feedparser.exc or e
use_strict_parser = 0
if not use_strict_parser and _SGML_AVAILABLE:
if not use_strict_parser:
feedparser = LooseFeedParser(baseuri, baselang, 'utf-8', entities)
feedparser.resolve_relative_uris = resolve_relative_uris
feedparser.sanitize_html = sanitize_html
feedparser.feed(data.decode('utf-8', 'replace'))
result['feed'] = feedparser.feeddata
result['entries'] = feedparser.entries
result['version'] = result['version'] or feedparser.version
result['namespaces'] = feedparser.namespacesInUse
result['namespaces'] = feedparser.namespaces_in_use
return result
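The strict-parser path above feeds the decoded data to a SAX parser through `io.BytesIO` and `InputSource.setByteStream`, replacing the old Python 2 `_StringIO` alias. A self-contained stdlib sketch of the same wiring (the handler class and feed data are illustrative, not feedparser code):

```python
import io
import xml.sax


class TitleHandler(xml.sax.handler.ContentHandler):
    """Collect the character data of <title> elements."""

    def __init__(self):
        super().__init__()
        self._chunks = []
        self._in_title = False

    def startElement(self, name, attrs):
        if name == 'title':
            self._in_title = True

    def endElement(self, name):
        if name == 'title':
            self._in_title = False

    def characters(self, content):
        # SAX may deliver text in several chunks; accumulate them.
        if self._in_title:
            self._chunks.append(content)

    @property
    def title(self):
        return ''.join(self._chunks)


data = b'<rss><channel><title>Example Feed</title></channel></rss>'
parser = xml.sax.make_parser()
handler = TitleHandler()
parser.setContentHandler(handler)
source = xml.sax.xmlreader.InputSource()
source.setByteStream(io.BytesIO(data))  # same io.BytesIO pattern as the diff
parser.parse(source)
print(handler.title)  # Example Feed
```

If the SAX parse raises `xml.sax.SAXException`, feedparser flips `bozo` and falls back to the loose parser, as the final hunk shows.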
