Send back the complete API responses. #3059
Conversation
@lukesneeringer why did the copyright years change? Those aren't new files this year (aside from the new file in the PR :D)?
I changed the ones I edited. (Something I should be better at doing generally.)
I don't know what a lawyer-cat says but @jonparrott does know (sometimes)
@lukesneeringer I thought the copyright was only for when the file was created, but I could be wrong. I'm no copyright lawyer.
@lukesneeringer hyphenating makes a lot more sense. 👍
I don't see any unit tests for EntityResponse, SentimentResponse, or SyntaxResponse. Am I missing them?
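For reference, here is a minimal sketch of what an EntityResponse unit test could look like, mirroring the lazy-import style of the existing language unit tests. The module path (google.cloud.language.api_responses), the payload shape, and the .entities/.language attributes are assumptions based on the fragments quoted in this thread, not the merged code.

import unittest


class TestEntityResponse(unittest.TestCase):

    @staticmethod
    def _get_target_class():
        # Assumed module path; adjust to wherever EntityResponse actually lives.
        from google.cloud.language.api_responses import EntityResponse
        return EntityResponse

    def test_from_api_repr(self):
        # Illustrative payload shaped like an analyzeEntities response.
        payload = {
            'entities': [{
                'name': 'Italy',
                'type': 'LOCATION',
                'metadata': {'wikipedia_url': 'http://en.wikipedia.org/wiki/Italy'},
                'salience': 0.15,
                'mentions': [{'text': {'content': 'Italy', 'beginOffset': 0}}],
            }],
            'language': 'en',
        }
        entity_response = self._get_target_class().from_api_repr(payload)
        self.assertEqual(len(entity_response.entities), 1)
        self.assertEqual(entity_response.entities[0].name, 'Italy')
        self.assertEqual(entity_response.language, 'en')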
"""Classes representing each of the types of responses returned from | ||
the Natural Language API. | ||
""" |
class EntityResponse(object):
    """A representation of a response sent back from the
    ``analyzeEntities`` request to the Google Natural Language API.
        :param payload: A dictionary representing the response.
        """
        return cls(
            entities=[Entity.from_api_repr(i) for i in payload['entities']],
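A rough sketch of how the fragments above plausibly fit together; the constructor signature and the Entity import path are assumptions for illustration, not the PR's exact code.

# Sketch only: assumed import location for the existing Entity class.
from google.cloud.language.entity import Entity


class EntityResponse(object):
    """A representation of a response to an ``analyzeEntities`` request
    to the Google Natural Language API.
    """

    def __init__(self, entities, language):
        self.entities = entities
        self.language = language

    @classmethod
    def from_api_repr(cls, payload):
        """Convert a raw response payload into an ``EntityResponse``.

        :param payload: A dictionary representing the response.
        """
        return cls(
            entities=[Entity.from_api_repr(i) for i in payload['entities']],
            language=payload.get('language'),
        )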
class SentimentResponse(object):
    """A representation of a response to an ``analyzeSentiment`` request
    to the Google Natural Language API.
        :param payload: A dictionary representing the response.
        """
        return cls(
            language=payload.get('language', None),
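A similar hedged sketch for SentimentResponse; the attribute names, the payload keys, and the Sentence/Sentiment import paths are inferred from the fragments in this thread and may not match the merged code exactly.

# Sketch only: assumed import locations for the existing helper classes.
from google.cloud.language.sentiment import Sentiment
from google.cloud.language.syntax import Sentence


class SentimentResponse(object):
    """A representation of a response to an ``analyzeSentiment`` request
    to the Google Natural Language API.
    """

    def __init__(self, sentences, sentiment, language):
        self.sentences = sentences
        self.sentiment = sentiment
        self.language = language

    @classmethod
    def from_api_repr(cls, payload):
        """Convert a raw response payload into a ``SentimentResponse``.

        :param payload: A dictionary representing the response.
        """
        return cls(
            sentences=[Sentence.from_api_repr(i)
                       for i in payload.get('sentences', ())],
            sentiment=Sentiment.from_api_repr(payload['documentSentiment']),
            language=payload.get('language', None),
        )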
@@ -84,7 +80,7 @@ class Entity(object):
     def __init__(self, name, entity_type, metadata, salience, mentions):
         self.name = name
         self.entity_type = entity_type
-        self.wikipedia_url = metadata.pop('wikipedia_url', None)
+        self.wikipedia_url = metadata.get('wikipedia_url', None)
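The only behavioral difference between the two lines above is whether the caller's metadata dict is mutated; a quick illustration (the sample keys are made up):

metadata = {'wikipedia_url': 'http://en.wikipedia.org/wiki/Italy', 'mid': '/m/03rjj'}

url = metadata.pop('wikipedia_url', None)   # removes the key from metadata
print(metadata)                             # {'mid': '/m/03rjj'}

metadata = {'wikipedia_url': 'http://en.wikipedia.org/wiki/Italy', 'mid': '/m/03rjj'}
url = metadata.get('wikipedia_url', None)   # leaves metadata untouched
print(sorted(metadata))                     # ['mid', 'wikipedia_url']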
        :rtype: :class:`Sentence`
        :returns: The sentence parsed from the API representation.
        """
        text_span = payload['text']
@@ -1,4 +1,4 @@
-# Copyright 2016 Google Inc.
+# Copyright 2016-2017 Google Inc.
        # The sentence may or may not have a sentiment; only attempt the
        # typecast if one is present.
        sentiment = None
        if payload.get('sentiment', None):
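For context, a hedged sketch of the classmethod this fragment belongs to; the key names follow the v1beta1 Natural Language response format, and the Sentiment import path and constructor call are assumptions. (Later commits in this PR tighten the check to ``is not None``.)

# Sketch only: assumed import location for the existing Sentiment class.
from google.cloud.language.sentiment import Sentiment


# (method body of Sentence.from_api_repr, shown outside its class)
@classmethod
def from_api_repr(cls, payload):
    """Convert a sentence payload from the API into a ``Sentence``."""
    text_span = payload['text']

    # The sentence may or may not have a sentiment; only attempt the
    # typecast if one is present.
    sentiment = None
    if payload.get('sentiment') is not None:
        sentiment = Sentiment.from_api_repr(payload['sentiment'])

    return cls(text_span['content'], text_span['beginOffset'],
               sentiment=sentiment)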
@@ -168,53 +169,3 @@ def from_api_repr(cls, payload):
        lemma = payload['lemma']
        return cls(text_content, text_begin, part_of_speech,
                   edge_index, edge_label, lemma)
class Sentence(object):
Copyright is for when the file was first published, not when it was last edited.
I also failed at updating the system tests -- working on that now.
Still need explicit SyntaxResponse and SentimentResponse tests.
language/unit_tests/test_sentence.py (outdated)
        sentence = klass.from_api_repr(payload)
        self.assertEqual(sentence.content, content)
        self.assertEqual(sentence.begin, begin)
        self.assertEqual(sentence.sentiment, None)
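One of the later commits in this PR ("Use assertIsNone") swaps the last assertion above for the more specific form; a small illustrative example:

import unittest


class AssertIsNoneExample(unittest.TestCase):
    def test_identity_versus_equality(self):
        value = None
        self.assertEqual(value, None)   # compares by equality
        self.assertIsNone(value)        # compares by identity (``is None``)
                                        # and gives a clearer failure message


if __name__ == '__main__':
    unittest.main()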
@@ -84,7 +80,7 @@ class Entity(object):
     def __init__(self, name, entity_type, metadata, salience, mentions):
         self.name = name
         self.entity_type = entity_type
-        self.wikipedia_url = metadata.pop('wikipedia_url', None)
+        self.wikipedia_url = metadata.get('wikipedia_url')
class EntityResponse(object):
    """A representation of a response sent back from the
    ``analyzeEntities`` request to the Google Natural Language API.
@@ -1,4 +1,4 @@
-# Copyright 2016 Google Inc.
+# Copyright 2016-2017 Google Inc.
@@ -1,4 +1,4 @@
-# Copyright 2016 Google Inc.
+# Copyright 2016-2017 Google Inc.
        # The sentence may or may not have a sentiment; only attempt the
        # typecast if one is present.
        sentiment = None
        if payload.get('sentiment') is not None:
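Why the explicit ``is not None`` check (versus the earlier bare truthiness check) can matter: a present-but-falsy value is skipped by the truthiness pattern but still handled here. The payloads below are illustrative, not taken from the API.

payload_missing = {'text': {'content': 'Hi.', 'beginOffset': 0}}
payload_falsy = {'text': {'content': 'Hi.', 'beginOffset': 0}, 'sentiment': {}}

print(bool(payload_missing.get('sentiment')))         # False
print(payload_missing.get('sentiment') is not None)   # False

print(bool(payload_falsy.get('sentiment')))           # False -> typecast skipped
print(payload_falsy.get('sentiment') is not None)     # True  -> typecast attempted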
So there's good news and bad news.

👍 The good news is that everyone who needs to sign a CLA (the pull request submitter and all commit authors) has done so. Everything is all good there.

😕 The bad news is that it appears that one or more commits were authored by someone other than the pull request submitter. We need to confirm that they're okay with their commits being contributed to this project. Please have them confirm that here in the pull request.

Note to project maintainer: This is a terminal state, meaning the
@googlebot Why is that a problem, exactly? 😕 Just saying...
* Return full API responses for Natural Language.
* Return full API responses from NL.
* Do the hyphenated copyright thing for edited files.
* Update usage doc.
* Updates based on @dhermes feedback.
* Update system tests.
* Added system tests and Entity Response tests. Still need explicit SyntaxResponse and SentimentResponse tests.
* Remove explicit dict.get('foo', None).
* Fix some of the pylint errors.
* Finish fixing pylint errors.
* It is 2017.
* Add SentimentResponse tests.
* Unit tests for SyntaxResponse.
* Missed a dict.get('foo', None) case.
* Use assertIsNone.
* Remove wikipedia_url as an attribute.
* PEP 257 compliance.
* Add Sentiment isinstance check.
* Add API responses documentation.
* Adding sentences to docs.
* Fix typo.
This pull request makes it so that we no longer drop large portions of the API responses from Natural Language.
(It also changes several copyright years.)