Implement Bigtable v2 API #1850
Some additional information:
I'd guess that MutateRows, bulk delete, and "bulk read" (a set of random row keys) are not in the current Python implementation, since they were not there from day one and we added them along the way. They are useful, but not required, functionality.
We have a Java implementation which processes v2 ReadRowsResponse objects. Hopefully it can help give some insight into how to use the new API. There's also a JSON file we use to drive automated tests; we can explain more about it offline.
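The core of such a reader is a small state machine that merges streamed CellChunks into complete rows. Here is a hypothetical plain-Python illustration: dicts stand in for the real CellChunk protos, only the fields visible in this thread (row_key, family_name, qualifier, timestamp_micros, value, commit_row) are handled, and real v2 parsing also has to deal with cases (split cell values, row resets) that are omitted here.

```python
def merge_chunks(chunks):
    """Merge a stream of CellChunk-like dicts into (row_key, cells) tuples.

    A cell is (family, qualifier, timestamp_micros, value).  A chunk with
    commit_row=True finishes the current row; family and qualifier carry
    over when a later chunk omits them.
    """
    row_key = None
    family = qualifier = None
    cells = []
    for chunk in chunks:
        if chunk.get("row_key"):
            row_key = chunk["row_key"]
        if chunk.get("family_name") is not None:
            family = chunk["family_name"]
        if chunk.get("qualifier") is not None:
            qualifier = chunk["qualifier"]
        cells.append((family, qualifier,
                      chunk.get("timestamp_micros", 0),
                      chunk.get("value", b"")))
        if chunk.get("commit_row"):
            yield row_key, cells
            row_key, family, qualifier, cells = None, None, None, []

# Example: one chunk that carries a full cell and commits the row.
rows = list(merge_chunks([
    {"row_key": b"RK", "family_name": "A", "qualifier": "C",
     "timestamp_micros": 100, "value": b"value-VAL", "commit_row": True},
]))
# rows == [(b"RK", [("A", "C", 100, b"value-VAL")])]
```

The carry-over of family and qualifier between chunks is what makes the stream compact: a row with many cells in one family does not repeat the family name per cell.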
If we implement happybase's
Yeah, the new MutateRows is pretty awesome; it has a significant performance improvement over MutateRow. Anything related to v2 comes first due to time sensitivity. We have a few performance improvements (MutateRows, multiple gRPC channels) and reliability improvements (retries) that aren't as time sensitive.
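The reason MutateRows wins is that N per-row mutations can share one round trip instead of N. A rough sketch of that batching, where `send_many`, `FakeClient`, and the entry shape are all made-up stand-ins rather than the real client API:

```python
def mutate_rows(client, entries, batch_size=100):
    """Send (row_key, mutations) entries in batches: one bulk RPC per
    batch instead of one RPC per row."""
    responses = []
    for start in range(0, len(entries), batch_size):
        batch = entries[start:start + batch_size]
        responses.extend(client.send_many(batch))  # one round trip per batch
    return responses

# Stand-in client that just counts round trips.
class FakeClient(object):
    def __init__(self):
        self.rpc_count = 0

    def send_many(self, batch):
        self.rpc_count += 1
        return ["ok"] * len(batch)

client = FakeClient()
results = mutate_rows(client, [("rk-%d" % i, []) for i in range(250)])
# 250 rows -> 3 RPCs (batches of 100, 100, 50) instead of 250.
```

The real API also returns per-entry statuses, so partial failures within a batch can be retried individually; that bookkeeping is omitted here.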
Here's a link to the Go read-rows implementation.
I've set up a feature branch for this project: I intend to merge PRs to that branch, and then merge that branch to
Reviewing open
RenameTable was removed in v2. While it existed in v1, it was never actually implemented. Good catch.
The protos should be at #1895
#1895 is bogus because it's targeted at master and breaks everything anyway, but at least look at the Makefile change. I fixed the Makefile to find grpc_python_plugin wherever it is in your path, but there's also a python script that runs and assumes a location, so I ended up copying grpc_python_plugin to the gcloud-python dir before running
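The Makefile fix amounts to "search PATH first, then fall back to the manual workaround of a local copy". A hedged Python sketch of that lookup (the function name is made up, and the real change lives in the Makefile and the generation script, not in code like this):

```python
# Hypothetical sketch: prefer grpc_python_plugin from PATH, then fall back
# to a copy in the working directory (the workaround described above).
import os
import shutil

def find_grpc_python_plugin(cwd="."):
    """Return a usable grpc_python_plugin path, or None if absent."""
    plugin = shutil.which("grpc_python_plugin")
    if plugin:
        return plugin
    local = os.path.join(cwd, "grpc_python_plugin")
    if os.access(local, os.X_OK):
        return local
    return None
```

Resolving the plugin once and passing the absolute path to protoc avoids the script's hard-coded-location assumption entirely.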
- Setup
- Port modules to V2 protos
- System testing
- Finalize
- Open issues
@tswast FYI
Question for the eng team CCed on this thread: I see there are some more streaming APIs. Would it be possible to implement these APIs in the client as iterators (and stream the underlying results)? Right now, for PartialRowsData, the client accumulates the results in a semi-hidden list.
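An iterator-based wrapper could look roughly like this. It is a sketch under assumed shapes: the dict responses and their "rows" key stand in for the real streamed protos, which the actual client would have to merge chunk by chunk before yielding.

```python
class StreamingRows(object):
    """Yield rows as they arrive from a response stream, instead of
    accumulating every row in a hidden list (sketch, not the real API)."""

    def __init__(self, response_stream):
        self._stream = response_stream

    def __iter__(self):
        for response in self._stream:        # one gRPC response at a time
            for row in response["rows"]:     # assumed response shape
                yield row

# Usage with a stand-in stream of two responses:
stream = iter([{"rows": ["r1", "r2"]}, {"rows": ["r3"]}])
rows_seen = list(StreamingRows(stream))
# rows_seen == ["r1", "r2", "r3"]
```

Because `__iter__` is a generator, each response is consumed only as the caller advances, so memory stays proportional to one response rather than the whole result set.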
I'm trying to parse the
Sounds like it should work - what was the exception?
This session is based on the new generation-from-protos stuff in PR #1903.

```python
>>> import json
>>> from gcloud.bigtable._generated_v2.bigtable_pb2 import ReadRowsResponse
>>> with open('gcloud/bigtable/read-rows-acceptance-test.json') as f:
...     test_json = json.load(f)
...
>>> chunk_pb = test_json['tests'][0]['chunks'][0]
>>> ReadRowsResponse.CellChunk.FromString(chunk_pb)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/tseaver/projects/agendaless/Google/src/gcloud-python/.tox/py27/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 780, in FromString
    message.MergeFromString(s)
  File "/home/tseaver/projects/agendaless/Google/src/gcloud-python/.tox/py27/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 1080, in MergeFromString
    if self._InternalParse(serialized, 0, length) != length:
  File "/home/tseaver/projects/agendaless/Google/src/gcloud-python/.tox/py27/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 1106, in InternalParse
    new_pos = local_SkipField(buffer, new_pos, end, tag_bytes)
  File "/home/tseaver/projects/agendaless/Google/src/gcloud-python/.tox/py27/lib/python2.7/site-packages/google/protobuf/internal/decoder.py", line 850, in SkipField
    return WIRETYPE_TO_SKIPPER[wire_type](buffer, pos, end)
  File "/home/tseaver/projects/agendaless/Google/src/gcloud-python/.tox/py27/lib/python2.7/site-packages/google/protobuf/internal/decoder.py", line 799, in _SkipGroup
    new_pos = SkipField(buffer, pos, end, tag_bytes)
  File "/home/tseaver/projects/agendaless/Google/src/gcloud-python/.tox/py27/lib/python2.7/site-packages/google/protobuf/internal/decoder.py", line 850, in SkipField
    return WIRETYPE_TO_SKIPPER[wire_type](buffer, pos, end)
  File "/home/tseaver/projects/agendaless/Google/src/gcloud-python/.tox/py27/lib/python2.7/site-packages/google/protobuf/internal/decoder.py", line 820, in _RaiseInvalidWireType
    raise _DecodeError('Tag had invalid wire type.')
google.protobuf.message.DecodeError: Tag had invalid wire type.
```
FWIW:

```python
>>> chunk_pb
u'row_key: "RK"\nfamily_name: <\n value: "A"\n>\nqualifier: <\n value: "C"\n>\ntimestamp_micros: 100\nvalue: "value-VAL"\ncommit_row: false\n'
```
I don't know the python protobuf APIs, but it looks like there are other
I missed that those chunks are rendered using `text_format`:

```python
>>> from google.protobuf.text_format import Merge
>>> Merge(chunk_pb, chunk)
```
Ah, sorry, we should have mentioned. Glad it's working.
@garye Am I meant to neglect the
@tseaver You mean for cluster administration?
Yes -- am I supposed to be exposing the methods to manipulate clusters within an instance? |
We don't do that in Go or Java. I would think Happybase doesn't have a way to manage clusters in v1 or instances in v2, since HBase doesn't have any similar concepts. If there isn't currently a way to do cluster management, we should not add new functionality.
@sduskis I was just asking because
Thanks for clarifying. I'd guess we need to expose operations similar to those the GCP console offers. There is a ClusterService for some operations, like resizing a cluster or changing the display name; I don't know enough about that, so I'll ping the group to find out more. The bottom line is that we need parity with the existing functionality in gcloud-python through the new APIs.
@dhermes if you can do a braindump here of what you've learned, and then assign to me, that would be great.