[tests] Add dbnode capable of being started in-process. #3759
Conversation
return nil
}

func updateDBNodePorts(cfg config.Configuration) (config.Configuration, error) {
Note: this is not the complete list of addresses we could watch for based on the config, but one can see how we would extend this to add support for the rest.
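For context, the usual trick a helper like this uses for assigning fresh ports in tests is to bind to port 0 and let the kernel choose. A minimal, self-contained sketch of that idea using only the standard net package (illustrative, not the actual implementation in this PR):

func newEphemeralAddress() (string, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0") // port 0: kernel picks a free port
	if err != nil {
		return "", err
	}
	defer l.Close() // release it so the component can bind it right after
	return l.Addr().String(), nil
}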
return cfg, nil
}

func updateDBNodeFilepaths(cfg config.Configuration) (config.Configuration, []string, error) {
Ditto for filepaths
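Similarly, the standard way to sandbox a node's on-disk paths in tests is a throwaway temp directory per node. A sketch under that assumption, using only the standard os package (names are illustrative, not the PR's code):

func tempNodeDirs() (dir string, cleanup func(), err error) {
	dir, err = os.MkdirTemp("", "dbnode-test-*") // fresh directory per node
	if err != nil {
		return "", nil, err
	}
	return dir, func() { os.RemoveAll(dir) }, nil
}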
}, res.Datapoints[0])
}

func TestWriteTaggedFetchTaggedRoundtrip(t *testing.T) {
One interesting thing you can see from this test is that, while node setup is pretty painless, reading and writing could be more frictionless. Adding a layer of helpers after we reach dtest parity could be really nice here.
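One possible shape for such helpers, purely as a sketch; the Node interface and function names below are hypothetical stand-ins (not this PR's API), using only the standard testing and time packages:

// Node is a stand-in for the in-process dbnode handle.
type Node interface {
	WriteTagged(ns, id string, tags map[string]string, at time.Time, val float64) error
}

// MustWriteTagged fails the test immediately on a write error, trimming
// boilerplate out of roundtrip tests.
func MustWriteTagged(t *testing.T, n Node, ns, id string, tags map[string]string, at time.Time, val float64) {
	t.Helper()
	if err := n.WriteTagged(ns, id, tags, at, val); err != nil {
		t.Fatalf("write tagged %q: %v", id, err)
	}
}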
host, err := dbnode.HostDetails(9000)
require.NoError(t, err)

_, err = coord.CreateDatabase(admin.DatabaseCreateRequest{
Would be nice to simplify database creation as well once we get the foundational stuff in place.
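As a sketch of what that simplification might look like; only the CreateDatabase call shape visible in the diff above is assumed, the Coordinator interface is a hypothetical stand-in, and the response type is inferred from the _, err = pattern:

// Coordinator is a stand-in exposing just the call used above.
type Coordinator interface {
	CreateDatabase(admin.DatabaseCreateRequest) (admin.DatabaseCreateResponse, error)
}

// mustCreateDatabase wraps database creation with test-failure handling.
func mustCreateDatabase(t *testing.T, coord Coordinator, req admin.DatabaseCreateRequest) {
	t.Helper()
	if _, err := coord.CreateDatabase(req); err != nil {
		t.Fatalf("create database: %v", err)
	}
}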
Codecov Report

@@           Coverage Diff            @@
##           master   #3759    +/-  ##
========================================
- Coverage    56.9%   56.8%   -0.1%
========================================
  Files         552     552
  Lines       62950   62950
========================================
- Hits        35835   35804     -31
- Misses      23927   23947     +20
- Partials     3188    3199     +11
@@ -166,6 +166,34 @@ func NewCoordinator(cfg config.Configuration, opts CoordinatorOptions) (resource
return coord, nil
}

// NewEmbeddedCoordinator creates a coordinator from one embedded within an existing
// db node. This method expects that the DB node has already been started before
How is this expectation guaranteed? Does it error? Should it error if it doesn't?
Lazily at the moment. In other words, coordinator API calls will fail since the HTTP server isn't up to handle them. We can validate that here, but that'll take some rework. Will address in a follow-up.
Yeah, failing fast is preferable.
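One way to fail fast here would be probing the embedded coordinator's HTTP endpoint before returning from NewEmbeddedCoordinator. A sketch under the assumption that a health route exists; the endpoint path, poll interval, and timeout are all illustrative:

func waitForCoordinator(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/health") // assumed health route
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("coordinator not reachable within %v; was the DB node started?", timeout)
}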
for i := 0; i < d.cfg.Components(); i++ {
	select {
	case <-d.shutdownCh:
What exactly are we looping over here? And what are we pulling from the shutdownCh?
A dbnode can be configured to run as just a dbnode or a dbnode and an embedded coordinator. Therefore, we need to receive shutdown signals from however many components have been started.
Yeah I am just confused by what we're looping over here? We don't seem to grab anything from the components?
We're waiting on N shutdown notifications before we proceed, where N is the number of components we started up, i.e. what d.cfg.Components() returns. Each shutdown notification corresponds to a single component telling us it has finished.
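In sketch form, the pattern being described (names and channel type are illustrative):

// waitForShutdown blocks until one notification per started component
// has arrived on shutdownCh.
func waitForShutdown(shutdownCh <-chan struct{}, components int) {
	for i := 0; i < components; i++ {
		<-shutdownCh
	}
}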
Got it. So I think it would just make sense to make this a method on the server interface...
server.ShutdownComponents(waitTime time.Duration) error
with each component implementing an actual Shutdown() method:
errs := make(chan error)
for _, c := range components {
	c := c // capture loop variable
	go func() {
		// Capture the error here.
		errs <- c.Shutdown()
	}()
}
// Wait for shutdowns, with the appropriate timeout and error catching.
The two-channel setup just feels awkward and hard to read.
Yeah, I think moving this to the server package is a good call. Though, fwiw, there are still going to be two channels -- one to interrupt the components and one to get notified when the components are done shutting down. Will try to make this as friendly as possible.
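A sketch of that two-channel shape, closing a channel to broadcast the interrupt and then counting done signals; names and timeout handling are illustrative, not the eventual implementation:

func shutdownComponents(stopCh chan struct{}, doneCh <-chan struct{}, n int, waitTime time.Duration) error {
	close(stopCh) // broadcast "stop" to every component at once
	deadline := time.After(waitTime)
	for i := 0; i < n; i++ {
		select {
		case <-doneCh: // one component finished shutting down
		case <-deadline:
			return fmt.Errorf("timed out with %d component(s) still shutting down", n-i)
		}
	}
	return nil
}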
Approved, but yeah, I don't love the server + component interactions.
What this PR does / why we need it:
This commit adds an implementation of resources.Node that is capable of being started in-process as opposed to being started via a Docker container.
Special notes for your reviewer:
Does this PR introduce a user-facing and/or backwards incompatible change?:
Does this PR require updating code package or user-facing documentation?: