SOLR-14613: Autoscaling replacement using placement plugins #1845
Conversation
Looks good to me. My only displeasure is the presence of …
Building on @noblepaul's concern, I do think we need to reconcile the newly added interfaces. To recap my understanding from the previous PR comments, the intent of having the Node, SolrCollection, Shard, Replica interfaces is to isolate plugin code from internal implementation details.

While I agree that is a laudable goal in general, esp. for plugins, this particular framework actually needs to have access to internal information about the cluster. In other words, placing replicas is a very internal (and core) concern. The fact we're exposing this as a pluggable implementation is really for operational convenience.

Moreover, I do believe implementing a placement strategy requires most of the metadata present in collections, shards, replicas, and nodes, so I don't know if the cost of having two representations of the same domain objects in two different places is worth the benefit it provides. I think the community needs to decide this is how we want to move forward.

So referring back to Ilan's stated goals:
I’d actually argue that goal … I believe @murblanc has done a great job with goal … For goal …

Lastly, from our Slack conversation, I was only suggesting that instead of the plugin impl introducing the …
Thanks @thelabdude for your long and useful comment. Let me try to give my take on this.

By saying writing plugins should be easy in point 1, I meant the boilerplate code should not get in the way and force more code lines than really necessary. It's the ability to write, for example, things such as … My thinking is that once the interfaces in …

For point 2, I didn't want the API to get in the way of efficiency. The current implementation of the API is definitely not optimized (no multithreading etc.) but this can be changed without impact on the API or on existing plugins. I believe we reached a good place. I too prefer how the attribute fetching looks now (@noblepaul's contribution) to what I initially proposed.

Point 3 is very important. Any internal Solr interface is relatively easy to change: we have the code using it and we adapt that code as the interface is modified. Once we start handing these interfaces to external code (external to the lucene-solr GitHub repo, really), then changing (or not changing) them is a lot more complex and painful. My assumption here is that placement code might be implemented by outside users to suit their specific needs, and that code might not be contributed back to the project (as opposed to the plugin I wrote, which will be the default one and a possible starting point for custom ones). Therefore, we want to be able to maintain these interfaces unchanged even if the internal implementation changes. Of course if internal concepts change then the interfaces will likely have to change. For example, if the notion of shard leader goes away (imagine...) then of course that part of the API (be it defined on the …

Take as an example the ongoing discussions about configuration. The plugin writer should not have to change their code based on how and where we decide placement plugin configuration should live.

Last, the cluster abstractions for the placement plugins do not necessarily represent the existing cluster!
In the initial (current) proposal they do (see …).

All this being said, it would be better to unify cluster abstractions (and possibly other abstractions) that are to be used by external code and have a single set of abstractions (interfaces). External uses of such interfaces include placement code (this PR), event processing (see SOLR-14749) and possibly other external code that needs to interact with the cluster. The interfaces defined here were used to write the plugin, and were changed in the process to simplify the plugin code. I believe if we make them evolve to adapt to event processing we'll have pretty good coverage of potential uses.
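To make the "plugins should be easy to write" goal discussed above concrete, here is a minimal, self-contained sketch of what a placement plugin could look like. All names and signatures below (PlacementPlugin, PlacementRequest, ReplicaPlacement) are simplified stand-ins for illustration, not the actual interfaces from this PR.

```java
import java.util.*;

// Stand-in cluster abstraction: only what this sketch needs.
interface Node { String getName(); }

class SimpleNode implements Node {
    private final String name;
    SimpleNode(String name) { this.name = name; }
    public String getName() { return name; }
}

// Stand-in for the request the framework hands to the plugin.
interface PlacementRequest {
    Set<Node> getTargetNodes();
    int getReplicaCount();
}

// Stand-in for a computed placement decision.
final class ReplicaPlacement {
    final Node node;
    ReplicaPlacement(Node node) { this.node = node; }
}

// The plugin contract: turn a request into concrete placements.
interface PlacementPlugin {
    List<ReplicaPlacement> computePlacements(PlacementRequest request);
}

/** Trivial strategy: spread replicas across target nodes round-robin. */
class RoundRobinPlacementPlugin implements PlacementPlugin {
    @Override
    public List<ReplicaPlacement> computePlacements(PlacementRequest request) {
        List<Node> nodes = new ArrayList<>(request.getTargetNodes());
        List<ReplicaPlacement> placements = new ArrayList<>();
        for (int i = 0; i < request.getReplicaCount(); i++) {
            placements.add(new ReplicaPlacement(nodes.get(i % nodes.size())));
        }
        return placements;
    }
}
```

The point of the sketch is the small surface area: a plugin author implements one method against read-only cluster abstractions, with no boilerplate beyond the strategy itself.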
I plan to commit this code soon, so please comment quickly if needed... Note that this code is disabled by default until a user updates …
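For reference, enabling the plugin would presumably go through the new set-placement-plugin command on the /api/cluster endpoint added in this PR, which pushes configuration to /clusterprops.json. The payload below is a hypothetical sketch: the exact configuration shape and the factory class name are assumptions, not taken from the PR.

```json
POST /api/cluster
{
  "set-placement-plugin": {
    "class": "com.example.MyPlacementPluginFactory"
  }
}
```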
 *
 * <p><b>WARNING:</b> this call will be extremely inefficient on large clusters. Usage is discouraged.
 */
Set<String> getAllCollectionNames();
IIRC at some point we've considered using an Iterator here instead.
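To make the trade-off behind that suggestion concrete, here is a small self-contained sketch contrasting the materializing Set getter with a lazy Iterator alternative. The interface and class names are hypothetical, not code from this PR.

```java
import java.util.*;

// Hypothetical contrast between the two shapes discussed in the comment.
interface ClusterView {
    /** Materializes every collection name up front; O(n) memory on large clusters. */
    Set<String> getAllCollectionNames();

    /** Lazy alternative: names are produced on demand and can be streamed. */
    Iterator<String> iterateCollectionNames();
}

class DemoClusterView implements ClusterView {
    private final List<String> names = Arrays.asList("products", "logs", "users");

    @Override
    public Set<String> getAllCollectionNames() {
        return new LinkedHashSet<>(names);   // builds a full copy
    }

    @Override
    public Iterator<String> iterateCollectionNames() {
        return names.iterator();             // no copy, consumed lazily
    }
}
```

With an Iterator the caller can stop early (e.g. after finding one match) without ever paying for the full name set, which matters on clusters with many collections.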
/**
 * Representation of a SolrCloud node or server in the SolrCloud cluster.
 */
public interface Node {
So ... given that there's already a SolrNode interface in master, which already provides isolation from implementation details, shouldn't we use that here? The same applies to SolrCollection and ShardReplica.
 * Returns the number of replicas to create, as returned by the corresponding method {@link #getCountNrtReplicas()},
 * {@link #getCountTlogReplicas()} or {@link #getCountPullReplicas()}. Might delete the other three.
 */
int getCountReplicasToCreate(Replica.ReplicaType replicaType);
I slightly prefer this method, as it allows us to modify available replica types without changing the interface.
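A self-contained sketch of that design point: a single method keyed by replica type keeps the interface stable when new types are added, whereas per-type getters force an interface change. Names below loosely mirror the PR but are assumptions; the enum is nested here only to keep the sketch standalone.

```java
import java.util.*;

// One entry point for all replica types: adding a type needs no new method.
interface CreateCollectionRequest {
    enum ReplicaType { NRT, TLOG, PULL }

    int getCountReplicasToCreate(ReplicaType replicaType);
}

class DemoRequest implements CreateCollectionRequest {
    private final Map<ReplicaType, Integer> counts = new EnumMap<>(ReplicaType.class);

    DemoRequest(int nrt, int tlog, int pull) {
        counts.put(ReplicaType.NRT, nrt);
        counts.put(ReplicaType.TLOG, tlog);
        counts.put(ReplicaType.PULL, pull);
    }

    @Override
    public int getCountReplicasToCreate(ReplicaType replicaType) {
        return counts.getOrDefault(replicaType, 0);
    }
}
```

If a hypothetical fourth replica type were ever introduced, only the enum and the implementations would change; every existing caller of getCountReplicasToCreate would keep compiling.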
 * Objects of this type are returned by the Solr framework to the plugin; they are not directly built by the plugin. When the
 * plugin wants to add a replica, it goes through the appropriate method in {@link PlacementPlanFactory}.
 */
public interface Replica {
This should be merged with the existing ShardReplica to avoid creating separate abstractions for each subsystem.
/**
 * @return the name of the {@link Shard} for which the replica should be created
 */
String getShardName();
Should we also have the collection name here for completeness?
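A sketch of what that suggested addition might look like; the interface name and implementation here are hypothetical, used only to show a shard reference that is self-describing.

```java
// Hypothetical: a placement target that carries both identifiers, so a
// (collection, shard) pair can be resolved without extra surrounding context.
interface ReplicaPlacementTarget {
    String getShardName();

    /** Suggested addition: the owning collection, for completeness. */
    String getCollectionName();
}

class DemoTarget implements ReplicaPlacementTarget {
    private final String collection;
    private final String shard;

    DemoTarget(String collection, String shard) {
        this.collection = collection;
        this.shard = shard;
    }

    public String getShardName() { return shard; }
    public String getCollectionName() { return collection; }
}
```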
/**
 * Shard in a {@link SolrCollection}, i.e. a subset of the data indexed in that collection.
 */
public interface Shard {
Similar to the other top-level abstractions, this interface should be merged with the existing Shard interface, after resolving the main differences (the use of iterators vs. SimpleMap, what getters we absolutely need in this interface, etc.).
/**
 * Represents a Collection in SolrCloud (unrelated to {@link java.util.Collection} that uses the nicer name).
 */
public interface SolrCollection {
See my other comments about merging this with the existing SolrCollection.
@@ -141,6 +141,21 @@
}
}
}
}
},
"set-placement-plugin": {
@noblepaul do we still need these awful json apispecs if we use the V2 API annotations?
No, we do not need any more apispecs. I'm planning to eliminate the existing ones. @murblanc please remove this change; annotations take care of this automatically.
Can you please point me, @noblepaul or @sigram, to existing code in which some of the commands on a path (here /api/cluster) use annotations and some use apispec? I haven't found such a mix, and given existing /api/cluster commands (add-role, remove-role, set-property, set-obj-property) are defined in the apispec json file, that's where I've added the two new ones.
Or, put differently (if there's no simple way to use annotations for the two new commands): when the existing four commands are migrated, migrating the two new ones along with them is not likely to make the task any harder.
Keeping the new definitions in apispecs would therefore make sense for now, for the sake of simplicity.
I'll give … patch
Please switch to annotations before you commit this
I need more guidance. I've implemented the new commands the way existing ones under the same path are implemented. I don't know how to "switch to annotations". Can you please point me to developer documentation that would help me here?
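For orientation only: Solr's V2 framework dispatches commands to annotated handler methods (org.apache.solr.api.EndPoint and @Command in the real code base). The self-contained sketch below re-implements just the dispatch idea with a stand-in annotation to show the mechanism; it is not Solr code, and the real annotations additionally carry paths, HTTP methods, and permissions.

```java
import java.lang.annotation.*;
import java.lang.reflect.*;
import java.util.*;

// Stand-in for Solr's @Command: maps a V2 command name to a handler method.
@Retention(RetentionPolicy.RUNTIME)
@interface Command { String name(); }

// Hypothetical handler class for the /api/cluster-style endpoint.
class ClusterAPI {
    @Command(name = "set-placement-plugin")
    public String setPlacementPlugin(Map<String, Object> payload) {
        return "stored placement plugin config: " + payload;
    }
}

class Dispatcher {
    /** Finds the method whose @Command name matches the command and invokes it. */
    static String dispatch(Object api, String command, Map<String, Object> payload)
            throws Exception {
        for (Method m : api.getClass().getMethods()) {
            Command c = m.getAnnotation(Command.class);
            if (c != null && c.name().equals(command)) {
                return (String) m.invoke(api, payload);
            }
        }
        throw new IllegalArgumentException("unknown command: " + command);
    }
}
```

The practical upshot of the annotation approach is that the command name, its handler, and (in real Solr) its payload type live together in one class, so no parallel apispec json file needs to be maintained.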
Add your APIs here
…to push configuration to /clusterprops.json
Thanks!
…ed /api/cluster API
Force-pushed 51f1e46 to 607f164
Previous PR #1684 was too large and too slow.
This new PR takes into account (most) comments made on the old PR.
This code is untested! It wasn't actually run in its current form. It is still work in progress, but I want to make sure it is now acceptable, and possibly start parallelizing work on it... (post merge?)