
Upgrade command leads to huge file size increase #9431

Closed
yangou opened this issue Jan 27, 2015 · 6 comments

yangou commented Jan 27, 2015

Hi, I'm not sure whether this is expected or not.
I previously ran into issue #9406: I wanted to upgrade my ES cluster from 1.1.1 to 1.4.2, but hit a bug related to checksums that will be fixed in 1.4.3.

However, I'm in the middle of the upgrade: two of my three cluster nodes have already been upgraded to 1.4.2 without issues. As suggested in #9406, if I don't want to lose any data while waiting for 1.4.3, the only option seems to be upgrading the node that's still running 1.1.1 to 1.4.1 first.

Then I can use the segment upgrade API to upgrade the version of the segments. The API is documented here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-upgrade.html
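For reference, the upgrade is started and monitored through the indices upgrade API described at that link. A minimal sketch, assuming the index name `my_index` (taken from the status output below) and a node listening on `localhost:9200`:

```shell
# Start upgrading all segments of my_index to the current Lucene format.
# The request kicks off merges on the cluster's nodes.
curl -XPOST 'http://localhost:9200/my_index/_upgrade'

# Monitor progress: reports total size and bytes still in the old format.
curl -XGET 'http://localhost:9200/my_index/_upgrade?pretty'
```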

So I did that, and I'm using the monitoring command to check the upgrade status. However, the status stays at something like:

{
  my_index: {
    size: "894gb",
    size_in_bytes: 960014592499,
    size_to_upgrade: "840.8gb",
    size_to_upgrade_in_bytes: 902832892179
  }
}

The size keeps increasing (it has grown by about 40GB so far), but the size_to_upgrade stays the same. And all queries are timing out.

I'm not sure what's happening. Is this behavior expected?
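As a rough reading of the status above: `size_in_bytes` is the total index size and `size_to_upgrade_in_bytes` is the portion still in the old segment format, so their difference gives an approximate progress figure. A minimal sketch (the status dict is copied from the output above; `upgrade_progress` is a hypothetical helper, not an Elasticsearch API, and the estimate is skewed if the index is growing at the same time):

```python
def upgrade_progress(index_status):
    """Percent of bytes already in the new segment format (approximate)."""
    total = index_status["size_in_bytes"]
    remaining = index_status["size_to_upgrade_in_bytes"]
    return 100.0 * (total - remaining) / total

# Status copied from the _upgrade monitoring output above.
status = {
    "my_index": {
        "size": "894gb",
        "size_in_bytes": 960014592499,
        "size_to_upgrade": "840.8gb",
        "size_to_upgrade_in_bytes": 902832892179,
    }
}

for name, s in status.items():
    print("%s: %.1f%% upgraded" % (name, upgrade_progress(s)))
# prints: my_index: 6.0% upgraded
```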


yangou commented Jan 27, 2015

Eventually, it killed one of my nodes. And when I brought it back into the cluster, all the shards on that node were unassigned.


yangou commented Jan 27, 2015

Any way to stop the upgrade process?

rjernst (Member) commented Jan 27, 2015

@yangou No way to kill an upgrade in progress, except to restart the cluster.

Regarding the sizing you see in the upgrade status: are you indexing documents while this upgrade is running? That would explain the size increasing. Also, do you have anything like an optimize going on simultaneously?


yangou commented Jan 27, 2015

@rjernst Thanks for the quick response. I don't have an optimize running, and I queued up all the incoming traffic for that index, so no new data should be coming in.

The good news is that, after a disaster recovery, it looks like one of the shards finished upgrading: its size dropped from 180GB to 80GB. Is this also expected?

And one guess about the upgrade progress monitoring: is it true that the number of remaining bytes to upgrade only changes after a shard finishes upgrading?

After killing and restarting the dead node, will the upgrade continue on that node?

Thanks to my replicas, there has been no data loss so far. I have 4 shards remaining to upgrade; hopefully everything will be fine. Our traffic spike actually arrives in 3 hours. Fingers crossed.

rjernst (Member) commented Jan 27, 2015

After killing and restarting the dead node, will the upgrade continue on that node?

No; restarting a node cancels any running or scheduled merges. You would need to run the upgrade request again (which is a no-op for indexes that are already upgraded).
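A minimal sketch of that recovery step, again assuming the index name `my_index` from the earlier status output. Re-issuing the request is safe because segments already in the new format are skipped:

```shell
# Re-issue the upgrade after the node restart; already-upgraded
# segments are left alone, so this is a no-op where work is done.
curl -XPOST 'http://localhost:9200/my_index/_upgrade'

# size_to_upgrade_in_bytes should reach 0 once everything is upgraded.
curl -XGET 'http://localhost:9200/my_index/_upgrade?pretty'
```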

I would like to eventually make upgrading a "true" long running task once we have the task management api (see #6914).


yangou commented Jan 27, 2015

The upgrade was successful, but I think I did it in an unsupported way, mixing versions 1.4.2 and 1.4.1 in the same cluster (I had no choice). Because of that, it errored out and killed one of my nodes.

I think this issue can be closed, since it doesn't seem to be a bug.
Please help close this issue. Thanks!
