Use current time as training data end time #547

Merged · 2 commits · May 17, 2022
92 changes: 39 additions & 53 deletions src/main/java/org/opensearch/ad/ml/EntityColdStarter.java
@@ -21,6 +21,7 @@
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Optional;
@@ -46,7 +47,6 @@
import org.opensearch.ad.feature.FeatureManager;
import org.opensearch.ad.feature.SearchFeatureDao;
import org.opensearch.ad.model.AnomalyDetector;
import org.opensearch.ad.model.AnomalyDetectorJob;
import org.opensearch.ad.model.Entity;
import org.opensearch.ad.model.IntervalTimeConfiguration;
import org.opensearch.ad.ratelimit.CheckpointWriteWorker;
@@ -220,6 +220,22 @@ private void coldStart(
) {
logger.debug("Trigger cold start for {}", modelId);

if (modelState == null || entity == null) {
Member commented:

If data isn't present currently but will get ingested in the future, and we are expecting a long initialization, would either of these conditions be met until then?

kaituo (Collaborator, Author) commented on May 17, 2022:

This is a method invariant; we don't expect either of them to be true, regardless of whether data is present or not.

listener
.onFailure(
new IllegalArgumentException(
String
.format(
Locale.ROOT,
"Cannot have empty model state or entity: model state [%b], entity [%b]",
modelState == null,
entity == null
)
)
);
return;
}

if (lastThrottledColdStartTime.plus(Duration.ofMinutes(coolDownMinutes)).isAfter(clock.instant())) {
listener.onResponse(null);
return;
@@ -252,7 +268,7 @@ private void coldStart(
try {
if (trainingData.isPresent()) {
List<double[][]> dataPoints = trainingData.get();
combineTrainSamples(dataPoints, modelId, modelState);
extractTrainSamples(dataPoints, modelId, modelState);
Queue<double[]> samples = modelState.getModel().getSamples();
// only train models if we have enough samples
if (samples.size() >= numMinSamples) {
@@ -272,7 +288,6 @@
} catch (Exception e) {
listener.onFailure(e);
}

}, exception -> {
try {
logger.error(new ParameterizedMessage("Error while cold start {}", modelId), exception);
@@ -404,42 +419,23 @@ private void getEntityColdStartData(String detectorId, Entity entity, ActionList
ActionListener<Optional<Long>> minTimeListener = ActionListener.wrap(earliest -> {
if (earliest.isPresent()) {
long startTimeMs = earliest.get().longValue();
nodeStateManager.getAnomalyDetectorJob(detectorId, ActionListener.wrap(jobOp -> {
if (!jobOp.isPresent()) {
listener.onFailure(new EndRunException(detectorId, "AnomalyDetector job is not available.", false));
Member commented:

Before, it seems like we used both cold start data and existing samples for the training data. Can you explain further why we are moving away from this, or was sample data never actually used because we didn't add samples until success anyway (as you mention in the description)?

kaituo (Collaborator, Author) commented:

No sample data was ever actually used, because we didn't add samples until success.

return;
}

AnomalyDetectorJob job = jobOp.get();
// End time uses milliseconds as start time is assumed to be in milliseconds.
// OpenSearch uses a set of preconfigured formats to recognize and parse these strings into a long value
// representing milliseconds-since-the-epoch in UTC.
// More on https://tinyurl.com/wub4fk92

// Existing samples either predate or coincide with cold start data. In either case,
// combining them without reordering based on timestamps is not ok. We might introduce
// anomalies in the process.
// An ideal solution would be to record timestamps of data points, combine existing
// samples with cold start samples, and interpolate afterwards. Recording timestamps
// requires changes across the board, like bwc in checkpoints. A pragmatic solution is to
// use job enabled time as the end time of the cold start period, since that makes it easier
// to combine existing samples with cold start data: we just need to append existing samples
// after cold start data, as existing samples all happen after job enabled time. There might
// be some gaps between the last cold start sample and the first accumulated sample.
// We will need to accept that precision loss in the current solution.
long endTimeMs = job.getEnabledTime().toEpochMilli();
Collaborator commented:

If an HC detector's realtime job is not restarted, the enabled time won't change. If there is not enough data before the job enabled time and the user doesn't backfill historical data, there is no chance to pass cold start, right? That seems like a critical bug if true. We'd better backport this to 1.x too.

kaituo (Collaborator, Author) commented:

Right. Yes, will backport.

Pair<Integer, Integer> params = selectRangeParam(detector);
int stride = params.getLeft();
int numberOfSamples = params.getRight();

// we start with round 0
getFeatures(listener, 0, coldStartData, detector, entity, stride, numberOfSamples, startTimeMs, endTimeMs);

}, listener::onFailure));
// End time uses milliseconds as start time is assumed to be in milliseconds.
// OpenSearch uses a set of preconfigured formats to recognize and parse these
// strings into a long value
// representing milliseconds-since-the-epoch in UTC.
// More on https://tinyurl.com/wub4fk92

long endTimeMs = clock.millis();
Pair<Integer, Integer> params = selectRangeParam(detector);
int stride = params.getLeft();
int numberOfSamples = params.getRight();

// we start with round 0
getFeatures(listener, 0, coldStartData, detector, entity, stride, numberOfSamples, startTimeMs, endTimeMs);
} else {
listener.onResponse(Optional.empty());
}

}, listener::onFailure);

searchFeatureDao
@@ -694,40 +690,30 @@ public void trainModelFromExistingSamples(ModelState<EntityModel> modelState, in
}

/**
* Precondition: we don't have enough training data.
* Combine training data with existing sample data.
* Existing samples either predate or coincide with cold start data. In either case,
* combining them without reordering based on timestamps is not ok. We might introduce
* anomalies in the process.
* An ideal solution would be to record timestamps of data points, combine existing
* samples with cold start samples, and interpolate afterwards. Recording timestamps
* requires changes across the board, like bwc in checkpoints. A pragmatic solution is to
* use job enabled time as the end time of the cold start period, since that makes it easier
* to combine existing samples with cold start data: we just need to append existing samples
* after cold start data, as existing samples all happen after job enabled time. There might
* be some gaps between the last cold start sample and the first accumulated sample.
* We will need to accept that precision loss in the current solution.
* Extract training data and put them into ModelState
*
* @param coldstartDatapoints training data generated from cold start
* @param modelId model Id
* @param entityState entity State
* @param modelState entity State
*/
private void combineTrainSamples(List<double[][]> coldstartDatapoints, String modelId, ModelState<EntityModel> entityState) {
if (coldstartDatapoints == null || coldstartDatapoints.size() == 0) {
private void extractTrainSamples(List<double[][]> coldstartDatapoints, String modelId, ModelState<EntityModel> modelState) {
if (coldstartDatapoints == null || coldstartDatapoints.size() == 0 || modelState == null) {
return;
}

EntityModel model = entityState.getModel();
EntityModel model = modelState.getModel();
if (model == null) {
model = new EntityModel(null, new ArrayDeque<>(), null);
modelState.setModel(model);
}

Queue<double[]> newSamples = new ArrayDeque<>();
for (double[][] consecutivePoints : coldstartDatapoints) {
for (int i = 0; i < consecutivePoints.length; i++) {
newSamples.add(consecutivePoints[i]);
}
}
newSamples.addAll(model.getSamples());

model.setSamples(newSamples);
}
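The extraction step above amounts to flattening the cold-start batches into a single FIFO queue in time order. As a standalone illustration (the class and method names here are invented for the sketch, not part of the plugin):

import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

final class SampleFlattening {
    // Flatten batches of consecutive data points into one FIFO queue,
    // preserving time order within and across batches.
    static Queue<double[]> flatten(List<double[][]> batches) {
        Queue<double[]> samples = new ArrayDeque<>();
        for (double[][] consecutivePoints : batches) {
            for (double[] point : consecutivePoints) {
                samples.add(point);
            }
        }
        return samples;
    }
}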

@@ -354,14 +354,8 @@ private ActionListener<Optional<AnomalyDetector>> onGetDetector(
ModelState<EntityModel> modelState = modelManager
.processEntityCheckpoint(checkpoint, entity, modelId, detectorId, detector.getShingleSize());

EntityModel entityModel = modelState.getModel();

ThresholdingResult result = null;
if (entityModel.getTrcf().isPresent()) {
result = modelManager.score(origRequest.getCurrentFeature(), modelId, modelState);
} else {
entityModel.addSample(origRequest.getCurrentFeature());
}
ThresholdingResult result = modelManager
.getAnomalyResultForEntity(origRequest.getCurrentFeature(), modelState, modelId, entity, detector.getShingleSize());

if (result != null && result.getRcfScore() > 0) {
AnomalyResult resultToSave = result
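For context, the branch removed above (score when a TRCF model is present, otherwise buffer the sample) is the behavior getAnomalyResultForEntity presumably folds in. A rough sketch of that contract, reusing the types visible in the diff but with an invented method name; this is illustrative, not the plugin's actual implementation:

private ThresholdingResult scoreOrBuffer(double[] currentFeature, String modelId, ModelState<EntityModel> modelState) {
    EntityModel model = modelState.getModel();
    if (model.getTrcf().isPresent()) {
        // Trained model available: score the feature vector.
        return modelManager.score(currentFeature, modelId, modelState);
    }
    // Not trained yet: accumulate the sample for a later cold start.
    model.addSample(currentFeature);
    return null; // callers skip saving a result when there is no score
}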
63 changes: 63 additions & 0 deletions src/test/java/org/opensearch/ad/NodeStateManagerTests.java
@@ -43,6 +43,7 @@
import org.opensearch.action.get.GetResponse;
import org.opensearch.action.search.SearchRequest;
import org.opensearch.ad.model.AnomalyDetector;
import org.opensearch.ad.model.AnomalyDetectorJob;
import org.opensearch.ad.settings.AnomalyDetectorSettings;
import org.opensearch.ad.transport.AnomalyResultTests;
import org.opensearch.ad.util.ClientUtil;
@@ -79,6 +80,7 @@ public class NodeStateManagerTests extends AbstractADTest {
private GetResponse checkpointResponse;
private ClusterService clusterService;
private ClusterSettings clusterSettings;
private AnomalyDetectorJob jobToCheck;

@Override
protected NamedXContentRegistry xContentRegistry() {
@@ -129,6 +131,7 @@ public void setUp() throws Exception {
stateManager = new NodeStateManager(client, xContentRegistry(), settings, clientUtil, clock, duration, clusterService);

checkpointResponse = mock(GetResponse.class);
jobToCheck = TestHelpers.randomAnomalyDetectorJob(true, Instant.ofEpochMilli(1602401500000L), null);
}

@Override
@@ -381,4 +384,64 @@ public void testSettingUpdateBackOffMin() {
when(clock.millis()).thenReturn(62000L);
assertTrue(!stateManager.isMuted(nodeId, adId));
}

@SuppressWarnings("unchecked")
private String setupJob() throws IOException {
String detectorId = jobToCheck.getName();

doAnswer(invocation -> {
GetRequest request = invocation.getArgument(0);
ActionListener<GetResponse> listener = invocation.getArgument(1);
if (request.index().equals(AnomalyDetectorJob.ANOMALY_DETECTOR_JOB_INDEX)) {
listener.onResponse(TestHelpers.createGetResponse(jobToCheck, detectorId, AnomalyDetectorJob.ANOMALY_DETECTOR_JOB_INDEX));
}
return null;
}).when(client).get(any(), any(ActionListener.class));

return detectorId;
}

public void testGetAnomalyJob() throws IOException, InterruptedException {
String detectorId = setupJob();
final CountDownLatch inProgressLatch = new CountDownLatch(1);
stateManager.getAnomalyDetectorJob(detectorId, ActionListener.wrap(asDetector -> {
assertEquals(jobToCheck, asDetector.get());
inProgressLatch.countDown();
}, exception -> {
assertTrue(false);
inProgressLatch.countDown();
}));
assertTrue(inProgressLatch.await(100, TimeUnit.SECONDS));
}

/**
* Test that we cache the anomaly detector job definition after the first call
* @throws IOException if client throws exception
* @throws InterruptedException if the current thread is interrupted while waiting
*/
@SuppressWarnings("unchecked")
public void testRepeatedGetAnomalyJob() throws IOException, InterruptedException {
String detectorId = setupJob();
final CountDownLatch inProgressLatch = new CountDownLatch(2);

stateManager.getAnomalyDetectorJob(detectorId, ActionListener.wrap(asDetector -> {
assertEquals(jobToCheck, asDetector.get());
inProgressLatch.countDown();
}, exception -> {
assertTrue(false);
inProgressLatch.countDown();
}));

stateManager.getAnomalyDetectorJob(detectorId, ActionListener.wrap(asDetector -> {
assertEquals(jobToCheck, asDetector.get());
inProgressLatch.countDown();
}, exception -> {
assertTrue(false);
inProgressLatch.countDown();
}));

assertTrue(inProgressLatch.await(100, TimeUnit.SECONDS));

verify(client, times(1)).get(any(), any(ActionListener.class));
}
}
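The repeated-get test verifies that the job definition is fetched from the index only once and served from memory afterwards. The caching pattern being exercised is roughly the following (a simplified synchronous sketch, not NodeStateManager's actual async code):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

final class SingleLoadCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();

    // First access runs the loader (e.g., a client get); subsequent calls
    // for the same key return the cached value without another lookup.
    V get(K key, Function<K, V> loader) {
        return cache.computeIfAbsent(key, loader);
    }
}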