From 426da398f10b23e9b7a6ced81b3a5edbf5ae9a9f Mon Sep 17 00:00:00 2001
From: Kalyan
Date: Tue, 26 Sep 2023 01:04:40 +0530
Subject: [PATCH] fix spelling (#1363)

Signed-off-by: Kalyan
(cherry picked from commit 358354c6aff10353ee2c08d64456d339b505ec74)
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 5cf4b5b764..476f2f3f8e 100644
--- a/README.md
+++ b/README.md
@@ -26,7 +26,7 @@ Machine Learning Commons for OpenSearch is a new solution that make it easy to d
 Until today, the challenge is significant to build a new machine learning feature inside OpenSearch. The reasons include:
 * **Disruption to OpenSearch Core features**. Machine learning is very computationally intensive. But currently there is no way to add dedicated computation resources in OpenSearch for machine learning jobs, hence these jobs have to share same resources with Core features, such as: indexing and searching. That might cause the latency increasing on search request, and cause circuit breaker exception on memory usage. To address this, we have to carefully distribute models and limit the data size to run the AD job. When more and more ML features are added into OpenSearch, it will become much harder to manage.
-* **Lack of support for machine learning algorithms.** Customers need more algorighms within Opensearch, otherwise the data need be exported to outside of elasticsearch, such as s3 first to do the job, which will bring extra cost and latency.
+* **Lack of support for machine learning algorithms.** Customers need more algorithms within Opensearch, otherwise the data need be exported to outside of elasticsearch, such as s3 first to do the job, which will bring extra cost and latency.
 * **Lack of resource management mechanism between multiple machine learning jobs.** It's hard to coordinate the resources between multi features.