feat: incremental reindex_studio management command [FC-0062] #35864
Merged: bradenmacdonald merged 7 commits into openedx:master from open-craft:dvz/refactor-reindex-studio on Dec 6, 2024.
Changes from all commits (7):

- 9c457b6 feat: incremental reindex_studio management command (DanielVZ96)
- f54cbb4 fix: tests, linting and formatting (DanielVZ96)
- 0643c9b fix: improve output of init_index (DanielVZ96)
- 077e426 fix: address bradens comments (DanielVZ96)
- 4a0e89c fix: settings name overshadow (DanielVZ96)
- 85d280a refactor: extract constants to index config module (DanielVZ96)
- c41c44d fix: add index config docstring (DanielVZ96)
@@ -5,7 +5,7 @@
 import logging
 import time
-from contextlib import contextmanager
+from contextlib import contextmanager, nullcontext
 from datetime import datetime, timedelta, timezone
 from functools import wraps
 from typing import Callable, Generator
@@ -24,7 +24,14 @@
 from rest_framework.request import Request
 from common.djangoapps.student.role_helpers import get_course_roles
 from openedx.core.djangoapps.content.course_overviews.models import CourseOverview
-from openedx.core.djangoapps.content.search.models import get_access_ids_for_request
+from openedx.core.djangoapps.content.search.models import get_access_ids_for_request, IncrementalIndexCompleted
+from openedx.core.djangoapps.content.search.index_config import (
+    INDEX_DISTINCT_ATTRIBUTE,
+    INDEX_FILTERABLE_ATTRIBUTES,
+    INDEX_SEARCHABLE_ATTRIBUTES,
+    INDEX_SORTABLE_ATTRIBUTES,
+    INDEX_RANKING_RULES,
+)
 from openedx.core.djangoapps.content_libraries import api as lib_api
 from xmodule.modulestore.django import modulestore
@@ -217,6 +224,42 @@ def _using_temp_index(status_cb: Callable[[str], None] | None = None) -> Generat
     _wait_for_meili_task(client.delete_index(temp_index_name))


+def _index_is_empty(index_name: str) -> bool:
+    """
+    Check if an index is empty
+
+    Args:
+        index_name (str): The name of the index to check
+    """
+    client = _get_meilisearch_client()
+    index = client.get_index(index_name)
+    return index.get_stats().number_of_documents == 0
+
+
+def _configure_index(index_name):
+    """
+    Configure the index. The following index settings are best changed on an empty index.
+    Changing them on a populated index will "re-index all documents in the index", which can take some time.
+
+    Args:
+        index_name (str): The name of the index to configure
+    """
+    client = _get_meilisearch_client()
+
+    # Mark usage_key as unique (it's not the primary key for the index, but nevertheless must be unique):
+    client.index(index_name).update_distinct_attribute(INDEX_DISTINCT_ATTRIBUTE)
+    # Mark which attributes can be used for filtering/faceted search:
+    client.index(index_name).update_filterable_attributes(INDEX_FILTERABLE_ATTRIBUTES)
+    # Mark which attributes are used for keyword search, in order of importance:
+    client.index(index_name).update_searchable_attributes(INDEX_SEARCHABLE_ATTRIBUTES)
+    # Mark which attributes can be used for sorting search results:
+    client.index(index_name).update_sortable_attributes(INDEX_SORTABLE_ATTRIBUTES)
+
+    # Update the search ranking rules to let the (optional) "sort" parameter take precedence over keyword relevance.
+    # cf https://www.meilisearch.com/docs/learn/core_concepts/relevancy
+    client.index(index_name).update_ranking_rules(INDEX_RANKING_RULES)
+
+
 def _recurse_children(block, fn, status_cb: Callable[[str], None] | None = None) -> None:
     """
     Recurse the children of an XBlock and call the given function for each
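Note on `_configure_index`: each `update_*` call above creates a separate asynchronous task on the Meilisearch server. As a sketch of an alternative, the settings could also be assembled into one payload and applied with a single `update_settings` call. The payload keys are Meilisearch's documented setting names; the sample values below are illustrative only, not the real index config:

```python
def build_settings_payload(distinct, filterable, searchable, sortable, ranking):
    """Assemble the five settings used by _configure_index into one payload dict."""
    return {
        "distinctAttribute": distinct,
        "filterableAttributes": filterable,
        "searchableAttributes": searchable,
        "sortableAttributes": sortable,
        "rankingRules": ranking,
    }

payload = build_settings_payload(
    "usage_key",                      # illustrative values, not the full config
    ["block_id", "context_key"],
    ["display_name", "content"],
    ["created", "modified"],
    ["sort", "words", "typo", "proximity", "attribute", "exactness"],
)
# With a live client this could then be applied as a single server-side task:
# client.index(index_name).update_settings(payload)
assert payload["distinctAttribute"] == "usage_key"
```

The trade-off is that one payload is one task to wait on, at the cost of the per-setting comments the current code carries.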
@@ -279,8 +322,75 @@ def is_meilisearch_enabled() -> bool:
     return False


-# pylint: disable=too-many-statements
-def rebuild_index(status_cb: Callable[[str], None] | None = None) -> None:
+def reset_index(status_cb: Callable[[str], None] | None = None) -> None:
+    """
+    Reset the Meilisearch index, deleting all documents and reconfiguring it
+    """
+    if status_cb is None:
+        status_cb = log.info
+
+    status_cb("Creating new empty index...")
+    with _using_temp_index(status_cb) as temp_index_name:
+        _configure_index(temp_index_name)
+        status_cb("Index recreated!")
+    status_cb("Index reset complete.")
+
+
+def _is_index_configured(index_name: str) -> bool:
+    """
+    Check if an index is completely configured
+
+    Args:
+        index_name (str): The name of the index to check
+    """
+    client = _get_meilisearch_client()
+    index = client.get_index(index_name)
+    index_settings = index.get_settings()
+    for k, v in (
+        ("distinctAttribute", INDEX_DISTINCT_ATTRIBUTE),
+        ("filterableAttributes", INDEX_FILTERABLE_ATTRIBUTES),
+        ("searchableAttributes", INDEX_SEARCHABLE_ATTRIBUTES),
+        ("sortableAttributes", INDEX_SORTABLE_ATTRIBUTES),
+        ("rankingRules", INDEX_RANKING_RULES),
+    ):
+        setting = index_settings.get(k, [])
+        if isinstance(v, list):
+            v = set(v)
+            setting = set(setting)
+        if setting != v:
+            return False
+    return True
+
+
+def init_index(status_cb: Callable[[str], None] | None = None, warn_cb: Callable[[str], None] | None = None) -> None:
+    """
+    Initialize the Meilisearch index, creating it and configuring it if it doesn't exist
+    """
+    if status_cb is None:
+        status_cb = log.info
+    if warn_cb is None:
+        warn_cb = log.warning
+
+    if _index_exists(STUDIO_INDEX_NAME):
+        if _index_is_empty(STUDIO_INDEX_NAME):
+            warn_cb(
+                "The studio search index is empty. Please run ./manage.py cms reindex_studio"
+                " --experimental [--incremental]"
+            )
+            return
+        if not _is_index_configured(STUDIO_INDEX_NAME):
+            warn_cb(
+                "A rebuild of the index is required. Please run ./manage.py cms reindex_studio"
+                " --experimental [--incremental]"
+            )
+            return
+        status_cb("Index already exists and is configured.")
+        return
+
+    reset_index(status_cb)
+
+
+def rebuild_index(status_cb: Callable[[str], None] | None = None, incremental=False) -> None:  # lint-amnesty, pylint: disable=too-many-statements
     """
     Rebuild the Meilisearch index from scratch
     """
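A small illustration of the comparison logic in `_is_index_configured`: list-valued settings are compared as sets, so the order in which the server reports attributes cannot cause a false "not configured" result, while scalar settings (like `distinctAttribute`) are compared directly. A simplified sketch with hypothetical values:

```python
def settings_match(expected, actual):
    """Mirror of the loop body in _is_index_configured (simplified sketch)."""
    if isinstance(expected, list):
        # Compare as sets: attribute order does not matter for this check.
        return set(expected) == set(actual)
    # Scalar settings are compared directly.
    return expected == actual

assert settings_match(["org", "tags"], ["tags", "org"])   # order ignored
assert settings_match("usage_key", "usage_key")           # scalar compared directly
assert not settings_match(["org"], ["org", "tags"])       # extra/missing attribute detected
```

One consequence worth noting: for order-sensitive settings such as `rankingRules`, a set comparison would also accept a reordered list, so this check verifies membership rather than ordering.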
@@ -292,96 +402,40 @@ def rebuild_index(status_cb: Callable[[str], None] | None = None) -> None:

     # Get the lists of libraries
     status_cb("Counting libraries...")
-    lib_keys = [lib.library_key for lib in lib_api.ContentLibrary.objects.select_related('org').only('org', 'slug')]
+    keys_indexed = []
+    if incremental:
+        keys_indexed = list(IncrementalIndexCompleted.objects.values_list("context_key", flat=True))
+    lib_keys = [
+        lib.library_key
+        for lib in lib_api.ContentLibrary.objects.select_related("org").only("org", "slug").order_by("-id")
+        if lib.library_key not in keys_indexed
+    ]
     num_libraries = len(lib_keys)

     # Get the list of courses
     status_cb("Counting courses...")
     num_courses = CourseOverview.objects.count()

     # Some counters so we can track our progress as indexing progresses:
-    num_contexts = num_courses + num_libraries
-    num_contexts_done = 0  # How many courses/libraries we've indexed
+    num_libs_skipped = len(keys_indexed)
+    num_contexts = num_courses + num_libraries + num_libs_skipped
+    num_contexts_done = 0 + num_libs_skipped  # How many courses/libraries we've indexed
     num_blocks_done = 0  # How many individual components/XBlocks we've indexed

     status_cb(f"Found {num_courses} courses, {num_libraries} libraries.")
-    with _using_temp_index(status_cb) as temp_index_name:
+    with _using_temp_index(status_cb) if not incremental else nullcontext(STUDIO_INDEX_NAME) as index_name:
         ############## Configure the index ##############

-        # The following index settings are best changed on an empty index.
-        # Changing them on a populated index will "re-index all documents in the index, which can take some time"
+        # The index settings are best changed on an empty index.
+        # Changing them on a populated index will "re-index all documents in the index", which can take some time
         # and use more RAM. Instead, we configure an empty index then populate it one course/library at a time.

-        # Mark usage_key as unique (it's not the primary key for the index, but nevertheless must be unique):
-        client.index(temp_index_name).update_distinct_attribute(Fields.usage_key)
-        # Mark which attributes can be used for filtering/faceted search:
-        client.index(temp_index_name).update_filterable_attributes([
-            # Get specific block/collection using combination of block_id and context_key
-            Fields.block_id,
-            Fields.block_type,
-            Fields.context_key,
-            Fields.usage_key,
-            Fields.org,
-            Fields.tags,
-            Fields.tags + "." + Fields.tags_taxonomy,
-            Fields.tags + "." + Fields.tags_level0,
-            Fields.tags + "." + Fields.tags_level1,
-            Fields.tags + "." + Fields.tags_level2,
-            Fields.tags + "." + Fields.tags_level3,
-            Fields.collections,
-            Fields.collections + "." + Fields.collections_display_name,
-            Fields.collections + "." + Fields.collections_key,
-            Fields.type,
-            Fields.access_id,
-            Fields.last_published,
-            Fields.content + "." + Fields.problem_types,
-        ])
-        # Mark which attributes are used for keyword search, in order of importance:
-        client.index(temp_index_name).update_searchable_attributes([
-            # Keyword search does _not_ search the course name, course ID, breadcrumbs, block type, or other fields.
-            Fields.display_name,
-            Fields.block_id,
-            Fields.content,
-            Fields.description,
-            Fields.tags,
-            Fields.collections,
-            # If we don't list the following sub-fields _explicitly_, they're only sometimes searchable - that is, they
-            # are searchable only if at least one document in the index has a value. If we didn't list them here and,
-            # say, there were no tags.level3 tags in the index, the client would get an error if trying to search for
-            # these sub-fields: "Attribute `tags.level3` is not searchable."
-            Fields.tags + "." + Fields.tags_taxonomy,
-            Fields.tags + "." + Fields.tags_level0,
-            Fields.tags + "." + Fields.tags_level1,
-            Fields.tags + "." + Fields.tags_level2,
-            Fields.tags + "." + Fields.tags_level3,
-            Fields.collections + "." + Fields.collections_display_name,
-            Fields.collections + "." + Fields.collections_key,
-            Fields.published + "." + Fields.display_name,
-            Fields.published + "." + Fields.published_description,
-        ])
-        # Mark which attributes can be used for sorting search results:
-        client.index(temp_index_name).update_sortable_attributes([
-            Fields.display_name,
-            Fields.created,
-            Fields.modified,
-            Fields.last_published,
-        ])
-
-        # Update the search ranking rules to let the (optional) "sort" parameter take precedence over keyword relevance.
-        # cf https://www.meilisearch.com/docs/learn/core_concepts/relevancy
-        client.index(temp_index_name).update_ranking_rules([
-            "sort",
-            "words",
-            "typo",
-            "proximity",
-            "attribute",
-            "exactness",
-        ])
+        if not incremental:
+            _configure_index(index_name)

         ############## Libraries ##############
         status_cb("Indexing libraries...")

-        def index_library(lib_key: str) -> list:
+        def index_library(lib_key: LibraryLocatorV2) -> list:
             docs = []
             for component in lib_api.get_library_components(lib_key):
                 try:
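The `with _using_temp_index(...) if not incremental else nullcontext(STUDIO_INDEX_NAME)` line above is the pivot of this PR: in a full rebuild, work happens in a temporary index that is swapped in afterwards; in an incremental run, `nullcontext` yields the live index name with no setup or teardown. A self-contained sketch of that pattern (names here are stand-ins, not the real helpers):

```python
from contextlib import contextmanager, nullcontext

@contextmanager
def using_temp_index():
    # Stand-in for _using_temp_index: create a temp index, yield its name,
    # then (in the real code) swap it into place and delete it.
    yield "studio_content_temp"

def pick_index(incremental: bool) -> str:
    # nullcontext(value) yields value with no setup/teardown, so a single
    # `with` statement covers both the temp-index and in-place modes.
    with (using_temp_index() if not incremental else nullcontext("studio_content")) as index_name:
        return index_name

assert pick_index(incremental=False) == "studio_content_temp"
assert pick_index(incremental=True) == "studio_content"
```

This keeps the large indexing body identical in both modes; only the target index differs.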
@@ -396,7 +450,7 @@ def index_library(lib_key: str) -> list:
             if docs:
                 try:
                     # Add all the docs in this library at once (usually faster than adding one at a time):
-                    _wait_for_meili_task(client.index(temp_index_name).add_documents(docs))
+                    _wait_for_meili_task(client.index(index_name).add_documents(docs))
                 except (TypeError, KeyError, MeilisearchError) as err:
                     status_cb(f"Error indexing library {lib_key}: {err}")
             return docs
@@ -416,7 +470,7 @@ def index_collection_batch(batch, num_done, library_key) -> int:
             if docs:
                 try:
                     # Add docs in batch of 100 at once (usually faster than adding one at a time):
-                    _wait_for_meili_task(client.index(temp_index_name).add_documents(docs))
+                    _wait_for_meili_task(client.index(index_name).add_documents(docs))
                 except (TypeError, KeyError, MeilisearchError) as err:
                     status_cb(f"Error indexing collection batch {p}: {err}")
             return num_done
@@ -439,6 +493,8 @@ def index_collection_batch(batch, num_done, library_key) -> int:
                     num_collections_done,
                     lib_key,
                 )
+            if incremental:
+                IncrementalIndexCompleted.objects.get_or_create(context_key=lib_key)
             status_cb(f"{num_collections_done}/{num_collections} collections indexed for library {lib_key}")

             num_contexts_done += 1
@@ -464,7 +520,7 @@ def add_with_children(block):

             if docs:
                 # Add all the docs in this course at once (usually faster than adding one at a time):
-                _wait_for_meili_task(client.index(temp_index_name).add_documents(docs))
+                _wait_for_meili_task(client.index(index_name).add_documents(docs))
             return docs

         paginator = Paginator(CourseOverview.objects.only('id', 'display_name'), 1000)
@@ -473,10 +529,16 @@ def add_with_children(block):
             status_cb(
                 f"{num_contexts_done + 1}/{num_contexts}. Now indexing course {course.display_name} ({course.id})"
             )
+            if course.id in keys_indexed:
+                num_contexts_done += 1
+                continue
             course_docs = index_course(course)
+            if incremental:
+                IncrementalIndexCompleted.objects.get_or_create(context_key=course.id)
             num_contexts_done += 1
             num_blocks_done += len(course_docs)

+    IncrementalIndexCompleted.objects.all().delete()
     status_cb(f"Done! {num_blocks_done} blocks indexed across {num_contexts_done} courses, collections and libraries.")

> Review comment (on the `continue` line): Skipped courses are still included in the total count and in the …
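The checkpointing flow above — skip keys recorded in `IncrementalIndexCompleted`, record each context as it completes, and clear all records once a full pass finishes — can be sketched with a plain set standing in for the Django model (all names below are hypothetical stand-ins):

```python
def index_all(contexts, completed, index_one):
    """Resumable pass over contexts; `completed` is the persistent checkpoint store."""
    done = 0
    for key in contexts:
        if key in completed:    # already indexed by a previous, interrupted run
            done += 1
            continue
        index_one(key)
        completed.add(key)      # checkpoint: survives an interruption
        done += 1
    completed.clear()           # full pass finished; reset the checkpoints
    return done

indexed = []
completed = {"course-v1:A"}     # pretend a prior run already did course A
n = index_all(["course-v1:A", "course-v1:B"], completed, indexed.append)
assert n == 2 and indexed == ["course-v1:B"] and completed == set()
```

In the real code the store is a database table, so a crashed `reindex_studio --incremental` run can be restarted and resume where it left off instead of starting over.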
openedx/core/djangoapps/content/search/index_config.py (new file, per the import path above)

@@ -0,0 +1,70 @@
"""Configuration for the search index.""" | ||
from .documents import Fields | ||
|
||
|
||
INDEX_DISTINCT_ATTRIBUTE = "usage_key" | ||
|
||
# Mark which attributes can be used for filtering/faceted search: | ||
INDEX_FILTERABLE_ATTRIBUTES = [ | ||
# Get specific block/collection using combination of block_id and context_key | ||
Fields.block_id, | ||
Fields.block_type, | ||
Fields.context_key, | ||
Fields.usage_key, | ||
Fields.org, | ||
Fields.tags, | ||
Fields.tags + "." + Fields.tags_taxonomy, | ||
Fields.tags + "." + Fields.tags_level0, | ||
Fields.tags + "." + Fields.tags_level1, | ||
Fields.tags + "." + Fields.tags_level2, | ||
Fields.tags + "." + Fields.tags_level3, | ||
Fields.collections, | ||
Fields.collections + "." + Fields.collections_display_name, | ||
Fields.collections + "." + Fields.collections_key, | ||
Fields.type, | ||
Fields.access_id, | ||
Fields.last_published, | ||
Fields.content + "." + Fields.problem_types, | ||
] | ||
|
||
# Mark which attributes are used for keyword search, in order of importance: | ||
INDEX_SEARCHABLE_ATTRIBUTES = [ | ||
# Keyword search does _not_ search the course name, course ID, breadcrumbs, block type, or other fields. | ||
Fields.display_name, | ||
Fields.block_id, | ||
Fields.content, | ||
Fields.description, | ||
Fields.tags, | ||
Fields.collections, | ||
# If we don't list the following sub-fields _explicitly_, they're only sometimes searchable - that is, they | ||
# are searchable only if at least one document in the index has a value. If we didn't list them here and, | ||
# say, there were no tags.level3 tags in the index, the client would get an error if trying to search for | ||
# these sub-fields: "Attribute `tags.level3` is not searchable." | ||
Fields.tags + "." + Fields.tags_taxonomy, | ||
Fields.tags + "." + Fields.tags_level0, | ||
Fields.tags + "." + Fields.tags_level1, | ||
Fields.tags + "." + Fields.tags_level2, | ||
Fields.tags + "." + Fields.tags_level3, | ||
Fields.collections + "." + Fields.collections_display_name, | ||
Fields.collections + "." + Fields.collections_key, | ||
Fields.published + "." + Fields.display_name, | ||
Fields.published + "." + Fields.published_description, | ||
] | ||
|
||
# Mark which attributes can be used for sorting search results: | ||
INDEX_SORTABLE_ATTRIBUTES = [ | ||
Fields.display_name, | ||
Fields.created, | ||
Fields.modified, | ||
Fields.last_published, | ||
] | ||
|
||
# Update the search ranking rules to let the (optional) "sort" parameter take precedence over keyword relevance. | ||
INDEX_RANKING_RULES = [ | ||
"sort", | ||
"words", | ||
"typo", | ||
"proximity", | ||
"attribute", | ||
"exactness", | ||
] |
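The dotted sub-field names in this module (e.g. `Fields.tags + "." + Fields.tags_level0` producing something like `"tags.level0"`) address Meilisearch nested-document attributes. A toy sketch with assumed constant values — the real `Fields` class lives in `documents.py` and its exact string values are not shown in this diff:

```python
class Fields:
    # Assumed illustrative values, mirroring the shape of the real constants.
    tags = "tags"
    tags_level0 = "level0"
    collections = "collections"
    collections_key = "key"

filterable = [
    Fields.tags,                                    # whole nested object
    Fields.tags + "." + Fields.tags_level0,         # one sub-field of it
    Fields.collections + "." + Fields.collections_key,
]
assert filterable == ["tags", "tags.level0", "collections.key"]
```

Building the dotted names from the same constants used elsewhere keeps the config from drifting out of sync with the document schema.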
@DanielVZ96 @bradenmacdonald @pomegranited I am concerned about the functionality of this reset. If we need to create and populate a new index on a large instance, we need to run `--reset`, which removes the old index, and then run `--incremental`. However, the search results will be broken while the new index is being populated. Wouldn't it be better to add an option to `--incremental` so that it creates a temporary index and does a swap, so as not to break the search results, as the non-incremental form does?
@ChrisChV I'm assuming that most times an administrator is running this incremental build, they either have no existing index or their existing index is from an old version and is missing some configuration/columns/etc. (so it actually is broken). In that case, removing the old index and starting an incremental build will result in incomplete search results for a while, but the search will be working without errors. And, the incremental index rebuilds the newest courses first, so the results should fill in relatively quickly.
I think for large instances, (where the reindex can take several days) it's better to have a working search with incomplete results, than to have a totally broken search (that displays errors because the old index does not exist or has the wrong configuration).
For Teak I would like to find a way to simplify this, maybe by only having the incremental option and allowing it to be either using a temporary index or not. But I think we'll need to see how this works first and hear from people testing it out.
OK that's fine for me 👍