Feat/contain nltk assets in docker image #3853

Merged (39 commits, Jan 8, 2025)

Commits
3d9e013
feat:contain nltk data in docker image
christinestraub Dec 24, 2024
3c7985f
fix test dockerfile errors
christinestraub Dec 24, 2024
8ee6a2b
feat:using nltk assets in docker image
christinestraub Dec 26, 2024
2ed5675
feat:fix makefile error
christinestraub Dec 26, 2024
c482698
test:fix lint errors
christinestraub Dec 26, 2024
9d1a57d
test:fix test_dockerfile error
christinestraub Dec 27, 2024
27eea08
feat:integrate nltk data into docker image
christinestraub Jan 2, 2025
6b1ee55
test:fix version
christinestraub Jan 3, 2025
03a0adf
feat:fix nltk data path
christinestraub Jan 3, 2025
9c42660
feat:fix ingest test errors
christinestraub Jan 3, 2025
efe1167
feat:update dockerfile
christinestraub Jan 3, 2025
b961399
test: fix lint errors
christinestraub Jan 3, 2025
e7333e6
test:fix dockerfile errors
christinestraub Jan 3, 2025
dad9f87
feat:update ability to validate nltk assets
christinestraub Jan 3, 2025
f37a922
test:fix lint errors
christinestraub Jan 3, 2025
908c9e7
feat:fix nltk path error
christinestraub Jan 6, 2025
494b0e0
fix conflicts error
christinestraub Jan 6, 2025
30198a7
Merge branch 'main' into feat/contain-nltk-assets-in-docker-image
christinestraub Jan 6, 2025
39187d0
feat:add nltk model installation logic
christinestraub Jan 6, 2025
915b0ce
feat:revert ci.yml
christinestraub Jan 6, 2025
f4611a1
feat:revert Makefile
christinestraub Jan 6, 2025
2471633
feat:fix ingest test errors
christinestraub Jan 6, 2025
28262a4
feat:fix ingest test errors
christinestraub Jan 6, 2025
f778c0a
test:fix lint errors
christinestraub Jan 6, 2025
648c24a
test:fix lint errors
christinestraub Jan 6, 2025
a456d4e
feat: revert test function
christinestraub Jan 6, 2025
c2616cc
test:fix unit errors
christinestraub Jan 6, 2025
a4c4cbb
commented ondrive sh
christinestraub Jan 6, 2025
26b42d8
commented outlook sh
christinestraub Jan 6, 2025
82f16ec
commented outlook sh
christinestraub Jan 6, 2025
8b2f950
feat:update dockerfile and tokenize.py
christinestraub Jan 6, 2025
c7942ad
feat:update download_nltk_packages()
christinestraub Jan 6, 2025
8d57220
added changes as per the suggestion
christinestraub Jan 7, 2025
95358e0
updated code as per suggestion
christinestraub Jan 7, 2025
6e9931a
modified .gitignore
christinestraub Jan 7, 2025
7eeb411
removed if statement in tokenize
christinestraub Jan 7, 2025
26854e5
removed unused vars and function
christinestraub Jan 7, 2025
f340d65
remove download check function from test_tokenize
christinestraub Jan 7, 2025
2830248
Update CHANGELOG.md
christinestraub Jan 7, 2025
1 change: 1 addition & 0 deletions .gitignore
@@ -24,6 +24,7 @@ wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
nltk_data/
.installed.cfg
*.egg
MANIFEST
11 changes: 11 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,14 @@
## 0.16.13-dev0

### Enhancements

### Features

### Fixes

- **Fix NLTK Download** to use nltk assets in docker image
- removed the ability to automatically download nltk package if missing

## 0.16.12

### Enhancements
25 changes: 15 additions & 10 deletions Dockerfile
@@ -1,4 +1,7 @@
FROM quay.io/unstructured-io/base-images:wolfi-base-latest as base
FROM quay.io/unstructured-io/base-images:wolfi-base-latest AS base

ARG PYTHON=python3.11
ARG PIP=pip3.11

USER root

@@ -10,18 +13,20 @@ COPY test_unstructured test_unstructured
COPY example-docs example-docs

RUN chown -R notebook-user:notebook-user /app && \
apk add font-ubuntu git && \
fc-cache -fv && \
if [ "$(readlink -f /usr/bin/python3)" != "/usr/bin/python3.11" ]; then \
ln -sf /usr/bin/python3.11 /usr/bin/python3; \
fi
apk add font-ubuntu git && \
fc-cache -fv && \
[ -e /usr/bin/python3 ] || ln -s /usr/bin/$PYTHON /usr/bin/python3

USER notebook-user

RUN find requirements/ -type f -name "*.txt" -exec pip3.11 install --no-cache-dir --user -r '{}' ';' && \
python3.11 -c "from unstructured.nlp.tokenize import download_nltk_packages; download_nltk_packages()" && \
python3.11 -c "from unstructured.partition.model_init import initialize; initialize()" && \
python3.11 -c "from unstructured_inference.models.tables import UnstructuredTableTransformerModel; model = UnstructuredTableTransformerModel(); model.initialize('microsoft/table-transformer-structure-recognition')"
ENV NLTK_DATA=/home/notebook-user/nltk_data

# Install Python dependencies and download required NLTK packages
RUN find requirements/ -type f -name "*.txt" -exec $PIP install --no-cache-dir --user -r '{}' ';' && \
mkdir -p ${NLTK_DATA} && \
$PYTHON -m nltk.downloader -d ${NLTK_DATA} punkt_tab averaged_perceptron_tagger_eng && \
$PYTHON -c "from unstructured.partition.model_init import initialize; initialize()" && \
$PYTHON -c "from unstructured_inference.models.tables import UnstructuredTableTransformerModel; model = UnstructuredTableTransformerModel(); model.initialize('microsoft/table-transformer-structure-recognition')"

ENV PATH="${PATH}:/home/notebook-user/.local/bin"
ENV TESSDATA_PREFIX=/usr/local/share/tessdata
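The Dockerfile change above bakes `punkt_tab` and `averaged_perceptron_tagger_eng` into the image under `NLTK_DATA` at build time, so nothing needs to be fetched at runtime. Below is a minimal sketch of how such an asset check can work; the helper name is illustrative, not the library's API, but the directory layout (`tokenizers/punkt_tab`, `taggers/averaged_perceptron_tagger_eng`, optionally zipped) mirrors what `nltk.downloader` produces:

```python
import os
import tempfile

def has_nltk_asset(data_dir: str, category: str, name: str) -> bool:
    """Check whether an NLTK asset exists at data_dir/<category>/<name> (or as a .zip)."""
    candidate = os.path.join(data_dir, category, name)
    return os.path.exists(candidate) or os.path.exists(candidate + ".zip")

# Demo with a throwaway directory standing in for $NLTK_DATA
with tempfile.TemporaryDirectory() as data_dir:
    os.makedirs(os.path.join(data_dir, "tokenizers", "punkt_tab"))
    print(has_nltk_asset(data_dir, "tokenizers", "punkt_tab"))                    # True
    print(has_nltk_asset(data_dir, "taggers", "averaged_perceptron_tagger_eng"))  # False
```

In the image itself, the same check amounts to looking under `/home/notebook-user/nltk_data`, the path the `ENV NLTK_DATA` line sets.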
20 changes: 0 additions & 20 deletions test_unstructured/nlp/test_tokenize.py
@@ -1,29 +1,9 @@
from typing import List, Tuple
from unittest.mock import patch

import nltk

from test_unstructured.nlp.mock_nltk import mock_sent_tokenize, mock_word_tokenize
from unstructured.nlp import tokenize


def test_nltk_packages_download_if_not_present():
tokenize._download_nltk_packages_if_not_present.cache_clear()
with patch.object(nltk, "find", side_effect=LookupError):
with patch.object(tokenize, "download_nltk_packages") as mock_download:
tokenize._download_nltk_packages_if_not_present()

mock_download.assert_called_once()


def test_nltk_packages_do_not_download_if():
tokenize._download_nltk_packages_if_not_present.cache_clear()
with patch.object(nltk, "find"), patch.object(nltk, "download") as mock_download:
tokenize._download_nltk_packages_if_not_present()

mock_download.assert_not_called()


def mock_pos_tag(tokens: List[str]) -> List[Tuple[str, str]]:
pos_tags: List[Tuple[str, str]] = []
for token in tokens:
4 changes: 2 additions & 2 deletions test_unstructured_ingest/test-ingest-src.sh
@@ -40,8 +40,8 @@ all_tests=(
'against-api.sh'
'gcs.sh'
'kafka-local.sh'
'onedrive.sh'
'outlook.sh'
#'onedrive.sh'
#'outlook.sh'
'elasticsearch.sh'
'confluence-diff.sh'
'confluence-large.sh'
2 changes: 1 addition & 1 deletion unstructured/__version__.py
@@ -1 +1 @@
__version__ = "0.16.12" # pragma: no cover
__version__ = "0.16.13-dev0" # pragma: no cover
27 changes: 2 additions & 25 deletions unstructured/nlp/tokenize.py
@@ -18,7 +18,7 @@ def download_nltk_packages():


def check_for_nltk_package(package_name: str, package_category: str) -> bool:
"""Checks to see if the specified NLTK package exists on the file system"""
"""Checks to see if the specified NLTK package exists on the image."""
paths: list[str] = []
for path in nltk.data.path:
if not path.endswith("nltk_data"):
@@ -32,45 +32,22 @@ def check_for_nltk_package(package_name: str, package_category: str) -> bool:
return False


# We cache this because we do not want to attempt
# downloading the packages multiple times
@lru_cache()
def _download_nltk_packages_if_not_present():
"""If required NLTK packages are not available, download them."""

tagger_available = check_for_nltk_package(
package_category="taggers",
package_name="averaged_perceptron_tagger_eng",
)
tokenizer_available = check_for_nltk_package(
package_category="tokenizers", package_name="punkt_tab"
)

if (not tokenizer_available) or (not tagger_available):
download_nltk_packages()


@lru_cache(maxsize=CACHE_MAX_SIZE)
def sent_tokenize(text: str) -> List[str]:
"""A wrapper around the NLTK sentence tokenizer with LRU caching enabled."""
_download_nltk_packages_if_not_present()
return _sent_tokenize(text)


@lru_cache(maxsize=CACHE_MAX_SIZE)
def word_tokenize(text: str) -> List[str]:
"""A wrapper around the NLTK word tokenizer with LRU caching enabled."""
_download_nltk_packages_if_not_present()
return _word_tokenize(text)


@lru_cache(maxsize=CACHE_MAX_SIZE)
def pos_tag(text: str) -> List[Tuple[str, str]]:
"""A wrapper around the NLTK POS tagger with LRU caching enabled."""
_download_nltk_packages_if_not_present()
# NOTE(robinson) - Splitting into sentences before tokenizing. The helps with
# situations like "ITEM 1A. PROPERTIES" where "PROPERTIES" can be mistaken
# for a verb because it looks like it's in verb form an "ITEM 1A." looks like the subject.
# Splitting into sentences before tokenizing.
sentences = _sent_tokenize(text)
parts_of_speech: list[tuple[str, str]] = []
for sentence in sentences:
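With `_download_nltk_packages_if_not_present` removed, the wrappers in `tokenize.py` call straight into NLTK and rely on the assets baked into the image; only the `lru_cache` layer remains. A rough sketch of that pattern, using a counting stub in place of the real NLTK tokenizer (the stub and the cache size here are illustrative, not the module's actual values):

```python
from functools import lru_cache
from typing import List

calls = {"count": 0}

def _sent_tokenize(text: str) -> List[str]:
    # Stand-in for nltk.tokenize.sent_tokenize, counting invocations
    calls["count"] += 1
    return [s for s in text.split(". ") if s]

@lru_cache(maxsize=128)
def sent_tokenize(text: str) -> List[str]:
    """LRU-cached wrapper; no download-if-missing check remains."""
    return _sent_tokenize(text)

print(sent_tokenize("One. Two"))  # ['One', 'Two']
print(sent_tokenize("One. Two"))  # served from the cache
print(calls["count"])             # 1 -- underlying tokenizer ran once
```

Because the download check is gone, a missing asset now surfaces as NLTK's usual `LookupError` at first use rather than triggering a network fetch, which is the behavior the CHANGELOG entry describes.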