Website & API Doc site generator using DocFx script (#206)
* Initial commit of powershell build to create an API doc site with docfx

* Updates styles, etc... for the docs

* updates build script to serve website

* updates build to properly serve with an option to not clean cache files

* adds index file for api docs

* fixes a couple of crefs

* creates custom docs files

* updates script to ensure it only includes csproj's that are in the sln file

* Adds wiki example docs, fixes up some toc, adds logging to build, fixes filter config, updates to latest docfx version, updates to correct LUCENENET TODO

* Removes use of custom template files since we can just use the built in new metadata instead

* Adds test files, fixing up some doc refs

* Fixes namespace overwrite issue, adds solution for custom markdown plugin for parsing lucene tokens.

* fixes exposed name of plugin

* Moves source code for docs formatter to the 'src' folder

* Updates build script to ensure the custom DocFx plugin is built for the custom tag parsing, adds readme, updates to latest docfx, includes more projects including the CLI project

* Updates to latest docfx version

* Splitting build into separate projects so we can browse APIs per project, fixes build issues with VS 2017 15.3, removes other test docs - will focus purely on the API docs part for now.

* Gets projects all building separately, added a custom toc, and now we can browse by 'package'

* updates build, ignore and toc

* OK, gets projects -> namespace api docs working but the breadcrumb isn't working so need to figure that out

* turns it into a 3 level toc for now which is better than before, awaiting feedback on gh

* updates to latest docfx including the references in the docs plugin package

* Gets CLI docs building and included as a header link and adds toc files for each folder

* fixes some csproj refs

* adds the Kuromoji package

* Gets more building, includes the markdown docs for use as the namespace documentation

* removes the replicator from the docs since that was erroring for some reason

* Moves the docfx build yml files to a better temporary folder making it easier to cleanup, fixes the docfx build so that it properly builds - this was due to a change with the docfx version, puts the Replicator build back in

* fixes the sln file since there was a duplicate project declared

* fixes toc references

* ensure the docfx log location is absolute

* Adds demo, removes old unused doc example files, updates and includes a few more package.md files and updates the home page with correct linking

* re-organizes the files that are included as files vs namespace overrides, updates docfx version and proj versions

* Get the correct /api URI paths for the generated api docs

* fix whitespace for the @lucene.experimental thing to work

* Updates build to include TestFramework, updates index to match the Java Lucene docs with the proper xrefs

* Gets the index page back to normal with the deep links to API docs, fixes up a bunch of xref links

* removes duplicate entry

* removes the test framework docs from building because this causes collision issues with the same namespaces in the classes... the test framework classes would need a namespace change

* Gets the website up and running with a nice template, updates styles across both websites

* moves the quick start into a partial

* Gets most info and links all ready for the website

* Updates more docs for the website and fixes some invalid links

* commits whitespace changes as a result of a slightly different doc converting logic

* Revert "commits whitespace changes as a result of a slightly different doc converting logic"

This reverts commit c7847af.

* Updates docs based on the new output of the converter

* Gets more docs converted properly with the converter

* Updates the doc converter to append yaml headers correctly
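
A hedged sketch of what "append yaml headers" means here: DocFx overwrite files carry YAML front matter that maps a markdown file onto an API item by `uid` and injects the file body as that item's summary. The converter's real code is not in this diff, so the method below (and its name) is illustrative only.

```csharp
// Illustrative only -- the actual converter implementation is not shown in this commit.
// Prepends the DocFx front matter visible in the overview.md changes further down.
static string AddYamlHeader(string markdown, string uid)
{
    if (markdown.StartsWith("---"))
        return markdown; // already has front matter

    string header = $"---\nuid: {uid}\nsummary: *content\n---\n\n";
    return header + markdown;
}
```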

* Fixes most of the xref links
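
For context: the earlier doc conversion had emitted empty-label links of the form `[](xref:Some.Uid Label)`, which DocFx cannot resolve. The fixes visible throughout the diff below rewrite them either as labeled links, `[Label](xref:Some.Uid)`, or as the auto-titled shorthand `<xref:Some.Uid>`.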

* Fixes link parsing in the doc converter

* removes breadcrumb from download doc, more xrefs fixed

* Attempting to modify the markdig markdown engine to process special tags inside the triple-slash comments ... but can't get it to work so will revert back to dfm, just keeping this here for history

* Revert "Attempting to modify the markdig markdown engine to process special tags inside the triple-slash comments ... but can't get it to work so will revert back to dfm, just keeping this here for history"

This reverts commit efb0b00.

* Gets the DFM markdown engine running again so the @lucene.experimental tags are replaced.
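
A minimal sketch of the kind of rewrite such a plugin performs; the tag set and replacement text below are assumptions rather than the plugin's actual source.

```csharp
using System.Text.RegularExpressions;

static class LuceneTokenRewriter
{
    // Javadoc-style status tags such as "@lucene.experimental" must sit on their own
    // line for this pattern to fire, hence the earlier whitespace fix in this commit.
    private static readonly Regex LuceneTag =
        new Regex(@"^\s*@lucene\.(experimental|internal)\s*$", RegexOptions.Multiline);

    public static string Rewrite(string markdown) =>
        LuceneTag.Replace(markdown, m =>
            "> [!NOTE]\n> This API is " + m.Groups[1].Value +
            " and may change in incompatible ways in the next release.");
}
```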

* Updates some website info

* Adds separate docs page to link to the various docs for different versions

* fix typo

* bumps the date, small change to the source code doc

* Gets the download page all working for the different versions with checksums, etc...

* Fixing links to the download-package
Shazwazza authored and laimis committed Feb 26, 2019
1 parent de4032a commit 0d56d20
Showing 171 changed files with 3,423 additions and 793 deletions.
11 changes: 10 additions & 1 deletion .gitignore
@@ -49,4 +49,13 @@ release/
.tools/

# NUnit test result file produced by nunit3-console.exe
-[Tt]est[Rr]esult.xml
+[Tt]est[Rr]esult.xml
+websites/**/_site/*
+websites/**/tools/*
+websites/**/_exported_templates/*
+websites/**/api/.manifest
+websites/**/docfx.log
+websites/**/lucenetemplate/plugins/*
+websites/apidocs/api/**/*.yml
+websites/apidocs/api/**/*.manifest
+!websites/apidocs/api/toc.yml
11 changes: 10 additions & 1 deletion Lucene.Net.sln
@@ -112,6 +112,15 @@ Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Join", "sr
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Memory", "src\Lucene.Net.Tests.Memory\Lucene.Net.Tests.Memory.csproj", "{3BE7B6EA-8DBC-45E2-947C-1CA7E63B5603}"
EndProject
+Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "apidocs", "apidocs", "{58FD6E39-F30F-4566-90E5-B7C9D6BC0660}"
+ProjectSection(SolutionItems) = preProject
+apidocs\docfx.filter.yml = apidocs\docfx.filter.yml
+apidocs\docfx.json = apidocs\docfx.json
+apidocs\docs.ps1 = apidocs\docs.ps1
+apidocs\index.md = apidocs\index.md
+apidocs\toc.yml = apidocs\toc.yml
+EndProjectSection
+EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Misc", "src\Lucene.Net.Tests.Misc\Lucene.Net.Tests.Misc.csproj", "{F8DDC5B7-A621-4B67-AB4B-BBE083C05BB8}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Lucene.Net.Tests.Queries", "src\Lucene.Net.Tests.Queries\Lucene.Net.Tests.Queries.csproj", "{AC750DC0-05A3-4F96-8CC5-CFC8FD01D4CF}"
@@ -357,8 +366,8 @@ Global
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(NestedProjects) = preSolution
-{EFB2E31A-5917-49D5-A808-FE5061A550B4} = {8CA61D33-3590-4024-A304-7B1F75B50653}
{4DF7EACE-2B25-43F6-B558-8520BF20BD76} = {8CA61D33-3590-4024-A304-7B1F75B50653}
+{EFB2E31A-5917-49D5-A808-FE5061A550B4} = {8CA61D33-3590-4024-A304-7B1F75B50653}
{119BBACD-D4DB-4E3B-922F-3DA83E0B29E2} = {4DF7EACE-2B25-43F6-B558-8520BF20BD76}
{CF3A74CA-FEFD-4F41-961B-CC8CF8D96286} = {8CA61D33-3590-4024-A304-7B1F75B50653}
{4B054831-5275-44E2-A4D4-CA0B19BEE19A} = {8CA61D33-3590-4024-A304-7B1F75B50653}
2 changes: 1 addition & 1 deletion src/Lucene.Net.Analysis.Common/Analysis/Cjk/package.md
@@ -16,7 +16,7 @@
limitations under the License.
-->

-<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
+

Analyzer for Chinese, Japanese, and Korean, which indexes bigrams.
This analyzer generates bigram terms, which are overlapping groups of two adjacent Han, Hiragana, Katakana, or Hangul characters.
2 changes: 1 addition & 1 deletion src/Lucene.Net.Analysis.Common/Analysis/Cn/package.md
@@ -16,7 +16,7 @@
limitations under the License.
-->

-<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
+

Analyzer for Chinese, which indexes unigrams (individual Chinese characters).

8 changes: 4 additions & 4 deletions src/Lucene.Net.Analysis.Common/Analysis/Compound/package.md
@@ -74,8 +74,8 @@ filter available:

#### HyphenationCompoundWordTokenFilter

-The [](xref:Lucene.Net.Analysis.Compound.HyphenationCompoundWordTokenFilter
-HyphenationCompoundWordTokenFilter) uses hyphenation grammars to find
+The [
+HyphenationCompoundWordTokenFilter](xref:Lucene.Net.Analysis.Compound.HyphenationCompoundWordTokenFilter) uses hyphenation grammars to find
potential subwords that are worth checking against the dictionary. It can be used
without a dictionary as well but then produces a lot of "nonword" tokens.
The quality of the output tokens is directly connected to the quality of the
@@ -101,8 +101,8 @@ Credits for the hyphenation code go to the

#### DictionaryCompoundWordTokenFilter

-The [](xref:Lucene.Net.Analysis.Compound.DictionaryCompoundWordTokenFilter
-DictionaryCompoundWordTokenFilter) uses a dictionary-only approach to
+The [
+DictionaryCompoundWordTokenFilter](xref:Lucene.Net.Analysis.Compound.DictionaryCompoundWordTokenFilter) uses a dictionary-only approach to
find subwords in a compound word. It is much slower than the one that
uses the hyphenation grammars. You can use it as a first start to
see if your dictionary is good or not because it is much simpler in design.
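
A minimal usage sketch of the filter described above, against the Lucene.NET 4.8 API (the sample dictionary words are placeholders, not part of this commit):

```csharp
using System.IO;
using Lucene.Net.Analysis.Compound;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Util;
using Lucene.Net.Util;

// The dictionary of known subwords; its quality drives the quality of the output.
var dictionary = new CharArraySet(LuceneVersion.LUCENE_48,
    new[] { "donau", "dampf", "schiff" }, true /* ignoreCase */);

var tokenizer = new WhitespaceTokenizer(LuceneVersion.LUCENE_48,
    new StringReader("Donaudampfschiff"));

// Emits the original token plus any dictionary subwords found inside it.
var filter = new DictionaryCompoundWordTokenFilter(
    LuceneVersion.LUCENE_48, tokenizer, dictionary);
```
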
11 changes: 4 additions & 7 deletions src/Lucene.Net.Analysis.Common/Analysis/Payloads/package.md
@@ -15,11 +15,8 @@
See the License for the specific language governing permissions and
limitations under the License.
-->
-<HTML>
-<HEAD>
-<TITLE>org.apache.lucene.analysis.payloads</TITLE>
-</HEAD>
-<BODY>
+
+
+
Provides various convenience classes for creating payloads on Tokens.
-</BODY>
-</HTML>
+
15 changes: 6 additions & 9 deletions src/Lucene.Net.Analysis.Common/Analysis/Sinks/package.md
@@ -15,13 +15,10 @@
See the License for the specific language governing permissions and
limitations under the License.
-->
-<HTML>
-<HEAD>
-<TITLE>org.apache.lucene.analysis.sinks</TITLE>
-</HEAD>
-<BODY>
-[](xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter) and implementations
-of [](xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter.SinkFilter) that
+
+
+
+<xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter> and implementations
+of <xref:Lucene.Net.Analysis.Sinks.TeeSinkTokenFilter.SinkFilter> that
might be useful.
-</BODY>
-</HTML>
+
@@ -16,7 +16,7 @@
limitations under the License.
-->

-[](xref:Lucene.Net.Analysis.TokenFilter) and [](xref:Lucene.Net.Analysis.Analyzer) implementations that use Snowball
+<xref:Lucene.Net.Analysis.TokenFilter> and <xref:Lucene.Net.Analysis.Analyzer> implementations that use Snowball
stemmers.

This project provides pre-compiled version of the Snowball stemmers based on revision 500 of the Tartarus Snowball repository, together with classes integrating them with the Lucene search engine.
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_31)
+Backwards-compatible implementation to match [#LUCENE_31](xref:Lucene.Net.Util.Version)
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_34)
+Backwards-compatible implementation to match [#LUCENE_34](xref:Lucene.Net.Util.Version)
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_36)
+Backwards-compatible implementation to match [#LUCENE_36](xref:Lucene.Net.Util.Version)
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Backwards-compatible implementation to match [](xref:Lucene.Net.Util.Version.LUCENE_40)
+Backwards-compatible implementation to match [#LUCENE_40](xref:Lucene.Net.Util.Version)
38 changes: 19 additions & 19 deletions src/Lucene.Net.Analysis.Common/Analysis/Standard/package.md
@@ -20,40 +20,40 @@

The `org.apache.lucene.analysis.standard` package contains three fast grammar-based tokenizers constructed with JFlex:

-* [](xref:Lucene.Net.Analysis.Standard.StandardTokenizer):
+* <xref:Lucene.Net.Analysis.Standard.StandardTokenizer>:
as of Lucene 3.1, implements the Word Break rules from the Unicode Text
Segmentation algorithm, as specified in
[Unicode Standard Annex #29](http://unicode.org/reports/tr29/).
Unlike `UAX29URLEmailTokenizer`, URLs and email addresses are
**not** tokenized as single tokens, but are instead split up into
tokens according to the UAX#29 word break rules.

-[](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer StandardAnalyzer) includes
-[](xref:Lucene.Net.Analysis.Standard.StandardTokenizer StandardTokenizer),
-[](xref:Lucene.Net.Analysis.Standard.StandardFilter StandardFilter),
-[](xref:Lucene.Net.Analysis.Core.LowerCaseFilter LowerCaseFilter)
-and [](xref:Lucene.Net.Analysis.Core.StopFilter StopFilter).
+[StandardAnalyzer](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer) includes
+[StandardTokenizer](xref:Lucene.Net.Analysis.Standard.StandardTokenizer),
+[StandardFilter](xref:Lucene.Net.Analysis.Standard.StandardFilter),
+[LowerCaseFilter](xref:Lucene.Net.Analysis.Core.LowerCaseFilter)
+and [StopFilter](xref:Lucene.Net.Analysis.Core.StopFilter).
When the `Version` specified in the constructor is lower than
-3.1, the [](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer ClassicTokenizer)
+3.1, the [ClassicTokenizer](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer)
implementation is invoked.
-* [](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer ClassicTokenizer):
+* [ClassicTokenizer](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer):
this class was formerly (prior to Lucene 3.1) named
`StandardTokenizer`. (Its tokenization rules are not
based on the Unicode Text Segmentation algorithm.)
-[](xref:Lucene.Net.Analysis.Standard.ClassicAnalyzer ClassicAnalyzer) includes
-[](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer ClassicTokenizer),
-[](xref:Lucene.Net.Analysis.Standard.StandardFilter StandardFilter),
-[](xref:Lucene.Net.Analysis.Core.LowerCaseFilter LowerCaseFilter)
-and [](xref:Lucene.Net.Analysis.Core.StopFilter StopFilter).
+[ClassicAnalyzer](xref:Lucene.Net.Analysis.Standard.ClassicAnalyzer) includes
+[ClassicTokenizer](xref:Lucene.Net.Analysis.Standard.ClassicTokenizer),
+[StandardFilter](xref:Lucene.Net.Analysis.Standard.StandardFilter),
+[LowerCaseFilter](xref:Lucene.Net.Analysis.Core.LowerCaseFilter)
+and [StopFilter](xref:Lucene.Net.Analysis.Core.StopFilter).

-* [](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer UAX29URLEmailTokenizer):
+* [UAX29URLEmailTokenizer](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer):
implements the Word Break rules from the Unicode Text Segmentation
algorithm, as specified in
[Unicode Standard Annex #29](http://unicode.org/reports/tr29/).
URLs and email addresses are also tokenized according to the relevant RFCs.

-[](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailAnalyzer UAX29URLEmailAnalyzer) includes
-[](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer UAX29URLEmailTokenizer),
-[](xref:Lucene.Net.Analysis.Standard.StandardFilter StandardFilter),
-[](xref:Lucene.Net.Analysis.Core.LowerCaseFilter LowerCaseFilter)
-and [](xref:Lucene.Net.Analysis.Core.StopFilter StopFilter).
+[UAX29URLEmailAnalyzer](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailAnalyzer) includes
+[UAX29URLEmailTokenizer](xref:Lucene.Net.Analysis.Standard.UAX29URLEmailTokenizer),
+[StandardFilter](xref:Lucene.Net.Analysis.Standard.StandardFilter),
+[LowerCaseFilter](xref:Lucene.Net.Analysis.Core.LowerCaseFilter)
+and [StopFilter](xref:Lucene.Net.Analysis.Core.StopFilter).
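
A minimal sketch (not part of this commit) of the StandardAnalyzer chain described above, against the Lucene.NET 4.8 API:

```csharp
using System;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

// StandardTokenizer + StandardFilter + LowerCaseFilter + StopFilter, pre-assembled.
var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
using (TokenStream ts = analyzer.GetTokenStream("body", "Tokenizing text, per UAX #29."))
{
    var termAtt = ts.AddAttribute<ICharTermAttribute>();
    ts.Reset();
    while (ts.IncrementToken())
        Console.WriteLine(termAtt.ToString()); // tokenizing, text, per, uax, 29
    ts.End();
}
```
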
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Custom [](xref:Lucene.Net.Util.AttributeImpl) for indexing collation keys as index terms.
+Custom <xref:Lucene.Net.Util.AttributeImpl> for indexing collation keys as index terms.
4 changes: 2 additions & 2 deletions src/Lucene.Net.Analysis.Common/Collation/package.md
@@ -28,8 +28,8 @@
very slow.)

* Effective Locale-specific normalization (case differences, diacritics, etc.).
-([](xref:Lucene.Net.Analysis.Core.LowerCaseFilter) and
-[](xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter) provide these services
+(<xref:Lucene.Net.Analysis.Core.LowerCaseFilter> and
+<xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter> provide these services
in a generic way that doesn't take into account locale-specific needs.)

## Example Usages
11 changes: 8 additions & 3 deletions src/Lucene.Net.Analysis.Common/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Common
+summary: *content
+---
+
+<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
@@ -17,6 +22,6 @@

Analyzers for indexing content in different languages and domains.

-For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation.
+For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation.

-This module contains concrete components ([](xref:Lucene.Net.Analysis.CharFilter)s, [](xref:Lucene.Net.Analysis.Tokenizer)s, and ([](xref:Lucene.Net.Analysis.TokenFilter)s) for analyzing different types of content. It also provides a number of [](xref:Lucene.Net.Analysis.Analyzer)s for different languages that you can use to get started quickly.
+This module contains concrete components (<xref:Lucene.Net.Analysis.CharFilter>s, <xref:Lucene.Net.Analysis.Tokenizer>s, and <xref:Lucene.Net.Analysis.TokenFilter>s) for analyzing different types of content. It also provides a number of <xref:Lucene.Net.Analysis.Analyzer>s for different languages that you can use to get started quickly.
@@ -16,4 +16,4 @@
limitations under the License.
-->

-Custom [](xref:Lucene.Net.Util.AttributeImpl) for indexing collation keys as index terms.
+Custom <xref:Lucene.Net.Util.AttributeImpl> for indexing collation keys as index terms.
21 changes: 12 additions & 9 deletions src/Lucene.Net.Analysis.ICU/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Icu
+summary: *content
+---
+
+<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
@@ -16,18 +21,16 @@
-->
<!-- :Post-Release-Update-Version.LUCENE_XY: - several mentions in this file -->

-<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
-<title>
-Apache Lucene ICU integration module
-</title>
+
+

This module exposes functionality from
[ICU](http://site.icu-project.org/) to Apache Lucene. ICU4J is a Java
library that enhances Java's internationalization support by improving
performance, keeping current with the Unicode Standard, and providing richer
APIs.

-For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation.
+For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation.

This module exposes the following functionality:

@@ -84,8 +87,8 @@ For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysi
very slow.)

* Effective Locale-specific normalization (case differences, diacritics, etc.).
-([](xref:Lucene.Net.Analysis.Core.LowerCaseFilter) and
-[](xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter) provide these services
+(<xref:Lucene.Net.Analysis.Core.LowerCaseFilter> and
+<xref:Lucene.Net.Analysis.Miscellaneous.ASCIIFoldingFilter> provide these services
in a generic way that doesn't take into account locale-specific needs.)

## Example Usages
Expand Down Expand Up @@ -266,7 +269,7 @@ For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysi

# [Backwards Compatibility]()

-This module exists to provide up-to-date Unicode functionality that supports the most recent version of Unicode (currently 6.3). However, some users who wish for stronger backwards compatibility can restrict [](xref:Lucene.Net.Analysis.Icu.ICUNormalizer2Filter) to operate on only a specific Unicode Version by using a {@link com.ibm.icu.text.FilteredNormalizer2}.
+This module exists to provide up-to-date Unicode functionality that supports the most recent version of Unicode (currently 6.3). However, some users who wish for stronger backwards compatibility can restrict <xref:Lucene.Net.Analysis.Icu.ICUNormalizer2Filter> to operate on only a specific Unicode Version by using a {@link com.ibm.icu.text.FilteredNormalizer2}.

## Example Usages

13 changes: 8 additions & 5 deletions src/Lucene.Net.Analysis.Kuromoji/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Kuromoji
+summary: *content
+---
+
+<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
@@ -15,12 +20,10 @@
limitations under the License.
-->

-<title>
-Apache Lucene Kuromoji Analyzer
-</title>
+

Kuromoji is a morphological analyzer for Japanese text.

This module provides support for Japanese text analysis, including features such as part-of-speech tagging, lemmatization, and compound word analysis.

-For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation.
+For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation.
13 changes: 8 additions & 5 deletions src/Lucene.Net.Analysis.Phonetic/overview.md
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Phonetic
+summary: *content
+---
+
+<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
@@ -15,12 +20,10 @@
limitations under the License.
-->

-<title>
-analyzers-phonetic
-</title>
+

Analysis for indexing phonetic signatures (for sounds-alike search)

-For an introduction to Lucene's analysis API, see the [](xref:Lucene.Net.Analysis) package documentation.
+For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis> package documentation.

This module provides analysis components (using encoders from [Apache Commons Codec](http://commons.apache.org/codec/)) that index and search phonetic signatures.
2 changes: 1 addition & 1 deletion src/Lucene.Net.Analysis.SmartCn/HHMM/package.md
@@ -16,7 +16,7 @@
limitations under the License.
-->

-<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
+

SmartChineseAnalyzer Hidden Markov Model package.
@lucene.experimental
