This is the repository hosting the public DocSearch configurations.
Update: you can check the dedicated DocSearch documentation website.
DocSearch is composed of 3 different projects:
- The front-end JavaScript library: https://github.com/algolia/docsearch
- The scraper which browses & indexes web pages: https://github.com/algolia/docsearch-scraper
- The configurations for the scraper: https://github.com/algolia/docsearch-configs
If you want to run your own DocSearch instance using these configuration files, please get familiar with the scraper setup guidelines.
The DocSearch scraper will use a configuration file specifying:
- the Algolia index name that will store the records resulting from the crawling
- the URLs it needs to crawl
- the URLs it shouldn't crawl
- the (hierarchical) CSS selectors to use to extract the relevant content from your webpages
- the CSS selectors to skip
- an optional sitemap URL that will be crawled and then scraped
- additional options you might provide to fine-tune the scraping
Once you run the DocSearch scraper on a specific configuration, it will:
- crawl all the URLs you specified (from the start_urls or the sitemap)
- follow all the hyperlinks mentioned in the page, and continue the crawling there
- stop crawling as soon as it reaches a URL that is not specified in your configuration and not related to a start URL
- extract the content of every single crawled page following the logic you defined using the CSS selectors
- push the resulting records to the Algolia index you configured
A configuration file looks like:
{
  "index_name": "stripe",
  "start_urls": [
    "https://stripe.com/docs"
  ],
  "stop_urls": [
    "https://stripe.com/docs/api"
  ],
  "selectors": {
    "lvl0": "#content header h1",
    "lvl1": "#content article h1",
    "lvl2": "#content section h3",
    "lvl3": "#content section h4",
    "lvl4": "#content section h5",
    "lvl5": "#content section h6",
    "text": "#content header p,#content section p,#content section ol"
  },
  "selectors_exclude": [
    ".method-list",
    "aside.note"
  ],
  // additional options
  [...]
}
Name of the Algolia index where all the extracted data will be pushed. If the PREFIX environment variable is defined, the index name will be prefixed with it.
This name must be equal to the configuration file name.
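For instance, the sample configuration shown earlier declares the index name below; since the index name must match the file name, that configuration would typically live in a stripe.json file:

"index_name": "stripe"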
You can pass either a single URL or an array of URLs. The crawler will visit each page in order, following every link it finds on the page. It will only stop if the domain is outside of the allowed_domains or if the link is blacklisted in stop_urls.
Strings are treated as regular expressions.
Note that the crawler currently does not follow 301 redirects.
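A short sketch using the array form; the first entry is the plain URL from the example above, while the second is an illustrative regex (strings are treated as regular expressions):

"start_urls": [
  "https://stripe.com/docs",
  "https://stripe.com/docs/.*"
]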
This object contains all the CSS selectors that will be used to create the record hierarchy. It contains 6 levels (lvl0, lvl1, lvl2, lvl3, lvl4, lvl5) and text. You should fill in at least the first three levels for better relevance.
A default config would be to target the page title or h1 as lvl0, the h2 as lvl1 and the h3 as lvl2. text is usually any p of text.
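A sketch of that default mapping; the selectors are illustrative and should be adapted to your markup:

"selectors": {
  "lvl0": "h1",
  "lvl1": "h2",
  "lvl2": "h3",
  "text": "p"
}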
Our crawler can discover the pages of a site through its sitemaps. To use this, define the direct URL(s) of your sitemap XML file(s), sitemap_urls, and then the regular expression(s), sitemap_urls_regexs, that will match the URLs to crawl. Otherwise the start_urls patterns will be used to match the URLs found in the sitemap.
For sites that use sitemap index files pointing to other sitemap files, all those sitemaps will be followed.
A list of URLs pointing to the sitemaps (or sitemap index) you want to crawl. Must be provided if you want to discover pages through sitemaps.
A list of regular expressions that will be applied to each URL found in the sitemap. If a pattern matches a URL, that link will be scraped. If no regular expression is defined, the start_urls will be used as patterns.
Specifies whether matched URLs should bypass the rules applied to crawled hyperlinks. If set to true, each matched URL will be scraped whether or not it satisfies the start_urls or stop_urls. force_sitemap_urls_crawling is disabled by default.
[...]
"sitemap_urls": [
  "https://www.mySite.com/sitemap.xml"
],
"sitemap_urls_regexs": [
  "/doc/"
],
"force_sitemap_urls_crawling": true,
[...]
Given this configuration, every webpage whose URL contains /doc/ will be scraped even if it does not comply with the start_urls or stop_urls.
It's possible to make a selector global, which means that all records for the page will share this value. This is useful when you have a title that lives in the right sidebar and the sidebar is placed after the content in the DOM.
"selectors": {
"lvl0": {
"selector": "#content header h1",
"global": true
}
}
By default a selector is considered to be a CSS selector, but you can specify that a selector is an XPath query instead.
This is useful when you want to do a more complex selection, like selecting the parent of a node.
"selectors": {
"lvl0": {
"selector": "//li[@class=\"chapter active done\"]/../../a",
"type": "xpath"
}
}
You can also add a default value. If the given selector doesn't match anything on a page, the default value will be set for each record instead.
"selectors": {
"lvl0": {
"selector": "#content article h1",
"default_value": "Documentation"
}
}
You can override the default strip_chars per level:
"selectors": {
"lvl0": {
"selector": "#content article h1",
"strip_chars": " .,;:"
}
}
You can pass an array of strings. This is the whitelist of domains the crawler will browse. If a link targets a page that is not in the whitelist, the crawler will not follow it.
Default is the domain of the first element in start_urls.
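A sketch using the domain of the example configuration above:

"allowed_domains": [
  "stripe.com"
]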
This is the blacklist of URLs on which the crawler should stop. If a link in a crawled webpage targets one of the elements in the stop_urls list, the crawler will not follow the link.
Note that you can use regexps as well as plain URLs.
Note: it is sometimes necessary to add http://www.example.com/index.html pages to the stop_urls list if you set http://www.example.com as a start_urls, to avoid duplicated content.
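A sketch of that note, assuming http://www.example.com is one of your start_urls:

"start_urls": [
  "http://www.example.com"
],
"stop_urls": [
  "http://www.example.com/index.html"
]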
By default, the selectors search is applied page-wide. If there are some parts of the page that you do not want to include (like a header, sidebar or footer), you can add them to the selectors_exclude key.
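For instance, to skip an illustrative sidebar and footer (the selectors are hypothetical; adapt them to your markup):

"selectors_exclude": [
  "aside.sidebar",
  "footer"
]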
This object holds any custom Algolia settings you would like to pass to the index settings.
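A minimal sketch, assuming you want to pass a regular Algolia index setting such as separatorsToIndex (the value is illustrative):

"custom_settings": {
  "separatorsToIndex": "_"
}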
Lets you define the minimum level at which you want records to be indexed. For example, with min_indexed_level: 1, you will only index records that have at least a lvl1 field.
This is especially useful when the documentation is split into several pages that all duplicate the main title (see [this issue][1]).
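For example, to skip records that only have a lvl0 value:

"min_indexed_level": 1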
The HTML code that we crawl is sometimes generated using JavaScript. In those cases, the js_render option must be set to true. It will enable our internal proxy (Selenium) to render pages before crawling them.
This parameter is optional and is set to false by default.
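For example:

"js_render": true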
The js_wait parameter lets you change the default time to wait for the webpage to render when the Selenium proxy is used.
This parameter is optional and is set to 0 seconds by default.
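A sketch with an illustrative 2-second wait, alongside js_render which enables the Selenium proxy:

"js_render": true,
"js_wait": 2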
use_anchors needs to be set to true for JavaScript documentation where the hash is used to route the query. Internally this will disable the canonicalize feature that removes the hash from the URL.
This parameter is optional and is set to false by default.
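For example:

"use_anchors": true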
A list of characters to remove from the indexed text.
Default is " .,;:§¶".
Default is false
Default is false
Default is false
Each time the configuration is run locally, this attribute is set to the number of records that were indexed. This attribute is used for monitoring purposes: we keep track of its evolution in order to detect major changes.
Default is 0.
Crawling can sometimes populate duplicated data from your website. This is mostly because the same page has been crawled several times (e.g. from different URLs). If we have URLs like http://website.com/page and http://website.com/page/ (notice the trailing / on the second one), the scraper will consider them as different pages. This can be fixed by adding a regex to the stop_urls in your config.json:
"stop_urls": [
"/$"
]
In this attribute, you can also list the pages you want to skip:
"stop_urls": [
"http://website.com/page/"
]
The scraper will also consider pages with anchors as different pages. Make sure you remove any hash sign from the URLs you put in the stop and start URLs:
Bad:
"stop_urls": [
  "http://website.com/page/#foo"
]
Good:
"stop_urls": [
  "/$"
]
Or:
"stop_urls": [
  "http://website.com/page/"
]