Meilisearch | Meilisearch Cloud | Documentation | Discord | Roadmap | Website | FAQ
🚨 IMPORTANT NOTICE: Reduced Maintenance & Support 🚨
Dear Community,
We'd like to share some updates regarding the future maintenance of this repository:
Our team is small, and our availability will be reduced in the future. As such, response times may be slower, and we will no longer accept enhancements for this repository.
If you're looking for a reliable alternative, consider using Meilisearch Cloud. It offers a robust solution for those seeking an alternative to this repository by providing a crawler for your convenience.
Seeking immediate support? Please join us on our Discord channel.
docs-scraper is a scraper for your documentation website that indexes the scraped content into a Meilisearch instance.
Meilisearch is an open-source search engine. Discover what Meilisearch is!
This scraper is used in production and runs on the Meilisearch documentation on each deployment.
💡 If you already have your own scraper but you still want to use Meilisearch and our front-end tools, check out this discussion.
- ⚡ Supercharge your Meilisearch experience
- ⚙️ Usage
- 🖌 And for the front-end search bar?
- 🛠 More Configurations
- More About the Selectors
- All the Config File Settings
- index_uid
- start_urls
- stop_urls
- (optional) selectors_key
- (optional) scrape_start_urls
- (optional) sitemap_urls
- (optional) sitemap_alternate_links
- (optional) selectors_exclude
- (optional) custom_settings
- (optional) min_indexed_level
- (optional) only_content_level
- (optional) js_render
- (optional) js_wait
- (optional) allowed_domains
- Authentication
- Installing Chrome Headless
- 🤖 Compatibility with Meilisearch
- ⚙️ Development Workflow and Contributing
- Credits
Say goodbye to server deployment and manual updates with Meilisearch Cloud. No credit card required.
Here are the 3 steps to use docs-scraper: run a Meilisearch instance, set up your config file, and run the scraper.
Your documentation content needs to be scraped and pushed into a Meilisearch instance. You can install and run Meilisearch on your machine using `curl`:
curl -L https://install.meilisearch.com | sh
./meilisearch --master-key=myMasterKey
There are other ways to install Meilisearch.
The host URL and the API key you will provide in the next steps correspond to the credentials of this Meilisearch instance.
In the example above, the host URL is `http://localhost:7700` and the API key is `myMasterKey`.
Meilisearch is open-source and can run either on your server or on any cloud provider. Here is a tutorial to run Meilisearch in production.
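Before moving on, you can check that the instance is reachable with Meilisearch's `/health` endpoint. A quick sanity check against the example instance above:

# The instance started above should report itself as available.
curl http://localhost:7700/health
# Expected response: {"status":"available"}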
The scraper tool needs a config file to know which content you want to scrape. You describe this content by providing selectors (e.g. an HTML tag, id, or class). The config file is passed as an argument and follows no naming convention: you may name it as you want.
Here is an example of a basic config file:
{
"index_uid": "docs",
"start_urls": ["https://www.example.com/doc/"],
"sitemap_urls": ["https://www.example.com/sitemap.xml"],
"stop_urls": [],
"selectors": {
"lvl0": {
"selector": ".docs-lvl0",
"global": true,
"default_value": "Documentation"
},
"lvl1": {
"selector": ".docs-lvl1",
"global": true,
"default_value": "Chapter"
},
"lvl2": ".docs-content .docs-lvl2",
"lvl3": ".docs-content .docs-lvl3",
"lvl4": ".docs-content .docs-lvl4",
"lvl5": ".docs-content .docs-lvl5",
"lvl6": ".docs-content .docs-lvl6",
"text": ".docs-content p, .docs-content li"
}
}
The `index_uid` field is the identifier of the index in your Meilisearch instance in which your website content is stored. The scraping tool will create a new index if it does not exist.
The `docs-content` class (the `.` means this is a class) is the main container of the textual content in this example. Most of the time, this is a `<main>` or an `<article>` HTML element.
`lvlX` selectors should use the standard title tags like `h1`, `h2`, `h3`, etc. You can also use static classes. Set a unique `id` or `name` attribute on these elements.
Every searchable `lvl` element outside this main documentation container (for instance, in a sidebar) must be a `global` selector. It will be globally picked up and injected into every document built from your page.
You can also check out the config file we use in production for our own documentation site.
💡 To better understand the selectors, go to this section.
🔨 There are many other fields you can set in the config file that allow you to adapt the scraper to your need. Check out this section.
This project supports Python 3.8 and above. The `pipenv` command must be installed.
Set both environment variables `MEILISEARCH_HOST_URL` and `MEILISEARCH_API_KEY`. Following on from the example in the first step, they are respectively `http://localhost:7700` and `myMasterKey`.
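For example, in a Unix shell, using the values from the first step:

export MEILISEARCH_HOST_URL=http://localhost:7700
export MEILISEARCH_API_KEY=myMasterKey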
Then, run:
pipenv install
pipenv run ./docs_scraper <path-to-your-config-file>
`<path-to-your-config-file>` should be the path of the configuration file defined in the previous step.
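Once the run finishes, you can sanity-check the result by querying the index directly. A minimal sketch, assuming the example credentials and the `docs` index from the sample config file:

curl -X POST 'http://localhost:7700/indexes/docs/search' \
  -H 'Authorization: Bearer myMasterKey' \
  -H 'Content-Type: application/json' \
  --data '{"q": "documentation"}'

Alternatively, you can run the scraper with Docker: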
docker run -t --rm \
-e MEILISEARCH_HOST_URL=<your-meilisearch-host-url> \
-e MEILISEARCH_API_KEY=<your-meilisearch-api-key> \
-v <absolute-path-to-your-config-file>:/docs-scraper/<path-to-your-config-file> \
getmeili/docs-scraper:latest pipenv run ./docs_scraper <path-to-your-config-file>
`<absolute-path-to-your-config-file>` should be the absolute path of the configuration file defined in the previous step.
💡 If your Meilisearch instance runs locally (e.g. on `http://localhost:7700`), add the `--network=host` option to this Docker command so the container can reach it.
To run the scraper after your deployment job, for example in a GitHub Actions workflow:
run-scraper:
  needs: <your-deployment-job>
  runs-on: ubuntu-18.04
  steps:
    - uses: actions/checkout@master
    - name: Run scraper
      env:
        HOST_URL: ${{ secrets.MEILISEARCH_HOST_URL }}
        API_KEY: ${{ secrets.MEILISEARCH_API_KEY }}
        CONFIG_FILE_PATH: <path-to-your-config-file>
      run: |
        docker run -t --rm \
          -e MEILISEARCH_HOST_URL=$HOST_URL \
          -e MEILISEARCH_API_KEY=$API_KEY \
          -v $CONFIG_FILE_PATH:/docs-scraper/<path-to-your-config-file> \
          getmeili/docs-scraper:latest pipenv run ./docs_scraper <path-to-your-config-file>
⚠️ We do not recommend using the `latest` image in production. Use the release tags instead.
Here is the GitHub Action file we use in production for the Meilisearch documentation.
The API key you must provide should have the permissions to add documents into your Meilisearch instance.
In a production environment, we recommend providing the private key instead of the master key, as it is safer and it has enough permissions to perform such requests.
More about Meilisearch authentication.
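As a sketch, on a Meilisearch v1 instance you could create a restricted key for the scraper through the `/keys` endpoint. The action list below is an assumption about what the scraper needs (creating the index, pushing documents, and updating settings); adjust it to your setup:

curl -X POST 'http://localhost:7700/keys' \
  -H 'Authorization: Bearer myMasterKey' \
  -H 'Content-Type: application/json' \
  --data '{
    "description": "docs-scraper key",
    "actions": ["indexes.create", "documents.add", "documents.delete", "settings.update", "tasks.get"],
    "indexes": ["docs"],
    "expiresAt": null
  }'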
After having scraped your documentation, you might need a search bar to improve your user experience!
About the front part:
- If your website is a VuePress application, check out the vuepress-plugin-meilisearch repository.
- For all kinds of documentation, check out the docs-searchbar.js library.
Both of these libraries provide a front-end search bar perfectly adapted for documentation.
Put simply, selectors are needed to tell the scraper "I want to get the content inside this HTML element".
A selector can be:
- a class (e.g. `.main-content`)
- an id (e.g. `#main-article`)
- an HTML tag (e.g. `h1`)
With a more concrete example:
"lvl0": {
"selector": ".navbar-nav .active",
"global": true,
"default_value": "Documentation"
},
`.navbar-nav .active` means "take the content of the element with the class `active` that is itself inside an element with the class `navbar-nav`".
`global: true` means you want the same `lvl0` (so, the same main title) for all the content extracted from the same page.
"default_value": "Documentation"
will be the displayed value if no content in .navbar-nav .active
was found.
NB: You can set the `global` and `default_value` attributes for every selector level (`lvlX`), not only for `lvl0`.
You can notice different levels of selectors (0 to 6 maximum) in the config file. They correspond to different levels of titles, and will be displayed this way:
Your data will be displayed with a main title (`lvl0`), sub-titles (`lvl1`), sub-sub-titles (`lvl2`), and so on...
The `index_uid` field is the identifier of the index in your Meilisearch instance in which your website content is stored. The scraping tool will create a new index if it does not exist.
{
"index_uid": "example"
}
This array contains the list of URLs that will be used to start scraping your website.
The scraper will recursively follow any links (`<a>` tags) from those pages. It will not follow links that point to another domain.
{
"start_urls": ["https://www.example.com/docs"]
}
This parameter gives more weight to some pages and boosts the records built from them. Pages with a higher `page_rank` will be returned before pages with a lower `page_rank`.
{
"start_urls": [
{
"url": "http://www.example.com/docs/concepts/",
"page_rank": 5
},
{
"url": "http://www.example.com/docs/contributors/",
"page_rank": 1
}
]
}
In this example, records built from the Concepts page will be ranked higher than records extracted from the Contributors page.
The scraper will not follow links that match `stop_urls`.
{
"start_urls": ["https://www.example.com/docs"],
"stop_urls": ["https://www.example.com/about-us"]
}
This allows you to use custom selectors per page.
If the markup of your website is so different from one page to another that you can't have generic selectors, you can namespace your selectors and specify which set of selectors should be applied to specific pages.
{
"start_urls": [
"http://www.example.com/docs/",
{
"url": "http://www.example.com/docs/concepts/",
"selectors_key": "concepts"
},
{
"url": "http://www.example.com/docs/contributors/",
"selectors_key": "contributors"
}
],
"selectors": {
"default": {
"lvl0": ".main h1",
"lvl1": ".main h2",
"lvl2": ".main h3",
"lvl3": ".main h4",
"lvl4": ".main h5",
"text": ".main p"
},
"concepts": {
"lvl0": ".header h2",
"lvl1": ".main h1.title",
"lvl2": ".main h2.title",
"lvl3": ".main h3.title",
"lvl4": ".main h5.title",
"text": ".main p"
},
"contributors": {
"lvl0": ".main h1",
"lvl1": ".contributors .name",
"lvl2": ".contributors .title",
"text": ".contributors .description"
}
}
}
Here, all documentation pages will use the selectors defined in `selectors.default`, while pages under `./concepts` will use `selectors.concepts` and those under `./contributors` will use `selectors.contributors`.
By default, the scraper extracts content from the pages defined in `start_urls`. If those pages contain no valuable content, or if their content duplicates that of another page, set this to `false`.
{
"scrape_start_urls": false
}
You can pass an array of URLs pointing to your sitemap file(s). If this value is set, the scraper will try to read URLs from your sitemap(s).
{
"sitemap_urls": ["http://www.example.com/docs/sitemap.xml"]
}
Sitemaps can contain alternate links for URLs: other versions of the same page in a different language or with a different URL. By default, docs-scraper will ignore those URLs. Set this to `true` if you want those other versions to be scraped as well.
{
"sitemap_urls": ["http://www.example.com/docs/sitemap.xml"],
"sitemap_alternate_links": true
}
With the above configuration and the `sitemap.xml` below, both `http://www.example.com/docs/` and `http://www.example.com/de/` will be scraped.
<url>
<loc>http://www.example.com/docs/</loc>
<xhtml:link rel="alternate" hreflang="de" href="http://www.example.com/de/"/>
</url>
This expects an array of CSS selectors. Any element matching one of those selectors will be removed from the page before any data is extracted from it.
This can be used to remove a table of contents, a sidebar, or a footer, to make other selectors easier to write.
{
"selectors_exclude": [".footer", "ul.deprecated"]
}
This field can be used to add custom Meilisearch settings to your index:
"custom_settings": {
"synonyms": {
"static site generator": [
"ssg"
],
"ssg": [
"static site generator"
]
},
"stopWords": ["of", "the"],
"filterableAttributes": ["genres", "type"]
}
Learn more about `filterableAttributes`, `synonyms`, `stopWords`, and all the available settings in the Meilisearch documentation.
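To confirm these settings were applied after a scraping run, you can read them back from the instance (assuming the example credentials and the `docs` index from earlier):

curl 'http://localhost:7700/indexes/docs/settings' \
  -H 'Authorization: Bearer myMasterKey'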
The default value is 0. By increasing it, you can choose not to index records that do not have enough `lvlX` fields set. For example, with `min_indexed_level: 2`, the scraper only indexes records that have at least `lvl0`, `lvl1`, and `lvl2` set.
This is useful when your documentation has pages that share the same `lvl0` and `lvl1`, for example. In that case, you don't want to index all the shared records, but you do want to keep the content that differs across pages.
{
"min_indexed_level": 2
}
When `only_content_level` is set to `true`, the scraper won't create records for the `lvlX` selectors.
If used, `min_indexed_level` is ignored.
{
"only_content_level": true
}
When `js_render` is set to `true`, the scraper will use ChromeDriver. This is needed for pages that are rendered with JavaScript, for example pages generated with React or Vue, or applications running in development mode (autoreload / watch).
After installing ChromeDriver, provide the path to the binary using the `CHROMEDRIVER_PATH` environment variable (default value: `/usr/bin/chromedriver`).
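For example, if ChromeDriver is installed somewhere other than the default location (the path below is only illustrative):

export CHROMEDRIVER_PATH=/usr/local/bin/chromedriver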
The default value of `js_render` is `false`.
{
"js_render": true
}
This setting can be used when `js_render` is set to `true` and the pages need time to fully load. `js_wait` takes an integer that specifies the number of seconds the scraper should wait for a page to load.
{
"js_render": true,
"js_wait": 1
}
This setting specifies the domains the scraper is allowed to access. In most cases, `allowed_domains` is set automatically from `start_urls` and `stop_urls`. When scraping a domain that contains a port, for example `http://localhost:8080`, the domain needs to be added to the configuration manually.
{
"allowed_domains": ["localhost"]
}
WARNING: The scraper sends authentication headers to every scraped site, so use `allowed_domains` to adjust the scope accordingly!
Basic HTTP authentication is supported by setting these environment variables:
- `DOCS_SCRAPER_BASICAUTH_USERNAME`
- `DOCS_SCRAPER_BASICAUTH_PASSWORD`
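For example, with placeholder credentials:

export DOCS_SCRAPER_BASICAUTH_USERNAME=<username>
export DOCS_SCRAPER_BASICAUTH_PASSWORD=<password>
pipenv run ./docs_scraper <path-to-your-config-file>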
If you need to scrape sites protected by Cloudflare Access, you have to set the appropriate HTTP headers. The values for these headers are taken from the environment variables `CF_ACCESS_CLIENT_ID` and `CF_ACCESS_CLIENT_SECRET`.
In the case of Google Cloud Identity-Aware Proxy, please specify these environment variables:
- `IAP_AUTH_CLIENT_ID`: the client ID of the application you are connecting to
- `IAP_AUTH_SERVICE_ACCOUNT_JSON`: generate it in Actions -> Create key -> JSON
If you need to scrape a site protected by Keycloak (Gatekeeper), you have to provide a valid access token. If you set the environment variables `KC_URL`, `KC_REALM`, `KC_CLIENT_ID`, and `KC_CLIENT_SECRET`, the scraper authenticates itself against Keycloak using the Client Credentials Grant and adds the resulting access token as an `Authorization` HTTP header to each scraping request.
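For example, with placeholder values (the Keycloak base URL below is only illustrative):

export KC_URL=https://keycloak.example.com
export KC_REALM=<realm>
export KC_CLIENT_ID=<client-id>
export KC_CLIENT_SECRET=<client-secret>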
Websites that need JavaScript for rendering are passed through ChromeDriver. Download the version suited to your OS and then set the `CHROMEDRIVER_PATH` environment variable.
This package guarantees compatibility with version v1.x of Meilisearch, but some features may not be present. Please check the issues for more info.
Any new contribution is more than welcome in this project!
If you want to know more about the development workflow or want to contribute, please visit our contributing guidelines for detailed instructions!
Based on Algolia's docsearch scraper repository from this commit.
Due to the many changes made to this repository compared to the original one, we do not maintain it as an official fork.
Meilisearch provides and maintains many SDKs and Integration tools like this one. We want to provide everyone with an amazing search experience for any kind of project. If you want to contribute, make suggestions, or just know what's going on right now, visit us in the integration-guides repository.