# Updates docs build #229

Merged · 18 commits · Aug 21, 2019

31 changes: 31 additions & 0 deletions azure-pipelines.yml
```diff
@@ -22,6 +22,7 @@ name: 'vNext$(rev:.r)' # Format for build number (will be overridden)
 # ArtifactFeedID: (Optional - set to your Azure DevOps Artifact (NuGet) feed. If not provided, publish job will be skipped.)
 # BuildConfiguration: (Optional. Defaults to 'Release')
 # BuildPlatform: (Optional. Defaults to 'Any CPU')
+# GenerateDocs: (Optional. Only builds documentation website if set to 'true'.)
 # IsRelease: (Optional. By default the Release job is disabled, setting this to 'true' will enable it)
 # RunTests: 'true' (Optional - set to 'false' to disable test jobs - useful for debugging. If not provided, tests will be run.)

@@ -35,6 +36,10 @@ name: 'vNext$(rev:.r)' # Format for build number (will be overridden)
 variables:
 - name: BuildCounter
   value: $[counter(variables['VersionSuffix'],coalesce(variables['BuildCounterSeed'], 1250))]
+- name: DocumenationArtifactName
+  value: 'docs'
+- name: DocumentationArtifactZipFileName
+  value: 'documentation.zip'
 - name: BinaryArtifactName
   value: 'testbinaries'
 - name: NuGetArtifactName

@@ -160,6 +165,32 @@ stages:
         PathtoPublish: '$(Build.ArtifactStagingDirectory)/$(NuGetArtifactName)'
         ArtifactName: '$(NuGetArtifactName)'
 
+  - job: Docs
+    condition: and(succeeded(), eq(variables['GenerateDocs'], 'true'))
+    pool:
+      vmImage: 'vs2017-win2016'
+
+    steps:
+    - powershell: |
+        $(Build.SourcesDirectory)/websites/apidocs/docs.ps1 0 1
+      errorActionPreference: 'continue'
+      ignoreLASTEXITCODE: true
+      failOnStderr: false
+      displayName: 'Generate Documentation'
+
+    - task: ArchiveFiles@2
+      displayName: 'Zip Documenation Files'
+      inputs:
+        rootFolderOrFile: '$(Build.SourcesDirectory)/websites/apidocs/_site'
+        includeRootFolder: false
+        archiveFile: '$(Build.ArtifactStagingDirectory)/$(DocumenationArtifactName)/$(DocumentationArtifactZipFileName)'
+
+    - task: PublishBuildArtifacts@1
+      displayName: 'Publish Artifact: $(DocumenationArtifactName)'
+      inputs:
+        PathtoPublish: '$(Build.ArtifactStagingDirectory)/$(DocumenationArtifactName)'
+        ArtifactName: '$(DocumenationArtifactName)'
+
 
 - stage: Test_Stage
   displayName: 'Test Stage:'
```

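The new `Docs` job is opt-in: its `condition` only lets it run when the `GenerateDocs` variable evaluates to `'true'`. As a rough sketch (the variable name comes from the diff above; whether you define it as a queue-time variable, in a variable group, or inline in YAML depends on your setup):

```yaml
# Sketch: opting in to the documentation build.
# The job condition shown above reads this variable; leaving it unset
# (or set to anything other than 'true') skips the Docs job entirely.
variables:
  GenerateDocs: 'true'
```
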
28 changes: 14 additions & 14 deletions src/Lucene.Net.Analysis.Common/Analysis/Compound/package.md
```diff
@@ -145,13 +145,13 @@ This decision matrix should help you:
   public void testHyphenationCompoundWordsDE() throws Exception {
     String[] dict = { "Rind", "Fleisch", "Draht", "Schere", "Gesetz",
         "Aufgabe", "Überwachung" };
-
-    Reader reader = new FileReader("de_DR.xml");
-
-    HyphenationTree hyphenator = HyphenationCompoundWordTokenFilter
+    Reader reader = new FileReader("de_DR.xml");
+    HyphenationTree hyphenator = HyphenationCompoundWordTokenFilter
         .getHyphenationTree(reader);
-
-    HyphenationCompoundWordTokenFilter tf = new HyphenationCompoundWordTokenFilter(
+    HyphenationCompoundWordTokenFilter tf = new HyphenationCompoundWordTokenFilter(
         new WhitespaceTokenizer(new StringReader(
             "Rindfleischüberwachungsgesetz Drahtschere abba")), hyphenator,
         dict, CompoundWordTokenFilterBase.DEFAULT_MIN_WORD_SIZE,

@@ -163,14 +163,14 @@ This decision matrix should help you:
       System.out.println(t);
     }
   }
-
-  public void testHyphenationCompoundWordsWithoutDictionaryDE() throws Exception {
+  public void testHyphenationCompoundWordsWithoutDictionaryDE() throws Exception {
     Reader reader = new FileReader("de_DR.xml");
-
-    HyphenationTree hyphenator = HyphenationCompoundWordTokenFilter
+    HyphenationTree hyphenator = HyphenationCompoundWordTokenFilter
         .getHyphenationTree(reader);
-
-    HyphenationCompoundWordTokenFilter tf = new HyphenationCompoundWordTokenFilter(
+    HyphenationCompoundWordTokenFilter tf = new HyphenationCompoundWordTokenFilter(
         new WhitespaceTokenizer(new StringReader(
             "Rindfleischüberwachungsgesetz Drahtschere abba")), hyphenator);

@@ -184,8 +184,8 @@ This decision matrix should help you:
     String[] dict = { "Bil", "Dörr", "Motor", "Tak", "Borr", "Slag", "Hammar",
         "Pelar", "Glas", "Ögon", "Fodral", "Bas", "Fiol", "Makare", "Gesäll",
         "Sko", "Vind", "Rute", "Torkare", "Blad" };
-
-    DictionaryCompoundWordTokenFilter tf = new DictionaryCompoundWordTokenFilter(
+    DictionaryCompoundWordTokenFilter tf = new DictionaryCompoundWordTokenFilter(
         new WhitespaceTokenizer(
             new StringReader(
                 "Bildörr Bilmotor Biltak Slagborr Hammarborr Pelarborr Glasögonfodral Basfiolsfodral Basfiolsfodralmakaregesäll Skomakare Vindrutetorkare Vindrutetorkarblad abba")),
```

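The samples above are the original Java ones. For orientation, here is a minimal Lucene.NET (C#) sketch of the same dictionary-based decompounding, assuming the 4.8-style API of Lucene.Net.Analysis.Common; treat the names as illustrative rather than verbatim documentation:

```csharp
using System;
using System.IO;
using Lucene.Net.Analysis.Compound;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Analysis.Util;
using Lucene.Net.Util;

// Decompound Swedish compounds against a small dictionary.
var dict = new CharArraySet(LuceneVersion.LUCENE_48,
    new[] { "Bil", "Dörr", "Motor" }, true /* ignoreCase */);
var tokenizer = new WhitespaceTokenizer(LuceneVersion.LUCENE_48,
    new StringReader("Bildörr Bilmotor"));
using var tf = new DictionaryCompoundWordTokenFilter(
    LuceneVersion.LUCENE_48, tokenizer, dict);

var term = tf.AddAttribute<ICharTermAttribute>();
tf.Reset();
while (tf.IncrementToken())
    Console.WriteLine(term.ToString()); // each compound token, then its dictionary parts
tf.End();
```
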
4 changes: 2 additions & 2 deletions src/Lucene.Net.Analysis.Common/Collation/package.md
```diff
@@ -47,8 +47,8 @@
   writer.close();
   IndexReader ir = DirectoryReader.open(ramDir);
   IndexSearcher is = new IndexSearcher(ir);
-
-  QueryParser aqp = new QueryParser(version, "content", analyzer);
+  QueryParser aqp = new QueryParser(version, "content", analyzer);
   aqp.setAnalyzeRangeTerms(true);
 
   // Unicode order would include U+0633 in [ U+062F - U+0698 ], but Farsi
```

4 changes: 2 additions & 2 deletions src/Lucene.Net.Analysis.ICU/overview.md
```diff
@@ -105,8 +105,8 @@ For an introduction to Lucene's analysis API, see the <xref:Lucene.Net.Analysis>
   writer.addDocument(doc);
   writer.close();
   IndexSearcher is = new IndexSearcher(ramDir, true);
-
-  QueryParser aqp = new QueryParser(Version.LUCENE_48, "content", analyzer);
+  QueryParser aqp = new QueryParser(Version.LUCENE_48, "content", analyzer);
   aqp.setAnalyzeRangeTerms(true);
 
   // Unicode order would include U+0633 in [ U+062F - U+0698 ], but Farsi
```

2 changes: 1 addition & 1 deletion src/Lucene.Net.Analysis.Kuromoji/overview.md
```diff
@@ -1,5 +1,5 @@
 ---
-uid: Lucene.Net.Analysis.Kuromoji
+uid: Lucene.Net.Analysis.Ja
 summary: *content
 ---
```

7 changes: 6 additions & 1 deletion src/Lucene.Net.Analysis.SmartCn/overview.md
```diff
@@ -1,4 +1,9 @@
-<!--
+---
+uid: Lucene.Net.Analysis.Cn.Smart
+summary: *content
+---
+
+<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements. See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
```

8 changes: 2 additions & 6 deletions src/Lucene.Net.Analysis.SmartCn/package.md
```diff
@@ -1,9 +1,4 @@
----
-uid: Lucene.Net.Analysis.Smartcn
-summary: *content
----
-
 
 <!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements. See the NOTICE file distributed with

@@ -32,6 +27,7 @@ Three analyzers are provided for Chinese, each of which treats Chinese text in a
 CJKAnalyzer (in the analyzers/cjk package): Index bigrams (overlapping groups of two adjacent Chinese characters) as tokens.
 SmartChineseAnalyzer (in this package): Index words (attempt to segment Chinese text into words) as tokens.
 
+
 Example phrase: "我是中国人"
 
 1. StandardAnalyzer: 我-是-中-国-人
```

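To see the word-level segmentation this page describes for yourself, here is a small, hedged C# sketch; it assumes the Lucene.Net.Analysis.SmartCn package and the standard 4.8 token-stream API:

```csharp
using System;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Cn.Smart;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

// Tokenize the example phrase with SmartChineseAnalyzer.
var analyzer = new SmartChineseAnalyzer(LuceneVersion.LUCENE_48);
using TokenStream ts = analyzer.GetTokenStream("content", "我是中国人");
var term = ts.AddAttribute<ICharTermAttribute>();
ts.Reset();
while (ts.IncrementToken())
    Console.Write(term + " "); // word-level tokens, e.g. 我 是 中国 人
ts.End();
```
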
6 changes: 4 additions & 2 deletions src/Lucene.Net.Benchmark/ByTask/package.md
```diff
@@ -25,6 +25,7 @@ Benchmarking Lucene By Tasks.
 
 Contained packages:
 
+
 <table border="1" cellpadding="4">
 <tr>
 <td>**Package**</td>

@@ -54,7 +55,7 @@ Benchmarking Lucene By Tasks.
 
 ## Table Of Contents
 
-1. [Benchmarking By Tasks](#concept) 2. [How to use](#usage) 3. [Benchmark "algorithm"](#algorithm) 4. [Supported tasks/commands](#tasks) 5. [Benchmark properties](#properties) 6. [Example input algorithm and the result benchmark report.](#example) 7. [Results record counting clarified](#recsCounting)
+1. [Benchmarking By Tasks](#concept) 2. [How to use](#usage) 3. [Benchmark "algorithm"](#algorithm) 4. [Supported tasks/commands](#tasks) 5. [Benchmark properties](#properties) 6. [Example input algorithm and the result benchmark report.](#example) 7. [Results record counting clarified](#recscounting)
 
 ## Benchmarking By Tasks

@@ -212,7 +213,7 @@ Example - <font color="#FF0066">{ AddDoc } : 100 : 200/min</font> - would
 10. **Disable Counting**: Each task executed contributes to the records count.
     This count is reflected in reports under recs/s and under recsPerRun.
     Most tasks count 1, some count 0, and some count more.
-    (See [Results record counting clarified](#recsCounting) for more details.)
+    (See [Results record counting clarified](#recscounting) for more details.)
     It is possible to disable counting for a task by preceding it with <font color="#FF0066">-</font>.
 
 Example - <font color="#FF0066"> -CreateIndex </font> - would count 0 while

@@ -491,6 +492,7 @@ Example: max.buffered=buf:10:10:100:100 -
 
 Confusing? this might help: always examine the `elapsedSec` column, and always compare "apples to apples", i.e. it is interesting to check how the `rec/s` changed for the same task (or sequence) between two different runs, but it is not very useful to know how the `rec/s` differs between `Search` and `SearchTrav` tasks. For the latter, `elapsedSec` would bring more insight.
 
+
 </div>
 <div> </div>
```

8 changes: 4 additions & 4 deletions src/Lucene.Net.Benchmark/Quality/package.md
```diff
@@ -36,13 +36,13 @@ Here is a sample code used to run the TREC 2006 queries 701-850 on the .Gov2 col
   File qrelsFile = new File("qrels-701-850.txt");
   IndexReader ir = DirectoryReader.open(directory);
   IndexSearcher searcher = new IndexSearcher(ir);
-
-  int maxResults = 1000;
+  int maxResults = 1000;
   String docNameField = "docname";
 
   PrintWriter logger = new PrintWriter(System.out,true);
 
-  // use trec utilities to read trec topics into quality queries
+  // use trec utilities to read trec topics into quality queries
   TrecTopicsReader qReader = new TrecTopicsReader();
   QualityQuery qqs[] = qReader.readQueries(new BufferedReader(new FileReader(topicsFile)));
```

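The Java sample above carries over to Lucene.NET fairly directly. A heavily hedged C# sketch of the topic-reading step follows; the class and method names assume the Lucene.Net.Benchmark package's Lucene.Net.Benchmarks.Quality.Trec namespace, and the file name is a placeholder, so verify against the shipped API:

```csharp
using System.IO;
using Lucene.Net.Benchmarks.Quality;
using Lucene.Net.Benchmarks.Quality.Trec;

// Read TREC topics into quality queries, mirroring the Java sample.
var qReader = new TrecTopicsReader();
QualityQuery[] qqs;
using (var topics = new StreamReader("topics-701-850.txt")) // placeholder file name
    qqs = qReader.ReadQueries(topics);
```
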
64 changes: 26 additions & 38 deletions src/Lucene.Net.Demo/overview.md
```diff
@@ -24,49 +24,37 @@ The demo module offers simple example code to show the features of Lucene.
 
 # Apache Lucene - Building and Installing the Basic Demo
 
-<div id="minitoc-area">
-
-* [About this Document](#About_this_Document)
-* [About the Demo](#About_the_Demo)
-* [Setting your CLASSPATH](#Setting_your_CLASSPATH)
-* [Indexing Files](#Indexing_Files)
-* [About the code](#About_the_code)
-* [Location of the source](#Location_of_the_source)
-* [IndexFiles](#IndexFiles)
-* [Searching Files](#Searching_Files)
-</div>
-
-## About this Document
-
-<div class="section">
+* [About this Document](#about-this-document)
+* [About the Demo](#about-the-demo)
+* [Setting your CLASSPATH](#setting-your-classpath)
+* [Indexing Files](#indexing-files)
+* [About the code](#about-the-code)
+* [Location of the source](#location-of-the-source)
+* [IndexFiles](xref:Lucene.Net.Demo.IndexFiles)
+* [Searching Files](#searching-files)
 
-This document is intended as a "getting started" guide to using and running the Lucene demos. It walks you through some basic installation and configuration.
+## About this Document
 
-</div>
+This document is intended as a "getting started" guide to using and running the Lucene demos. It walks you through some basic installation and configuration.
 
 ## About the Demo
 
-<div class="section">
-
 The Lucene command-line demo code consists of an application that demonstrates various functionalities of Lucene and how you can add Lucene to your applications.
 
-</div>
-
 ## Setting your CLASSPATH
 
-<div class="section">
-
 First, you should [download](http://www.apache.org/dyn/closer.cgi/lucene/java/) the latest Lucene distribution and then extract it to a working directory.
 
 You need four JARs: the Lucene JAR, the queryparser JAR, the common analysis JAR, and the Lucene demo JAR. You should see the Lucene JAR file in the core/ directory you created when you extracted the archive -- it should be named something like <span class="codefrag">lucene-core-{version}.jar</span>. You should also see files called <span class="codefrag">lucene-queryparser-{version}.jar</span>, <span class="codefrag">lucene-analyzers-common-{version}.jar</span> and <span class="codefrag">lucene-demo-{version}.jar</span> under queryparser, analysis/common/ and demo/, respectively.
 
 Put all four of these files in your Java CLASSPATH.
 
-</div>
-
 ## Indexing Files
 
-<div class="section">
-
 Once you've gotten this far you're probably itching to go. Let's **build an index!** Assuming you've set your CLASSPATH correctly, just type:
```

```diff
@@ -84,29 +72,29 @@ You'll be prompted for a query. Type in a gibberish or made up word (for example
 You'll see that there are no matching results in the lucene source code.
 Now try entering the word "string". That should return a whole bunch
 of documents. The results will page at every tenth result and ask you whether
-you want more results.</div>
+you want more results.
 
 ## About the code
 
-<div class="section">
-
 In this section we walk through the sources behind the command-line Lucene demo: where to find them, their parts and their function. This section is intended for Java developers wishing to understand how to use Lucene in their applications.
 
-</div>
-
 ## Location of the source
 
-<div class="section">
-
-The files discussed here are linked into this documentation directly: * [IndexFiles.java](https://github.com/apache/lucenenet/blob/{tag}/src/Lucene.Net.Demo/IndexFiles.cs): code to create a Lucene index. [SearchFiles.java](https://github.com/apache/lucenenet/blob/{tag}/src/Lucene.Net.Demo/SearchFiles.cs): code to search a Lucene index.
-
-</div>
+The files discussed here are linked into this documentation directly: * [IndexFiles](xref:Lucene.Net.Demo.IndexFiles): code to create a Lucene index. [SearchFiles](xref:Lucene.Net.Demo.SearchFiles): code to search a Lucene index.
 
 ## IndexFiles
 
-<div class="section">
-
-As we discussed in the previous walk-through, the [IndexFiles](https://github.com/apache/lucenenet/blob/{tag}/src/Lucene.Net.Demo/IndexFiles.cs) class creates a Lucene Index. Let's take a look at how it does this.
+As we discussed in the previous walk-through, the [IndexFiles](xref:Lucene.Net.Demo.IndexFiles) class creates a Lucene Index. Let's take a look at how it does this.
 
 The <span class="codefrag">main()</span> method parses the command-line parameters, then in preparation for instantiating [IndexWriter](xref:Lucene.Net.Index.IndexWriter), opens a [Directory](xref:Lucene.Net.Store.Directory), and instantiates [StandardAnalyzer](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer) and [IndexWriterConfig](xref:Lucene.Net.Index.IndexWriterConfig).
```
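In Lucene.NET terms, the preparation this paragraph describes (plus the add-or-update step discussed under the next hunk) looks roughly like the following C# sketch; the index path and field values are placeholders:

```csharp
using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Lucene.Net.Util;

// Open a directory, create the analyzer, and configure the writer.
var dir = FSDirectory.Open("path/to/index"); // placeholder path
var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
var config = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer)
{
    OpenMode = OpenMode.CREATE_OR_APPEND // the -update behavior described below
};
using var writer = new IndexWriter(dir, config);

// Add or update one document, keyed by its path.
string path = "docs/readme.txt"; // placeholder identifier
var doc = new Document
{
    new StringField("path", path, Field.Store.YES), // the identifier field
    new TextField("contents", File.ReadAllText(path), Field.Store.NO)
};
writer.UpdateDocument(new Term("path", path), doc); // delete-if-present, then add
```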

```diff
@@ -124,14 +112,14 @@ The <span class="codefrag">IndexWriterConfig</span> instance holds all configura
 
 Looking further down in the file, after <span class="codefrag">IndexWriter</span> is instantiated, you should see the <span class="codefrag">indexDocs()</span> code. This recursive function crawls the directories and creates [Document](xref:Lucene.Net.Documents.Document) objects. The <span class="codefrag">Document</span> is simply a data object to represent the text content from the file as well as its creation time and location. These instances are added to the <span class="codefrag">IndexWriter</span>. If the <span class="codefrag">-update</span> command-line parameter is given, the <span class="codefrag">IndexWriterConfig</span> <span class="codefrag">OpenMode</span> will be set to [OpenMode.CREATE_OR_APPEND](xref:Lucene.Net.Index.IndexWriterConfig.OpenMode#methods), and rather than adding documents to the index, the <span class="codefrag">IndexWriter</span> will **update** them in the index by attempting to find an already-indexed document with the same identifier (in our case, the file path serves as the identifier); deleting it from the index if it exists; and then adding the new document to the index.
 
-</div>
-
 ## Searching Files
 
-<div class="section">
-
-The [SearchFiles](https://github.com/apache/lucenenet/blob/{tag}/src/Lucene.Net.Demo/SearchFiles.cs) class is quite simple. It primarily collaborates with an [IndexSearcher](xref:Lucene.Net.Search.IndexSearcher), [StandardAnalyzer](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer), (which is used in the [IndexFiles](https://github.com/apache/lucenenet/blob/{tag}/src/Lucene.Net.Demo/IndexFiles.cs) class as well) and a [QueryParser](xref:Lucene.Net.QueryParsers.Classic.QueryParser). The query parser is constructed with an analyzer used to interpret your query text in the same way the documents are interpreted: finding word boundaries, downcasing, and removing useless words like 'a', 'an' and 'the'. The <xref:Lucene.Net.Search.Query> object contains the results from the [QueryParser](xref:Lucene.Net.QueryParsers.Classic.QueryParser) which is passed to the searcher. Note that it's also possible to programmatically construct a rich <xref:Lucene.Net.Search.Query> object without using the query parser. The query parser just enables decoding the [ Lucene query syntax](../queryparser/org/apache/lucene/queryparser/classic/package-summary.html#package_description) into the corresponding [Query](xref:Lucene.Net.Search.Query) object.
+The [SearchFiles](xref:Lucene.Net.Demo.SearchFiles) class is quite simple. It primarily collaborates with an [IndexSearcher](xref:Lucene.Net.Search.IndexSearcher), [StandardAnalyzer](xref:Lucene.Net.Analysis.Standard.StandardAnalyzer), (which is used in the [IndexFiles](xref:Lucene.Net.Demo.IndexFiles) class as well) and a [QueryParser](xref:Lucene.Net.QueryParsers.Classic.QueryParser). The query parser is constructed with an analyzer used to interpret your query text in the same way the documents are interpreted: finding word boundaries, downcasing, and removing useless words like 'a', 'an' and 'the'. The <xref:Lucene.Net.Search.Query> object contains the results from the [QueryParser](xref:Lucene.Net.QueryParsers.Classic.QueryParser) which is passed to the searcher. Note that it's also possible to programmatically construct a rich <xref:Lucene.Net.Search.Query> object without using the query parser. The query parser just enables decoding the [ Lucene query syntax](../queryparser/org/apache/lucene/queryparser/classic/package-summary.html#package_description) into the corresponding [Query](xref:Lucene.Net.Search.Query) object.
 
 <span class="codefrag">SearchFiles</span> uses the [IndexSearcher.search](xref:Lucene.Net.Search.IndexSearcher#methods) method that returns [TopDocs](xref:Lucene.Net.Search.TopDocs) with max <span class="codefrag">n</span> hits. The results are printed in pages, sorted by score (i.e. relevance).
-
-</div>
```
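A matching hedged C# sketch of the search side (same placeholder path; field names mirror the indexing sketch above):

```csharp
using System;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.QueryParsers.Classic;
using Lucene.Net.Search;
using Lucene.Net.Store;
using Lucene.Net.Util;

// Parse the query with the same analyzer used at index time, then search.
using var reader = DirectoryReader.Open(FSDirectory.Open("path/to/index")); // placeholder
var searcher = new IndexSearcher(reader);
var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
var parser = new QueryParser(LuceneVersion.LUCENE_48, "contents", analyzer);

Query query = parser.Parse("string");
TopDocs hits = searcher.Search(query, 10); // top 10 hits, sorted by score
foreach (ScoreDoc sd in hits.ScoreDocs)
    Console.WriteLine(searcher.Doc(sd.Doc).Get("path"));
```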

2 changes: 1 addition & 1 deletion src/Lucene.Net.Facet/SortedSet/package.md
```diff
@@ -15,4 +15,4 @@
 limitations under the License.
 -->
 
-Provides faceting capabilities over facets that were indexed with <xref:Lucene.Net.Facet.Sortedset.SortedSetDocValuesFacetField>.
\ No newline at end of file
+Provides faceting capabilities over facets that were indexed with <xref:Lucene.Net.Facet.Sortedset.SortedSetDocValuesFacetField>.
```

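For orientation, a hedged C# sketch of indexing and counting such a sorted-set facet field (4.8-style API; the `writer`, `reader`, and `searcher` arguments are assumed to exist in your application):

```csharp
using Lucene.Net.Documents;
using Lucene.Net.Facet;
using Lucene.Net.Facet.SortedSet;
using Lucene.Net.Index;
using Lucene.Net.Search;

static class SortedSetFacetSketch
{
    // Index-time: attach a sorted-set facet dimension to a document.
    public static void IndexWithFacet(IndexWriter writer, FacetsConfig config)
    {
        var doc = new Document
        {
            new TextField("content", "some searchable text", Field.Store.NO),
            new SortedSetDocValuesFacetField("Author", "Lisa")
        };
        writer.AddDocument(config.Build(doc));
    }

    // Search-time: count facet values over the matching documents.
    public static FacetResult CountAuthors(IndexReader reader, IndexSearcher searcher)
    {
        var state = new DefaultSortedSetDocValuesReaderState(reader);
        var collector = new FacetsCollector();
        FacetsCollector.Search(searcher, new MatchAllDocsQuery(), 10, collector);
        Facets facets = new SortedSetDocValuesFacetCounts(state, collector);
        return facets.GetTopChildren(10, "Author");
    }
}
```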