Commit b570c0a

Update documentation
1 parent 21f618f commit b570c0a

31 files changed: +138 −36 lines

.paket/Paket.Restore.targets

Lines changed: 2 additions & 2 deletions
@@ -53,10 +53,10 @@
     </PropertyGroup>

     <!-- If shasum and awk exist get the hashes -->
-    <Exec Condition=" '$(PaketRestoreCachedHasher)' != '' " Command="$(PaketRestoreCachedHasher)" ConsoleToMSBuild='true'>
+    <Exec StandardOutputImportance="Low" Condition=" '$(PaketRestoreCachedHasher)' != '' " Command="$(PaketRestoreCachedHasher)" ConsoleToMSBuild='true'>
       <Output TaskParameter="ConsoleOutput" PropertyName="PaketRestoreCachedHash" />
     </Exec>
-    <Exec Condition=" '$(PaketRestoreLockFileHasher)' != '' " Command="$(PaketRestoreLockFileHasher)" ConsoleToMSBuild='true'>
+    <Exec StandardOutputImportance="Low" Condition=" '$(PaketRestoreLockFileHasher)' != '' " Command="$(PaketRestoreLockFileHasher)" ConsoleToMSBuild='true'>
       <Output TaskParameter="ConsoleOutput" PropertyName="PaketRestoreLockFileHash" />
     </Exec>

docs/aggregations/writing-aggregations.asciidoc

Lines changed: 3 additions & 1 deletion
@@ -204,7 +204,7 @@ s => s

 An advanced scenario may involve an existing collection of aggregation functions that should be set as aggregations
 on the request. Using LINQ's `.Aggregate()` method, each function can be applied to the aggregation descriptor
-`childAggs` below) in turn, returning the descriptor after each function application.
+(`childAggs` below) in turn, returning the descriptor after each function application.

 [source,csharp]
 ----
@@ -230,6 +230,7 @@ return s => s
 );
 ----
 <1> a list of aggregation functions to apply
+
 <2> Using LINQ's `Aggregate()` function to accumulate/apply all of the aggregation functions

 [[aggs-vs-aggregations]]
@@ -277,5 +278,6 @@ var maxPerChild = childAggregation.Max("max_per_child");
 maxPerChild.Should().NotBeNull(); <2>
 ----
 <1> Do something with the average per child. Here we just assert it's not null
+
 <2> Do something with the max per child. Here we just assert it's not null

docs/client-concepts/certificates/working-with-certificates.asciidoc

Lines changed: 11 additions & 5 deletions
@@ -23,7 +23,8 @@ that generated the certificate is trusted by the machine running the client code
 to the cluster over HTTPS with the client.

 If you are using your own CA which is not trusted however, .NET won't allow you to make HTTPS calls to that endpoint by default. With .NET,
-you can pre-empt this though a custom validation callback on the global static`ServicePointManager.ServerCertificateValidationCallback`. Most examples you will find doing this this will simply return `true` from the
+you can pre-empt this through a custom validation callback on the global static
+`ServicePointManager.ServerCertificateValidationCallback`. Most examples you will find doing this will simply return `true` from the
 validation callback and merrily whistle off into the sunset. **This is not advisable** as it allows *any* HTTPS traffic through in the
 current `AppDomain` *without* any validation. Here's a concrete example:

@@ -41,8 +42,9 @@ validation will not be performed for HTTPS connections to *both* Elasticsearch *
 ==== Validation configuration

 It's possible to also set a callback per service endpoint with .NET, and both Elasticsearch.NET and NEST expose this through
-connection settings `ConnectionConfiguration` with Elasticsearch.Net and `ConnectionSettings` with NEST). You can do
-your own validation in that handler or use one of the baked in handlers that we ship with out of the box, on the static class`CertificateValidations`.
+connection settings (`ConnectionConfiguration` with Elasticsearch.Net and `ConnectionSettings` with NEST). You can do
+your own validation in that handler or use one of the baked-in handlers that we ship out of the box, on the static class
+`CertificateValidations`.

 The two most basic ones are `AllowAll` and `DenyAll`, which accept or deny all SSL traffic to our nodes, respectively. Here's
 a couple of examples.
@@ -83,7 +85,8 @@ If your client application has access to the public CA certificate locally, Elas
 that can assert that a certificate the server presents is one that came from the local CA.

 If you use X-Pack's `certgen` tool to {xpack_current}/ssl-tls.html[generate SSL certificates], the generated node certificate
-does not include the CA in the certificate chain, in order to cut down on SSL handshake size. In those case you can use`CertificateValidations.AuthorityIsRoot` and pass it your local copy of the CA public key to assert that
+does not include the CA in the certificate chain, in order to cut down on SSL handshake size. In that case you can use
+`CertificateValidations.AuthorityIsRoot` and pass it your local copy of the CA public key to assert that
 the certificate the server presented was generated using it

 [source,csharp]
@@ -121,7 +124,7 @@ through client certificates. The `certgen` tool included with X-Pack allows you
 {ref_current}/certgen.html[generate client certificates as well] and assign the distinguished name (DN) of the
 certificate to a user with a certain role.

-certgen by default only generates a public certificate `.cer`) and a private key `.key`. To authenticate with client certificates, you need to present both
+certgen by default only generates a public certificate (`.cer`) and a private key (`.key`). To authenticate with client certificates, you need to present both
 as one certificate. The easiest way to do this is to generate a `pfx` or `p12` file from the `.cer` and `.key`
 and attach these to requests using `new X509Certificate(pathToPfx)`.

@@ -152,8 +155,11 @@ public class PkiCluster : CertgenCaCluster
 }
 ----
 <1> Set the client certificate on `ConnectionSettings`
+
 <2> The path to the `.cer` file
+
 <3> The path to the `.key` file
+
 <4> The password for the private key

 Or per request on `RequestConfiguration` which will take precedence over the ones defined on `ConnectionConfiguration`
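
A minimal sketch of the configuration these sections describe: a `CertificateValidations.AuthorityIsRoot` server validation callback plus a client certificate set on `ConnectionSettings`. The node URI, certificate paths and password are placeholders.

[source,csharp]
----
using System;
using System.Security.Cryptography.X509Certificates;
using Elasticsearch.Net;
using Nest;

// Placeholders: node URI, certificate paths and password are illustrative only.
var pool = new SingleNodeConnectionPool(new Uri("https://localhost:9200"));

var settings = new ConnectionSettings(pool)
    // assert the server certificate was signed by our local copy of the CA
    .ServerCertificateValidationCallback(
        CertificateValidations.AuthorityIsRoot(new X509Certificate("ca.cer")))
    // present the combined .cer/.key pair (exported as a pfx) as the client certificate
    .ClientCertificate(new X509Certificate2("client.pfx", "p@ssw0rd"));

var client = new ElasticClient(settings);
----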

docs/client-concepts/connection-pooling/building-blocks/connection-pooling.asciidoc

Lines changed: 2 additions & 1 deletion
@@ -22,7 +22,8 @@ NEST can use to issue client calls on.

 [IMPORTANT]
 --
-Despite the name, a connection pool in NEST is **not** like connection pooling that you may be familiar with from https://msdn.microsoft.com/en-us/library/bb399543(v=vs.110).aspx[interacting with a database using ADO.Net]; for example,
+Despite the name, a connection pool in NEST is **not** like connection pooling that you may be familiar with from
+https://msdn.microsoft.com/en-us/library/bb399543(v=vs.110).aspx[interacting with a database using ADO.Net]; for example,
 a connection pool in NEST is **not** responsible for managing an underlying pool of TCP connections to Elasticsearch,
 this is https://blogs.msdn.microsoft.com/adarshk/2005/01/02/understanding-system-net-connection-management-and-servicepointmanager/[handled by the ServicePointManager in Desktop CLR]
 and can be controlled by <<servicepoint-behaviour,changing the ServicePoint behaviour>> on `HttpConnection`.
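
A minimal sketch of the building block this note describes, with placeholder node URIs: the pool is just the set of known nodes the client issues calls against, not a pool of open TCP connections.

[source,csharp]
----
using System;
using Elasticsearch.Net;
using Nest;

// Placeholder node URIs; the "pool" is the set of nodes the client round-robins
// API calls over, not a pool of open TCP connections.
var pool = new StaticConnectionPool(new[]
{
    new Uri("http://localhost:9200"),
    new Uri("http://localhost:9201")
});

var client = new ElasticClient(new ConnectionSettings(pool));
----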

docs/client-concepts/connection-pooling/building-blocks/request-pipelines.asciidoc

Lines changed: 2 additions & 1 deletion
@@ -103,7 +103,8 @@ sniffingPipeline.FirstPoolUsageNeedsSniffing.Should().BeFalse();

 ==== Wait for first sniff

-All threads wait for the sniff on startup to finish, waiting the request timeout period. A https://msdn.microsoft.com/en-us/library/system.threading.semaphoreslim(v=vs.110).aspx[`SemaphoreSlim`]
+All threads wait for the sniff on startup to finish, waiting the request timeout period. A
+https://msdn.microsoft.com/en-us/library/system.threading.semaphoreslim(v=vs.110).aspx[`SemaphoreSlim`]
 is used to block threads until the sniff finishes and waiting threads release the `SemaphoreSlim` appropriately.

 We can demonstrate this with the following example. First, let's configure
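
For illustration only, and not NEST's internal implementation, a minimal sketch of the `SemaphoreSlim` pattern described above: the first caller performs the (simulated) sniff while other callers block for at most the timeout.

[source,csharp]
----
using System;
using System.Threading;
using System.Threading.Tasks;

// Minimal illustration: the first caller sniffs, every other caller waits on
// the semaphore until the sniff completes or the timeout elapses.
class FirstSniffGate
{
    private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1);
    private bool _sniffed;

    public async Task EnsureSniffedAsync(Func<Task> sniff, TimeSpan timeout)
    {
        if (_sniffed) return;
        if (!await _semaphore.WaitAsync(timeout)) // wait at most the request timeout
            throw new TimeoutException("Timed out waiting for the first sniff to finish");
        try
        {
            if (!_sniffed)
            {
                await sniff();   // only the first thread actually sniffs
                _sniffed = true;
            }
        }
        finally
        {
            _semaphore.Release(); // waiting threads are released afterwards
        }
    }
}
----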

docs/client-concepts/connection-pooling/exceptions/unexpected-exceptions.asciidoc

Lines changed: 7 additions & 0 deletions
@@ -60,8 +60,11 @@ audit = await audit.TraceUnexpectedException(
 );
 ----
 <1> set up a cluster with 10 nodes
+
 <2> where node 2 on port 9201 always throws an exception
+
 <3> The first call to 9200 returns a healthy response
+
 <4> ...but the second call, to 9201, returns a bad response

 Sometimes, an unexpected exception happens further down in the pipeline. In this scenario, we
@@ -100,7 +103,9 @@ audit = await audit.TraceUnexpectedException(
 );
 ----
 <1> calls on 9200 set up to throw a `WebException`
+
 <2> calls on 9201 set up to throw an `Exception`
+
 <3> Assert that the audit trail for the client call includes the bad response from 9200 and 9201

 An unexpected hard exception on ping and sniff is something we *do* try to recover from and failover to retrying on the next node.
@@ -145,6 +150,8 @@ audit = await audit.TraceUnexpectedException(
 );
 ----
 <1> `InnerException` is the exception that brought the request down
+
 <2> The hard exception that happened on ping is still available though
+
 <3> An exception can be hard to relate back to a point in time, so the exception is also available on the audit trail
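
Separate from the `Auditor` test harness shown here, a hedged sketch of inspecting such a failure from calling code. It assumes `ThrowExceptions()` was enabled on the connection settings, and the `Project` document type is hypothetical.

[source,csharp]
----
using System;
using Elasticsearch.Net;
using Nest;

public class Project { public string Name { get; set; } }  // hypothetical document type

public static class AuditTrailExample
{
    // Assumes ThrowExceptions() was enabled on ConnectionSettings so that a
    // failed call surfaces as an exception rather than an invalid response.
    public static void SearchAndInspect(IElasticClient client)
    {
        try
        {
            client.Search<Project>(s => s.MatchAll());
        }
        catch (ElasticsearchClientException e)
        {
            Console.WriteLine(e.InnerException?.Message); // the exception that brought the request down
            foreach (var audit in e.AuditTrail)           // every step the request pipeline took
                Console.WriteLine($"{audit.Event} => {audit.Node?.Uri}");
        }
    }
}
----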

docs/client-concepts/connection-pooling/exceptions/unrecoverable-exceptions.asciidoc

Lines changed: 5 additions & 0 deletions
@@ -68,6 +68,7 @@ var audit = new Auditor(() => Framework.Cluster
 );
 ----
 <1> Always succeed on ping
+
 <2> ...but always fail on calls with a 401 Bad Authentication response

 Now, let's make a client call. We'll see that the first audit event is a successful ping
@@ -88,7 +89,9 @@ audit = await audit.TraceElasticsearchException(
 );
 ----
 <1> First call results in a successful ping
+
 <2> Second call results in a bad response
+
 <3> The reason for the bad response is Bad Authentication

 When a bad authentication response occurs, the client does not attempt to deserialize the response body returned;
@@ -122,6 +125,7 @@ audit = await audit.TraceElasticsearchException(
 );
 ----
 <1> Always return a 401 bad response with an HTML response on client calls
+
 <2> Assert that the response body bytes are null

 Now in this example, by turning on `DisableDirectStreaming()` on `ConnectionSettings`, we see the same behaviour exhibited
@@ -154,5 +158,6 @@ audit = await audit.TraceElasticsearchException(
 );
 ----
 <1> Response bytes are set on the response
+
 <2> Assert that the response contains `"nginx/"`
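
A hedged sketch of the `DisableDirectStreaming()` behaviour described above, with a placeholder node URI and a hypothetical `Project` document type: with direct streaming disabled, the raw response bytes are buffered and can be inspected.

[source,csharp]
----
using System;
using System.Text;
using Nest;

public class Project { public string Name { get; set; } }  // hypothetical document type

public static class DirectStreamingExample
{
    public static void Run()
    {
        // Placeholder node URI; DisableDirectStreaming() buffers the request
        // and response bytes so they can be inspected afterwards.
        var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
            .DisableDirectStreaming();

        var client = new ElasticClient(settings);
        var response = client.Search<Project>(s => s.MatchAll());

        if (!response.IsValid && response.ApiCall.ResponseBodyInBytes != null)
        {
            // e.g. an HTML error page returned by a proxy in front of the cluster
            Console.WriteLine(Encoding.UTF8.GetString(response.ApiCall.ResponseBodyInBytes));
        }
    }
}
----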

docs/client-concepts/connection-pooling/request-overrides/disable-sniff-ping-per-request.asciidoc

Lines changed: 5 additions & 0 deletions
@@ -67,8 +67,11 @@ audit = await audit.TraceCalls(
 );
 ----
 <1> disable sniffing
+
 <2> first call is a successful ping
+
 <3> sniff on startup call happens here, on the second call
+
 <4> No sniff on startup again

 Now, let's disable pinging on the request
@@ -92,6 +95,7 @@ audit = await audit.TraceCall(
 );
 ----
 <1> disable ping
+
 <2> No ping after sniffing

 Finally, let's demonstrate disabling both sniff and ping on the request
@@ -113,5 +117,6 @@ audit = await audit.TraceCall(
 );
 ----
 <1> disable ping and sniff
+
 <2> no ping or sniff before the call
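
A hedged sketch of the per-request overrides these examples exercise, assuming the `DisablePing()`/`DisableSniffing()` options on the request configuration descriptor; the `Project` document type is hypothetical.

[source,csharp]
----
using Nest;

public class Project { public string Name { get; set; } }  // hypothetical document type

public static class PerRequestOverrides
{
    // Opts this single call out of pinging and sniffing, regardless of how the
    // connection pool is configured (fluent method names assumed).
    public static ISearchResponse<Project> SearchWithoutPingOrSniff(IElasticClient client) =>
        client.Search<Project>(s => s
            .RequestConfiguration(r => r
                .DisablePing()       // skip the ping normally issued when a node is used for the first time
                .DisableSniffing())  // skip sniffing for this request only
            .MatchAll());
}
----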

docs/client-concepts/connection-pooling/round-robin/skip-dead-nodes.asciidoc

Lines changed: 3 additions & 0 deletions
@@ -142,7 +142,9 @@ await audit.TraceCalls(
 );
 ----
 <1> The first call goes to 9200 which succeeds
+
 <2> The 2nd call does a ping on 9201 because it's used for the first time. It fails, so we wrap over to node 9202
+
 <3> The next call goes to 9203 which fails so we should wrap over

 A cluster with 2 nodes where the second node fails on ping
@@ -192,5 +194,6 @@ await audit.TraceCalls(
 );
 ----
 <1> All the calls fail
+
 <2> After all our registered nodes are marked dead, we want to sample a single dead node each time to quickly see if the cluster is back up. We do not want to retry all 4 nodes
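
A hedged sketch of the failover setup these examples describe, with placeholder node URIs: a static pool of four nodes where a failed node is marked dead and the call is retried on the next node.

[source,csharp]
----
using System;
using System.Linq;
using Elasticsearch.Net;
using Nest;

// Placeholder nodes on ports 9200-9203; a node that fails a call is marked
// dead and the request is retried on the next node in round-robin order.
var uris = Enumerable.Range(9200, 4).Select(port => new Uri($"http://localhost:{port}"));
var pool = new StaticConnectionPool(uris);

var settings = new ConnectionSettings(pool)
    .MaximumRetries(3); // retry a failed call on up to three other nodes

var client = new ElasticClient(settings);
----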

docs/client-concepts/connection-pooling/sniffing/role-detection.asciidoc

Lines changed: 5 additions & 0 deletions
@@ -140,6 +140,7 @@ var audit = new Auditor(() => Framework.Cluster
 };
 ----
 <1> Before the sniff, assert we only see three master-only nodes
+
 <2> After the sniff, assert we now know about the existence of 20 nodes.

 After the sniff has happened on 9200 before the first API call, assert that the subsequent API
@@ -220,7 +221,9 @@ var audit = new Auditor(() => Framework.Cluster
 };
 ----
 <1> for testing simplicity, disable pings
+
 <2> We only want to execute API calls to nodes in rack_one
+
 <3> After sniffing on startup, assert that the pool of nodes that the client will execute API calls against only contains the three nodes that are in `rack_one`

 With the cluster set up, assert that the sniff happens on 9200 before the first API call
@@ -297,6 +300,8 @@ await audit.TraceUnexpectedElasticsearchException(new ClientCall
 });
 ----
 <1> The audit trail indicates a sniff for the very first time on startup
+
 <2> The sniff succeeds because the node predicate is ignored when sniffing
+
 <3> when trying to do an actual API call however, the predicate prevents any nodes from being attempted
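
A hedged sketch of restricting API calls with a node predicate, in the spirit of the `rack_one` example above; the predicate here is a trivial port check purely for illustration, and the node URI is a placeholder.

[source,csharp]
----
using System;
using Elasticsearch.Net;
using Nest;

// Sniffing ignores the predicate; API calls honour it. Real predicates would
// typically inspect sniffed node attributes rather than a port number.
var pool = new SniffingConnectionPool(new[] { new Uri("http://localhost:9200") });

var settings = new ConnectionSettings(pool)
    .NodePredicate(node => node.Uri.Port == 9200); // only execute API calls against matching nodes

var client = new ElasticClient(settings);
----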
