<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<link href='http://fonts.googleapis.com/css?family=Lato' rel='stylesheet' type='text/css'>
<!-- <link rel="stylesheet/less" href="bootstrap/less/bootstrap.less">
<script src="bootstrap/less/less-1.3.3.min.js"></script>
-->
<link href="bootstrap/bootstrap.css" type="text/css" rel="stylesheet"/>
<link href="google-code-prettify/prettify.css" type="text/css" rel="stylesheet"/>
<script type="text/javascript" src="google-code-prettify/prettify.js"></script>
<link href="css/vertx.css" type="text/css" rel="stylesheet"/>
<link href="css/sunburst.css" type="text/css" rel="stylesheet"/>
<title>Vert.x</title>
<script>
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-30144458-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</head>
<body onload="prettyPrint()" class="hp">
<div class="navbar navbar-fixed-top">
<div class="navbar-inner">
<div class="container">
<a class="btn btn-navbar" data-toggle="collapse"
data-target=".nav-collapse">
<span class="i-bar"></span>
<span class="i-bar"></span>
<span class="i-bar"></span>
</a>
<a class="brand" href="/">Vert.x</a>
<div class="nav-collapse">
<ul class="nav">
<li><a href="/">Home</a></li>
<li><a href="downloads.html">Download</a></li>
<li><a href="install.html">Install</a></li>
<li><a href="docs.html">Documentation</a></li>
<li><a href="examples.html">Examples</a></li>
<li><a href="community.html">Project Info</a></li>
<li><a href="https://github.com/vert-x/vert.x">Github</a></li>
<li><a href="http://modulereg.vertx.io/">Module Registry</a></li>
<li><a href="http://groups.google.com/group/vertx">Google Group</a></li>
<li><a href="http://vertxproject.wordpress.com/">Blog</a></li>
</ul>
</div>
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="span12">
<div class="well">
<h1>Main Manual</h1>
</div>
</div>
</div>
<div class="row">
<div class="span12">
<div class="well">
<div>
<div class="toc">
<ul>
<li><a href="#introduction">Introduction</a><ul>
<li><a href="#what-is-vertx">What is Vert.x?</a></li>
<li><a href="#concepts-in-vertx">Concepts in Vert.x</a><ul>
<li><a href="#verticle">Verticle</a></li>
<li><a href="#module">Module</a></li>
<li><a href="#vertx-instances">Vert.x Instances</a></li>
<li><a href="#polyglot">Polyglot</a></li>
<li><a href="#concurrency">Concurrency</a></li>
<li><a href="#asynchronous-programming-model">Asynchronous Programming Model</a></li>
<li><a href="#event-loops">Event Loops</a><ul>
<li><a href="#the-golden-rule-dont-block-the-event-loop">The Golden Rule - Don't block the event loop!</a></li>
</ul>
</li>
<li><a href="#writing-blocking-code-introducing-worker-verticles">Writing blocking code - introducing Worker Verticles</a></li>
<li><a href="#shared-data">Shared data</a></li>
<li><a href="#vertx-apis">Vert.x APIs</a><ul>
<li><a href="#container-api">Container API</a></li>
<li><a href="#core-api">Core API</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li><a href="#using-vertx-from-the-command-line">Using Vert.x from the command line</a><ul>
<li><a href="#running-verticles-directly">Running Verticles directly</a><ul>
<li><a href="#forcing-language-implementation-to-use">Forcing language implementation to use</a></li>
</ul>
</li>
<li><a href="#running-modules-from-the-command-line">Running modules from the command line</a></li>
<li><a href="#running-modules-directory-from-zip-files">Running modules directly from .zip files</a></li>
<li><a href="#running-modules-as-executable-jars-fat-jars">Running modules as executable jars (fat jars)</a></li>
<li><a href="#displaying-version-of-vertx">Displaying version of Vert.x</a></li>
<li><a href="#installing-and-uninstalling-modules">Installing and uninstalling modules</a></li>
</ul>
</li>
<li><a href="#high-availability-with-vertx">High availability with Vert.x</a><ul>
<li><a href="#automatic-failover">Automatic failover</a></li>
<li><a href="#ha-groups">HA groups</a></li>
<li><a href="#dealing-with-network-partitions-quora">Dealing with network partitions - Quora</a></li>
</ul>
</li>
<li><a href="#logging">Logging</a></li>
<li><a href="#configuring-thread-pool-sizes">Configuring thread pool sizes</a><ul>
<li><a href="#the-event-loop-pool">The event loop pool</a></li>
<li><a href="#the-background-pool">The background pool</a></li>
</ul>
</li>
<li><a href="#configuring-clustering">Configuring clustering</a></li>
<li><a href="#performance-tuning">Performance Tuning</a><ul>
<li><a href="#improving-connection-time">Improving connection time</a></li>
<li><a href="#handling-large-numbers-of-connections">Handling large numbers of connections</a><ul>
<li><a href="#increase-number-of-available-file-handles">Increase number of available file handles</a></li>
<li><a href="#tune-tcp-buffer-size">Tune TCP buffer size</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="#internals">Internals</a></li>
</ul>
</div>
<h1 id="introduction">Introduction</h1><br/>
<h2 id="what-is-vertx">What is Vert.x?</h2><br/>
<p>Vert.x is a polyglot, non-blocking, event-driven application platform that runs on the JVM.</p>
<p>Some of the key highlights include:</p>
<ul>
<li>
<p>Polyglot. You can write your application components in JavaScript, Ruby, Groovy, Java or Python, and you can mix and match several programming languages in a single application.</p>
</li>
<li>
<p>Simple <em>actor-like</em> concurrency model. Vert.x allows you to write all your code as single threaded, freeing you from many of the pitfalls of multi-threaded programming. (No more <code>synchronized</code>, <code>volatile</code> or explicit locking). </p>
</li>
<li>
<p>Vert.x takes advantage of the JVM and scales seamlessly over available cores without having to manually fork multiple servers and handle inter process communication between them.</p>
</li>
<li>
<p>Vert.x has a simple, asynchronous programming model for writing scalable non-blocking applications that can scale to 10s, 100s or even millions of concurrent connections using a minimal number of operating system threads.</p>
</li>
<li>
<p>Vert.x includes a distributed event bus that spans the client and server side so your application components can communicate easily. The event bus even penetrates into in-browser JavaScript, allowing you to effortlessly create so-called <em>real-time</em> web applications.</p>
</li>
<li>
<p>Vert.x provides real power and simplicity, without being simplistic. Configuration and boiler-plate is kept to a minimum.</p>
</li>
<li>
<p>Vert.x includes a powerful module system and public module registry, so you can easily re-use and share Vert.x modules with others.</p>
</li>
<li>
<p>Vert.x can be embedded in your existing Java applications.</p>
</li>
</ul>
<h2 id="concepts-in-vertx">Concepts in Vert.x</h2><br/>
<p>In this section we'll give an overview of the main concepts in Vert.x. Many of these concepts will be discussed in more depth later on in this manual.</p>
<p><a id="verticle"> </a></p>
<h3 id="verticle">Verticle</h3><br/>
<p>The packages of code that Vert.x executes are called <em>verticles</em> (think of a particle, for Vert.x).</p>
<p>Verticles can be written in JavaScript, Ruby, Java, Groovy or Python (Scala and Clojure support is in the pipeline).</p>
<p>Many verticles can be executing concurrently in the same Vert.x instance.</p>
<p>An application might be composed of multiple verticles deployed on different nodes of your network communicating by exchanging messages over the Vert.x event bus.</p>
<p>For trivial applications verticles can be run directly from the command line, but more usually they are packaged up into modules.</p>
<h3 id="module">Module</h3><br/>
<p>Vert.x applications are usually composed of one or more modules. Modules can contain multiple verticles, potentially written in different languages. Modules allow functionality to be encapsulated and reused.</p>
<p>Modules can be placed into any Maven or <a href="http://bintray.com">Bintray</a> repository, and registered in the Vert.x <a href="http://modulereg.vertx.io">module registry</a>.</p>
<p>The Vert.x module system enables an eco-system of Vert.x modules managed by the Vert.x community.</p>
<p>For more information on modules, please consult the <a href="mods_manual.html">Modules manual</a>.</p>
<h3 id="vertx-instances">Vert.x Instances</h3><br/>
<p>Verticles run inside a Vert.x <em>instance</em>. A single Vert.x instance runs inside its own JVM instance. There can be many verticles running inside a single Vert.x instance at any one time. </p>
<p>There can be many Vert.x instances running on the same host, or on different hosts on the network at the same time. The instances can be configured to cluster with each other forming a distributed event bus over which verticle instances can communicate.</p>
<h3 id="polyglot">Polyglot</h3><br/>
<p>We want you to be able to develop your verticles in a choice of programming languages. Never have developers had such a choice of great languages, and we want that to be reflected in the languages we support.</p>
<p>Vert.x allows you to write verticles in JavaScript, Ruby, Java, Groovy and Python and we aim to support Clojure and Scala before long. These verticles can seamlessly interoperate with other verticles irrespective of what language they are written in.</p>
<h3 id="concurrency">Concurrency</h3><br/>
<p>Vert.x guarantees that a particular verticle instance is never executed by more than one thread concurrently. This gives you a huge advantage as a developer, since you can program all your code as single threaded.</p>
<p>If you're used to traditional multi-threaded concurrency this may come as a relief since you don't have to synchronize access to your state. This means a whole class of race conditions disappear, and OS thread deadlocks are a thing of the past.</p>
<p>Different verticle instances communicate with each other over the event bus by exchanging messages. Vert.x applications are concurrent because there are multiple single threaded verticle instances concurrently executing and sending messages to each other, and not by having particular verticle instances being executed concurrently by multiple threads.</p>
<p>As such the Vert.x concurrency model resembles the <a href="http://en.wikipedia.org/wiki/Actor_model">Actor Model</a> where verticle instances correspond to actors. There are, however, differences; for example, verticles tend to be more coarse grained than actors.</p>
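<p>For example, here is a minimal sketch of two verticles exchanging a message over the event bus, assuming the Vert.x 2.x Java API (the address <code>news.updates</code> is just an illustrative name; <code>vertx</code> and <code>container</code> are the fields available inside a Java verticle):</p>
<pre class="prettyprint">import org.vertx.java.core.Handler;
import org.vertx.java.core.eventbus.Message;

// In the receiving verticle: register a handler on an address.
vertx.eventBus().registerHandler("news.updates", new Handler<Message<String>>() {
  public void handle(Message<String> message) {
    container.logger().info("Got news: " + message.body());
  }
});

// In the sending verticle: send a message to the same address.
vertx.eventBus().send("news.updates", "Vert.x module released!");
</pre>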
<h3 id="asynchronous-programming-model">Asynchronous Programming Model</h3><br/>
<p>Vert.x provides a set of asynchronous core APIs. This means that most things you do in Vert.x involve setting event handlers. For example, to receive data from a TCP socket you set a handler - the handler is then called when data arrives.</p>
<p>You also set handlers to receive messages from the event bus, to receive HTTP requests and responses, to be notified when a connection is closed, or to be notified when a timer fires. This is a common pattern throughout the Vert.x API.</p>
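<p>As a minimal sketch of this handler style, assuming the Vert.x 2.x Java API (the port is illustrative), receiving data from a TCP socket looks like this - you register handlers and the event loop calls them when the events occur:</p>
<pre class="prettyprint">import org.vertx.java.core.Handler;
import org.vertx.java.core.buffer.Buffer;
import org.vertx.java.core.net.NetSocket;

// Register a connect handler; for each new connection register a data handler.
// Neither call blocks - the handlers are invoked later when the events arrive.
vertx.createNetServer().connectHandler(new Handler<NetSocket>() {
  public void handle(final NetSocket socket) {
    socket.dataHandler(new Handler<Buffer>() {
      public void handle(Buffer data) {
        socket.write(data); // simple echo
      }
    });
  }
}).listen(1234);
</pre>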
<p>We use an asynchronous API so that we can scale to handle many verticles using a small number of operating system threads. In fact Vert.x sets the number of threads to be equal to the number of available cores on the machine. With a perfectly non blocking application you would never need any more threads than that.</p>
<p>With a traditional synchronous API, threads block on API operations, and while they are blocked they cannot do any other work. A good example is a blocking read from a socket. While code is waiting for data to arrive on a socket it cannot do anything else. This means that if we want to support 1 million concurrent connections (not a crazy idea for the next generation of mobile applications) then we would need 1 million threads. This approach clearly doesn't scale.</p>
<p>Asynchronous APIs are sometimes criticised as being hard to develop with, especially when you have to co-ordinate results from more than one event handler.</p>
<p>There are ways to mitigate this, for example, by using a module such as <a href="https://github.com/vert-x/mod-rxvertx">mod-rx-vertx</a> which allows you to compose asynchronous event streams in powerful ways. This module uses the <a href="https://github.com/Netflix/RxJava">RxJava</a> library which is inspired from .net <a href="http://msdn.microsoft.com/en-us/data/gg577609.aspx">"Reactive extensions"</a>.</p>
<h3 id="event-loops">Event Loops</h3><br/>
<p>Internally, a Vert.x instance manages a small set of threads, matching the number of threads to the available cores on the server. We call these threads <em>event loops</em>, since they more or less just loop around seeing if there are any events to deliver and if so, delivering them to the appropriate handler. Examples of events might be some data has been read from a socket, a timer has fired, or an HTTP response has ended.</p>
<p>When a standard verticle instance is deployed, the server chooses an event loop which will be assigned to that instance. Any subsequent work to be done for that instance will always be dispatched using that exact thread. Of course, since there are potentially many thousands of verticles running at any one time, a single event loop will be assigned to many verticles at the same time.</p>
<p>We call this the <em>multi-reactor pattern</em>. It's like the <a href="http://en.wikipedia.org/wiki/Reactor_pattern">reactor pattern</a> but there's more than one event loop.</p>
<h4 id="the-golden-rule-dont-block-the-event-loop">The Golden Rule - Don't block the event loop!</h4><br/>
<p>A particular event loop is potentially used to service many verticle instances, so it's critical that you don't block it in your verticle. If you block it, it can't deliver events to any other handlers, and your application will grind to a halt.</p>
<p>Blocking an event loop means doing anything that ties up the event loop in the verticle and doesn't allow it to quickly continue to handle other events. This includes:</p>
<ul>
<li><code>Thread.sleep()</code></li>
<li><code>Object.wait()</code></li>
<li><code>CountDownLatch.await()</code> or any other blocking operation from <code>java.util.concurrent</code>.</li>
<li>Spinning in a loop</li>
<li>Executing a long-lived computationally intensive operation - number crunching.</li>
<li>Calling a blocking third party library operation that might take some time to complete (e.g. executing a JDBC query)</li>
</ul>
<p><a id="worker-verticles"> </a></p>
<h3 id="writing-blocking-code-introducing-worker-verticles">Writing blocking code - introducing Worker Verticles</h3><br/>
<p>In a standard verticle you should never block the event loop, however there are cases where you really can't avoid blocking, or you genuinely have computationally intensive operations to perform. An example would be calling a <em>traditional</em> Java API like JDBC.</p>
<p>You also might want to write direct-style blocking code, for example, if you want to write a simple web server, but you know you won't have a lot of traffic and you don't need to scale to many connections.</p>
<p>For cases like these, Vert.x allows you to mark a particular verticle instance as a <em>worker verticle</em>. A worker verticle differs from a standard verticle in that it is not assigned a Vert.x event loop thread, instead it executes on a thread from an internal thread pool called the <em>worker pool</em>. </p>
<p>Like standard verticles, worker verticles are never executed concurrently by more than one thread, but unlike standard verticles they can be executed by different threads at different times - whereas a standard verticle is always executed by the <em>exact same</em> thread.</p>
<p>In a worker verticle it is acceptable to perform operations that might block the thread.</p>
<p>By supporting both standard non-blocking verticles and blocking worker verticles, Vert.x provides a hybrid threading model so you can use the appropriate approach for your application. This is much more practical than platforms that mandate that either a blocking or a non-blocking approach must <em>always</em> be used.</p>
<p>Be careful when using worker verticles - a blocking approach doesn't scale if you need to handle many concurrent connections.</p>
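<p>As a sketch, deploying a worker verticle from inside another verticle might look like this, assuming the Vert.x 2.x container API (the file name, config and instance count are illustrative):</p>
<pre class="prettyprint">import org.vertx.java.core.json.JsonObject;

// Deploy 5 instances of a worker verticle; they run on the worker pool,
// so they may block without stalling any event loop.
container.deployWorkerVerticle("db_worker.js", new JsonObject().putString("db", "orders"), 5);
</pre>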
<h3 id="shared-data">Shared data</h3><br/>
<p>Message passing is extremely useful, but it's not always the best approach to concurrency for all types of applications. Some use cases are better solved by providing shared data structures that can be accessed directly by different verticle instances in the same Vert.x instance. </p>
<p>Vert.x provides a shared map and shared set facility. We insist that the data stored is <em>immutable</em> in order to prevent race conditions that might occur if concurrent access to shared state was allowed.</p>
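<p>A minimal sketch of the shared map facility, assuming the Vert.x 2.x Java API (the map name and key are arbitrary):</p>
<pre class="prettyprint">import java.util.concurrent.ConcurrentMap;

// Every verticle in the same Vert.x instance asking for "demo.counters" gets the same map.
ConcurrentMap<String, Integer> counters = vertx.sharedData().getMap("demo.counters");
counters.put("hits", 42); // keys and values should be immutable types
</pre>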
<h3 id="vertx-apis">Vert.x APIs</h3><br/>
<p>Vert.x provides a small and fairly static set of APIs that can be called directly from verticles. We provide the APIs in each of the languages that Vert.x supports.</p>
<p>We envisage that the Vert.x APIs won't change much over time and new functionality will be added by the community and the Vert.x core team in the form of modules which can be published and re-used by anyone.</p>
<p>This means the Vert.x core can remain very small and compact, and you only install those extra modules that you need to use.</p>
<p>The Vert.x APIs can be divided into the <em>container API</em> and the <em>core API</em>.</p>
<h4 id="container-api">Container API</h4><br/>
<p>This is the verticle's view of the Vert.x container in which it is running. It contains operations to do things like:</p>
<ul>
<li>Deploy and undeploy verticles</li>
<li>Deploy and undeploy modules</li>
<li>Retrieve verticle configuration</li>
<li>Logging</li>
</ul>
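<p>As a sketch, these operations look like this from inside a Java verticle, assuming the Vert.x 2.x container API (the verticle and module names are illustrative):</p>
<pre class="prettyprint">import org.vertx.java.core.json.JsonObject;

container.deployVerticle("other_verticle.js");    // deploy another verticle
container.deployModule("com.acme~some-mod~1.0");   // deploy a module
JsonObject config = container.config();            // this verticle's configuration
container.logger().info("Deployed with config: " + config);
</pre>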
<h4 id="core-api">Core API</h4><br/>
<p>This API provides functionality for:</p>
<ul>
<li>TCP/SSL servers and clients</li>
<li>HTTP/HTTPS servers and clients</li>
<li>WebSocket servers and clients</li>
<li>The distributed event bus</li>
<li>Periodic and one-off timers</li>
<li>Buffers</li>
<li>Flow control</li>
<li>File-system access</li>
<li>Shared maps and sets</li>
<li>Accessing configuration</li>
<li>SockJS</li>
</ul>
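<p>To give a flavour of the core API, here is a minimal HTTP server sketch, assuming the Vert.x 2.x Java API (the port is illustrative):</p>
<pre class="prettyprint">import org.vertx.java.core.Handler;
import org.vertx.java.core.http.HttpServerRequest;

// Serve every request with a plain text response.
vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
  public void handle(HttpServerRequest request) {
    request.response().end("Hello from Vert.x!");
  }
}).listen(8080);
</pre>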
<p><a id="running-vertx"> </a></p>
<h1 id="using-vertx-from-the-command-line">Using Vert.x from the command line</h1><br/>
<p>The <code>vertx</code> command is used to interact with Vert.x from the command line. Its main use is to run Vert.x modules and raw verticles.</p>
<p>If you just type <code>vertx</code> at a command line you can see the different options the command takes.</p>
<h2 id="running-verticles-directly">Running Verticles directly</h2><br/>
<p>You can run raw Vert.x verticles directly from the command line using 'vertx run'.</p>
<p>Running raw verticles is useful for quickly prototyping code or for trivial applications, but for anything non trivial it's highly recommended to package your application as a <a href="mods_manual.html">module</a> instead. Packaging as a module makes the code easier to run, encapsulate and reuse.</p>
<p>At minimum <code>vertx run</code> takes a single parameter - the name of the verticle to run.</p>
<p>If you're running a verticle written in JavaScript, Ruby, Groovy or Python then it's just the name of the script, e.g. <code>server.js</code>, <code>server.rb</code>, or <code>server.groovy</code>. (It doesn't have to be called <code>server</code>, you can name it anything as long as it has the right extension).</p>
<p>If the verticle is written in Java the name can either be the fully qualified class name of the Main class, <em>or</em> you can specify the Java Source file directly and Vert.x will compile it for you.</p>
<p>Here are some examples:</p>
<pre class="prettyprint">vertx run app.js
vertx run server.rb
vertx run accounts.py
vertx run MyApp.java
vertx run com.mycompany.widgets.Widget
vertx run SomeScript.groovy
</pre>
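<p>The <code>MyApp.java</code> in the examples above might look like this minimal sketch, assuming the Vert.x 2.x platform API:</p>
<pre class="prettyprint">import org.vertx.java.platform.Verticle;

public class MyApp extends Verticle {
  public void start() {
    // The inherited 'vertx' and 'container' fields give access to the core and container APIs.
    container.logger().info("MyApp started");
  }
}
</pre>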
<p>You can also prefix the verticle with the name of the language implementation to use. For example if the verticle is a compiled Groovy class, you prefix it with <code>groovy</code> so that Vert.x knows it's a Groovy class not a Java class.</p>
<pre class="prettyprint">vertx run groovy:com.mycompany.MyGroovyMainClass
</pre>
<p>The <code>vertx run</code> command can take a few optional parameters, they are:</p>
<ul>
<li>
<p><code>-conf <config_file></code> Provides some configuration to the verticle. <code>config_file</code> is the name of a text file containing a JSON object that represents the configuration for the verticle. This is optional.</p>
</li>
<li>
<p><code>-cp <path></code> The path on which to search for the verticle and any other resources used by the verticle. This defaults to <code>.</code> (current directory). If your verticle references other scripts, classes or other resources (e.g. jar files) then make sure these are on this path. The path can contain multiple path entries separated by <code>:</code> (colon). Each path entry can be an absolute or relative path to a directory containing scripts, or absolute or relative filenames for jar or zip files.
An example path might be <code>-cp classes:lib/otherscripts:jars/myjar.jar:jars/otherjar.jar</code>
Always use the path to reference any resources that your verticle requires. Please, <strong>do not</strong> put them on the system classpath as this can cause isolation issues between deployed verticles.</p>
</li>
<li>
<p><code>-instances <instances></code> The number of instances of the verticle to instantiate in the Vert.x server. Each verticle instance is strictly single threaded so to scale your application across available cores you might want to deploy more than one instance. If omitted a single instance will be deployed. We'll talk more about scaling later on in this user manual.</p>
</li>
<li>
<p><code>-includes <mod_list></code> A comma separated list of module names to include in the classpath of this verticle.
For more information on what including a module means please see the <a href="mods_manual.html">modules manual</a>.</p>
</li>
<li>
<p><code>-worker</code> This option determines whether the verticle is a <a href="#worker-verticles">worker verticle</a> or not.</p>
</li>
<li>
<p><code>-cluster</code> This option determines whether the Vert.x instance will attempt to form a cluster with other Vert.x instances on the network. Clustering Vert.x instances allows Vert.x to form a distributed event bus with other nodes. Default is false (not clustered).</p>
</li>
<li>
<p><code>-cluster-port</code> If the <code>-cluster</code> option has also been specified then this determines which port will be used for cluster communication with other Vert.x instances. Default is <code>0</code> - which means 'choose a free ephemeral port'. You don't usually need to specify this parameter unless you really need to bind to a specific port.</p>
</li>
<li>
<p><code>-cluster-host</code> If the <code>cluster</code> option has also been specified then this determines which host address will be used for cluster communication with other Vert.x instances. By default it will try and pick one from the available interfaces. If you have more than one interface and you want to use a specific one, specify it here.</p>
</li>
</ul>
<p>Here are some more examples of <code>vertx run</code>:</p>
<p>Run a JavaScript verticle server.js with default settings</p>
<pre class="prettyprint">vertx run server.js
</pre>
<p>Run 10 instances of a pre-compiled Java verticle specifying classpath</p>
<pre class="prettyprint">vertx run com.acme.MyVerticle -cp "classes:lib/myjar.jar" -instances 10
</pre>
<p>Run 10 instances of a Java verticle by <em>source file</em></p>
<pre class="prettyprint">vertx run MyVerticle.java -instances 10
</pre>
<p>Run 20 instances of a Ruby worker verticle</p>
<pre class="prettyprint">vertx run order_worker.rb -instances 20 -worker
</pre>
<p>Run two JavaScript verticles on the same machine and let them cluster together with each other and any other servers on the network</p>
<pre class="prettyprint">vertx run handler.js -cluster
vertx run sender.js -cluster
</pre>
<p>Run a Ruby verticle passing it some config</p>
<pre class="prettyprint">vertx run my_vert.rb -conf my_vert.conf
</pre>
<p>Where <code>my_vert.conf</code> might contain something like:</p>
<pre class="prettyprint">{
"name": "foo",
"num_widgets": 46
}
</pre>
<p>The config will be available inside the verticle via the core API. </p>
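<p>For example, inside a Java verticle the configuration above could be read like this (a sketch; in the Java API the configuration is exposed on the <code>container</code> object available to the verticle):</p>
<pre class="prettyprint">import org.vertx.java.core.json.JsonObject;

JsonObject conf = container.config();
String name = conf.getString("name");            // "foo"
int numWidgets = conf.getInteger("num_widgets"); // 46
</pre>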
<h3 id="forcing-language-implementation-to-use">Forcing language implementation to use</h3><br/>
<p>Vert.x works out what language implementation module to use based on the file prefix using the mapping in the file <code>langs.properties</code> in the Vert.x distribution. If there is some ambiguity, e.g. you want to specify a class as a verticle, but it's a Groovy class, not a Java class, then you can prefix the main with the language implementation name, e.g. to run a compiled class as a Groovy verticle:</p>
<pre class="prettyprint">vertx run groovy:com.mycompany.MyGroovyMainVerticle
</pre>
<p><a id="running-mods"> </a> </p>
<h2 id="running-modules-from-the-command-line">Running modules from the command line</h2><br/>
<p>It's highly recommended that you package any non trivial Vert.x functionality into a module. For detailed information on how to package your code as a module please see the <a href="mods_manual.html">modules manual</a>.</p>
<p>To run a module, instead of <code>vertx run</code> you use <code>vertx runmod <module name></code>.</p>
<p>This takes some of the same options as <code>vertx run</code>. They are:</p>
<ul>
<li>
<p><code>-conf <config_file></code> - same meaning as in <code>vertx run</code></p>
</li>
<li>
<p><code>-instances <instances></code> - same meaning as in <code>vertx run</code></p>
</li>
<li>
<p><code>-cluster</code> - same meaning as in <code>vertx run</code></p>
</li>
<li>
<p><code>-cluster-host</code> - same meaning as in <code>vertx run</code></p>
</li>
<li>
<p><code>-cp</code> If this option is specified for a <em>module</em> then it overrides the standard module classpath and Vert.x will search for the <code>mod.json</code> and other module resources using the specified classpath instead. This can be really useful when, for example, developing a module in an IDE - you can run the module with a different classpath that points to where the IDE stores the project resources. Couple this with auto-redeploy of modules and you can have your module immediately reloaded, reflecting the changes in your IDE as you make them.</p>
</li>
</ul>
<p>If you attempt to run a module and it hasn't been installed locally, then Vert.x will attempt to install it from one of the configured repositories. Out of the box Vert.x is configured to install modules from Maven Central, Sonatype Nexus, Bintray and your local Maven repository. You can also configure it to use any other Maven or Bintray repository by configuring the <code>repos.txt</code> file in the Vert.x <code>conf</code> directory. See the modules manual for more on this.</p>
<p>Some examples of running modules directly:</p>
<p>Run a module called <code>com.acme~my-mod~2.1</code></p>
<pre class="prettyprint">vertx runmod com.acme~my-mod~2.1
</pre>
<p>Run a module called <code>com.acme~other-mod~1.0.beta1</code> specifying number of instances and some config</p>
<pre class="prettyprint">vertx runmod com.acme~other-mod~1.0.beta1 -instances 10 -conf other-mod.conf
</pre>
<h2 id="running-modules-directory-from-zip-files">Running modules directly from .zip files</h2><br/>
<p>The command <code>vertx runzip</code> can be used to run a module directly from a module zip file, i.e. the module doesn't have to be pre-installed either locally or in a module repository somewhere. To do this just type</p>
<pre class="prettyprint">vertx runzip <zip_file_name>
</pre>
<p>For example</p>
<pre class="prettyprint">vertx runzip my-mod~2.0.1.zip
</pre>
<p>Vert.x will unzip the module into the system temporary directory and run it from there.</p>
<h2 id="running-modules-as-executable-jars-fat-jars">Running modules as executable jars (fat jars)</h2><br/>
<p>Vert.x also supports assembling 'fat jars'. These are executable jars which contain the Vert.x binaries along with your module, so the module can be run by just executing the jar:</p>
<pre class="prettyprint">java -jar mymodule-1.0-fat.jar
</pre>
<p>This means you don't have to have Vert.x pre-installed on the machine on which you execute the jar.</p>
<p>You can also provide the usual command line arguments that you would pass to <code>vertx runmod</code> when executing the jar, e.g.</p>
<pre class="prettyprint">java -jar mymodule-1.0-fat.jar -cluster -conf myconf.json
</pre>
<p>You can also specify a <code>-cp</code> argument to specify extra classpath to pass to the Vert.x platform. This is useful if you, say, want to use a custom <code>cluster.xml</code> when running the module, e.g.</p>
<pre class="prettyprint">java -jar mymodule-1.0-fat.jar -cluster -conf myconf.json -cp path/to/dir/containing/cluster_xml
</pre>
<p>To create a fat jar you can run</p>
<pre class="prettyprint">vertx fatjar <module_name>
</pre>
<p>Or you can use the Gradle task in the standard Gradle build or the Maven plugin to build them.</p>
<p>If you want to override any Vert.x platform configuration, e.g. <code>langs.properties</code>, <code>cluster.xml</code> or logging configuration, you can add those files to the directory <code>platform_lib</code> inside the module that you're making into a fat jar. When executing your fat jar Vert.x will recognise this directory and use the files in it to configure the platform.</p>
<h2 id="displaying-version-of-vertx">Displaying version of Vert.x</h2><br/>
<p>To display the installed version of Vert.x type</p>
<pre class="prettyprint">vertx version
</pre>
<h2 id="installing-and-uninstalling-modules">Installing and uninstalling modules</h2><br/>
<p>Please see the <a href="mods_manual.html">modules manual</a> for a detailed description of this.</p>
<h1 id="high-availability-with-vertx">High availability with Vert.x</h1><br/>
<p>Vert.x allows you to run your modules with high availability (HA) support.</p>
<h2 id="automatic-failover">Automatic failover</h2><br/>
<p>When a module is run with HA, if the Vert.x instance where it is running fails, it will be re-started automatically on another node of the cluster. We call this <em>module fail-over</em>.</p>
<p>To run a module with HA, you simply add the <code>-ha</code> switch when running it on the command line, for example:</p>
<pre class="prettyprint">vertx runmod com.acme~my-mod~2.1 -ha
</pre>
<p>Now for HA to work you need more than one Vert.x instance in the cluster, so let's say you have another Vert.x instance that you have already started, for example:</p>
<pre class="prettyprint">vertx runmod com.acme~my-other-mod~1.1 -ha
</pre>
<p>If the Vert.x instance that is running <code>com.acme~my-mod~2.1</code> now dies (you can test this by killing the process with <code>kill -9</code>), the Vert.x instance that is running <code>com.acme~my-other-mod~1.1</code> will automatically deploy <code>com.acme~my-mod~2.1</code>, so that Vert.x instance is now running both <code>com.acme~my-mod~2.1</code> and <code>com.acme~my-other-mod~1.1</code>.</p>
<p>Please note that cleanly closing a Vert.x instance will not cause failover to occur, e.g. <code>CTRL-C</code> or <code>kill -SIGINT</code>.</p>
<p>You can also start "bare" Vert.x instances - i.e. instances that are not initially running any modules; they will also provide failover for other nodes in the cluster. To start a bare instance you simply do:</p>
<pre class="prettyprint">vertx -ha
</pre>
<p>When using the <code>-ha</code> switch you do not need to provide the <code>-cluster</code> switch, as a cluster is assumed if you want HA.</p>
<h2 id="ha-groups">HA groups</h2><br/>
<p>When running a Vert.x instance with HA you can also optionally specify an HA group. An HA group denotes a logical grouping of nodes in the cluster. Only nodes with the same HA group will failover onto one another. If you don't specify an HA group the default group <code>__DEFAULT__</code> is used.</p>
<p>To specify an HA group you use the <code>-hagroup</code> switch when running the module, e.g.</p>
<pre class="prettyprint">vertx runmod com.acme~my-mod~2.1 -ha -hagroup somegroup
</pre>
<p>Let's look at an example:</p>
<p>In console 1:</p>
<pre class="prettyprint">vertx runmod com.mycompany~my-mod1~1.0 -ha -hagroup g1
</pre>
<p>In console 2:</p>
<pre class="prettyprint">vertx runmod com.mycompany~my-mod2~1.0 -ha -hagroup g1
</pre>
<p>In console 3:</p>
<pre class="prettyprint">vertx runmod com.mycompany~my-mod3~1.0 -ha -hagroup g2
</pre>
<p>If we kill the instance in console 1, it will fail over to the instance in console 2, not the instance in console 3 as that has a different group.</p>
<p>If we kill the instance in console 3, it won't get failed over as there is no other Vert.x instance in that group.</p>
<h2 id="dealing-with-network-partitions-quora">Dealing with network partitions - Quora</h2><br/>
<p>The HA implementation also supports quora.</p>
<p>When starting a Vert.x instance you can instruct it that it requires a "quorum" before any HA deployments will be deployed. A quorum is a minimum number of nodes for a particular group in the cluster. Typically you choose your quorum size to be <code>Q = 1 + N/2</code> (integer division) where <code>N</code> is the number of nodes in the group; for example, with <code>N = 5</code> nodes, <code>Q = 3</code>.</p>
<p>If there are fewer than <code>Q</code> nodes in the cluster the HA deployments will undeploy. They will redeploy again if/when a quorum is re-attained. By doing this you can protect against network partitions, a.k.a. <code>split brain</code>.</p>
<p>There is more information on quora <a href="http://en.wikipedia.org/wiki/Quorum_(distributed_computing)">here</a>.</p>
<p>To run Vert.x instances with a quorum you specify <code>-quorum</code> on the command line, e.g.</p>
<p>In console 1:</p>
<pre class="prettyprint">vertx runmod com.mycompany~my-mod1~1.0 -ha -quorum 3
</pre>
<p>At this point the Vert.x instance will start but not deploy the module (yet) because there is only one node in the cluster, not 3.</p>
<p>In console 2:</p>
<pre class="prettyprint">vertx runmod com.mycompany~my-mod2~1.0 -ha -quorum 3
</pre>
<p>At this point the Vert.x instance will start but not deploy the module (yet) because there are only two nodes in the cluster, not 3.</p>
<p>In console 3:</p>
<pre class="prettyprint">vertx runmod com.mycompany~my-mod3~1.0 -ha -quorum 3
</pre>
<p>Yay! - we have three nodes, that's a quorum. At this point the modules will automatically deploy on all three instances.</p>
<p>If we now close or kill one of the nodes the modules will automatically undeploy on the other nodes, as there is no longer a quorum.</p>
<p>Quora can also be used in conjunction with HA groups.</p>
<p><a id="logging"> </a></p>
<h1 id="logging">Logging</h1><br/>
<p>Each verticle instance gets its own logger which can be retrieved from inside the verticle. For information on how to get the logger please see the API reference guide for the language you are using.</p>
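<p>For example, in a Java verticle (a minimal sketch assuming the Vert.x 2.x container API):</p>
<pre class="prettyprint">import org.vertx.java.core.logging.Logger;

Logger logger = container.logger();
logger.info("Something happened");
logger.warn("Something odd happened");
</pre>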
<p>The log files by default go in a file called <code>vertx.log</code> in the system temp directory. On my Linux box this is <code>/tmp</code>.</p>
<p>By default <a href="http://docs.oracle.com/javase/7/docs/technotes/guides/logging/overview.html">JUL</a> logging is used. This can be configured using the file <code>$VERTX_HOME/conf/logging.properties</code>, where <code>VERTX_HOME</code> is the directory in which you installed Vert.x.</p>
<p>Advanced note: If you'd rather use a different logging framework, e.g. log4j you can do this by specifying a system property when running Vert.x (edit the vertx.sh script), e.g.</p>
<pre class="prettyprint">-Dorg.vertx.logger-delegate-factory-class-name=org.vertx.java.core.logging.impl.Log4jLogDelegateFactory
</pre>
<p>or</p>
<pre class="prettyprint">-Dorg.vertx.logger-delegate-factory-class-name=org.vertx.java.core.logging.impl.SLF4JLogDelegateFactory
</pre>
<p>If you don't want to use the Vert.x provided logging facilities that's fine. You can just use your preferred logging framework as normal and include the logging jar and config in your module. </p>
<h1 id="configuring-thread-pool-sizes">Configuring thread pool sizes</h1><br/>
<p>Vert.x maintains two thread pools: the event loop pool and the background (worker) thread pool.</p>
<h2 id="the-event-loop-pool">The event loop pool</h2><br/>
<p>The event loop pool is used to provide event loops for standard verticles. The default size is determined by the number of cores you have on your machine as returned by <code>Runtime.getRuntime().availableProcessors()</code>.</p>
<p>For a standard setup there should be little reason to change this as it should be optimal, however if you do wish to change it you can set the system property <code>vertx.pool.eventloop.size</code>.</p>
<h2 id="the-background-pool">The background pool</h2><br/>
<p>This pool is used to provide threads for worker verticles and other internal blocking tasks. Since worker threads often block, this pool is usually larger than the event loop pool. The default maximum size is <code>20</code>.</p>
<p>To change the maximum size, you can set the system property <code>vertx.pool.worker.size</code>.</p>
<h1 id="configuring-clustering">Configuring clustering</h1><br/>
<p>To configure clustering use the file <code>conf/cluster.xml</code> in the distribution.</p>
<p>If you want more information on cluster setup etc., edit <code>conf/logging.properties</code> to set <code>com.hazelcast.level=INFO</code>.</p>
<p>In particular when running clustered, and you have more than one network interface to choose from, make sure Hazelcast is using the correct interface by editing the <code>interfaces-enabled</code> element.</p>
<p>If your network does not support multicast you can easily disable multicast and enable tcp-ip in the configuration file.</p>
<h1 id="performance-tuning">Performance Tuning</h1><br/>
<h2 id="improving-connection-time">Improving connection time</h2><br/>
<p>If you're creating a lot of connections to a Vert.x server in a short period of time, you may need to tweak some settings in order to avoid the TCP accept queue getting full. This can result in connections being refused or packets being dropped during the handshake which can then cause the client to retry.</p>
<p>A classic symptom of this is if you see long connection times just over 3000ms at your client.</p>
<p>How to tune this is operating system specific but in Linux you need to increase a couple of settings in the TCP / Net config (10000 is an arbitrarily large number)</p>
<pre class="prettyprint">sudo sysctl -w net.core.somaxconn=10000
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=10000
</pre>
<p>For other operating systems, please consult your operating system documentation.</p>
<p>You also need to set the accept backlog in your server code, e.g. in Java:</p>
<pre class="prettyprint">HttpServer server = vertx.createHttpServer();
server.setAcceptBacklog(10000);
</pre>
<h2 id="handling-large-numbers-of-connections">Handling large numbers of connections</h2><br/>
<h3 id="increase-number-of-available-file-handles">Increase number of available file handles</h3><br/>
<p>In order to handle large numbers of connections on your server you will probably have to increase the maximum number of file handles as each socket requires a file handle. How to do this is operating system specific.</p>
<h3 id="tune-tcp-buffer-size">Tune TCP buffer size</h3><br/>
<p>Each TCP connection allocates memory for its buffer, so to support many connections in limited RAM you may need to reduce the TCP buffer size, e.g.</p>
<pre class="prettyprint">HttpServer server = vertx.createHttpServer();
server.setSendBufferSize(4 * 1024);
server.setReceiveBufferSize(4 * 1024);
</pre>
<h1 id="internals">Internals</h1><br/>
<p>Vert.x uses the following amazing open source projects:</p>
<ul>
<li><a href="https://github.com/netty/netty">Netty</a> for much of its network IO</li>
<li><a href="http://jruby.org/">JRuby</a> for its Ruby engine</li>
<li><a href="http://groovy.codehaus.org/">Groovy</a></li>
<li><a href="http://www.mozilla.org/rhino/">Mozilla Rhino</a> for its JavaScript engine</li>
<li><a href="http://jython.org">Jython</a> for its Python engine</li>
<li><a href="http://www.hazelcast.com/">Hazelcast</a> for group management of cluster members</li>
</ul></div>
</div>
</div>
</div>
</div>
</body>
</html>