diff --git a/docs/books/disa_stig/disa_stig_part1.md b/docs/books/disa_stig/disa_stig_part1.md
index bb0d05ac02..242ba7aa57 100644
--- a/docs/books/disa_stig/disa_stig_part1.md
+++ b/docs/books/disa_stig/disa_stig_part1.md
@@ -92,7 +92,7 @@ DISA STIG partitioning scheme for a 30G disk. My use case is as a simple web ser
![Accept Changes](images/disa_stig_pt1_img9.jpg)
-### Step 5: Configure software for your environment: Server install without a GUI.
+### Step 5: Configure software for your environment: Server install without a GUI
This will matter in **Step 6**: if you are using a GUI or a workstation configuration, the security profile will be different.
@@ -132,7 +132,7 @@ In later tutorials we can get into joining this to a FreeIPA enterprise configur
![Reboot](images/disa_stig_pt1_img18.jpg)
-### Step 11: Log in to your STIG'd Rocky Linux 8 System!
+### Step 11: Log in to your STIG'd Rocky Linux 8 System
![DoD Warning](images/disa_stig_pt1_img19.jpg)
diff --git a/docs/books/disa_stig/disa_stig_part2.md b/docs/books/disa_stig/disa_stig_part2.md
index 78b2503557..6773270cdc 100644
--- a/docs/books/disa_stig/disa_stig_part2.md
+++ b/docs/books/disa_stig/disa_stig_part2.md
@@ -22,7 +22,7 @@ Over time, these things could change and you will want to keep an eye on it. Fre
To list the security profiles available, we need to use the command `oscap info` provided by the `openscap-scanner` package. This should already be installed in your system if you've been following along since Part 1. To obtain the security profiles available:
-```
+```bash
oscap info /usr/share/xml/scap/ssg/content/ssg-rl8-ds.xml
```
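+
+If you only want to confirm that the DISA profile identifiers are present, a quick filter over that output works as well (a minimal sketch; the exact profile names can vary between `scap-security-guide` releases):
+
+```bash
+# Show only the STIG-related profile lines from the datastream
+oscap info /usr/share/xml/scap/ssg/content/ssg-rl8-ds.xml | grep -i stig
+```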
@@ -48,11 +48,11 @@ DISA is just one of many Security Profiles supported by the Rocky Linux SCAP def
There are two types to choose from here:
* stig - Without a GUI
-* stig_gui - With a GUI
+* stig_gui - With a GUI
Run a scan and create an HTML report for the DISA STIG:
-```
+```bash
sudo oscap xccdf eval --report unit-test-disa-scan.html --profile stig /usr/share/xml/scap/ssg/content/ssg-rl8-ds.xml
```
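+
+The scan prints a pass/fail result for each rule to the terminal and writes the HTML report to the directory you ran the command from; copying the report to a workstation makes it easier to read in a browser (a sketch; `user@rocky-web01` is a placeholder for your own account and host):
+
+```bash
+# Pull the report back for review in a local browser
+scp user@rocky-web01:unit-test-disa-scan.html .
+```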
@@ -69,15 +69,18 @@ And will output an HTML report:
Next, we will generate a scan and then use the results of that scan to generate a bash script that remediates the system based on the DISA STIG profile. I do not recommend using automatic remediation; always review the changes before actually running them.
1) Generate a scan on the system:
- ```
+
+ ```bash
sudo oscap xccdf eval --results disa-stig-scan.xml --profile stig /usr/share/xml/scap/ssg/content/ssg-rl8-ds.xml
```
+
2) Use this scan output to generate the script:
- ```
- sudo oscap xccdf generate fix --output draft-disa-remediate.sh --profile stig disa-stig-scan.xml
+
+ ```bash
+ sudo oscap xccdf generate fix --output draft-disa-remediate.sh --profile stig disa-stig-scan.xml
```
-The resulting script will include all the changes it would make the system.
+The resulting script will include all the changes it would make to the system.
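+
+Before applying anything, read through the generated script and run it manually only once you are comfortable with every change (a minimal sketch, using the filenames from the steps above):
+
+```bash
+# Review the proposed changes first
+less draft-disa-remediate.sh
+
+# Apply the remediation only after review (requires root)
+sudo bash draft-disa-remediate.sh
+```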
!!! warning
@@ -90,12 +93,15 @@ The resulting script will include all the changes it would make the system.
You can also generate remediation actions in Ansible playbook format. Let's repeat the section above, but this time with Ansible output:
1) Generate a scan on the system:
+
+ ```bash
+ sudo oscap xccdf eval --results disa-stig-scan.xml --profile stig /usr/share/xml/scap/ssg/content/ssg-rl8-ds.xml
```
- sudo oscap xccdf eval --results disa-stig-scan.xml --profile stig /usr/share/xml/scap/ssg/content/ssg-rl8-ds.xml
- ```
+
2) Use this scan output to generate the script:
- ```
- sudo oscap xccdf generate fix --fix-type ansible --output draft-disa-remediate.yml --profile stig disa-stig-scan.xml
+
+ ```bash
+ sudo oscap xccdf generate fix --fix-type ansible --output draft-disa-remediate.yml --profile stig disa-stig-scan.xml
```
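+
+As with the bash variant, review the playbook before running it; Ansible's check mode gives a convenient preview of what would change (a sketch, assuming you already have an inventory entry for this host):
+
+```bash
+# Dry run: show the changes without applying them
+ansible-playbook --check --diff draft-disa-remediate.yml
+```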
!!! warning
@@ -109,4 +115,3 @@ You can also generate remediation actions in ansible playbook format. Let's repe
Scott Shinn is the CTO for Atomicorp, and part of the Rocky Linux Security team. He has been involved with federal information systems at
the White House, Department of Defense, and Intelligence Community since 1995. Part of that was creating STIGs and the requirement
that you use them, and I am so very sorry about that.
-
diff --git a/docs/books/disa_stig/disa_stig_part3.md b/docs/books/disa_stig/disa_stig_part3.md
index 46e117f73b..a1eefa17eb 100644
--- a/docs/books/disa_stig/disa_stig_part3.md
+++ b/docs/books/disa_stig/disa_stig_part3.md
@@ -10,9 +10,9 @@ tags:
- enterprise
---
-# Introduction
+# Introduction
-In part 1 of this series we covered how to build our web server with the base RHEL8 DISA STIG applied, and in part 2 we learned how to test the STIG compliance with the OpenSCAP tool. Now we’re going to actually do something with the system, and build a simple web application and apply the DISA web server STIG: https://www.stigviewer.com/stig/web_server/
+In part 1 of this series we covered how to build our web server with the base RHEL8 DISA STIG applied, and in part 2 we learned how to test the STIG compliance with the OpenSCAP tool. Now we’re going to actually do something with the system, and build a simple web application and apply the DISA web server STIG: <https://www.stigviewer.com/stig/web_server/>
First, let’s compare what we’re getting into here. The RHEL 8 DISA STIG is targeted at a very specific platform, so the controls are pretty easy to understand in that context, test, and apply. Application STIGs have to be portable across multiple platforms, so the content here is generic in order to work on different Linux distributions (RHEL, Ubuntu, SuSE, etc.). This means that tools like OpenSCAP won’t help us audit/remediate the configuration; we’re going to have to do this manually. Those STIGs are:
@@ -27,43 +27,43 @@ Before you start, you'll need to refer back to Part 1 and apply the DISA STIG Se
1.) Install `apache` and `mod_ssl`
-```
- dnf install httpd mod_ssl
+```bash
+dnf install httpd mod_ssl
```
2.) Configuration changes
-```
- sed -i 's/^\([^#].*\)**/# \1/g' /etc/httpd/conf.d/welcome.conf
- dnf -y remove httpd-manual
- dnf -y install mod_session
-
- echo “MaxKeepAliveRequests 100” > /etc/httpd/conf.d/disa-apache-stig.conf
- echo “SessionCookieName session path=/; HttpOnly; Secure;” >> /etc/httpd/conf.d/disa-apache-stig.conf
- echo “Session On” >> /etc/httpd/conf.d/disa-apache-stig.conf
- echo “SessionMaxAge 600” >> /etc/httpd/conf.d/disa-apache-stig.conf
- echo “SessionCryptoCipher aes256” >> /etc/httpd/conf.d/disa-apache-stig.conf
- echo “Timeout 10” >> /etc/httpd/conf.d/disa-apache-stig.conf
- echo “TraceEnable Off” >> /etc/httpd/conf.d/disa-apache-stig.conf
- echo “RequestReadTimeout 120” >> /etc/httpd/conf.d/disa-apache-stig.conf
-
- sed -i “s/^#LoadModule usertrack_module/LoadModule usertrack_module/g” /etc/httpd/conf.modules.d/00-optional.conf
- sed -i "s/proxy_module/#proxy_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
- sed -i "s/proxy_ajp_module/#proxy_ajp_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
- sed -i "s/proxy_balancer_module/#proxy_balancer_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
- sed -i "s/proxy_ftp_module/#proxy_ftp_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
- sed -i "s/proxy_http_module/#proxy_http_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
- sed -i "s/proxy_connect_module/#proxy_connect_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
+```bash
+sed -i 's/^\([^#].*\)/# \1/g' /etc/httpd/conf.d/welcome.conf
+dnf -y remove httpd-manual
+dnf -y install mod_session
+
+echo "MaxKeepAliveRequests 100" > /etc/httpd/conf.d/disa-apache-stig.conf
+echo "SessionCookieName session path=/; HttpOnly; Secure;" >> /etc/httpd/conf.d/disa-apache-stig.conf
+echo "Session On" >> /etc/httpd/conf.d/disa-apache-stig.conf
+echo "SessionMaxAge 600" >> /etc/httpd/conf.d/disa-apache-stig.conf
+echo "SessionCryptoCipher aes256" >> /etc/httpd/conf.d/disa-apache-stig.conf
+echo "Timeout 10" >> /etc/httpd/conf.d/disa-apache-stig.conf
+echo "TraceEnable Off" >> /etc/httpd/conf.d/disa-apache-stig.conf
+echo "RequestReadTimeout 120" >> /etc/httpd/conf.d/disa-apache-stig.conf
+
+sed -i "s/^#LoadModule usertrack_module/LoadModule usertrack_module/g" /etc/httpd/conf.modules.d/00-optional.conf
+sed -i "s/proxy_module/#proxy_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
+sed -i "s/proxy_ajp_module/#proxy_ajp_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
+sed -i "s/proxy_balancer_module/#proxy_balancer_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
+sed -i "s/proxy_ftp_module/#proxy_ftp_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
+sed -i "s/proxy_http_module/#proxy_http_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
+sed -i "s/proxy_connect_module/#proxy_connect_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
```
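+
+Before moving on, it is worth checking that Apache still parses the modified configuration cleanly (a quick sanity check, not a STIG requirement):
+
+```bash
+# Validate the syntax of the updated configuration
+apachectl configtest
+```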
3.) Update Firewall policy and start `httpd`
-```
- firewall-cmd --zone=public --add-service=https --permanent
- firewall-cmd --zone=public --add-service=https
- firewall-cmd --reload
- systemctl enable httpd
- systemctl start httpd
+```bash
+firewall-cmd --zone=public --add-service=https --permanent
+firewall-cmd --zone=public --add-service=https
+firewall-cmd --reload
+systemctl enable httpd
+systemctl start httpd
```
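+
+At this point the web server should be running and listening on ports 80 and 443; a few quick checks on the server itself (a sketch):
+
+```bash
+# Confirm the service is active and listening
+systemctl status httpd --no-pager
+ss -tlnp | grep -E ':(80|443)'
+
+# The default certificate is self-signed, so use -k for a local test
+curl -kI https://localhost/
+```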
## Detail Controls Overview
@@ -78,7 +78,7 @@ If you’ve gotten this far, you’re probably interested in knowing more about
### Types
-* Technical - 24 controls
+* Technical - 24 controls
* Operational - 23 controls
We’re not going to cover the “why” for these changes in this article, just what needs to happen if it is a technical control. If there is nothing we can change, as in the case of an Operational control, the **Fix:** field will be none. The good news is that in a lot of these cases, this is already the default in Rocky Linux 8, so you don’t need to change anything at all.
@@ -95,9 +95,9 @@ We’re not going to cover the “why” for these changes in this article, just
**Severity:** Cat I High
**Type:** Technical
-**Fix:**
+**Fix:**
-```
+```bash
sed -i 's/^\([^#].*\)/# \1/g' /etc/httpd/conf.d/welcome.conf
```
@@ -119,133 +119,129 @@ sed -i 's/^\([^#].*\)/# \1/g' /etc/httpd/conf.d/welcome.conf
**Type:** Technical
**Fix:** None, Fixed by default in Rocky Linux 8
-**(V-214245)** The Apache web server must have Web Distributed Authoring (WebDAV) disabled.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**(V-214245)** The Apache web server must have Web Distributed Authoring (WebDAV) disabled.
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**
-```
+```bash
sed -i 's/^\([^#].*\)/# \1/g' /etc/httpd/conf.d/welcome.conf
```
**(V-214264)** The Apache web server must be configured to integrate with an organization's security infrastructure.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, forward web server logs to SIEM
-
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, forward web server logs to SIEM
**(V-214243)** The Apache web server must have resource mappings set to disable the serving of certain file types.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by default in Rocky Linux 8
-
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by default in Rocky Linux 8
**(V-214240)** The Apache web server must only contain services and functions necessary for operation.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
-
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**
-```
+```bash
dnf remove httpd-manual
```
**(V-214238)** Expansion modules must be fully reviewed, tested, and signed before they can exist on a production Apache web server.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, disable all modules not required for the application
-
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, disable all modules not required for the application
**(V-214268)** Cookies exchanged between the Apache web server and the client, such as session cookies, must have cookie properties set to prohibit client-side scripts from reading the cookie data.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**
-```
-dnf install mod_session
+```bash
+dnf install mod_session
echo "SessionCookieName session path=/; HttpOnly; Secure;" >> /etc/httpd/conf.d/disa-apache-stig.conf
```
**(V-214269)** The Apache web server must remove all export ciphers to protect the confidentiality and integrity of transmitted information.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by default in Rocky Linux 8 DISA STIG security Profile
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by default in Rocky Linux 8 DISA STIG security Profile
-**(V-214260)** The Apache web server must be configured to immediately disconnect or disable remote access to the hosted applications.
+**(V-214260)** The Apache web server must be configured to immediately disconnect or disable remote access to the hosted applications.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, this is a procedure to stop the web server
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, this is a procedure to stop the web server
**(V-214249)** The Apache web server must separate the hosted applications from hosted Apache web server management functionality.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, this is related to the web applications rather than the server
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, this is related to the web applications rather than the server
**(V-214246)** The Apache web server must be configured to use a specified IP address and port.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, the web server should be configured to only listen on a specific IP / port
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, the web server should be configured to only listen on a specific IP / port
**(V-214247)** Apache web server accounts accessing the directory tree, the shell, or other operating system functions and utilities must only be administrative accounts.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, all files, and directories served by the web server need to be owned by administrative users, and not the web server user.
-
-**(V-214244)** The Apache web server must allow the mappings to unused and vulnerable scripts to be removed.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, all files and directories served by the web server need to be owned by administrative users, not the web server user.
+
+**(V-214244)** The Apache web server must allow the mappings to unused and vulnerable scripts to be removed.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, any cgi-bin or other Script/ScriptAlias mappings that are not used must be removed
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, any cgi-bin or other Script/ScriptAlias mappings that are not used must be removed
**(V-214263)** The Apache web server must not impede the ability to write specified log record content to an audit log server.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Work with the SIEM administrator to allow the ability to write specified log record content to an audit log server.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Work with the SIEM administrator to allow the ability to write specified log record content to an audit log server.
**(V-214228)** The Apache web server must limit the number of allowed simultaneous session requests.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**
-```
+```bash
echo "MaxKeepAliveRequests 100" > /etc/httpd/conf.d/disa-apache-stig.conf
```
**(V-214229)** The Apache web server must perform server-side session management.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**
-```
+```bash
sed -i "s/^#LoadModule usertrack_module/LoadModule usertrack_module/g" /etc/httpd/conf.modules.d/00-optional.conf
```
-**(V-214266)** The Apache web server must prohibit or restrict the use of nonsecure or unnecessary ports, protocols, modules, and/or services.
+**(V-214266)** The Apache web server must prohibit or restrict the use of nonsecure or unnecessary ports, protocols, modules, and/or services.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Ensure the website enforces the use of IANA well-known ports for HTTP and HTTPS.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Ensure the website enforces the use of IANA well-known ports for HTTP and HTTPS.
**(V-214241)** The Apache web server must not be a proxy server.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**
-```
+```bash
sed -i "s/proxy_module/#proxy_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
sed -i "s/proxy_ajp_module/#proxy_ajp_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
sed -i "s/proxy_balancer_module/#proxy_balancer_module/g" /etc/httpd/conf.modules.d/00-proxy.conf
@@ -256,191 +252,190 @@ sed -i "s/proxy_connect_module/#proxy_connect_module/g" /etc/httpd/conf.modules.
**(V-214265)** The Apache web server must generate log records that can be mapped to Coordinated Universal Time (UTC) or Greenwich Mean Time (GMT), which are stamped at a minimum granularity of one second.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by default in Rocky Linux 8
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by default in Rocky Linux 8
**(V-214256)** Warning and error messages displayed to clients must be modified to minimize the identity of the Apache web server, patches, loaded modules, and directory paths.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** Use the "ErrorDocument" directive to enable custom error pages for 4xx or 5xx HTTP status codes.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** Use the "ErrorDocument" directive to enable custom error pages for 4xx or 5xx HTTP status codes.
**(V-214237)** The log data and records from the Apache web server must be backed up onto a different system or media.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, document the web server backup procedures
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, document the web server backup procedures
**(V-214236)** The log information from the Apache web server must be protected from unauthorized modification or deletion.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, document the web server backup procedures
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, document the web server backup procedures
-**(V-214261)** Non-privileged accounts on the hosting system must only access Apache web server security-relevant information and functions through a distinct administrative account.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Restrict access to the web administration tool to only the System Administrator, Web Manager, or the Web Manager designees.
+**(V-214261)** Non-privileged accounts on the hosting system must only access Apache web server security-relevant information and functions through a distinct administrative account.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Restrict access to the web administration tool to only the System Administrator, Web Manager, or the Web Manager designees.
**(V-214235)** The Apache web server log files must only be accessible by privileged users.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, To protect the integrity of the data that is being captured in the log files, ensure that only the members of the Auditors group, Administrators, and the user assigned to run the web server software is granted permissions to read the log files.
-
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, To protect the integrity of the data that is being captured in the log files, ensure that only the members of the Auditors group, Administrators, and the user assigned to run the web server software are granted permissions to read the log files.
+
**(V-214234)** The Apache web server must use a logging mechanism that is configured to alert the Information System Security Officer (ISSO) and System Administrator (SA) in the event of a processing failure.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Work with the SIEM administrator to configure an alert when no audit data is received from Apache based on the defined schedule of connections.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Work with the SIEM administrator to configure an alert when no audit data is received from Apache based on the defined schedule of connections.
-**(V-214233)** An Apache web server, behind a load balancer or proxy server, must produce log records containing the client IP information as the source and destination and not the load balancer or proxy IP information with each event.
+**(V-214233)** An Apache web server, behind a load balancer or proxy server, must produce log records containing the client IP information as the source and destination and not the load balancer or proxy IP information with each event.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Access the proxy server through which inbound web traffic is passed and configure settings to pass web traffic to the Apache web server transparently.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Access the proxy server through which inbound web traffic is passed and configure settings to pass web traffic to the Apache web server transparently.
-Refer to https://httpd.apache.org/docs/2.4/mod/mod_remoteip.html for additional information on logging options based on your proxy/load balancing setup.
+Refer to <https://httpd.apache.org/docs/2.4/mod/mod_remoteip.html> for additional information on logging options based on your proxy/load balancing setup.
**(V-214231)** The Apache web server must have system logging enabled.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by default in Rocky Linux 8
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by default in Rocky Linux 8
**(V-214232)** The Apache web server must generate, at a minimum, log records for system startup and shutdown, system access, and system authentication events.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by default in Rocky Linux 8
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by default in Rocky Linux 8
-V-214251 Cookies exchanged between the Apache web server and client, such as session cookies, must have security settings that disallow cookie access outside the originating Apache web server and hosted application.
+**(V-214251)** Cookies exchanged between the Apache web server and client, such as session cookies, must have security settings that disallow cookie access outside the originating Apache web server and hosted application.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**
-```
+```bash
echo "Session On" >> /etc/httpd/conf.d/disa-apache-stig.conf
```
**(V-214250)** The Apache web server must invalidate session identifiers upon hosted application user logout or other session termination.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**
-```
+```bash
echo "SessionMaxAge 600" >> /etc/httpd/conf.d/disa-apache-stig.conf
```
**(V-214252)** The Apache web server must generate a session ID long enough that it cannot be guessed through brute force.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**
-```
+```bash
echo "SessionCryptoCipher aes256" >> /etc/httpd/conf.d/disa-apache-stig.conf
```
**(V-214255)** The Apache web server must be tuned to handle the operational requirements of the hosted application.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**
-```
+```bash
echo "Timeout 10" >> /etc/httpd/conf.d/disa-apache-stig.conf
```
**(V-214254)** The Apache web server must be built to fail to a known safe state if system initialization fails, shutdown fails, or aborts fail.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Prepare documentation for disaster recovery methods for the Apache 2.4 web server in the event of the necessity for rollback.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Prepare documentation for disaster recovery methods for the Apache 2.4 web server in the event of the necessity for rollback.
-**(V-214257)** Debugging and trace information used to diagnose the Apache web server must be disabled.
+**(V-214257)** Debugging and trace information used to diagnose the Apache web server must be disabled.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**
-```
+```bash
echo "TraceEnable Off" >> /etc/httpd/conf.d/disa-apache-stig.conf
```
**(V-214230)** The Apache web server must use cryptography to protect the integrity of remote sessions.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**
-```
+```bash
sed -i "s/^#SSLProtocol.*/SSLProtocol -ALL +TLSv1.2/g" /etc/httpd/conf.d/ssl.conf
```
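+
+To confirm the change took effect, check the directive and reload the service (a quick sketch):
+
+```bash
+# Should now read: SSLProtocol -ALL +TLSv1.2
+grep '^SSLProtocol' /etc/httpd/conf.d/ssl.conf
+
+# Reload httpd so the new protocol setting is applied
+systemctl reload httpd
+```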
-**(V-214258)** The Apache web server must set an inactive timeout for sessions.
+**(V-214258)** The Apache web server must set an inactive timeout for sessions.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:**
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:**
-```
+```bash
echo "RequestReadTimeout 120" >> /etc/httpd/conf.d/disa-apache-stig.conf
```
**(V-214270)** The Apache web server must install security-relevant software updates within the configured time period directed by an authoritative source (e.g., IAVM, CTOs, DTMs, and STIGs).
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Install the current version of the web server software and maintain appropriate service packs and patches.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Install the current version of the web server software and maintain appropriate service packs and patches.
-**(V-214239)** The Apache web server must not perform user management for hosted applications.
+**(V-214239)** The Apache web server must not perform user management for hosted applications.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by default in Rocky Linux 8
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by default in Rocky Linux 8
**(V-214274)** The Apache web server htpasswd files (if present) must reflect proper ownership and permissions.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Ensure the SA or Web Manager account owns the "htpasswd" file. Ensure permissions are set to "550".
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Ensure the SA or Web Manager account owns the "htpasswd" file. Ensure permissions are set to "550".
**(V-214259)** The Apache web server must restrict inbound connections from nonsecure zones.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** None, Configure the "http.conf" file to include restrictions.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Configure the "http.conf" file to include restrictions.
Example:
-```
+```text
Require not ip 192.168.205
Require not host phishers.example.com
```
-**(V-214267)** The Apache web server must be protected from being stopped by a non-privileged user.
+**(V-214267)** The Apache web server must be protected from being stopped by a non-privileged user.
-**Severity:** Cat II Medium
-**Type:** Technical
-**Fix:** None, Fixed by Rocky Linux 8 by default
+**Severity:** Cat II Medium
+**Type:** Technical
+**Fix:** None, Fixed by Rocky Linux 8 by default
**(V-214262)** The Apache web server must use a logging mechanism that is configured to allocate log record storage capacity large enough to accommodate the logging requirements of the Apache web server.
-**Severity:** Cat II Medium
-**Type:** Operational
-**Fix:** none, Work with the SIEM administrator to determine if the SIEM is configured to allocate log record storage capacity large enough to accommodate the logging requirements of the Apache web server.
+**Severity:** Cat II Medium
+**Type:** Operational
+**Fix:** None, Work with the SIEM administrator to determine if the SIEM is configured to allocate log record storage capacity large enough to accommodate the logging requirements of the Apache web server.
**(V-214272)** The Apache web server must be configured in accordance with the security configuration settings based on DoD security configuration or implementation guidance, including STIGs, NSA configuration guides, CTOs, and DTMs.
-**Severity:** Cat III Low
-**Type:** Operational
-**Fix:** None
+**Severity:** Cat III Low
+**Type:** Operational
+**Fix:** None
## About The Author
Scott Shinn is the CTO for Atomicorp, and part of the Rocky Linux Security team. He has been involved with federal information systems at the White House, Department of Defense, and Intelligence Community since 1995. Part of that was creating STIGs and the requirement that you use them, and I am so very sorry about that.
-
diff --git a/docs/books/index.md b/docs/books/index.md
index 1e8e6ac52c..47dc900260 100644
--- a/docs/books/index.md
+++ b/docs/books/index.md
@@ -9,6 +9,7 @@ contributors: @fromoz, Ganna Zhyrnova
You have found the **Books** section of the documentation. This is where longer-form documentation is kept. These documents are broken down into sections or **_chapters_** to make it easy for you to work through them at your own pace and keep track of your progress. These documents were created by people just like you, with a passion for certain subjects.
Would you like to try your hand at writing an addition to this section? If so, that would be GREAT! Simply join the conversation on the [Mattermost Documentation channel](https://chat.rockylinux.org/rocky-linux/channels/documentation) and we will help you on your way.
+
## Download for offline reading
Our books can be downloaded in PDF format for offline reading.
diff --git a/docs/books/learning_ansible/01-basic.md b/docs/books/learning_ansible/01-basic.md
index a2c941c24b..67f667ddf8 100644
--- a/docs/books/learning_ansible/01-basic.md
+++ b/docs/books/learning_ansible/01-basic.md
@@ -13,13 +13,13 @@ In this chapter you will learn how to work with Ansible.
**Objectives**: In this chapter you will learn how to:
-:heavy_check_mark: Implement Ansible;
-:heavy_check_mark: Apply configuration changes on a server;
-:heavy_check_mark: Create first Ansible playbooks;
+:heavy_check_mark: Implement Ansible;
+:heavy_check_mark: Apply configuration changes on a server;
+:heavy_check_mark: Create first Ansible playbooks;
:checkered_flag: **ansible**, **module**, **playbook**
-**Knowledge**: :star: :star: :star:
+**Knowledge**: :star: :star: :star:
**Complexity**: :star: :star:
**Reading time**: 30 minutes
@@ -87,7 +87,7 @@ To offer a graphical interface to your daily use of Ansible, you can install som
Ansible is available in the _EPEL_ repository, but may sometimes be too old for the current version, and you'll want to work with a more recent version.
-We will therefore consider two types of installation:
+We will therefore consider two types of installation:
* the one based on EPEL repositories
* one based on the `pip` python package manager
@@ -96,21 +96,21 @@ The _EPEL_ is required for both versions, so you can go ahead and install that n
* EPEL installation:
-```
-$ sudo dnf install epel-release
+```bash
+sudo dnf install epel-release
```
### Installation from EPEL
If we install Ansible from the _EPEL_, we can do the following:
-```
-$ sudo dnf install ansible
+```bash
+sudo dnf install ansible
```
And then verify the installation:
-```
+```bash
$ ansible --version
ansible [core 2.14.2]
config file = /etc/ansible/ansible.cfg
@@ -138,8 +138,8 @@ As we want to use a newer version of Ansible, we will install it from `python3-p
At this stage, we can choose to install ansible with the version of python we want.
-```
-$ sudo dnf install python38 python38-pip python38-wheel python3-argcomplete rust cargo curl
+```bash
+sudo dnf install python38 python38-pip python38-wheel python3-argcomplete rust cargo curl
```
!!! Note
@@ -149,14 +149,14 @@ $ sudo dnf install python38 python38-pip python38-wheel python3-argcomplete rust
We can now install Ansible:
-```
-$ pip3.8 install --user ansible
-$ activate-global-python-argcomplete --user
+```bash
+pip3.8 install --user ansible
+activate-global-python-argcomplete --user
```
Check your Ansible version:
-```
+```bash
$ ansible --version
ansible [core 2.13.11]
config file = None
@@ -184,7 +184,7 @@ There are two main configuration files:
The configuration file would automatically be created if Ansible was installed with its RPM package. With a `pip` installation, this file does not exist. We'll have to create it by hand thanks to the `ansible-config` command:
-```
+```bash
$ ansible-config -h
usage: ansible-config [-h] [--version] [-v] {list,dump,view,init} ...
@@ -200,7 +200,7 @@ positional arguments:
Example:
-```
+```bash
ansible-config init --disabled > /etc/ansible/ansible.cfg
```
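+
+Once the file has been generated, `ansible-config` can also show which settings differ from the defaults (`dump` is one of the subcommands listed in the usage above):
+
+```bash
+ansible-config dump --only-changed
+```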
@@ -224,7 +224,7 @@ It is sometimes necessary to think carefully about how to build this file.
Go to the default inventory file, which is located under `/etc/ansible/hosts`. Some examples are provided and commented:
-```
+```text
# This is the default ansible 'hosts' file.
#
# It should live in /etc/ansible/hosts
@@ -278,7 +278,7 @@ The inventory can be generated automatically in production, especially if you ha
As you may have noticed, the groups are declared in square brackets. Then come the elements belonging to the groups. You can create, for example, a `rocky8` group by inserting the following block into this file:
-```
+```text
[rocky8]
172.16.1.10
172.16.1.11
@@ -286,7 +286,7 @@ As you may have noticed, the groups are declared in square brackets. Then come t
Groups can be used within other groups. In this case, it must be specified that the parent group is composed of subgroups with the `:children` attribute like this:
-```
+```text
[linux:children]
rocky8
debian9
@@ -310,7 +310,7 @@ Now that our management server is installed and our inventory is ready, it's tim
The `ansible` command launches a task on one or more target hosts.
-```
+```bash
ansible [-m module_name] [-a args] [options]
```
@@ -322,37 +322,37 @@ Examples:
* List the hosts belonging to the rocky8 group:
-```
+```bash
ansible rocky8 --list-hosts
```
* Ping a host group with the `ping` module:
-```
+```bash
ansible rocky8 -m ping
```
* Display facts from a host group with the `setup` module:
-```
+```bash
ansible rocky8 -m setup
```
* Run a command on a host group by invoking the `command` module with arguments:
-```
+```bash
ansible rocky8 -m command -a 'uptime'
```
* Run a command with administrator privileges:
-```
+```bash
ansible ansible_clients --become -m command -a 'reboot'
```
* Run a command using a custom inventory file:
-```
+```bash
ansible rocky8 -i ./local-inventory -m command -a 'date'
```
@@ -360,7 +360,7 @@ ansible rocky8 -i ./local-inventory -m command -a 'date'
As in this example, it is sometimes simpler to separate the declaration of managed devices into several files (by cloud project for example) and provide Ansible with the path to these files, rather than to maintain a long inventory file.
-| Option | Information |
+| Option | Information |
|--------------------------|-------------------------------------------------------------------------------------------------|
| `-a 'arguments'` | The arguments to pass to the module. |
| `-b -K` | Requests a password and runs the command with higher privileges. |
@@ -380,26 +380,26 @@ This user will be used:
On both machines, create an `ansible` user, dedicated to ansible:
-```
-$ sudo useradd ansible
-$ sudo usermod -aG wheel ansible
+```bash
+sudo useradd ansible
+sudo usermod -aG wheel ansible
```
Set a password for this user:
-```
-$ sudo passwd ansible
+```bash
+sudo passwd ansible
```
Modify the sudoers config to allow members of the `wheel` group to sudo without password:
-```
-$ sudo visudo
+```bash
+sudo visudo
```
Our goal here is to comment out the default, and uncomment the NOPASSWD option so that these lines look like this when we are done:
-```
+```text
## Allows people in group wheel to run all commands
# %wheel ALL=(ALL) ALL
@@ -414,8 +414,8 @@ Our goal here is to comment out the default, and uncomment the NOPASSWD option s
When using management from this point on, start working with this new user:
-```
-$ sudo su - ansible
+```bash
+sudo su - ansible
```
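+
+A quick check that the passwordless sudo configuration works for this account (optional, but it saves surprises later):
+
+```bash
+# Should print "root" without prompting for a password
+sudo -n whoami
+```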
### Test with the ping module
@@ -424,13 +424,13 @@ By default, password login is not allowed by Ansible.
Uncomment the following line from the `[defaults]` section in the `/etc/ansible/ansible.cfg` configuration file and set it to True:
-```
+```text
ask_pass = True
```
Run a `ping` on each server of the rocky8 group:
-```
+```bash
# ansible rocky8 -m ping
SSH password:
172.16.1.10 | SUCCESS => {
@@ -467,7 +467,7 @@ Password authentication will be replaced by a much more secure private/public ke
The dual-key will be generated with the command `ssh-keygen` on the management station by the `ansible` user:
-```
+```bash
[ansible]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ansible/.ssh/id_rsa):
@@ -494,14 +494,14 @@ The key's randomart image is:
The public key can be copied to the servers:
-```
+```bash
# ssh-copy-id ansible@172.16.1.10
# ssh-copy-id ansible@172.16.1.11
```
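+
+You can verify that key-based authentication now works before turning password prompts back off (a sketch, reusing a client address from the example inventory):
+
+```bash
+# Forces key authentication; this should log in without asking for a password
+ssh -o PasswordAuthentication=no ansible@172.16.1.10 'hostname'
+```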
Re-comment the following line from the `[defaults]` section in the `/etc/ansible/ansible.cfg` configuration file to prevent password authentication:
-```
+```text
#ask_pass = True
```
@@ -509,7 +509,7 @@ Re-comment the following line from the `[defaults]` section in the `/etc/ansible
For the next test, the `shell` module, allowing remote command execution, is used:
-```
+```bash
# ansible rocky8 -m shell -a "uptime"
172.16.1.10 | SUCCESS | rc=0 >>
12:36:18 up 57 min, 1 user, load average: 0.00, 0.00, 0.00
@@ -538,7 +538,7 @@ Collections are a distribution format for Ansible content that can include playb
A module is invoked with the `-m` option of the `ansible` command:
-```
+```bash
ansible [-m module_name] [-a args] [options]
```
@@ -562,7 +562,7 @@ Each category of need has its own module. Here is a non-exhaustive list:
The `dnf` module allows for the installation of software on the target clients:
-```
+```bash
# ansible rocky8 --become -m dnf -a name="httpd"
172.16.1.10 | SUCCESS => {
"changed": true,
@@ -586,7 +586,7 @@ The `dnf` module allows for the installation of software on the target clients:
Since the installed software is a service, it is now necessary to start it with the `systemd` module:
-```
+```bash
# ansible rocky8 --become -m systemd -a "name=httpd state=started"
172.16.1.10 | SUCCESS => {
"changed": true,
@@ -630,7 +630,7 @@ Take a look at the different facts of your clients to get an idea of the amount
We'll see later how to use facts in our playbooks and how to create our own facts.
-```
+```bash
# ansible ansible_clients -m setup | less
192.168.1.11 | SUCCESS => {
"ansible_facts": {
@@ -665,7 +665,7 @@ Ansible's playbooks describe a policy to be applied to remote systems, to force
Learn more about [yaml here](https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html)
-```
+```bash
ansible-playbook ... [options]
```
@@ -694,7 +694,7 @@ The following playbook allows us to install Apache and MariaDB on our target ser
Create a `test.yml` file with the following content:
-```
+```yaml
---
- hosts: rocky8 <1>
become: true <2>
@@ -721,7 +721,7 @@ Create a `test.yml` file with the following content:
The execution of the playbook is done with the command `ansible-playbook`:
-```
+```bash
$ ansible-playbook test.yml
PLAY [rocky8] ****************************************************************
@@ -753,7 +753,7 @@ PLAY RECAP *********************************************************************
For more readability, it is recommended to write your playbooks in full yaml format. In the previous example, the arguments are given on the same line as the module, the value of the argument following its name separated by an `=`. Look at the same playbook in full yaml:
-```
+```yaml
---
- hosts: rocky8
become: true
@@ -790,14 +790,15 @@ For more readability, it is recommended to write your playbooks in full yaml for
Note about collections: Ansible now provides modules in the form of collections.
Some modules are provided by default within the `ansible.builtin` collection; others must be installed manually with:
-```
+```bash
ansible-galaxy collection install [collectionname]
```
+
where [collectionname] is the name of the collection (the square brackets here are used to highlight the need to replace this with an actual collection name, and are NOT part of the command).
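+
+For example, the `community.general` collection (used later in this book, for instance by `community.general.ini_file`) would be installed with:
+
+```bash
+ansible-galaxy collection install community.general
+```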
The previous example should be written like this:
-```
+```yaml
---
- hosts: rocky8
become: true
@@ -829,7 +830,7 @@ The previous example should be written like this:
A playbook is not limited to one target:
-```
+```yaml
---
- hosts: webservers
become: true
@@ -865,19 +866,19 @@ A playbook is not limited to one target:
You can check the syntax of your playbook:
-```
-$ ansible-playbook --syntax-check play.yml
+```bash
+ansible-playbook --syntax-check play.yml
```
You can also use a "linter" for yaml:
-```
-$ dnf install -y yamllint
+```bash
+dnf install -y yamllint
```
then check the yaml syntax of your playbooks:
-```
+```bash
$ yamllint test.yml
test.yml
8:1 error syntax error: could not find expected ':' (syntax)
@@ -895,7 +896,7 @@ test.yml
* Update your client distribution
* Restart your client
-```
+```bash
ansible ansible_clients --become -m group -a "name=Paris"
ansible ansible_clients --become -m group -a "name=Tokio"
ansible ansible_clients --become -m group -a "name=NewYork"
diff --git a/docs/books/learning_ansible/02-advanced.md b/docs/books/learning_ansible/02-advanced.md
index cbcffc508b..b93e26d418 100644
--- a/docs/books/learning_ansible/02-advanced.md
+++ b/docs/books/learning_ansible/02-advanced.md
@@ -10,14 +10,14 @@ In this chapter you will continue to learn how to work with Ansible.
**Objectives**: In this chapter you will learn how to:
-:heavy_check_mark: work with variables;
-:heavy_check_mark: use loops;
-:heavy_check_mark: manage state changes and react to them;
+:heavy_check_mark: work with variables;
+:heavy_check_mark: use loops;
+:heavy_check_mark: manage state changes and react to them;
:heavy_check_mark: manage asynchronous tasks.
:checkered_flag: **ansible**, **module**, **playbook**
-**Knowledge**: :star: :star: :star:
+**Knowledge**: :star: :star: :star:
**Complexity**: :star: :star:
**Reading time**: 30 minutes
@@ -49,7 +49,7 @@ A variable can be defined in different places, like in a playbook, in a role or
For example, from a playbook:
-```
+```yaml
---
- hosts: apache1
vars:
@@ -61,8 +61,8 @@ For example, from a playbook:
or from the command line:
-```
-$ ansible-playbook deploy-http.yml --extra-vars "service=httpd"
+```bash
+ansible-playbook deploy-http.yml --extra-vars "service=httpd"
```
Once defined, a variable can be used by calling it between double braces:
@@ -72,7 +72,7 @@ Once defined, a variable can be used by calling it between double braces:
For example:
-```
+```yaml
- name: make sure apache is started
ansible.builtin.systemd:
name: "{{ service['rhel'] }}"
@@ -85,7 +85,7 @@ Of course, it is also possible to access the global variables (the **facts**) of
Variables can be included in a file external to the playbook, in which case this file must be defined in the playbook with the `vars_files` directive:
-```
+```yaml
---
- hosts: apache1
vars_files:
@@ -94,7 +94,7 @@ Variables can be included in a file external to the playbook, in which case this
The `myvariables.yml` file:
-```
+```yaml
---
port_http: 80
service:
@@ -104,7 +104,7 @@ ansible.builtin.systemd::
It can also be added dynamically with the use of the module `include_vars`:
-```
+```yaml
- name: Include secrets.
ansible.builtin.include_vars:
file: vault.yml
@@ -114,14 +114,14 @@ It can also be added dynamically with the use of the module `include_vars`:
To display a variable, you have to activate the `debug` module as follows:
-```
+```yaml
- ansible.builtin.debug:
var: service['debian']
```
You can also use the variable inside a text:
-```
+```yaml
- ansible.builtin.debug:
msg: "Print a variable in a message : {{ service['debian'] }}"
```
@@ -132,7 +132,7 @@ To save the return of a task and to be able to access it later, you have to use
Use of a stored variable:
-```
+```yaml
- name: /home content
shell: ls /home
register: homes
@@ -152,13 +152,13 @@ Use of a stored variable:
The strings that make up the stored variable can be accessed via the `stdout` value (which allows you to do things like `homes.stdout.find("core") != -1`), to exploit them using a loop (see `loop`), or simply by their indices as seen in the previous example.
-### Exercises
+### Exercises-1
* Write a playbook `play-vars.yml` that prints the distribution name of the target with its major version, using global variables.
* Write a playbook using the following dictionary to display the services that will be installed:
-```
+```yaml
service:
web:
name: apache
@@ -184,7 +184,7 @@ With the help of loop, you can iterate a task over a list, a hash, or dictionary
Simple example of use, creation of 4 users:
-```
+```yaml
- name: add users
user:
name: "{{ item }}"
@@ -201,7 +201,7 @@ At each iteration of the loop, the value of the list used is stored in the `item
Of course, a list can be defined in an external file:
-```
+```yaml
users:
- antoine
- patrick
@@ -211,7 +211,7 @@ users:
and be used inside the task like this (after having included the vars file):
-```
+```yaml
- name: add users
user:
name: "{{ item }}"
@@ -222,7 +222,7 @@ and be used inside the task like this (after having include the vars file):
We can use the example seen during the study of stored variables to improve it. Use of a stored variable:
-```
+```yaml
- name: /home content
shell: ls /home
register: homes
@@ -241,7 +241,7 @@ In the loop, it becomes possible to use `item.key` which corresponds to the dict
Let's see this through a concrete example, showing the management of the system users:
-```
+```yaml
---
- hosts: rocky8
become: true
@@ -269,7 +269,7 @@ Let's see this through a concrete example, showing the management of the system
Many things can be done with the loops. You will discover the possibilities offered by loops when your use of Ansible pushes you to use them in a more complex way.
-### Exercises
+### Exercises-2
* Display the content of the `service` variable from the previous exercise using a loop.
@@ -293,7 +293,7 @@ The `when` statement is very useful in many cases: not performing certain action
Behind the `when` statement the variables do not need double braces (they are in fact Jinja2 expressions...).
-```
+```yaml
- name: "Reboot only Debian servers"
reboot:
when: ansible_os_family == "Debian"
@@ -301,7 +301,7 @@ The `when` statement is very useful in many cases: not performing certain action
Conditions can be grouped with parentheses:
-```
+```yaml
- name: "Reboot only CentOS version 6 and Debian version 7"
reboot:
when: (ansible_distribution == "CentOS" and ansible_distribution_major_version == "6") or
@@ -310,7 +310,7 @@ Conditions can be grouped with parentheses:
The conditions corresponding to a logical AND can be provided as a list:
-```
+```yaml
- name: "Reboot only CentOS version 6"
reboot:
when:
@@ -320,7 +320,7 @@ The conditions corresponding to a logical AND can be provided as a list:
You can test the value of a boolean and verify that it is true:
-```
+```yaml
- name: check if directory exists
stat:
path: /home/ansible
@@ -338,19 +338,19 @@ You can test the value of a boolean and verify that it is true:
You can also test that it is not true:
-```
- when:
- - file.stat.exists
- - not file.stat.isdir
+```yaml
+when:
+ - file.stat.exists
+ - not file.stat.isdir
```
You will probably have to test that a variable exists to avoid execution errors:
-```
- when: myboolean is defined and myboolean
+```yaml
+when: myboolean is defined and myboolean
```
-### Exercises
+### Exercises-3
* Print the value of `service.web` only when `type` equals `web`.
@@ -368,7 +368,7 @@ A module, being idempotent, a playbook can detect that there has been a signific
For example, several tasks may indicate that the `httpd` service needs to be restarted due to a change in its configuration files. But the service will only be restarted once to avoid multiple unnecessary starts.
-```
+```yaml
- name: template configuration file
template:
src: template-site.j2
@@ -385,7 +385,7 @@ A handler is a kind of task referenced by a unique global name:
Example of handlers:
-```
+```yaml
handlers:
- name: restart memcached
@@ -401,7 +401,7 @@ handlers:
Since version 2.2 of Ansible, handlers can listen directly as well:
-```
+```yaml
handlers:
- name: restart memcached
@@ -441,7 +441,7 @@ By specifying a poll value of 0, Ansible will execute the task and continue with
Here's an example using asynchronous tasks, which allows you to restart a server and wait for port 22 to be reachable again:
-```
+```yaml
# Wait 2s and launch the reboot
- name: Reboot system
shell: sleep 2 && shutdown -r now "Ansible reboot triggered"
@@ -468,7 +468,7 @@ You can also decide to launch a long-running task and forget it (fire and forget
* Write a playbook `play-vars.yml` that prints the distribution name of the target with its major version, using global variables.
-```
+```yaml
---
- hosts: ansible_clients
@@ -479,7 +479,7 @@ You can also decide to launch a long-running task and forget it (fire and forget
msg: "The distribution is {{ ansible_distribution }} version {{ ansible_distribution_major_version }}"
```
-```
+```bash
$ ansible-playbook play-vars.yml
PLAY [ansible_clients] *********************************************************************************
@@ -499,7 +499,7 @@ PLAY RECAP *********************************************************************
* Write a playbook using the following dictionary to display the services that will be installed:
-```
+```yaml
service:
web:
name: apache
@@ -511,7 +511,7 @@ service:
The default type should be "web".
-```
+```yaml
---
- hosts: ansible_clients
vars:
@@ -531,7 +531,7 @@ The default type should be "web".
msg: "The {{ service[type]['name'] }} will be installed with the packages {{ service[type].rpm }}"
```
-```
+```bash
$ ansible-playbook display-dict.yml
PLAY [ansible_clients] *********************************************************************************
@@ -551,7 +551,7 @@ PLAY RECAP *********************************************************************
* Override the `type` variable using the command line:
-```
+```bash
ansible-playbook --extra-vars "type=db" display-dict.yml
PLAY [ansible_clients] *********************************************************************************
@@ -570,7 +570,7 @@ PLAY RECAP *********************************************************************
* Externalize variables in a `vars.yml` file
-```
+```yaml
type: web
service:
web:
@@ -581,7 +581,7 @@ service:
rpm: mariadb-server
```
-```
+```yaml
---
- hosts: ansible_clients
vars_files:
@@ -594,7 +594,6 @@ service:
msg: "The {{ service[type]['name'] }} will be installed with the packages {{ service[type].rpm }}"
```
-
* Display the content of the `service` variable from the previous exercise using a loop.
!!! Note
@@ -611,7 +610,7 @@ service:
With `dict2items`:
-```
+```yaml
---
- hosts: ansible_clients
vars_files:
@@ -625,7 +624,7 @@ With `dict2items`:
loop: "{{ service | dict2items }}"
```
-```
+```bash
$ ansible-playbook display-dict.yml
PLAY [ansible_clients] *********************************************************************************
@@ -648,7 +647,7 @@ PLAY RECAP *********************************************************************
With `list`:
-```
+```yaml
---
- hosts: ansible_clients
vars_files:
@@ -663,7 +662,7 @@ With `list`:
```
-```
+```bash
$ ansible-playbook display-dict.yml
PLAY [ansible_clients] *********************************************************************************
@@ -685,7 +684,7 @@ PLAY RECAP *********************************************************************
* Print the value of `service.web` only when `type` equals `web`.
-```
+```yaml
---
- hosts: ansible_clients
vars_files:
@@ -705,7 +704,7 @@ PLAY RECAP *********************************************************************
when: type == "db"
```
-```
+```bash
$ ansible-playbook display-dict.yml
PLAY [ansible_clients] *********************************************************************************
diff --git a/docs/books/learning_ansible/03-working-with-files.md b/docs/books/learning_ansible/03-working-with-files.md
index a2bf1c590a..8bf20f57aa 100644
--- a/docs/books/learning_ansible/03-working-with-files.md
+++ b/docs/books/learning_ansible/03-working-with-files.md
@@ -10,13 +10,13 @@ In this chapter you will learn how to manage files with Ansible.
**Objectives**: In this chapter you will learn how to:
-:heavy_check_mark: modify the content of file;
-:heavy_check_mark: upload files to the targeted servers;
-:heavy_check_mark: retrieve files from the targeted servers.
+:heavy_check_mark: modify the content of file;
+:heavy_check_mark: upload files to the targeted servers;
+:heavy_check_mark: retrieve files from the targeted servers.
:checkered_flag: **ansible**, **module**, **files**
-**Knowledge**: :star: :star:
+**Knowledge**: :star: :star:
**Complexity**: :star:
**Reading time**: 20 minutes
@@ -41,7 +41,7 @@ The module requires:
Example of use:
-```
+```yaml
- name: change value on inifile
community.general.ini_file:
dest: /path/to/file.ini
@@ -62,7 +62,7 @@ In this case, the line to be modified in a file will be found using a regexp.
For example, to ensure that the line starting with `SELINUX=` in the `/etc/selinux/config` file contains the value `enforcing`:
-```
+```yaml
- ansible.builtin.lineinfile:
path: /etc/selinux/config
regexp: '^SELINUX='
@@ -79,7 +79,7 @@ When a file has to be copied from the Ansible server to one or more hosts, it is
Here we are copying `myfile.conf` from one location to another:
-```
+```yaml
- ansible.builtin.copy:
src: /data/ansible/sources/myfile.conf
dest: /etc/myfile.conf
@@ -98,7 +98,7 @@ When a file has to be copied from a remote server to the local server, it is bes
This module does the opposite of the `copy` module:
-```
+```yaml
- ansible.builtin.fetch:
src: /etc/myfile.conf
dest: /data/ansible/backup/myfile-{{ inventory_hostname }}.conf
@@ -107,7 +107,7 @@ This module does the opposite of the `copy` module:
## `template` module
-Ansible and its `template` module use the **Jinja2** template system (http://jinja.pocoo.org/docs/) to generate files on target hosts.
+Ansible and its `template` module use the **Jinja2** template system () to generate files on target hosts.
!!! Note
@@ -115,7 +115,7 @@ Ansible and its `template` module use the **Jinja2** template system (http://jin
For example:
-```
+```yaml
- ansible.builtin.template:
src: /data/ansible/templates/monfichier.j2
dest: /etc/myfile.conf
@@ -126,7 +126,7 @@ For example:
It is possible to add a validation step if the targeted service allows it (for example apache with the command `apachectl -t`):
-```
+```yaml
- template:
src: /data/ansible/templates/vhost.j2
dest: /etc/httpd/sites-available/vhost.conf
@@ -140,7 +140,7 @@ It is possible to add a validation step if the targeted service allows it (for e
To download files from a web or FTP site to one or more hosts, use the `get_url` module:
-```
+```yaml
- get_url:
url: http://site.com/archive.zip
dest: /tmp/archive.zip
diff --git a/docs/books/learning_ansible/04-ansible-galaxy.md b/docs/books/learning_ansible/04-ansible-galaxy.md
index 59cfb19dc8..4764fd805e 100644
--- a/docs/books/learning_ansible/04-ansible-galaxy.md
+++ b/docs/books/learning_ansible/04-ansible-galaxy.md
@@ -10,12 +10,12 @@ In this chapter you will learn how to use, install, and manage Ansible roles and
**Objectives**: In this chapter you will learn how to:
-:heavy_check_mark: install and manage collections.
-:heavy_check_mark: install and manage roles.
+:heavy_check_mark: install and manage collections.
+:heavy_check_mark: install and manage roles.
:checkered_flag: **ansible**, **ansible-galaxy**, **roles**, **collections**
-**Knowledge**: :star: :star:
+**Knowledge**: :star: :star:
**Complexity**: :star: :star: :star:
**Reading time**: 40 minutes
@@ -32,7 +32,7 @@ The `ansible-galaxy` command manages roles and collections using [galaxy.ansible
* To manage roles:
-```
+```bash
ansible-galaxy role [import|init|install|login|remove|...]
```
@@ -47,7 +47,7 @@ ansible-galaxy role [import|init|install|login|remove|...]
* To manage collections:
-```
+```bash
ansible-galaxy collection [import|init|install|login|remove|...]
```
@@ -73,13 +73,13 @@ You can check the code in the github repo of the role [here](https://github.com/
* Install the role. This needs only one command:
-```
+```bash
ansible-galaxy role install alemorvan.patchmanagement
```
* Create a playbook to include the role:
-```
+```bash
- name: Start a Patch Management
hosts: ansible_clients
vars:
@@ -98,13 +98,13 @@ Let's create tasks that will be run before and after the update process:
* Create the `custom_tasks` folder:
-```
+```bash
mkdir custom_tasks
```
* Create the `custom_tasks/pm_before_update_tasks_file.yml` (feel free to change the name and the content of this file)
-```
+```bash
---
- name: sample task before the update process
debug:
@@ -113,7 +113,7 @@ mkdir custom_tasks
* Create the `custom_tasks/pm_after_update_tasks_file.yml` (feel free to change the name and the content of this file)
-```
+```bash
---
- name: sample task after the update process
debug:
@@ -122,7 +122,7 @@ mkdir custom_tasks
And launch your first Patch Management:
-```
+```bash
ansible-playbook patchmanagement.yml
PLAY [Start a Patch Management] *************************************************************************
@@ -210,14 +210,14 @@ You can also create your own roles for your own needs and publish them on the In
A role skeleton, serving as a starting point for custom role development, can be generated by the `ansible-galaxy` command:
-```
+```bash
$ ansible-galaxy role init rocky8
- Role rocky8 was created successfully
```
The command will generate the following tree structure to contain the `rocky8` role:
-```
+```bash
tree rocky8/
rocky8/
├── defaults
@@ -260,7 +260,7 @@ Let's implement this with a "go anywhere" role that will create a default user a
We will create a `rockstar` user on all of our servers. As we don't want this user to be overridden, let's define it in the `vars/main.yml`:
-```
+```bash
---
rocky8_default_group:
name: rockstar
@@ -273,7 +273,7 @@ rocky8_default_user:
We can now use those variables inside our `tasks/main.yml` without any inclusion.
-```
+```bash
---
- name: Create default group
group:
@@ -289,7 +289,7 @@ We can now use those variables inside our `tasks/main.yml` without any inclusion
To test your new role, let's create a `test-role.yml` playbook in the same directory as your role:
-```
+```bash
---
- name: Test my role
hosts: localhost
@@ -303,7 +303,7 @@ To test your new role, let's create a `test-role.yml` playbook in the same direc
and launch it:
-```
+```bash
ansible-playbook test-role.yml
PLAY [Test my role] ************************************************************************************
@@ -327,7 +327,7 @@ Let's see the use of default variables.
Create a list of packages to install by default on your servers and an empty list of packages to uninstall. Edit the `defaults/main.yml` file and add those two lists:
-```
+```bash
rocky8_default_packages:
- tree
- vim
@@ -336,7 +336,7 @@ rocky8_remove_packages: []
and use them in your `tasks/main.yml`:
-```
+```bash
- name: Install default packages (can be overridden)
package:
name: "{{ rocky8_default_packages }}"
@@ -350,7 +350,7 @@ and use them in your `tasks/main.yml`:
Test your role with the help of the playbook previously created:
-```
+```bash
ansible-playbook test-role.yml
PLAY [Test my role] ************************************************************************************
@@ -376,7 +376,7 @@ localhost : ok=5 changed=0 unreachable=0 failed=0 s
You can now override the `rocky8_remove_packages` in your playbook and uninstall for example `cockpit`:
-```
+```bash
---
- name: Test my role
hosts: localhost
@@ -391,7 +391,7 @@ You can now override the `rocky8_remove_packages` in your playbook and uninstall
become_user: root
```
-```
+```bash
ansible-playbook test-role.yml
PLAY [Test my role] ************************************************************************************
@@ -417,7 +417,7 @@ localhost : ok=5 changed=1 unreachable=0 failed=0 s
Obviously, there is no limit to how much you can improve your role. Imagine that one of your servers needs a package that is in the list of packages to be uninstalled. You could then, for example, create a new overridable list of server-specific packages to install, and remove its entries from the list of packages to be uninstalled by using the jinja `difference()` filter.
-```
+```bash
- name: "Uninstall default packages (can be overridden) {{ rocky8_remove_packages }}"
package:
name: "{{ rocky8_remove_packages | difference(rocky8_specifics_packages) }}"
@@ -434,13 +434,13 @@ Collections are a distribution format for Ansible content that can include playb
To install or upgrade a collection:
-```
+```bash
ansible-galaxy collection install namespace.collection [--upgrade]
```
You can then use the newly installed collection using its namespace and name before the module's name or role's name:
-```
+```bash
- import_role:
name: namespace.collection.rolename
@@ -452,7 +452,7 @@ You can find a collection index [here](https://docs.ansible.com/ansible/latest/c
Let's install the `community.general` collection:
-```
+```bash
ansible-galaxy collection install community.general
Starting galaxy collection install process
Process install dependency map
@@ -464,7 +464,7 @@ community.general:3.3.2 was installed successfully
We can now use the newly available module `yum_versionlock`:
-```
+```bash
- name: Start a Patch Management
hosts: ansible_clients
become: true
@@ -487,7 +487,7 @@ We can now use the newly available module `yum_versionlock`:
var: locks.meta.packages
```
-```
+```bash
ansible-playbook versionlock.yml
PLAY [Start a Patch Management] *************************************************************************
@@ -517,12 +517,12 @@ PLAY RECAP *********************************************************************
As with roles, you are able to create your own collection with the help of the `ansible-galaxy` command:
-```
+```bash
ansible-galaxy collection init rocky8.rockstarcollection
- Collection rocky8.rockstarcollection was created successfully
```
-```
+```bash
tree rocky8/rockstarcollection/
rocky8/rockstarcollection/
├── docs
diff --git a/docs/books/learning_ansible/05-deployments.md b/docs/books/learning_ansible/05-deployments.md
index ca9c23c826..1da7a15c58 100644
--- a/docs/books/learning_ansible/05-deployments.md
+++ b/docs/books/learning_ansible/05-deployments.md
@@ -10,15 +10,15 @@ In this chapter you will learn how to deploy applications with the Ansible role
**Objectives**: In this chapter you will learn how to:
-:heavy_check_mark: Implement Ansistrano;
-:heavy_check_mark: Configure Ansistrano;
-:heavy_check_mark: Use shared folders and files between deployed versions;
-:heavy_check_mark: Deploying different versions of a site from git;
-:heavy_check_mark: React between deployment steps.
+:heavy_check_mark: Implement Ansistrano;
+:heavy_check_mark: Configure Ansistrano;
+:heavy_check_mark: Use shared folders and files between deployed versions;
+:heavy_check_mark: Deploying different versions of a site from git;
+:heavy_check_mark: React between deployment steps.
:checkered_flag: **ansible**, **ansistrano**, **roles**, **deployments**
-**Knowledge**: :star: :star:
+**Knowledge**: :star: :star:
**Complexity**: :star: :star: :star:
**Reading time**: 40 minutes
@@ -52,7 +52,7 @@ Ansistrano deploys applications by following these 5 steps:
The skeleton of a deployment with Ansistrano looks like this:
-```
+```bash
/var/www/site/
├── current -> ./releases/20210718100000Z
├── releases
@@ -84,7 +84,7 @@ The managed server:
For more efficiency, we will use the `geerlingguy.apache` role to configure the server:
-```
+```bash
$ ansible-galaxy role install geerlingguy.apache
Starting galaxy role install process
- downloading role 'apache', owned by geerlingguy
@@ -95,7 +95,7 @@ Starting galaxy role install process
We will probably need to open some firewall rules, so we will also install the collection `ansible.posix` to work with its module `firewalld`:
-```
+```bash
$ ansible-galaxy collection install ansible.posix
Starting galaxy collection install process
Process install dependency map
@@ -126,7 +126,7 @@ Technical considerations:
Our playbook to configure the server: `playbook-config-server.yml`
-```
+```bash
---
- hosts: ansible_clients
become: yes
@@ -137,27 +137,27 @@ Our playbook to configure the server: `playbook-config-server.yml`
DirectoryIndex index.php index.htm
apache_vhosts:
- servername: "website"
- documentroot: "{{ dest }}current/html"
+ documentroot: "{{ dest }}current/html"
tasks:
- name: create directory for website
file:
- path: /var/www/site/
- state: directory
- mode: 0755
+ path: /var/www/site/
+ state: directory
+ mode: 0755
- name: install git
package:
- name: git
- state: latest
+ name: git
+ state: latest
- name: permit traffic in default zone for http service
ansible.posix.firewalld:
- service: http
- permanent: yes
- state: enabled
- immediate: yes
+ service: http
+ permanent: yes
+ state: enabled
+ immediate: yes
roles:
- { role: geerlingguy.apache }
@@ -165,13 +165,13 @@ Our playbook to configure the server: `playbook-config-server.yml`
The playbook can be applied to the server:
-```
-$ ansible-playbook playbook-config-server.yml
+```bash
+ansible-playbook playbook-config-server.yml
```
Note the execution of the following tasks:
-```
+```bash
TASK [geerlingguy.apache : Ensure Apache is installed on RHEL.] ****************
TASK [geerlingguy.apache : Configure Apache.] **********************************
TASK [geerlingguy.apache : Add apache vhosts configuration.] *******************
@@ -184,7 +184,7 @@ The `geerlingguy.apache` role makes our job much easier by taking care of the in
You can check that everything is working by using `curl`:
-```
+```bash
$ curl -I http://192.168.1.11
HTTP/1.1 404 Not Found
Date: Mon, 05 Jul 2021 23:30:02 GMT
@@ -202,7 +202,7 @@ Now that our server is configured, we can deploy the application.
For this, we will use the `ansistrano.deploy` role in a second playbook dedicated to application deployment (for more readability).
-```
+```bash
$ ansible-galaxy role install ansistrano.deploy
Starting galaxy role install process
- downloading role 'deploy', owned by ansistrano
@@ -216,7 +216,7 @@ The sources of the software can be found in the [github repository](https://gith
We will create a playbook `playbook-deploy.yml` to manage our deployment:
-```
+```bash
---
- hosts: ansible_clients
become: yes
@@ -231,7 +231,7 @@ We will create a playbook `playbook-deploy.yml` to manage our deployment:
- { role: ansistrano.deploy }
```
-```
+```bash
$ ansible-playbook playbook-deploy.yml
PLAY [ansible_clients] *********************************************************
@@ -258,13 +258,13 @@ TASK [ansistrano.deploy : ANSISTRANO | Change softlink to new release]
TASK [ansistrano.deploy : ANSISTRANO | Clean up releases]
PLAY RECAP ********************************************************************************************************************************************************************************************************
-192.168.1.11 : ok=25 changed=8 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
+192.168.1.11 : ok=25 changed=8 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
```
So many things done with only 11 lines of code!
-```
+```html
$ curl http://192.168.1.11
@@ -282,7 +282,7 @@ You can now connect by ssh to your client machine.
* Make a `tree` on the `/var/www/site/` directory:
-```
+```bash
$ tree /var/www/site/
/var/www/site
├── current -> ./releases/20210722155312Z
@@ -290,7 +290,7 @@ $ tree /var/www/site/
│ └── 20210722155312Z
│ ├── REVISION
│ └── html
-│ └── index.htm
+│ └── index.htm
├── repo
│ └── html
│ └── index.htm
@@ -305,7 +305,7 @@ Please note:
* From the Ansible server, restart the deployment **3** times, then check on the client.
-```
+```bash
$ tree /var/www/site/
var/www/site
├── current -> ./releases/20210722160048Z
@@ -325,7 +325,7 @@ var/www/site
│ └── 20210722160048Z
│ ├── REVISION
│ └── html
-│ └── index.htm
+│ └── index.htm
├── repo
│ └── html
│ └── index.htm
@@ -343,7 +343,7 @@ The `ansistrano_keep_releases` variable is used to specify the number of release
* Using the `ansistrano_keep_releases` variable, keep only 3 releases of the project. Check.
-```
+```bash
---
- hosts: ansible_clients
become: yes
@@ -359,14 +359,14 @@ The `ansistrano_keep_releases` variable is used to specify the number of release
- { role: ansistrano.deploy }
```
-```
+```bash
$ ansible-playbook -i hosts playbook-deploy.yml
```
On the client machine:
-```
+```bash
$ tree /var/www/site/
/var/www/site
├── current -> ./releases/20210722160318Z
@@ -382,7 +382,7 @@ $ tree /var/www/site/
│ └── 20210722160318Z
│ ├── REVISION
│ └── html
-│ └── index.htm
+│ └── index.htm
├── repo
│ └── html
│ └── index.htm
@@ -391,8 +391,7 @@ $ tree /var/www/site/
### Using shared_paths and shared_files
-
-```
+```bash
---
- hosts: ansible_clients
become: yes
@@ -415,13 +414,13 @@ $ tree /var/www/site/
On the client machine, create the file `logs` in the `shared` directory:
-```
+```bash
sudo touch /var/www/site/shared/logs
```
Then execute the playbook:
-```
+```bash
TASK [ansistrano.deploy : ANSISTRANO | Ensure shared paths targets are absent] *******************************************************
ok: [192.168.10.11] => (item=img)
ok: [192.168.10.11] => (item=css)
@@ -435,7 +434,7 @@ changed: [192.168.10.11] => (item=logs)
On the client machine:
-```
+```bash
$ tree -F /var/www/site/
/var/www/site/
├── current -> ./releases/20210722160631Z/
@@ -488,7 +487,7 @@ Don't forget to modify the Apache configuration to take into account this change
Change the playbook for the server configuration `playbook-config-server.yml`
-```
+```bash
---
- hosts: ansible_clients
become: yes
@@ -499,20 +498,20 @@ Change the playbook for the server configuration `playbook-config-server.yml`
DirectoryIndex index.php index.htm
apache_vhosts:
- servername: "website"
- documentroot: "{{ dest }}current/" # <1>
+ documentroot: "{{ dest }}current/" # <1>
tasks:
- name: create directory for website
file:
- path: /var/www/site/
- state: directory
- mode: 0755
+ path: /var/www/site/
+ state: directory
+ mode: 0755
- name: install git
package:
- name: git
- state: latest
+ name: git
+ state: latest
roles:
- { role: geerlingguy.apache }
@@ -522,7 +521,7 @@ Change the playbook for the server configuration `playbook-config-server.yml`
Change the playbook for the deployment `playbook-deploy.yml`
-```
+```bash
---
- hosts: ansible_clients
become: yes
@@ -550,7 +549,7 @@ Change the playbook for the deployment `playbook-deploy.yml`
* Check on the client machine:
-```
+```bash
$ tree -F /var/www/site/
/var/www/site/
├── current -> ./releases/20210722161542Z/
@@ -589,7 +588,7 @@ The `ansistrano_git_branch` variable is used to specify a `branch` or `tag` to d
* Deploy the `releases/v1.1.0` branch:
-```
+```bash
---
- hosts: ansible_clients
become: yes
@@ -616,7 +615,7 @@ The `ansistrano_git_branch` variable is used to specify a `branch` or `tag` to d
During the deployment, you can have fun refreshing your browser to see the change 'live'.
-```
+```html
$ curl http://192.168.1.11
@@ -630,7 +629,7 @@ $ curl http://192.168.1.11
* Deploy the `v2.0.0` tag:
-```
+```bash
---
- hosts: ansible_clients
become: yes
@@ -653,7 +652,7 @@ $ curl http://192.168.1.11
- { role: ansistrano.deploy }
```
-```
+```html
$ curl http://192.168.1.11
@@ -686,8 +685,7 @@ A playbook can be included through the variables provided for this purpose:
* Easy example: send an email (or whatever you want like Slack notification) at the beginning of the deployment:
-
-```
+```bash
---
- hosts: ansible_clients
become: yes
@@ -713,7 +711,7 @@ A playbook can be included through the variables provided for this purpose:
Create the file `deploy/before-setup-tasks.yml`:
-```
+```bash
---
- name: Send a mail
mail:
@@ -721,7 +719,7 @@ Create the file `deploy/before-setup-tasks.yml`:
delegate_to: localhost
```
-```
+```bash
TASK [ansistrano.deploy : include] *************************************************************************************
included: /home/ansible/deploy/before-setup-tasks.yml for 192.168.10.11
@@ -729,7 +727,7 @@ TASK [ansistrano.deploy : Send a mail] *****************************************
ok: [192.168.10.11 -> localhost]
```
-```
+```bash
[root] # mailx
Heirloom Mail version 12.5 7/5/10. Type ? for help.
"/var/spool/mail/root": 1 message 1 new
@@ -738,7 +736,7 @@ Heirloom Mail version 12.5 7/5/10. Type ? for help.
* You will probably have to restart some services at the end of the deployment, to flush caches for example. Let's restart Apache at the end of the deployment:
-```
+```bash
---
- hosts: ansible_clients
become: yes
@@ -765,7 +763,7 @@ Heirloom Mail version 12.5 7/5/10. Type ? for help.
Create the file `deploy/after-symlink-tasks.yml`:
-```
+```bash
---
- name: restart apache
systemd:
@@ -773,7 +771,7 @@ Create the file `deploy/after-symlink-tasks.yml`:
state: restarted
```
-```
+```bash
TASK [ansistrano.deploy : include] *************************************************************************************
included: /home/ansible/deploy/after-symlink-tasks.yml for 192.168.10.11
diff --git a/docs/books/learning_ansible/06-large-scale-infrastructure.md b/docs/books/learning_ansible/06-large-scale-infrastructure.md
index 91cce4f4a7..16cb1f6a75 100644
--- a/docs/books/learning_ansible/06-large-scale-infrastructure.md
+++ b/docs/books/learning_ansible/06-large-scale-infrastructure.md
@@ -10,12 +10,12 @@ In this chapter you will learn how to scale your configuration management system
**Objectives**: In this chapter you will learn how to:
-:heavy_check_mark: Organize your code for large infrastructure;
-:heavy_check_mark: Apply all or part of your configuration management to a group of nodes;
+:heavy_check_mark: Organize your code for large infrastructure;
+:heavy_check_mark: Apply all or part of your configuration management to a group of nodes;
:checkered_flag: **ansible**, **config management**, **scale**
-**Knowledge**: :star: :star: :star:
+**Knowledge**: :star: :star: :star:
**Complexity**: :star: :star: :star: :star:
**Reading time**: 30 minutes
@@ -52,7 +52,7 @@ We haven't discussed it here yet, but you should know that Ansible can automatic
The Ansible documentation suggests that we organize our code as below:
-```
+```bash
inventories/
production/
hosts # inventory file for production servers
@@ -82,7 +82,7 @@ The use of Ansible tags allows you to execute or skip a part of the tasks in you
For example, let's modify our users creation task:
-```
+```bash
- name: add users
user:
name: "{{ item }}"
@@ -98,7 +98,7 @@ For example, let's modify our users creation task:
You can now play only the tasks with the tag `users` with the `ansible-playbook` option `--tags`:
-```
+```bash
ansible-playbook -i inventories/production/hosts --tags users site.yml
```
@@ -110,7 +110,7 @@ Let's focus on a proposal for the organization of files and directories necessar
Our starting point will be the `site.yml` file. This file is a bit like the orchestra conductor of the CMS since it will only include the necessary roles for the target nodes if needed:
-```
+```bash
---
- name: "Config Management for {{ target }}"
hosts: "{{ target }}"
@@ -126,7 +126,7 @@ Of course, those roles must be created under the `roles` directory at the same l
I like to manage my global vars inside a `vars/global_vars.yml`, even if I could store them inside a file located at `inventories/production/group_vars/all.yml`.
-```
+```bash
---
- name: "Config Management for {{ target }}"
hosts: "{{ target }}"
@@ -141,7 +141,7 @@ I like to manage my global vars inside a `vars/global_vars.yml`, even if I could
I also like to keep the possibility of disabling a functionality. So I include my roles with a condition and a default value like this:
-```
+```bash
---
- name: "Config Management for {{ target }}"
hosts: "{{ target }}"
@@ -160,8 +160,7 @@ I also like to keep the possibility of disabling a functionality. So I include m
Don't forget to use the tags:
-
-```
+```bash
- name: "Config Management for {{ target }}"
hosts: "{{ target }}"
vars_files:
@@ -183,7 +182,7 @@ Don't forget to use the tags:
You should get something like this:
-```
+```bash
$ tree cms
cms
├── inventories
@@ -218,7 +217,7 @@ cms
Let's launch the playbook and run some tests:
-```
+```bash
$ ansible-playbook -i inventories/production/hosts -e "target=client1" site.yml
PLAY [Config Management for client1] ****************************************************************************
@@ -242,14 +241,13 @@ As you can see, by default, only the tasks of the `functionality1` role are play
Let's activate in the inventory the `functionality2` for our targeted node and rerun the playbook:
-```
+```bash
$ vim inventories/production/host_vars/client1.yml
---
enable_functionality2: true
```
-
-```
+```bash
$ ansible-playbook -i inventories/production/hosts -e "target=client1" site.yml
PLAY [Config Management for client1] ****************************************************************************
@@ -273,7 +271,7 @@ client1 : ok=3 changed=0 unreachable=0 failed=0 s
Try to apply only `functionality2`:
-```
+```bash
$ ansible-playbook -i inventories/production/hosts -e "target=client1" --tags functionality2 site.yml
PLAY [Config Management for client1] ****************************************************************************
@@ -292,7 +290,7 @@ client1 : ok=2 changed=0 unreachable=0 failed=0 s
Let's run on the whole inventory:
-```
+```bash
$ ansible-playbook -i inventories/production/hosts -e "target=plateform" site.yml
PLAY [Config Management for plateform] **************************************************************************
diff --git a/docs/books/learning_ansible/07-working-with-filters.md b/docs/books/learning_ansible/07-working-with-filters.md
index 12b1ce7532..a1d6338596 100644
--- a/docs/books/learning_ansible/07-working-with-filters.md
+++ b/docs/books/learning_ansible/07-working-with-filters.md
@@ -17,7 +17,7 @@ In this chapter you will learn how to transform data with jinja filters.
:checkered_flag: **ansible**, **jinja**, **filters**
-**Knowledge**: :star: :star: :star:
+**Knowledge**: :star: :star: :star:
**Complexity**: :star: :star: :star: :star:
**Reading time**: 20 minutes
@@ -34,7 +34,7 @@ These filters, written in python, allow us to manipulate and transform our ansib
Throughout this chapter, we will use the following playbook to test the different filters presented:
-```
+```bash
- name: Manipulating the data
hosts: localhost
gather_facts: false
@@ -78,7 +78,7 @@ Throughout this chapter, we will use the following playbook to test the differen
The playbook will be played as follows:
-```
+```bash
ansible-playbook play-filter.yml
```
@@ -90,7 +90,7 @@ To know the type of a data (the type in python language), you have to use the `t
Example:
-```
+```bash
- name: Display the type of a variable
debug:
var: true_boolean|type_debug
@@ -98,7 +98,7 @@ Example:
which gives us:
-```
+```bash
TASK [Display the type of a variable] ******************************************************************
ok: [localhost] => {
"true_boolean|type_debug": "bool"
@@ -107,13 +107,13 @@ ok: [localhost] => {
It is possible to transform an integer into a string:
-```
+```bash
- name: Transforming a variable type
debug:
var: zero|string
```
-```
+```bash
TASK [Transforming a variable type] ***************************************************************
ok: [localhost] => {
"zero|string": "0"
@@ -122,7 +122,7 @@ ok: [localhost] => {
Transform a string into an integer:
-```
+```bash
- name: Transforming a variable type
debug:
var: zero_string|int
@@ -130,7 +130,7 @@ Transform a string into an integer:
or a variable into a boolean:
-```
+```bash
- name: Display an integer as a boolean
debug:
var: non_zero | bool
@@ -151,7 +151,7 @@ or a variable into a boolean:
A character string can be transformed into upper or lower case:
-```
+```bash
- name: Lowercase a string of characters
debug:
var: whatever | lower
@@ -163,7 +163,7 @@ A character string can be transformed into upper or lower case:
which gives us:
-```
+```bash
TASK [Lowercase a string of characters] *****************************************************
ok: [localhost] => {
"whatever | lower": "it's false!"
@@ -179,7 +179,7 @@ The `replace` filter allows you to replace characters by others.
Here we remove spaces or even replace a word:
-```
+```bash
- name: Replace a character in a string
debug:
var: whatever | replace(" ", "")
@@ -191,7 +191,7 @@ Here we remove spaces or even replace a word:
which gives us:
-```
+```bash
TASK [Replace a character in a string] *****************************************************
ok: [localhost] => {
"whatever | replace(\" \", \"\")": "It'sfalse!"
@@ -205,14 +205,13 @@ ok: [localhost] => {
The `split` filter splits a string into a list based on a character:
-```
+```bash
- name: Cutting a string of characters
debug:
    var: whatever | split(" ")
```
-
-```
+```bash
TASK [Cutting a string of characters] *****************************************************
ok: [localhost] => {
"whatever | split(\" \")": [
@@ -227,7 +226,7 @@ ok: [localhost] => {
It is frequent to have to join the different elements in a single string.
We can then specify a character or a string to insert between each element.
-```
+```bash
- name: Joining elements of a list
debug:
var: my_simple_list|join(",")
@@ -239,7 +238,7 @@ We can then specify a character or a string to insert between each element.
which gives us:
-```
+```bash
TASK [Joining elements of a list] *****************************************************************
ok: [localhost] => {
"my_simple_list|join(\",\")": "value_list_1,value_list_2,value_list_3"
@@ -259,7 +258,7 @@ are frequently used, especially in loops.
Note that it is possible to specify the name of the key and of the value to use in the transformation.
-```
+```bash
- name: Display a dictionary
debug:
var: my_dictionary
@@ -277,7 +276,7 @@ Note that it is possible to specify the name of the key and of the value to use
var: my_list | items2dict(key_name='element', value_name='value')
```
-```
+```bash
TASK [Display a dictionary] *************************************************************************
ok: [localhost] => {
"my_dictionary": {
@@ -327,13 +326,13 @@ ok: [localhost] => {
It is possible to merge or filter data from one or more lists:
-```
+```bash
- name: Merger of two lists
debug:
var: my_simple_list | union(my_simple_list_2)
```
-```
+```bash
ok: [localhost] => {
"my_simple_list | union(my_simple_list_2)": [
"value_list_1",
@@ -347,13 +346,13 @@ ok: [localhost] => {
To keep only the intersection of the 2 lists (the values present in the 2 lists):
-```
+```bash
- name: Merger of two lists
debug:
var: my_simple_list | intersect(my_simple_list_2)
```
-```
+```bash
TASK [Merger of two lists] *******************************************************************************
ok: [localhost] => {
"my_simple_list | intersect(my_simple_list_2)": [
@@ -364,13 +363,13 @@ ok: [localhost] => {
Or on the contrary keep only the difference (the values that do not exist in the second list):
-```
+```bash
- name: Merger of two lists
debug:
var: my_simple_list | difference(my_simple_list_2)
```
-```
+```bash
TASK [Merger of two lists] *******************************************************************************
ok: [localhost] => {
"my_simple_list | difference(my_simple_list_2)": [
@@ -382,7 +381,7 @@ ok: [localhost] => {
If your list contains non-unique values, it is also possible to filter them with the `unique` filter.
-```
+```bash
- name: Unique value in a list
debug:
var: my_simple_list | unique
@@ -392,7 +391,7 @@ If your list contains non-unique values, it is also possible to filter them with
You may have to import json data (from an API for example) or export data in yaml or json.
-```
+```bash
- name: Display a variable in yaml
debug:
var: my_list | to_nice_yaml(indent=4)
@@ -402,7 +401,7 @@ You may have to import json data (from an API for example) or export data in yam
var: my_list | to_nice_json(indent=4)
```
-```
+```bash
TASK [Display a variable in yaml] ********************************************************************
ok: [localhost] => {
"my_list | to_nice_yaml(indent=4)": "- element: element1\n value: value1\n- element: element2\n value: value2\n"
@@ -420,13 +419,13 @@ You will quickly be confronted with errors in the execution of your playbooks if
The value of a variable can be substituted for another one if it does not exist with the `default` filter:
-```
+```bash
- name: Default value
debug:
var: variablethatdoesnotexists | default(whatever)
```
-```
+```bash
TASK [Default value] ********************************************************************************
ok: [localhost] => {
"variablethatdoesnotexists | default(whatever)": "It's false!"
@@ -435,13 +434,13 @@ ok: [localhost] => {
Note the presence of the apostrophe `'` which should be protected, for example, if you were using the `shell` module:
-```
+```bash
- name: Default value
debug:
var: variablethatdoesnotexists | default(whatever| quote)
```
-```
+```bash
TASK [Default value] ********************************************************************************
ok: [localhost] => {
"variablethatdoesnotexists | default(whatever|quote)": "'It'\"'\"'s false!'"
@@ -450,7 +449,7 @@ ok: [localhost] => {
Finally, an optional parameter of a module can be ignored if its variable does not exist, thanks to the keyword `omit` in the `default` filter, which will save you an error at runtime.
-```
+```bash
- name: Add a new user
ansible.builtin.user:
name: "{{ user_name }}"
@@ -463,13 +462,13 @@ Sometimes you need to use a condition to assign a value to a variable, in which
This can be avoided by using the `ternary` filter:
-```
+```bash
- name: Default value
debug:
var: (user_name == 'antoine') | ternary('admin', 'normal_user')
```
-```
+```bash
TASK [Default value] ********************************************************************************
ok: [localhost] => {
"(user_name == 'antoine') | ternary('admin', 'normal_user')": "admin"
@@ -478,8 +477,8 @@ ok: [localhost] => {
## Some other filters
- * `{{ 10000 | random }}`: as its name indicates, gives a random value.
- * `{{ my_simple_list | first }}`: extracts the first element of the list.
- * `{{ my_simple_list | length }}`: gives the length (of a list or a string).
- * `{{ ip_list | ansible.netcommon.ipv4 }}`: only displays v4 IPs. Without dwelling on this, if you need, there are many filters dedicated to the network.
- * `{{ user_password | password_hash('sha512') }}`: generates a hashed password in sha512.
+* `{{ 10000 | random }}`: as its name indicates, gives a random value.
+* `{{ my_simple_list | first }}`: extracts the first element of the list.
+* `{{ my_simple_list | length }}`: gives the length (of a list or a string).
+* `{{ ip_list | ansible.netcommon.ipv4 }}`: only displays v4 IPs. Without dwelling on this, if you need, there are many filters dedicated to the network.
+* `{{ user_password | password_hash('sha512') }}`: generates a hashed password in sha512.
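
If you want to try a few of these quickly, ad-hoc `debug` calls from the control node are enough. A minimal sketch (the values are only examples, and the expressions are written without inner spaces so that the ad-hoc `key=value` parser accepts them):

```bash
# Throw-away checks of some of the filters above, run against localhost
ansible localhost -m debug -a "msg={{10000|random}}"
ansible localhost -m debug -a "msg={{['a','b','c']|first}}"
ansible localhost -m debug -a "msg={{'rocky'|length}}"
```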
diff --git a/docs/books/learning_ansible/08-management-server-optimizations.md b/docs/books/learning_ansible/08-management-server-optimizations.md
index 368417b79f..168455728f 100644
--- a/docs/books/learning_ansible/08-management-server-optimizations.md
+++ b/docs/books/learning_ansible/08-management-server-optimizations.md
@@ -33,7 +33,7 @@ Gathering facts is a process that can take some time. It can be interesting to d
These facts can be easily stored in a `redis` database:
-```
+```bash
sudo yum install redis
sudo systemctl start redis
sudo systemctl enable redis
@@ -42,7 +42,7 @@ sudo pip3 install redis
Don't forget to modify the ansible configuration:
-```
+```bash
fact_caching = redis
fact_caching_timeout = 86400
fact_caching_connection = localhost:6379:0
@@ -50,7 +50,7 @@ fact_caching_connection = localhost:6379:0
To check the correct operation, it is enough to request the `redis` server:
-```
+```bash
redis-cli
127.0.0.1:6379> keys *
127.0.0.1:6379> get ansible_facts_SERVERNAME
@@ -68,26 +68,26 @@ Ansible will be able to decrypt this file at runtime by retrieving the encryptio
Edit the `/etc/ansible/ansible.cfg` file:
-```
+```bash
#vault_password_file = /path/to/vault_password_file
vault_password_file = /etc/ansible/vault_pass
```
Store the password in this file `/etc/ansible/vault_pass` and assign necessary restrictive rights:
-```
+```bash
mysecretpassword
```
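
The "restrictive rights" mentioned above are not shown in this snippet; a minimal sketch, assuming the file belongs to root, could be:

```bash
# Make the vault password file readable only by its owner
sudo chown root:root /etc/ansible/vault_pass
sudo chmod 600 /etc/ansible/vault_pass
```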
You can then encrypt your files with the command:
-```
+```bash
ansible-vault encrypt myfile.yml
```
A file encrypted by `ansible-vault` can be easily recognized by its header:
-```
+```text
$ANSIBLE_VAULT;1.1;AES256
35376532343663353330613133663834626136316234323964333735363333396136613266383966
6664322261633261356566383438393738386165333966660a343032663233343762633936313630
@@ -98,7 +98,7 @@ $ANSIBLE_VAULT;1.1;AES256
Once a file is encrypted, it can still be edited with the command:
-```
+```bash
ansible-vault edit myfile.yml
```
@@ -106,7 +106,7 @@ You can also deport your password storage to any password manager.
For example, to retrieve a password that would be stored in the rundeck vault:
-```
+```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import urllib.request
@@ -141,13 +141,13 @@ It will be necessary to install on the management server several packages:
* Via the package manager:
-```
+```bash
sudo dnf install python38-devel krb5-devel krb5-libs krb5-workstation
```
and configure the `/etc/krb5.conf` file to specify the correct `realms`:
-```
+```bash
[realms]
ROCKYLINUX.ORG = {
kdc = dc1.rockylinux.org
@@ -159,7 +159,7 @@ ROCKYLINUX.ORG = {
* Via the python package manager:
-```
+```bash
pip3 install pywinrm
pip3 install pywinrm[credssp]
pip3 install kerberos requests-kerberos
@@ -169,7 +169,7 @@ pip3 install kerberos requests-kerberos
Network modules usually require the `netaddr` python module:
-```
+```bash
sudo pip3 install netaddr
```
@@ -177,24 +177,24 @@ sudo pip3 install netaddr
A tool, `ansible-cmdb` has been developed to generate a CMDB from ansible.
-```
+```bash
pip3 install ansible-cmdb
```
The facts must be exported by ansible with the following command:
-```
+```bash
ansible all --become --become-user=root -o -m setup --tree /var/www/ansible/cmdb/out/
```
You can then generate a global `json` file:
-```
+```bash
ansible-cmdb -t json /var/www/ansible/cmdb/out/linux > /var/www/ansible/cmdb/cmdb-linux.json
```
If you prefer a web interface:
-```
+```bash
ansible-cmdb -t html_fancy_split /var/www/ansible/cmdb/out/
```
diff --git a/docs/books/learning_bash/01-first-script.md b/docs/books/learning_bash/01-first-script.md
index 5cbe13a577..3be0523656 100644
--- a/docs/books/learning_bash/01-first-script.md
+++ b/docs/books/learning_bash/01-first-script.md
@@ -23,7 +23,7 @@ In this chapter you will learn how to write your first script in bash.
:checkered_flag: **linux**, **script**, **bash**
-**Knowledge**: :star:
+**Knowledge**: :star:
**Complexity**: :star:
**Reading time**: 10 minutes
@@ -46,7 +46,7 @@ The name of the script should respect some rules:
The author uses the "$" throughout these lessons to indicate the user's command-prompt.
-```
+```bash
#!/usr/bin/env bash
#
# Author : Rocky Documentation Team
@@ -60,14 +60,14 @@ echo "Hello world!"
To be able to run this script, as an argument to bash:
-```
+```bash
$ bash hello-world.sh
Hello world !
```
Or, more simply, after having given it the right to execute:
-```
+```bash
$ chmod u+x ./hello-world.sh
$ ./hello-world.sh
Hello world !
@@ -83,19 +83,19 @@ Hello world !
The first line to be written in any script is to indicate the name of the shell binary to be used to execute it.
If you want to use the `ksh` shell or the interpreted language `python`, you would replace the line:
-```
+```bash
#!/usr/bin/env bash
```
with :
-```
+```bash
#!/usr/bin/env ksh
```
or with :
-```
+```bash
#!/usr/bin/env python
```
@@ -117,7 +117,7 @@ Comments can be placed on a separate line or at the end of a line containing a c
Example:
-```
+```bash
# This program displays the date
date # This line is the line that displays the date!
```
diff --git a/docs/books/learning_bash/02-using-variables.md b/docs/books/learning_bash/02-using-variables.md
index 8292f65825..7387cc19ba 100644
--- a/docs/books/learning_bash/02-using-variables.md
+++ b/docs/books/learning_bash/02-using-variables.md
@@ -41,7 +41,7 @@ The content of a variable can be changed during the script, as the variable cont
The notion of a variable type in a shell script is possible but is very rarely used. The content of a variable is always a character or a string.
-```
+```bash
#!/usr/bin/env bash
#
@@ -76,7 +76,7 @@ By convention, variables created by a user have a name in lower case. This name
The character `=` assigns content to a variable:
-```
+```bash
variable=value
rep_name="/home"
```
@@ -85,14 +85,14 @@ There is no space before or after the `=` sign.
Once the variable is created, it can be used by prefixing it with a dollar $.
-```
+```bash
file=file_name
touch $file
```
It is strongly recommended to protect variables with quotes, as in this example below:
-```
+```bash
file=file name
touch $file
touch "$file"
@@ -102,7 +102,7 @@ As the content of the variable contains a space, the first `touch` will create 2
To isolate the name of the variable from the rest of the text, you must use quotes or braces:
-```
+```bash
file=file_name
touch "$file"1
touch ${file}1
@@ -112,7 +112,7 @@ touch ${file}1
The use of apostrophes inhibits the interpretation of special characters.
-```
+```bash
message="Hello"
echo "This is the content of the variable message: $message"
This is the content of the variable message: Hello
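# With apostrophes (sketch added for illustration): the variable is NOT interpreted
echo 'This is the content of the variable message: $message'
This is the content of the variable message: $message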
@@ -126,7 +126,7 @@ The `unset` command allows for the deletion of a variable.
Example:
-```
+```bash
name="NAME"
firstname="Firstname"
echo "$name $firstname"
@@ -140,7 +140,7 @@ The `readonly` or `typeset -r` command locks a variable.
Example:
-```
+```bash
name="NAME"
readonly name
name="OTHER NAME"
@@ -195,21 +195,21 @@ It is possible to store the result of a command in a variable.
The syntax for sub-executing a command is as follows:
-```
+```bash
variable=`command`
variable=$(command) # Preferred syntax
```
Example:
-```
-$ day=`date +%d`
-$ homedir=$(pwd)
+```bash
+day=`date +%d`
+homedir=$(pwd)
```
With everything we've just seen, our backup script might look like this:
-```
+```bash
#!/usr/bin/env bash
#
@@ -257,13 +257,13 @@ logger "Backup of system files by ${USER} on ${HOSTNAME} in the folder ${DESTINA
Running our backup script:
-```
-$ sudo ./backup.sh
+```bash
+sudo ./backup.sh
```
will give us:
-```
+```bash
****************************************************************
Backup Script - Backup on desktop
****************************************************************
diff --git a/docs/books/learning_bash/03-data-entry-and-manipulations.md b/docs/books/learning_bash/03-data-entry-and-manipulations.md
index 7ada43699f..32b1cfb025 100644
--- a/docs/books/learning_bash/03-data-entry-and-manipulations.md
+++ b/docs/books/learning_bash/03-data-entry-and-manipulations.md
@@ -17,10 +17,10 @@ In this chapter you will learn how to make your scripts interact with users and
**Objectives**: In this chapter you will learn how to:
-:heavy_check_mark: read input from a user;
-:heavy_check_mark: manipulate data entries;
-:heavy_check_mark: use arguments inside a script;
-:heavy_check_mark: manage positional variables;
+:heavy_check_mark: read input from a user;
+:heavy_check_mark: manipulate data entries;
+:heavy_check_mark: use arguments inside a script;
+:heavy_check_mark: manage positional variables;
:checkered_flag: **linux**, **script**, **bash**, **variable**
@@ -39,13 +39,13 @@ The `read` command allows you to enter a character string and store it in a vari
Syntax of the read command:
-```
+```bash
read [-n X] [-p] [-s] [variable]
```
The first example below prompts you for two variable inputs: "name" and "firstname", but since there is no prompt, you would have to know ahead of time that this was the case. In the case of this particular entry, each variable input would be separated by a space. The second example prompts for the variable "name" with the prompt text included:
-```
+```bash
read name firstname
read -p "Please type your name: " name
```
@@ -56,22 +56,22 @@ read -p "Please type your name: " name
| `-n` | Limits the number of characters to be entered. |
| `-s` | Hides the input. |
-When using the `-n` option, the shell automatically validates the input after the specified number of characters. The user does not have to press the ENTER key.
+When using the `-n` option, the shell automatically validates the input after the specified number of characters. The user does not have to press the ++enter++ key.
-```
+```bash
read -n5 name
```
The `read` command allows you to interrupt the execution of the script while the user enters information. The user's input is broken down into words assigned to one or more predefined variables. The words are strings of characters separated by the field separator.
-The end of the input is determined by pressing the ENTER key.
+The end of the input is determined by pressing the ++enter++ key.
Once the input is validated, each word will be stored in the predefined variable.
The division of the words is defined by the field separator character.
This separator is stored in the system variable `IFS` (**Internal Field Separator**).
-```
+```bash
set | grep IFS
IFS=$' \t\n'
```
@@ -80,9 +80,9 @@ By default, the IFS contains the space, tab and line feed.
When used without specifying a variable, this command simply pauses the script. The script continues its execution when the input is validated.
-This is used to pause a script when debugging or to prompt the user to press ENTER to continue.
+This is used to pause a script when debugging or to prompt the user to press ++enter++ to continue.
-```
+```bash
echo -n "Press [ENTER] to continue..."
read
```
@@ -93,13 +93,13 @@ The cut command allows you to isolate a column in a file or in a stream.
Syntax of the cut command:
-```
+```bash
cut [-cx] [-dy] [-fz] file
```
Example of use of the cut command:
-```
+```bash
cut -d: -f1 /etc/passwd
```
@@ -116,7 +116,7 @@ The main benefit of this command will be its association with a stream, for exam
Example:
-```
+```bash
grep "^root:" /etc/passwd | cut -d: -f3
0
```
@@ -131,7 +131,7 @@ The `tr` command allows you to convert a string.
Syntax of the `tr` command:
-```
+```bash
tr [-csd] string1 string2
```
@@ -143,22 +143,28 @@ tr [-csd] string1 string2
An example of using the `tr` command follows. If you use `grep` to return root's `passwd` file entry, you would get this:
-```
+```bash
grep root /etc/passwd
```
+
returns:
-```
+
+```bash
root:x:0:0:root:/root:/bin/bash
```
+
Now let's use the `tr` command to reduce the "o's" in the line:
-```
+```bash
grep root /etc/passwd | tr -s "o"
```
+
which returns this:
-```
+
+```bash
rot:x:0:0:rot:/rot:/bin/bash
```
+
## Extract the name and path of a file
The `basename` command allows you to extract the name of the file from a path.
@@ -167,14 +173,17 @@ The `dirname` command allows you to extract the parent path of a file.
Examples:
-```
+```bash
FILE=/usr/bin/passwd
basename $FILE
```
+
Which would result in "passwd"
-```
+
+```bash
dirname $FILE
```
+
Which would result in: "/usr/bin"
## Arguments of a script
@@ -193,7 +202,7 @@ Its major disadvantage is that the user will have to be warned about the syntax
The arguments are filled in when the script command is entered.
They are separated by a space.
-```
+```bash
./script argument1 argument2
```
@@ -214,7 +223,7 @@ These variables can be used in the script like any other variable, except that t
Example:
-```
+```bash
#!/usr/bin/env bash
#
# Author : Damien dit LeDub
@@ -238,7 +247,7 @@ echo "All without separation (\$@) = $@"
This will give:
-```
+```bash
$ ./arguments.sh one two "tree four"
The number of arguments ($#) = 3
The name of the script ($0) = ./arguments.sh
@@ -264,7 +273,7 @@ The shift command allows you to shift positional variables.
Let's modify our previous example to illustrate the impact of the shift command on positional variables:
-```
+```bash
#!/usr/bin/env bash
#
# Author : Damien dit LeDub
@@ -299,7 +308,7 @@ echo "All without separation (\$@) = $@"
This will give:
-```
+```bash
./arguments.sh one two "tree four"
The number of arguments ($#) = 3
The 1st argument ($1) = one
@@ -330,13 +339,13 @@ The `set` command splits a string into positional variables.
Syntax of the set command:
-```
+```bash
set [value] [$variable]
```
Example:
-```
+```bash
$ set one two three
$ echo $1 $2 $3 $#
one two three 3
diff --git a/docs/books/learning_bash/04-check-your-knowledge.md b/docs/books/learning_bash/04-check-your-knowledge.md
index f9bf96d729..be0177e45c 100644
--- a/docs/books/learning_bash/04-check-your-knowledge.md
+++ b/docs/books/learning_bash/04-check-your-knowledge.md
@@ -13,7 +13,7 @@ tags:
:heavy_check_mark: Among these 4 shells, which one does not exist:
-- [ ] Bash
+- [ ] Bash
- [ ] Ksh
- [ ] Tsh
- [ ] Csh
diff --git a/docs/books/learning_bash/05-tests.md b/docs/books/learning_bash/05-tests.md
index cd81a197f5..e3f470d225 100644
--- a/docs/books/learning_bash/05-tests.md
+++ b/docs/books/learning_bash/05-tests.md
@@ -38,27 +38,26 @@ You should refer to the manual of the `man command` to know the different values
The return code is not visible directly, but is stored in a special variable: `$?`.
-```
+```bash
mkdir directory
echo $?
0
```
-```
+```bash
mkdir /directory
mkdir: unable to create directory
echo $?
1
```
-```
+```bash
command_that_does_not_exist
command_that_does_not_exist: command not found
echo $?
127
```
-
!!! note
The display of the contents of the `$?` variable with the `echo` command is done immediately after the command you want to evaluate because this variable is updated after each execution of a command, a command line or a script.
@@ -80,7 +79,7 @@ echo $?
It is also possible to create return codes in a script.
To do so, you just need to add a numeric argument to the `exit` command.
-```
+```bash
bash # to avoid being disconnected after the "exit 123"
exit 123
echo $?
@@ -103,13 +102,13 @@ The result of the test:
Syntax of the `test` command for a file:
-```
+```bash
test [-d|-e|-f|-L] file
```
or:
-```
+```bash
[ -d|-e|-f|-L file ]
```
@@ -139,7 +138,7 @@ Options of the test command on files:
Example:
-```
+```bash
test -e /etc/passwd
echo $?
0
@@ -150,7 +149,7 @@ echo $?
Some shells (including bash) provide an internal command that is more modern and offers more features than the external command `test`.
-```
+```bash
[[ -s /etc/passwd ]]
echo $?
1
@@ -164,7 +163,7 @@ echo $?
It is also possible to compare two files:
-```
+```bash
[[ file1 -nt|-ot|-ef file2 ]]
```
@@ -178,7 +177,7 @@ It is also possible to compare two files:
It is possible to test variables:
-```
+```bash
[[ -z|-n $variable ]]
```
@@ -191,13 +190,13 @@ It is possible to test variables:
It is also possible to compare two strings:
-```
+```bash
[[ string1 =|!=|<|> string2 ]]
```
Example:
-```
+```bash
[[ "$var" = "Rocky rocks!" ]]
echo $?
0
@@ -214,20 +213,20 @@ echo $?
Syntax for testing integers:
-```
+```bash
[[ "num1" -eq|-ne|-gt|-lt "num2" ]]
```
Example:
-```
+```bash
var=1
[[ "$var" -eq "1" ]]
echo $?
0
```
-```
+```bash
var=2
[[ "$var" -eq "1" ]]
echo $?
@@ -264,11 +263,11 @@ echo $?
The combination of tests allows you to perform several tests in one command.
It is possible to test the same argument (file, string or numeric) several times or different arguments.
-```
+```bash
[ option1 argument1 [-a|-o] option2 argument 2 ]
```
-```
+```bash
ls -lad /etc
drwxr-xr-x 142 root root 12288 sept. 20 09:25 /etc
[ -d /etc -a -x /etc ]
@@ -281,22 +280,21 @@ echo $?
| `-a` | AND: The test will be true if all patterns are true. |
| `-o` | OR: The test will be true if at least one pattern is true. |
-
With the internal command, it is better to use this syntax:
-```
+```bash
[[ -d "/etc" && -x "/etc" ]]
```
Tests can be grouped with parentheses `(` `)` to give them priority.
-```
+```bash
(TEST1 -a TEST2) -a TEST3
```
The `!` character is used to perform the reverse test of the one requested by the option:
-```
+```bash
test -e /file # true if file exists
! test -e /file # true if file does not exist
```
@@ -305,13 +303,13 @@ test -e /file # true if file exists
The `expr` command performs an operation with numeric integers.
-```
+```bash
expr num1 [+] [-] [\*] [/] [%] num2
```
Example:
-```
+```bash
expr 2 + 2
4
```
@@ -329,14 +327,13 @@ expr 2 + 2
| `/` | Division quotient |
| `%` | Modulo of the division |
-
## The `typeset` command
The `typeset -i` command declares a variable as an integer.
Example:
-```
+```bash
typeset -i var1
var1=1+1
var2=1+1
@@ -352,7 +349,7 @@ The `let` command tests if a character is numeric.
Example:
-```
+```bash
var1="10"
var2="AA"
let $var1
@@ -375,7 +372,7 @@ echo $?
The `let` command also allows you to perform mathematical operations:
-```
+```bash
let var=5+5
echo $var
10
@@ -383,7 +380,7 @@ echo $var
`let` can be substituted by `$(( ))`.
-```
+```bash
echo $((5+2))
7
echo $((5*2))
diff --git a/docs/books/learning_bash/06-conditional-structures.md b/docs/books/learning_bash/06-conditional-structures.md
index 35c2a0da2a..56243aac4f 100644
--- a/docs/books/learning_bash/06-conditional-structures.md
+++ b/docs/books/learning_bash/06-conditional-structures.md
@@ -36,7 +36,7 @@ But we can use it in a condition.
Syntax of the conditional alternative `if`:
-```
+```bash
if command
then
command if $?=0
@@ -52,7 +52,7 @@ Using a classical command (`mkdir`, `tar`, ...) allows you to define the actions
Examples:
-```
+```bash
if [[ -e /etc/passwd ]]
then
echo "The file exists"
@@ -68,7 +68,7 @@ fi
If the `else` block starts with a new `if` structure, you can merge the `else` and `if` with `elif` as shown below:
-```
+```bash
[...]
else
if [[ -e /etc/ ]]
@@ -99,7 +99,7 @@ The command to execute if `$?` is `true` is placed after `&&` while the command
Example:
-```
+```bash
[[ -e /etc/passwd ]] && echo "The file exists" || echo "The file does not exist"
mkdir dir && echo "The directory is created".
```
@@ -109,21 +109,26 @@ It is also possible to evaluate and replace a variable with a lighter structure
This syntax implements the braces:
* Displays a replacement value if the variable is empty:
- ```
+
+ ```bash
${variable:-value}
```
+
* Displays a replacement value if the variable is not empty:
- ```
+
+ ```bash
${variable:+value}
```
+
* Assigns a new value to the variable if it is empty:
- ```
+
+ ```bash
${variable:=value}
```
Examples:
-```
+```bash
name=""
echo ${name:-linux}
linux
@@ -160,7 +165,7 @@ Placed at the end of the structure, the choice `*` indicates the actions to be e
Syntax of the conditional alternative case:
-```
+```bash
case $variable in
value1)
commands if $variable = value1
@@ -177,7 +182,7 @@ esac
When the value is subject to variation, it is advisable to use wildcards `[]` to specify the possibilities:
-```
+```bash
[Yy][Ee][Ss])
echo "yes"
;;
@@ -185,7 +190,7 @@ When the value is subject to variation, it is advisable to use wildcards `[]` to
The character `|` also allows you to specify a value or another:
-```
+```bash
"yes" | "YES")
echo "yes"
;;
diff --git a/docs/books/learning_bash/07-loops.md b/docs/books/learning_bash/07-loops.md
index 76b8a832be..dec2a79df3 100644
--- a/docs/books/learning_bash/07-loops.md
+++ b/docs/books/learning_bash/07-loops.md
@@ -45,7 +45,7 @@ When the evaluated command is false (`$? != 0`), the shell resumes the execution
Syntax of the conditional loop structure `while`:
-```
+```bash
while command
do
command if $? = 0
@@ -54,7 +54,7 @@ done
Example using the `while` conditional structure:
-```
+```bash
while [[ -e /etc/passwd ]]
do
echo "The file exists"
@@ -77,13 +77,13 @@ The `exit` command ends the execution of the script.
Syntax of the `exit` command :
-```
+```bash
exit [n]
```
Example using the `exit` command :
-```
+```bash
bash # to avoid being disconnected after the "exit 1"
exit 1
echo $?
@@ -99,7 +99,7 @@ The `break` command allows you to interrupt the loop by going to the first comma
The `continue` command allows you to restart the loop by going back to the first command after `done`.
-```
+```bash
while [[ -d / ]]
do
echo "Do you want to continue? (yes/no)"
@@ -113,7 +113,7 @@ done
The `true` command always returns `true` while the `false` command always returns `false`.
-```
+```bash
true
echo $?
0
@@ -126,7 +126,7 @@ Used as a condition of a loop, they allow for either an execution of an infinite
Example:
-```
+```bash
while true
do
echo "Do you want to continue? (yes/no)"
@@ -146,7 +146,7 @@ When the evaluated command is true (`$? = 0`), the shell resumes the execution o
Syntax of the conditional loop structure `until`:
-```
+```bash
until command
do
command if $? != 0
@@ -155,7 +155,7 @@ done
Example of the use of the conditional structure `until`:
-```
+```bash
until [[ -e test_until ]]
do
echo "The file does not exist"
@@ -182,7 +182,7 @@ A `break` command is needed to exit the loop.
Syntax of the conditional loop structure `select`:
-```
+```bash
PS3="Your choice:"
select variable in var1 var2 var3
do
@@ -192,7 +192,7 @@ done
Example of the use of the conditional structure `select`:
-```
+```bash
PS3="Your choice: "
select choice in coffee tea chocolate
do
@@ -202,7 +202,7 @@ done
If this script is run, it shows something like this:
-```
+```text
1) Coffee
2) Tea
3) Chocolate
@@ -217,7 +217,7 @@ The `for` / `do` / `done` structure assigns the first element of the list to the
Syntax of the loop structure on list of values `for`:
-```
+```bash
for variable in list
do
commands
@@ -226,7 +226,7 @@ done
Example of using the conditional structure `for`:
-```
+```bash
for file in /home /etc/passwd /root/fic.txt
do
file $file
@@ -240,7 +240,7 @@ Any command producing a list of values can be placed after the `in` using a sub-
This can be the files in a directory. In this case, the variable will take as a value each of the words of the file names present:
-```
+```bash
for file in $(ls -d /tmp/*)
do
echo $file
@@ -249,7 +249,7 @@ done
It can be a file. In this case, the variable will take as a value each word contained in the file browsed, from the beginning to the end:
-```
+```bash
cat my_file.txt
first line
second line
@@ -265,7 +265,7 @@ line
To read a file line by line, you must modify the value of the `IFS` environment variable.
-```
+```bash
IFS=$'\t\n'
for LINE in $(cat my_file.txt); do echo $LINE; done
first line
diff --git a/docs/books/learning_bash/08-check-your-knowledge.md b/docs/books/learning_bash/08-check-your-knowledge.md
index 15c72ea96d..dd3f0d57c4 100644
--- a/docs/books/learning_bash/08-check-your-knowledge.md
+++ b/docs/books/learning_bash/08-check-your-knowledge.md
@@ -13,17 +13,17 @@ tags:
:heavy_check_mark: Every command must return a return code at the end of its execution:
-- [ ] True
+- [ ] True
- [ ] False
:heavy_check_mark: A return code of 0 indicates an execution error:
-- [ ] True
+- [ ] True
- [ ] False
:heavy_check_mark: The return code is stored in the variable `$@`:
-- [ ] True
+- [ ] True
- [ ] False
:heavy_check_mark: The test command allows you to:
@@ -41,7 +41,7 @@ tags:
:heavy_check_mark: Does the syntax of the conditional structure below seem correct to you? Explain why.
-```
+```bash
if command
command if $?=0
else
@@ -60,7 +60,7 @@ fi
:heavy_check_mark: Does the syntax of the conditional alternative structure below seem correct to you? Explain why.
-```
+```bash
case $variable in
value1)
commands if $variable = value1
diff --git a/docs/books/learning_rsync/01_rsync_overview.md b/docs/books/learning_rsync/01_rsync_overview.md
index 4d76dc0712..be0b6af133 100644
--- a/docs/books/learning_rsync/01_rsync_overview.md
+++ b/docs/books/learning_rsync/01_rsync_overview.md
@@ -5,7 +5,7 @@ contributors: Steven Spencer, Ganna Zhyrnova
update : 2022-Mar-08
---
-# Backup Brief
+# Backup Brief
What is a backup?
@@ -21,9 +21,9 @@ What are the backup methods?
* Hot backup: Refers to the backup when the system is in normal operation. As the data in the system is updated at any time, the backed-up data has a certain lag relative to the real data of the system.
* Remote backup: refers to backing up data in another geographic location to avoid data loss and service interruption caused by fire, natural disasters, theft, etc.
-## rsync in brief
+## rsync in brief
-On a server, I backed up the first partition to the second partition, which is commonly known as "Local backup." The specific backup tools are `tar` , `dd` , `dump` , `cp `, etc. can be achieved. Although the data is backed up on this server, if the hardware fails to boot up properly, the data will not be retrieved. In order to solve this problem with the local backup, we introduced another kind of backup --- "remote backup".
+On a server, I backed up the first partition to the second partition, which is commonly known as a "Local backup." Tools such as `tar`, `dd`, `dump`, and `cp` can achieve this. Although the data is backed up on this server, if the hardware fails to boot up properly, the data cannot be retrieved. In order to solve this problem with the local backup, we introduce another kind of backup --- the "remote backup".
Some people will say, can't I just use the `tar` or `cp` command on the first server and send it to the second server via `scp` or `sftp`?
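As a rough, purely illustrative sketch of the manual approach that question describes (the host name and paths below are placeholders, not part of this tutorial), it would look something like this:

```bash
# Archive a directory locally, then push the archive to a second server.
# "backup-host" and both paths are placeholders for illustration only.
tar -czf /tmp/data-backup.tar.gz /srv/data
scp /tmp/data-backup.tar.gz root@backup-host:/backups/
```

This works, but it copies the full archive every time, which is part of what `rsync` improves on.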
@@ -39,7 +39,7 @@ Therefore, there needs to be a data backup in the production environment which n
In terms of platform support, most Unix-like systems are supported, whether GNU/Linux or BSD. In addition, there are rsync ports for the Windows platform, such as cwRsync.
-The original `rsync` was maintained by the Australian programmer Andrew Tridgell (shown in Figure 1 below), and now it has been maintained by Wayne Davison (shown in Figure 2 below) ) For maintenance, you can go to [ github project address ](https://github.com/WayneD/rsync) to get the information you want.
+The original `rsync` was maintained by the Australian programmer Andrew Tridgell (shown in Figure 1 below), and it is now maintained by Wayne Davison (shown in Figure 2 below). You can go to the [github project address](https://github.com/WayneD/rsync) to get the information you want.
![ Andrew Tridgell ](images/Andrew_Tridgell.jpg)
![ Wayne Davison ](images/Wayne_Davison.jpg)
@@ -48,7 +48,7 @@ The original `rsync` was maintained by the Australian programmer
RockyLinux8-->|push/upload|Fedora34;
Fedora34-->|pull/download|RockyLinux8;
```
-## Demonstration based on SSH protocol
+## Demonstration based on SSH protocol
!!! tip "tip"
Here, both Rocky Linux 8 and Fedora 34 use the root user to log in. Fedora 34 is the client and Rocky Linux 8 is the server.
-### pull/download
+### pull/download
Since it is based on the SSH protocol, we first create a user in the server:
@@ -90,6 +90,7 @@ total size is 0 speedup is 0.00
[root@fedora ~]# ls
aabbcc
```
+
The transfer was successful.
!!! tip "tip"
diff --git a/docs/books/learning_rsync/03_rsync_demo02.md b/docs/books/learning_rsync/03_rsync_demo02.md
index d2bc334c81..24bdcb4ea0 100644
--- a/docs/books/learning_rsync/03_rsync_demo02.md
+++ b/docs/books/learning_rsync/03_rsync_demo02.md
@@ -6,6 +6,7 @@ update: 2021-11-04
---
# Demonstration based on rsync protocol
+
In vsftpd, there are virtual users (impersonated accounts customized by the administrator) because it is not safe to use anonymous users and local users. We know that a server based on the SSH protocol must have system users, so when there are many synchronization requirements, you may need to create many users. This obviously does not meet GNU/Linux operation and maintenance standards (the more users, the more insecure the system). For security reasons, rsync therefore provides its own protocol with an authenticated login method.
**How to do it?**
@@ -17,7 +18,7 @@ Just write the corresponding parameters and values in the configuration file. In
[root@Rocky ~]# vim /etc/rsyncd.conf
```
-Some parameters and values of this file are as follows, [ here ](04_rsync_configure.md) has more parameter descriptions:
+Some parameters and values of this file are as follows; [here](04_rsync_configure.md) you will find more parameter descriptions:
|Item|Description|
|---|---|
@@ -91,7 +92,7 @@ aabbcc anaconda-ks.cfg fedora rsynctest.txt
Success! In addition to the rsync protocol notation used above, you can also write it like this: `rsync://li@10.1.2.84/share`
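As a purely illustrative sketch of that URL notation (the destination directory below is an assumption, not part of the demonstration), a pull might look like:

```bash
# Pull the "share" module over the rsync protocol using URL notation.
# The destination /root/ is only for illustration.
rsync -avz rsync://li@10.1.2.84/share /root/
```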
-## push/upload
+## push/upload
```bash
[root@fedora ~]# touch /root/fedora.txt
diff --git a/docs/books/learning_rsync/04_rsync_configure.md b/docs/books/learning_rsync/04_rsync_configure.md
index a57251e382..2d41efdb70 100644
--- a/docs/books/learning_rsync/04_rsync_configure.md
+++ b/docs/books/learning_rsync/04_rsync_configure.md
@@ -4,9 +4,9 @@ author : tianci li
update : 2021-11-04
---
-# /etc/rsyncd.conf
+# /etc/rsyncd.conf
-In the previous article [ rsync demo 02 ](03_rsync_demo02.md) we introduced some basic parameters. This article is to supplement other parameters.
+In the previous article, [rsync demo 02](03_rsync_demo02.md), we introduced some basic parameters. This article supplements those with the remaining parameters.
|Parameters|Description|
|---|---|
@@ -26,6 +26,6 @@ In the previous article [ rsync demo 02 ](03_rsync_demo02.md) we introduced some
| auth users = li |Enable virtual users; separate multiple users with half-width (English) commas|
| syslog facility = daemon | Define the syslog facility to use. These values can be filled in: auth, authpriv, cron, daemon, ftp, kern, lpr, mail, news, security, syslog, user, uucp, local0, local1, local2, local3, local4, local5, local6 and local7. The default value is daemon|
-## Recommended configuration
+## Recommended configuration
![ photo ](images/rsync_config.jpg)
diff --git a/docs/books/learning_rsync/06_rsync_inotify.md b/docs/books/learning_rsync/06_rsync_inotify.md
index 0357f3ccdb..69e6c5a163 100644
--- a/docs/books/learning_rsync/06_rsync_inotify.md
+++ b/docs/books/learning_rsync/06_rsync_inotify.md
@@ -62,8 +62,10 @@ fs.inotify.max_user_watches = 1048576
## Related commands
The inotify-tools tool has two commands, namely:
-* **inotifywait**: for continuous monitoring, real-time output results. It is generally used with the rsync incremental backup tool. Because it is a file system monitoring, it can be used with a script. We will introduce the specific script writing later.
-* **inotifywatch**: for short-term monitoring, output results after the task is completed.
+
+* **inotifywait**: for continuous monitoring, with results output in real time. It is generally used with the rsync incremental backup tool. Because it monitors the file system, it can be combined with a script; we will introduce the specific script writing later. A short illustrative command follows this list.
+
+* **inotifywatch**: for short-term monitoring, outputting results after the task completes.
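As a short, hypothetical illustration of the continuous monitoring described above (the watched path is an assumption):

```bash
# Watch a directory tree and print each create/delete/modify/attrib event as it happens.
# /srv/web is a placeholder path for illustration only.
inotifywait -mrq -e create,delete,modify,attrib /srv/web
```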
`inotifywait` mainly has the following options:
diff --git a/docs/books/lxd_server/00-toc.md b/docs/books/lxd_server/00-toc.md
index 563b106a25..f183d6424d 100644
--- a/docs/books/lxd_server/00-toc.md
+++ b/docs/books/lxd_server/00-toc.md
@@ -29,7 +29,7 @@ For those wanting to use LXD as a lab environment on their own notebooks or work
* Comfort at the command line on your machine(s), and fluency in a command line editor. (Using _vi_ throughout these examples, but you can substitute in your favorite editor.)
* You will need to be your unprivileged user for the bulk of these processes. For the early setup steps, you will need to be the root user or be able to `sudo` to become so. Throughout these chapters, we assume your unprivileged user to be "lxdadmin". You will have to create this user account later in the process.
* For ZFS, ensure that UEFI secure boot is NOT enabled. Otherwise, you will end up having to sign the ZFS module to get it to load.
-* Using Rocky Linux-based containers for the most part
+* Using Rocky Linux-based containers for the most part
## Synopsis
diff --git a/docs/books/lxd_server/01-install.md b/docs/books/lxd_server/01-install.md
index e58c0fa20e..4face6b175 100644
--- a/docs/books/lxd_server/01-install.md
+++ b/docs/books/lxd_server/01-install.md
@@ -11,19 +11,19 @@ tags:
# Chapter 1: Install and configuration
-Throughout this chapter you will need to be the root user or you will need to be able to _sudo_ to root.
+Throughout this chapter you will need to be the root user or you will need to be able to *sudo* to root.
## Install EPEL and OpenZFS repositories
LXD requires the EPEL (Extra Packages for Enterprise Linux) repository, which is easy to install using:
-```
+```bash
dnf install epel-release
```
When installed, verify there are no updates:
-```
+```bash
dnf upgrade
```
@@ -33,7 +33,7 @@ If there were any kernel updates during the upgrade process, reboot the server.
Install the OpenZFS repository with:
-```
+```bash
dnf install https://zfsonlinux.org/epel/zfs-release-2-2$(rpm --eval "%{dist}").noarch.rpm
```
@@ -41,19 +41,19 @@ dnf install https://zfsonlinux.org/epel/zfs-release-2-2$(rpm --eval "%{dist}").n
LXD installation requires a snap package on Rocky Linux. For this reason, you need to install `snapd` (and a few other useful programs) with:
-```
+```bash
dnf install snapd dkms vim kernel-devel
```
Now enable and start snapd:
-```
+```bash
systemctl enable snapd
```
Then run:
-```
+```bash
systemctl start snapd
```
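As an optional aside (not part of the original steps), the enable and start commands above can be combined using the `--now` flag:

```bash
# Enable snapd at boot and start it immediately.
systemctl enable --now snapd
```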
@@ -63,13 +63,13 @@ Reboot the server before continuing here.
Installing LXD requires the use of the snap command. At this point, you are just installing it, you are not doing the set up:
-```
+```bash
snap install lxd
```
-## Install OpenZFS
+## Install OpenZFS
-```
+```bash
dnf install zfs
```
@@ -83,13 +83,13 @@ Luckily, tweaking the settings for LXD is not hard with a few file modifications
The first file you need to change is the `limits.conf` file. This file is self-documented. Examine the explanations in the comment in the file to understand what this file does. To make your modifications enter:
-```
+```bash
vi /etc/security/limits.conf
```
This entire file consists of comments, and at the bottom, shows the current default settings. In the blank space above the end of file marker (#End of file) you need to add our custom settings. The end of the file will look like this when completed:
-```
+```text
# Modifications made for LXD
* soft nofile 1048576
@@ -100,15 +100,15 @@ root hard nofile 1048576
* hard memlock unlimited
```
-Save your changes and exit. (SHIFT+:+wq! for _vi_)
+Save your changes and exit. (++shift+colon+"w"+"q"+"exclam"++ for *vi*)
### Modifying sysctl.conf with `90-lxd.override.conf`
-With _systemd_, you can make changes to your system's overall configuration and kernel options *without* modifying the main configuration file. Instead, put your settings in a separate file that will override the particular settings you need.
+With *systemd*, you can make changes to your system's overall configuration and kernel options *without* modifying the main configuration file. Instead, put your settings in a separate file that will override the particular settings you need.
To make these kernel changes, you are going to create a file called `90-lxd-override.conf` in `/etc/sysctl.d`. To do this type:
-```
+```bash
vi /etc/sysctl.d/90-lxd-override.conf
```
@@ -118,7 +118,7 @@ vi /etc/sysctl.d/90-lxd-override.conf
Place the following content in that file. Note that if you are wondering what you are doing here, the file content is self-documenting:
-```
+```bash
## The following changes have been made for LXD ##
# fs.inotify.max_queued_events specifies an upper limit on the number of events that can be queued to the corresponding inotify instance
@@ -176,19 +176,19 @@ Save your changes and exit.
At this point reboot the server.
-### Checking _sysctl.conf_ values
+### Checking *sysctl.conf* values
After the reboot, log back in as the root user to the server. You need to check that our override file has actually completed the job.
This is not hard to do. There's no need to verify every setting unless you want to, but checking a few will verify that the settings have changed. Do this with the `sysctl` command:
-```
+```bash
sysctl net.core.bpf_jit_limit
```
Which will show you:
-```
+```bash
net.core.bpf_jit_limit = 3000000000
```
diff --git a/docs/books/lxd_server/02-zfs_setup.md b/docs/books/lxd_server/02-zfs_setup.md
index 20f378e6bd..9fca62c5ce 100644
--- a/docs/books/lxd_server/02-zfs_setup.md
+++ b/docs/books/lxd_server/02-zfs_setup.md
@@ -19,7 +19,7 @@ If you have already installed ZFS, this section will walk you through ZFS setup.
First, enter this command:
-```
+```bash
/sbin/modprobe zfs
```
@@ -27,13 +27,13 @@ If there are no errors, it will return to the prompt and echo nothing. If you ge
Next you need to examine the disks on our system, find out where the operating system is, and what is available to use for the ZFS pool. You will do this with `lsblk`:
-```
+```bash
lsblk
```
Which will return something like this (your system will be different!):
-```
+```bash
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 32.3M 1 loop /var/lib/snapd/snap/snapd/11588
loop1 7:1 0 55.5M 1 loop /var/lib/snapd/snap/core18/1997
@@ -55,7 +55,7 @@ In this listing, you can see that */dev/sda* is in use by the operating system.
That falls outside the scope of this document, but definitely is a consideration for production. It offers better performance and redundancy. For now, create your pool on the single device you have identified:
-```
+```bash
zpool create storage /dev/sdb
```
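If you want to double-check the result before continuing (an optional step, not part of the original procedure), you might run:

```bash
# Confirm the pool exists and reports a healthy (ONLINE) state.
zpool status storage
```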
diff --git a/docs/books/lxd_server/03-lxdinit.md b/docs/books/lxd_server/03-lxdinit.md
index eeb2dcd041..40305ba8ae 100644
--- a/docs/books/lxd_server/03-lxdinit.md
+++ b/docs/books/lxd_server/03-lxdinit.md
@@ -18,50 +18,50 @@ Throughout this chapter you will need to be root or able to `sudo` to become roo
Your server environment is all set up. You are ready to initialize LXD. This is an automated script that asks a series of questions to get your LXD instance up and running:
-```
+```bash
lxd init
```
Here are the questions and our answers for the script, with a little explanation where warranted:
-```
+```text
Would you like to use LXD clustering? (yes/no) [default=no]:
```
If interested in clustering, do some additional research on that [here](https://documentation.ubuntu.com/lxd/en/latest/clustering/)
-```
+```text
Do you want to configure a new storage pool? (yes/no) [default=yes]:
```
This seems counter-intuitive. You have already created your ZFS pool, but it will become clear in a later question. Accept the default.
-```
+```text
Name of the new storage pool [default=default]: storage
```
Leaving this "default" is an option, but for clarity, using the same name you gave our ZFS pool is better.
-```
+```text
Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]:
```
You want to accept the default.
-```
+```text
Create a new ZFS pool? (yes/no) [default=yes]: no
```
Here is where the resolution of the earlier question about creating a storage pool comes into play.
-```
+```text
Name of the existing ZFS pool or dataset: storage
Would you like to connect to a MAAS server? (yes/no) [default=no]:
```
Metal As A Service (MAAS) is outside the scope of this document.
-```
+```text
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
@@ -70,13 +70,13 @@ What IPv6 address should be used? (CIDR subnet notation, “auto” or “none
If you want to use IPv6 on your LXD containers, you can turn on this option. That is up to you.
-```
+```text
Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
```
This is necessary to snapshot the server.
-```
+```text
Address to bind LXD to (not including port) [default=all]:
Port to bind LXD to [default=8443]:
Trust password for new clients:
@@ -85,7 +85,7 @@ Again:
This trust password is how you will connect to the snapshot server or back from the snapshot server. Set this with something that makes sense in your environment. Save this entry to a secure location, such as a password manager.
-```
+```text
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
```
@@ -94,13 +94,13 @@ Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Before you continue on, you need to create your "lxdadmin" user and ensure that it has the privileges it needs. You need the "lxdadmin" user to be able to `sudo` to root and you need it to be a member of the lxd group. To add the user and ensure it is a member of both groups do:
-```
+```bash
useradd -G wheel,lxd lxdadmin
```
Set the password:
-```
+```bash
passwd lxdadmin
```
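If you want to verify the group memberships before moving on (an optional check, not in the original steps), you could run:

```bash
# Confirm lxdadmin belongs to both the wheel and lxd groups.
id lxdadmin
```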
diff --git a/docs/books/lxd_server/04-firewall.md b/docs/books/lxd_server/04-firewall.md
index 6370644e22..cdf389652f 100644
--- a/docs/books/lxd_server/04-firewall.md
+++ b/docs/books/lxd_server/04-firewall.md
@@ -25,13 +25,13 @@ As with any server, you need to ensure that it is secure from the outside world
For _firewalld_ rules, you need to use [this basic procedure](../../guides/security/firewalld.md) or be familiar with those concepts. Our assumptions are: LAN network of 192.168.1.0/24 and a bridge named lxdbr0. To be clear, you might have many interfaces on your LXD server, with one perhaps facing your WAN. You are also going to create a zone for the bridged and local networks. This is just for zone clarity's sake. The other zone names do not really apply. This procedure assumes that you already know the basics of _firewalld_.
-```
+```bash
firewall-cmd --new-zone=bridge --permanent
```
You need to reload the firewall after adding a zone:
-```
+```bash
firewall-cmd --reload
```
@@ -45,18 +45,20 @@ You want to allow all traffic from the bridge. Just add the interface, and chang
If you create a zone that should allow all access to an interface or source, but you do not want to specify any protocols or services, then you *must* change the target from "default" to "ACCEPT". The same is true of "DROP" and "REJECT" for a particular IP block that you have custom zones for. To be clear, the "drop" zone will take care of that for you as long as you are not using a custom zone.
-```
+```bash
firewall-cmd --zone=bridge --add-interface=lxdbr0 --permanent
firewall-cmd --zone=bridge --set-target=ACCEPT --permanent
```
+
Assuming no errors and everything is still working just do a reload:
-```
+```bash
firewall-cmd --reload
```
+
If you list out your rules now with `firewall-cmd --zone=bridge --list-all` you will see:
-```
+```bash
bridge (active)
target: ACCEPT
icmp-block-inversion: no
@@ -72,22 +74,25 @@ bridge (active)
icmp-blocks:
rich rules:
```
+
Note that you also want to allow your local interface. Again, the included zones are not appropriately named for this. Create a zone and use the source IP range for the local interface to ensure you have access:
-```
+```bash
firewall-cmd --new-zone=local --permanent
firewall-cmd --reload
```
+
Add the source IPs for the local interface, and change the target to "ACCEPT":
-```
+```bash
firewall-cmd --zone=local --add-source=127.0.0.1/8 --permanent
firewall-cmd --zone=local --set-target=ACCEPT --permanent
firewall-cmd --reload
```
+
Go ahead and list out the "local" zone to ensure your rules are there with `firewall-cmd --zone=local --list-all` which will show:
-```
+```bash
local (active)
target: ACCEPT
icmp-block-inversion: no
@@ -106,23 +111,26 @@ local (active)
You want to allow SSH from our trusted network. We will use the source IPs here, and the built-in "trusted" zone. The target for this zone is already "ACCEPT" by default.
-```
+```bash
firewall-cmd --zone=trusted --add-source=192.168.1.0/24
```
+
Add the service to the zone:
-```
+```bash
firewall-cmd --zone=trusted --add-service=ssh
```
+
If everything is working, move your rules to permanent and reload the rules:
-```
+```bash
firewall-cmd --runtime-to-permanent
firewall-cmd --reload
```
+
Listing out your "trusted" zone will show:
-```
+```bash
trusted (active)
target: ACCEPT
icmp-block-inversion: no
@@ -141,13 +149,13 @@ trusted (active)
By default, the "public" zone is in the enabled state and has SSH allowed. For security, you do not want SSH allowed on the "public" zone. Ensure that your zones are correct and that the access you are getting to the server is by one of the LAN IPs (in the case of our example). You might lock yourself out of the server if you do not verify this before continuing. When you are sure you have access from the correct interface, remove SSH from the "public" zone:
-```
+```bash
firewall-cmd --zone=public --remove-service=ssh
```
Test access and ensure you are not locked out. If not, move your rules to permanent, reload, and list out zone "public" to ensure the removal of SSH:
-```
+```bash
firewall-cmd --runtime-to-permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-all
diff --git a/docs/books/lxd_server/05-lxd_images.md b/docs/books/lxd_server/05-lxd_images.md
index 29a0c908e6..2fc1ed2325 100644
--- a/docs/books/lxd_server/05-lxd_images.md
+++ b/docs/books/lxd_server/05-lxd_images.md
@@ -17,7 +17,7 @@ Throughout this chapter you will need to run commands as your unprivileged user
You probably can not wait to get started with a container. There are many container operating system possibilities. To get a feel for how many possibilities, enter this command:
-```
+```bash
lxc image list images: | more
```
@@ -25,13 +25,13 @@ Enter the space bar to page through the list. This list of containers and virtua
The **last** thing you want to do is to page through looking for a container image to install, particularly if you know the image that you want to create. Change the command to show only Rocky Linux install options:
-```
+```bash
lxc image list images: | grep rocky
```
This brings up a much more manageable list:
-```
+```bash
| rockylinux/8 (3 more) | 0ed2f148f7c6 | yes | Rockylinux 8 amd64 (20220805_02:06) | x86_64 | CONTAINER | 128.68MB | Aug 5, 2022 at 12:00am (UTC) |
| rockylinux/8 (3 more) | 6411a033fdf1 | yes | Rockylinux 8 amd64 (20220805_02:06) | x86_64 | VIRTUAL-MACHINE | 643.15MB | Aug 5, 2022 at 12:00am (UTC) |
| rockylinux/8/arm64 (1 more) | e677777306cf | yes | Rockylinux 8 arm64 (20220805_02:29) | aarch64 | CONTAINER | 124.06MB | Aug 5, 2022 at 12:00am (UTC) |
@@ -50,7 +50,7 @@ This brings up a much more manageable list:
For the first container, you are going to use "rockylinux/8". To install it, you *might* use:
-```
+```bash
lxc launch images:rockylinux/8 rockylinux-test-8
```
@@ -58,19 +58,19 @@ That will create a Rocky Linux-based container named "rockylinux-test-8". You ca
To start the container manually, use:
-```
+```bash
lxc start rockylinux-test-8
```
To rename the container (we are not going to do this here, but this is how to do it) first stop the container:
-```
+```bash
lxc stop rockylinux-test-8
```
Use the `move` command to change the container's name:
-```
+```bash
lxc move rockylinux-test-8 rockylinux-8
```
@@ -78,25 +78,25 @@ If you followed this instruction anyway, stop the container and move it back to
For the purposes of this guide, go ahead and install two more images for now:
-```
+```bash
lxc launch images:rockylinux/9 rockylinux-test-9
```
and
-```
+```bash
lxc launch images:ubuntu/22.04 ubuntu-test
```
Examine what you have by listing your images:
-```
+```bash
lxc list
```
which will return this:
-```
+```bash
+-------------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------+---------+----------------------+------+-----------+-----------+
@@ -106,6 +106,4 @@ which will return this:
+-------------------+---------+----------------------+------+-----------+-----------+
| ubuntu-test | RUNNING | 10.146.84.181 (eth0) | | CONTAINER | 0 |
+-------------------+---------+----------------------+------+-----------+-----------+
-
```
-
diff --git a/docs/books/lxd_server/06-profiles.md b/docs/books/lxd_server/06-profiles.md
index 251d583f47..a107a01f85 100644
--- a/docs/books/lxd_server/06-profiles.md
+++ b/docs/books/lxd_server/06-profiles.md
@@ -29,7 +29,7 @@ For now, just be aware that this has drawbacks when choosing container images ba
To create our macvlan profile, use this command:
-```
+```bash
lxc profile create macvlan
```
@@ -37,13 +37,13 @@ If you were on a multi-interface machine and wanted more than one macvlan templa
You want to change the macvlan interface, but before you do, you need to know what the parent interface is for our LXD server. This will be the interface that has a LAN (in this case) assigned IP. To find what interface that is, use:
-```
+```bash
ip addr
```
Look for the interface with the LAN IP assignment in the 192.168.1.0/24 network:
-```
+```bash
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 40:16:7e:a9:94:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.106/24 brd 192.168.1.255 scope global dynamic noprefixroute enp3s0
@@ -56,7 +56,7 @@ In this case, the interface is "enp3s0".
Next change the profile:
-```
+```bash
lxc profile device add macvlan eth0 nic nictype=macvlan parent=enp3s0
```
@@ -64,14 +64,13 @@ This command adds all of the necessary parameters to the macvlan profile require
Examine what this command created, by using the command:
-```
+```bash
lxc profile show macvlan
```
Which will give you output similar to this:
-
-```
+```bash
config: {}
description: ""
devices:
@@ -87,13 +86,13 @@ You can use profiles for many other things, but assigning a static IP to a conta
To assign the macvlan profile to rockylinux-test-8 you need to do the following:
-```
+```bash
lxc profile assign rockylinux-test-8 default,macvlan
```
Do the same thing for rockylinux-test-9:
-```
+```bash
lxc profile assign rockylinux-test-9 default,macvlan
```
This says that you want the default profile, and to apply the macvlan profile on top of it.
## Rocky Linux macvlan
-In RHEL distributions and clones, Network Manager has been in a constant state of change. Because of this, the way the `macvlan` profile works does not work (at least in comparison to other distributions), and requires a little additional work to assign IP addresses from DHCP or statically.
+In RHEL distributions and clones, Network Manager has been in a constant state of change. Because of this, the `macvlan` profile does not work the way it does on other distributions, and requires a little additional work to assign IP addresses from DHCP or statically.
Remember that none of this has anything to do with Rocky Linux particularly, but with the upstream package implementation.
@@ -115,18 +114,18 @@ Having the profile assigned, however, does not change the default configuration,
To test this, do the following:
-```
+```bash
lxc restart rockylinux-test-8
lxc restart rockylinux-test-9
```
List your containers again and note that the rockylinux-test-9 does not have an IP address anymore:
-```
+```bash
lxc list
```
-```
+```bash
+-------------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------+---------+----------------------+------+-----------+-----------+
@@ -136,19 +135,19 @@ lxc list
+-------------------+---------+----------------------+------+-----------+-----------+
| ubuntu-test | RUNNING | 10.146.84.181 (eth0) | | CONTAINER | 0 |
+-------------------+---------+----------------------+------+-----------+-----------+
-
```
+
As you can see, our Rocky Linux 8.x container received the IP address from the LAN interface, whereas the Rocky Linux 9.x container did not.
To further demonstrate the problem here, you need to run `dhclient` on the Rocky Linux 9.0 container. This will show us that the macvlan profile *is* in fact applied:
-```
+```bash
lxc exec rockylinux-test-9 dhclient
```
Another container listing now shows the following:
-```
+```bash
+-------------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------+---------+----------------------+------+-----------+-----------+
@@ -162,51 +161,51 @@ Another container listing now shows the following:
That should have happened with a stop and start of the container, but it does not. Assuming that you want to use a DHCP assigned IP address every time, you can fix this with a simple crontab entry. To do this, we need to gain shell access to the container by entering:
-```
+```bash
lxc exec rockylinux-test-9 bash
```
Next, let's determine the path to `dhclient`. To do this, because this container is from a minimal image, you will need to first install `which`:
-```
+```bash
dnf install which
```
then run:
-```
+```bash
which dhclient
```
which will return:
-```
+```bash
/usr/sbin/dhclient
```
Next, change root's crontab:
-```
+```bash
crontab -e
```
Add this line:
-```
+```bash
@reboot /usr/sbin/dhclient
```
-The crontab command entered uses _vi_ . To save your changes and exit use SHIFT+:+wq.
+The crontab command entered uses *vi*. To save your changes and exit use ++shift+colon+"w"+"q"++.
Exit the container and restart rockylinux-test-9:
-```
+```bash
lxc restart rockylinux-test-9
```
Another listing will reveal that the container has the DHCP address assigned:
-```
+```bash
+-------------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------+---------+----------------------+------+-----------+-----------+
@@ -225,19 +224,19 @@ To statically assign an IP address, things get even more convoluted. Since `netw
To do this, you need to gain shell access to the container again:
-```
+```bash
lxc exec rockylinux-test-9 bash
```
Next, you are going to create a bash script in `/usr/local/sbin` called "static":
-```
+```bash
vi /usr/local/sbin/static
```
The contents of this script are not difficult:
-```
+```bash
#!/usr/bin/env bash
/usr/sbin/ip link set dev eth0 name net0
@@ -246,41 +245,40 @@ The contents of this script are not difficult:
/usr/sbin/ip route add default via 192.168.1.1
```
-What are we doing here?
+What are we doing here?
* you rename eth0 to a new name that you can manage ("net0")
* you assign the new static IP that you have allocated for your container (192.168.1.151)
* you bring up the new "net0" interface
* you add the default route for your interface
-
Make our script executable with:
-```
+```bash
chmod +x /usr/local/sbin/static
```
Add this to root's crontab for the container with the @reboot time:
-```
+```bash
@reboot /usr/local/sbin/static
```
Finally, exit the container and restart it:
-```
+```bash
lxc restart rockylinux-test-9
```
Wait a few seconds and list out the containers again:
-```
+```bash
lxc list
```
You should see success:
-```
+```bash
+-------------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------+---------+----------------------+------+-----------+-----------+
@@ -298,19 +296,19 @@ Luckily, In Ubuntu's implementation of Network Manager, the macvlan stack is NOT
Just like with your rockylinux-test-9 container, you need to assign the profile to our container:
-```
+```bash
lxc profile assign ubuntu-test default,macvlan
```
To find out if DHCP assigns an address to the container, stop and start the container again:
-```
+```bash
lxc restart ubuntu-test
```
List the containers again:
-```
+```bash
+-------------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------+---------+----------------------+------+-----------+-----------+
@@ -326,13 +324,13 @@ Success!
Configuring the Static IP is just a little different, but not at all hard. You need to change the .yaml file associated with the container's connection (`10-lxc.yaml`). For this static IP, you will use 192.168.1.201:
-```
+```bash
vi /etc/netplan/10-lxc.yaml
```
Change what is there to the following:
-```
+```bash
network:
version: 2
ethernets:
@@ -348,13 +346,13 @@ Save your changes and exit the container.
Restart the container:
-```
+```bash
lxc restart ubuntu-test
```
When you list your containers again, you will see your static IP:
-```
+```bash
+-------------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------+---------+----------------------+------+-----------+-----------+
diff --git a/docs/books/lxd_server/07-configurations.md b/docs/books/lxd_server/07-configurations.md
index df19661747..085b4e8cca 100644
--- a/docs/books/lxd_server/07-configurations.md
+++ b/docs/books/lxd_server/07-configurations.md
@@ -15,13 +15,13 @@ Throughout this chapter you will need to run commands as your unprivileged user
There are a wealth of options for configuring the container after installation. Before seeing those, however, let us examine the `info` command for a container. In this example, you will use the ubuntu-test container:
-```
+```bash
lxc info ubuntu-test
```
This will show the following:
-```
+```bash
Name: ubuntu-test
Location: none
Remote: unix://
@@ -60,7 +60,7 @@ Resources:
There is much good information there, from the profiles applied, to the memory in use, disk space in use, and more.
-### A word about configuration and some options
+## A word about configuration and some options
By default, LXD will assign the required system memory, disk space, CPU cores, and other resources, to the container. But what if you want to be more specific? That is totally possible.
@@ -70,29 +70,29 @@ Just remember that every action you make to configure a container _can_ have neg
Rather than run through all of the options for configuration, use the tab auto-complete to see the options available:
-```
+```bash
lxc config set ubuntu-test
```
-and TAB.
+and ++tab++.
This shows you all of the options for configuring a container. If you have questions about what one of the configuration options does, head to the [official documentation for LXD](https://documentation.ubuntu.com/lxd/en/latest/config-options/) and do a search for the configuration parameter, or Google the entire string, such as `lxc config set limits.memory` and examine the results of the search.
Here we examine a few of the most used configuration options. For example, if you want to set the max amount of memory that a container can use:
-```
+```bash
lxc config set ubuntu-test limits.memory 2GB
```
That says that if more memory is available to use beyond the 2GB, then the container can actually use more than 2GB. In other words, it is a soft limit.
-```
+```bash
lxc config set ubuntu-test limits.memory.enforce 2GB
```
That says that the container can never use more than 2GB of memory, whether it is currently available or not. In this case it is a hard limit.
-```
+```bash
lxc config set ubuntu-test limits.cpu 2
```
@@ -104,14 +104,13 @@ That says to limit the number of CPU cores that the container can use to 2.
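After setting limits like these, you may want to review what is actually applied to the container. One way to do that (offered as a suggestion, not a step from the original guide) is:

```bash
# Show the container's configuration, including any limits you have set.
lxc config show ubuntu-test
```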
Remember when you set up our storage pool in the ZFS chapter? You named the pool "storage," but you could have named it anything. If you want to examine this, you can use this command, which works equally well for any of the other pool types too (as shown for dir):
-```
+```bash
lxc storage show storage
```
-
This shows the following:
-```
+```bash
config:
source: /var/snap/lxd/common/lxd/storage-pools/storage
description: ""
@@ -129,11 +128,10 @@ locations:
This shows that all of our containers use our dir storage pool. When using ZFS, you can also set a disk quota on a container. Here is what that command looks like, setting a 2GB disk quota on the ubuntu-test container:
-```
+```bash
lxc config device override ubuntu-test root size=2GB
```
As stated earlier, use configuration options sparingly, unless you have got a container that wants to use way more than its share of resources. LXD, for the most part, will manage the environment well on its own.
Many more options exist that might be of interest to some people. Doing your own research will help you to find out if any of those are of value in your environment.
-
diff --git a/docs/books/lxd_server/08-snapshots.md b/docs/books/lxd_server/08-snapshots.md
index 5e5b627510..a21f4ce319 100644
--- a/docs/books/lxd_server/08-snapshots.md
+++ b/docs/books/lxd_server/08-snapshots.md
@@ -17,25 +17,25 @@ Container snapshots, along with a snapshot server (more on that later), are prob
The author used LXD containers for PowerDNS public facing servers, and the process of updating those applications became less worrisome, thanks to taking snapshots before every update.
-You can even snapshot a container when it is running.
+You can even snapshot a container when it is running.
## The snapshot process
Start by getting a snapshot of the ubuntu-test container by using this command:
-```
+```bash
lxc snapshot ubuntu-test ubuntu-test-1
```
Here, you are calling the snapshot "ubuntu-test-1", but you can call it anything. To ensure that you have the snapshot, do an `lxc info` of the container:
-```
+```bash
lxc info ubuntu-test
```
You have looked at an info screen already. If you scroll to the bottom, you now see:
-```
+```bash
Snapshots:
ubuntu-test-1 (taken at 2021/04/29 15:57 UTC) (stateless)
```
@@ -44,13 +44,13 @@ Success! Our snapshot is in place.
Get into the ubuntu-test container:
-```
+```bash
lxc exec ubuntu-test bash
```
Create an empty file with the _touch_ command:
-```
+```bash
touch this_file.txt
```
@@ -58,19 +58,19 @@ Exit the container.
Before restoring the container to the state it was in prior to creating the file, keep in mind that the safest way to restore a container, particularly if there have been many changes, is to stop it first:
-```
+```bash
lxc stop ubuntu-test
```
Restore it:
-```
+```bash
lxc restore ubuntu-test ubuntu-test-1
```
Start the container again:
-```
+```bash
lxc start ubuntu-test
```
@@ -78,7 +78,7 @@ If you get back into the container again and look, our "this_file.txt" that you
When you do not need a snapshot anymore you can delete it:
-```
+```bash
lxc delete ubuntu-test/ubuntu-test-1
```
@@ -94,7 +94,7 @@ lxc delete ubuntu-test/ubuntu-test-1
So always delete snapshots with the container running.
-In the chapters that follow you will:
+In the chapters that follow you will:
* set up the process of creating snapshots automatically
* set up expiration of a snapshot so that it goes away after a certain length of time
diff --git a/docs/books/lxd_server/09-snapshot_server.md b/docs/books/lxd_server/09-snapshot_server.md
index a64bbecded..b816d6047a 100644
--- a/docs/books/lxd_server/09-snapshot_server.md
+++ b/docs/books/lxd_server/09-snapshot_server.md
@@ -17,7 +17,7 @@ As noted at the beginning, the snapshot server for LXD must be a mirror of the p
The process of building the snapshot server is exactly like the production server. To fully emulate our production server set up, do all of **Chapters 1-4** again on the snapshot server, and when completed, return to this spot.
-You are back!! Congratulations, this must mean that you have successfully completed the basic installation for the snapshot server.
+You are back!! Congratulations, this must mean that you have successfully completed the basic installation for the snapshot server.
## Setting up the primary and snapshot server relationship
@@ -27,38 +27,38 @@ In our lab, we do not have that luxury. Perhaps you've got the same scenario run
In our lab, the primary LXD server is running on 192.168.1.106 and the snapshot LXD server is running on 192.168.1.141. SSH into each server and add the following to the /etc/hosts file:
-```
+```bash
192.168.1.106 lxd-primary
192.168.1.141 lxd-snapshot
```
Next, you need to allow all traffic between the two servers. To do this, you are going to change the `firewalld` rules. First, on the lxd-primary server, add this line:
-```
+```bash
firewall-cmd --zone=trusted --add-source=192.168.1.141 --permanent
```
and on the snapshot server, add this rule:
-```
+```bash
firewall-cmd --zone=trusted --add-source=192.168.1.106 --permanent
```
then reload:
-```
+```bash
firewall-cmd --reload
```
Next, as our unprivileged (lxdadmin) user, you need to set the trust relationship between the two machines. This is done by running the following on lxd-primary:
-```
+```bash
lxc remote add lxd-snapshot
```
This displays the certificate to accept. Accept it, and it will prompt for your password. This is the "trust password" that you set up when doing the LXD initialization step. Hopefully, you are securely keeping track of all of these passwords. When you enter the password, you will receive this:
-```
+```bash
Client certificate stored at server: lxd-snapshot
```
@@ -70,31 +70,31 @@ Before you can migrate your first snapshot, you need to have any profiles create
You will need to create this for lxd-snapshot. Go back to [Chapter 6](06-profiles.md) and create the "macvlan" profile on lxd-snapshot if you need to. If your two servers have the same parent interface names ("enp3s0" for example) then you can copy the "macvlan" profile over to lxd-snapshot without recreating it:
-```
+```bash
lxc profile copy macvlan lxd-snapshot
```
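One way you might confirm the profile arrived on the snapshot server (an optional check, assuming the remote was added as described earlier) is:

```bash
# List the profiles known to the lxd-snapshot remote.
lxc profile list lxd-snapshot:
```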
With all of the relationships and profiles set up, the next step is to actually send a snapshot from lxd-primary over to lxd-snapshot. If you have been following along exactly, you have probably deleted all of your snapshots. Create another snapshot:
-```
+```bash
lxc snapshot rockylinux-test-9 rockylinux-test-9-snap1
```
If you run the "info" command for `lxc`, you can see the snapshot at the bottom of our listing:
-```
+```bash
lxc info rockylinux-test-9
```
Which will show something like this at the bottom:
-```
+```bash
rockylinux-test-9-snap1 (taken at 2021/05/13 16:34 UTC) (stateless)
```
OK, fingers crossed! Let us try to migrate our snapshot:
-```
+```bash
lxc copy rockylinux-test-9/rockylinux-test-9-snap1 lxd-snapshot:rockylinux-test-9
```
@@ -102,7 +102,7 @@ This command says, within the container rockylinux-test-9, you want to send the
After a short time, the copy will be complete. Want to find out for sure? Do an `lxc list` on the lxd-snapshot server, which should return the following:
-```
+```bash
+-------------------+---------+------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------------------+---------+------+------+-----------+-----------+
@@ -112,13 +112,13 @@ After a short time, the copy will be complete. Want to find out for sure? Do an
Success! Try starting it. Because we are starting it on the lxd-snapshot server, you need to stop it first on the lxd-primary server to avoid an IP address conflict:
-```
+```bash
lxc stop rockylinux-test-9
```
And on the lxd-snapshot server:
-```
+```bash
lxc start rockylinux-test-9
```
@@ -128,9 +128,9 @@ Assuming all of this works without error, stop the container on lxd-snapshot and
The snapshots copied to lxd-snapshot will be down when they migrate, but if you have a power event or need to reboot the snapshot server because of updates or something, you will end up with a problem. Those containers will try to start on the snapshot server creating a potential IP address conflict.
-To eliminate this, you need to set the migrated containers so that they will not start on reboot of the server. For our newly copied rockylinux-test-9 container, you will do this with:
+To eliminate this, you need to set the migrated containers so that they will not start on reboot of the server. For our newly copied rockylinux-test-9 container, you will do this with:
-```
+```bash
lxc config set rockylinux-test-9 boot.autostart 0
```
@@ -142,7 +142,7 @@ It is great that you can create snapshots when you need to, and sometimes you _d
The first thing you need to do is schedule a process to automate snapshot creation on lxd-primary. You will do this for each container on the lxd-primary server. When completed, it will take care of this going forward. You do this with the following syntax. Note the similarities to a crontab entry for the timestamp:
-```
+```bash
lxc config set [container_name] snapshots.schedule "50 20 * * *"
```
@@ -150,18 +150,18 @@ What this is saying is, do a snapshot of the container name every day at 8:50 PM
To apply this to our rockylinux-test-9 container:
-```
+```bash
lxc config set rockylinux-test-9 snapshots.schedule "50 20 * * *"
```
You also want the snapshot name to include a meaningful date. LXD uses UTC everywhere, so your best bet for keeping track of things is to set the snapshot name with a date and time stamp in a more understandable format:
-```
+```bash
lxc config set rockylinux-test-9 snapshots.pattern "rockylinux-test-9{{ creation_date|date:'2006-01-02_15-04-05' }}"
```
GREAT, but you certainly do not want a new snapshot every day without getting rid of an old one, right? You would fill up the drive with snapshots. To fix this you run:
-```
+```bash
lxc config set rockylinux-test-9 snapshots.expiry 1d
```
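If you want to confirm the schedule, pattern, and expiry you just set (an optional check, not part of the original procedure), one possibility is:

```bash
# Show only the snapshot-related keys for this container.
lxc config show rockylinux-test-9 | grep snapshots
```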
diff --git a/docs/books/lxd_server/10-automating.md b/docs/books/lxd_server/10-automating.md
index 9e46f68bb2..4d43bc0e21 100644
--- a/docs/books/lxd_server/10-automating.md
+++ b/docs/books/lxd_server/10-automating.md
@@ -13,20 +13,19 @@ tags:
Throughout this chapter you will need to be root or able to `sudo` to become root.
-Automating the snapshot process makes things a whole lot easier.
+Automating the snapshot process makes things a whole lot easier.
## Automating the snapshot copy process
-
Perform this process on lxd-primary. The first thing you need to do is create a script in /usr/local/sbin called "refreshcontainers.sh" that cron will run:
-```
+```bash
sudo vi /usr/local/sbin/refreshcontainers.sh
```
The script is pretty minimal:
-```
+```bash
#!/bin/bash
# This script is for doing an lxc copy --refresh against each container, copying
# and updating them to the snapshot server.
@@ -40,25 +39,25 @@ for x in $(/var/lib/snapd/snap/bin/lxc ls -c n --format csv)
Make it executable:
-```
+```bash
sudo chmod +x /usr/local/sbin/refreshcontainers.sh
```
Change the ownership of this script to your lxdadmin user and group:
-```
+```bash
sudo chown lxdadmin.lxdadmin /usr/local/sbin/refreshcontainers.sh
```
Set up the crontab for the lxdadmin user to run this script, in this case at 10 PM:
-```
+```bash
crontab -e
```
Your entry will look like this:
-```
+```bash
00 22 * * * /usr/local/sbin/refreshcontainers.sh > /home/lxdadmin/refreshlog 2>&1
```
@@ -68,6 +67,6 @@ This will create a log in lxdadmin's home directory called "refreshlog" which wi
The automated procedure will fail sometimes. This generally happens when a particular container fails to refresh. You can manually re-run the refresh with the following command (assuming rockylinux-test-9 is our container here):
-```
+```bash
lxc copy --refresh rockylinux-test-9 lxd-snapshot:rockylinux-test-9
```
diff --git a/docs/books/lxd_server/30-appendix_a.md b/docs/books/lxd_server/30-appendix_a.md
index 7754989b80..fa3e376498 100644
--- a/docs/books/lxd_server/30-appendix_a.md
+++ b/docs/books/lxd_server/30-appendix_a.md
@@ -24,25 +24,25 @@ While not a part of the chapters for an LXD Server, this procedure will help tho
From the command line, install the EPEL repository:
-```
+```bash
sudo dnf install epel-release
```
When installation finishes, do an upgrade:
-```
+```bash
sudo dnf upgrade
```
Install `snapd`
-```
+```bash
sudo dnf install snapd
```
Enable the `snapd` service
-```
+```bash
sudo systemctl enable snapd
```
@@ -50,48 +50,48 @@ Reboot your notebook or workstation
Install the snap for LXD:
-```
+```bash
sudo snap install lxd
```
## LXD initialization
-If you have looked through the production server chapters, this is nearly the same as the production server init procedure.
+If you have looked through the production server chapters, this is nearly the same as the production server init procedure.
-```
+```bash
sudo lxd init
```
-This will start a question and answer dialog.
+This will start a question and answer dialog.
Here are the questions and our answers for the script, with a little explanation where warranted:
-```
+```text
Would you like to use LXD clustering? (yes/no) [default=no]:
```
If you have interest in clustering, do some additional research on that [at Linux containers here](https://documentation.ubuntu.com/lxd/en/latest/clustering/).
-```
+```text
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: storage
```
-Optionally, you can accept the default.
+Optionally, you can accept the default.
-```
+```text
Name of the storage backend to use (btrfs, dir, lvm, ceph) [default=btrfs]: dir
```
Note that `dir` is somewhat slower than `btrfs`. If you have the foresight to leave a disk empty, you can use that device (example: /dev/sdb) for the `btrfs` device and then select `btrfs`, but only if your host computer has an operating system that supports `btrfs`. Rocky Linux and any RHEL clone will not support `btrfs` - not yet, anyway. `dir` will work fine for a lab environment.
-```
+```text
Would you like to connect to a MAAS server? (yes/no) [default=no]:
```
Metal As A Service (MAAS) is outside the scope of this document.
-```
+```text
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
@@ -100,13 +100,13 @@ What IPv6 address should be used? (CIDR subnet notation, “auto” or “none
If you want to use IPv6 on your LXD containers, you can turn on this option. That is up to you.
-```
+```text
Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
```
This is necessary to snapshot the workstation. Answer "yes" here.
-```
+```text
Address to bind LXD to (not including port) [default=all]:
Port to bind LXD to [default=8443]:
Trust password for new clients:
@@ -115,7 +115,7 @@ Again:
This trust password is how you will connect to the snapshot server or back from the snapshot server. Set this with something that makes sense in your environment. Save this entry to a secure location, such as a password manager.
-```
+```text
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
```
@@ -124,7 +124,7 @@ Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
The next thing you need to do is to add your user to the lxd group. Again, you will need to use `sudo` or be root for this:
-```
+```bash
sudo usermod -a -G lxd [username]
```
@@ -136,13 +136,13 @@ At this point, you have made a bunch of changes. Before you go any further, rebo
To ensure that `lxd` started and that your user has privileges, from the shell prompt do:
-```
+```bash
lxc list
```
Note you have not used `sudo` here. Your user has the ability to enter these commands. You will see something like this:
-```
+```bash
+------------+---------+----------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------------+---------+----------------------+------+-----------+-----------+
@@ -163,6 +163,6 @@ From this point, you can use the chapters from our "LXD Production Server" to co
* [LXD Beginners Guide](../../guides/containers/lxd_web_servers.md) which will get you started using LXD productively.
* [Official LXD Overview and Documentation](https://documentation.ubuntu.com/lxd/en/latest/)
-## Conclusion
+## Conclusion
-LXD is a powerful tool that you can use on workstations or servers for increased productivity. On a workstation, it is great for lab testing, but can also keep semi-permanent instances of operating systems and applications available in their own private space.
+LXD is a powerful tool that you can use on workstations or servers for increased productivity. On a workstation, it is great for lab testing, but can also keep semi-permanent instances of operating systems and applications available in their own private space.