diff --git a/.github/workflows/cluster.yml b/.github/workflows/cluster.yml index 2ea477c4..f30cfe81 100644 --- a/.github/workflows/cluster.yml +++ b/.github/workflows/cluster.yml @@ -82,7 +82,7 @@ jobs: cd /home/lme-user/LME/testing/v2/installers && \ python3 ./azure/build_azure_linux_network.py \ -g pipe-${{ env.UNIQUE_ID }} \ - -s 0.0.0.0/0 \ + -s ${{ env.IP_ADDRESS }}/32 \ -vs Standard_D8_v4 \ -l centralus \ -ast 23:00 \ @@ -251,7 +251,7 @@ jobs: env: ES_PASSWORD: ${{ env.ES_PASSWORD }} run: | - sleep 120 + sleep 360 cd testing/v2/development docker compose -p ${{ env.UNIQUE_ID }} exec -T pipeline bash -c " ssh -o StrictHostKeyChecking=no lme-user@${{ env.AZURE_IP }} \ @@ -265,9 +265,7 @@ jobs: run: | cd testing/v2/development docker compose -p ${{ env.UNIQUE_ID }} exec -T pipeline bash -c " - cd /home/lme-user/LME/testing/v2/installers && \ - IP_ADDRESS=\$(cat pipe-${{ env.UNIQUE_ID }}.ip.txt) && \ - ssh lme-user@\$IP_ADDRESS 'cd /home/lme-user/LME/testing/tests && \ + ssh lme-user@${{ env.AZURE_IP }} 'cd /home/lme-user/LME/testing/tests && \ echo ELASTIC_PASSWORD=\"$ES_PASSWORD\" >> .env && \ echo KIBANA_PASSWORD=\"$KIBANA_PASSWORD\" >> .env && \ echo elastic=\"$ES_PASSWORD\" >> .env && \ @@ -282,14 +280,12 @@ jobs: run: | cd testing/v2/development docker compose -p ${{ env.UNIQUE_ID }} exec -T pipeline bash -c " - cd /home/lme-user/LME/testing/v2/installers && \ - IP_ADDRESS=\$(cat pipe-${{ env.UNIQUE_ID }}.ip.txt) && \ - ssh lme-user@\$IP_ADDRESS 'cd /home/lme-user/LME/testing/tests && \ + ssh lme-user@${{ env.AZURE_IP }} 'cd /home/lme-user/LME/testing/tests && \ echo ELASTIC_PASSWORD=\"$ES_PASSWORD\" >> .env && \ echo KIBANA_PASSWORD=\"$KIBANA_PASSWORD\" >> .env && \ echo elastic=\"$ES_PASSWORD\" >> .env && \ source venv/bin/activate && \ - pytest -v selenium_tests/' + pytest -v selenium_tests/' " - name: Cleanup Azure resources @@ -311,4 +307,4 @@ jobs: run: | cd testing/v2/development docker compose -p ${{ env.UNIQUE_ID }} down - docker system prune -af \ No newline at end of file + docker system prune -af diff --git a/.github/workflows/linux_only.yml b/.github/workflows/linux_only.yml index 54bab48d..c5e5223e 100644 --- a/.github/workflows/linux_only.yml +++ b/.github/workflows/linux_only.yml @@ -16,6 +16,7 @@ jobs: ES_PASSWORD: "" KIBANA_PASSWORD: "" AZURE_IP: "" + IP_ADDRESS: "" steps: - name: Checkout repository @@ -26,6 +27,9 @@ jobs: cd testing/v2/development echo "HOST_UID=$(id -u)" > .env echo "HOST_GID=$(id -g)" >> .env + PUBLIC_IP=$(curl -s https://api.ipify.org) + echo "IP_ADDRESS=$PUBLIC_IP" >> $GITHUB_ENV + - name: Start pipeline container run: | @@ -57,7 +61,7 @@ jobs: cd /home/lme-user/LME/testing/v2/installers && \ python3 ./azure/build_azure_linux_network.py \ -g pipe-${{ env.UNIQUE_ID }} \ - -s 0.0.0.0/0 \ + -s ${{ env.IP_ADDRESS }}/32 \ -vs Standard_E4d_v4 \ -l westus \ -ast 23:00 \ diff --git a/README.md b/README.md index 796a2106..973feaa2 100644 --- a/README.md +++ b/README.md @@ -1,18 +1,53 @@ -![N|Solid](/docs/imgs/cisa.png) - [![Downloads](https://img.shields.io/github/downloads/cisagov/lme/total.svg)]() -# Logging Made Easy: Podmanized -This will eventually be merged with the Readme file at [LME-README](https://github.com/cisagov/LME). + +# Logging Made Easy + +CISA's Logging Made Easy provides a self-install tutorial for organizations to gain a basic level of centralized security logging for Windows clients, along with functionality to detect attacks. LME is an integration of multiple open-source software platforms, which come at no cost to users.
LME helps users integrate software platforms to produce an end-to-end logging capability. LME also provides some pre-made configuration files and scripts, although there is the option to do this on your own. + +Logging Made Easy can: + +- Show where administrative commands are being run on enrolled devices +- See who is using which machine +- In conjunction with threat reports, it is possible to query for the presence of an attacker in the form of Tactics, Techniques and Procedures (TTPs) + +## Disclaimer: + +LME is still in development, and version 2.1 will address scaling out the deployment. + +While LME offers SIEM-like capabilities, it should be considered a small, simple SIEM. + +The LME team simplified the process and created clear instructions on what to download and which configurations to use, and created convenient scripts to auto-configure when possible. + +LME is not able to comment on or troubleshoot individual installations. If you believe you have found an issue with the LME code or documentation, please submit a GitHub issue. If you have a question about your installation, please look through all open and closed issues to see if it has been addressed before. If not, then submit a [GitHub issue](https://github.com/cisagov/lme/issues) using the Bug Template, ensuring that you provide all the requested information. + +For general questions about LME and suggestions, please visit [GitHub Discussions](https://github.com/cisagov/lme/discussions) to add a discussion post. + +## Who is Logging Made Easy for? + +LME is for everyone from single IT administrators with a handful of devices in their network to larger organizations. + +LME is suited for: + +- Organizations without a [SOC](https://en.wikipedia.org/wiki/Information_security_operations_center), SIEM, or any monitoring in place at the moment. +- Organizations that lack the budget, time or understanding to set up a logging system. +- Organizations that require gathering logs and monitoring IT systems +- Organizations that understand LME's limitations + ## Table of Contents: +- [Pre-Requisites:](#pre-requisites) - [Architecture:](#architecture) - [Installation:](#installation) - [Deploying Agents:](#deploying-agents) - [Password Encryption:](#password-encryption) -- [Further Documentation:](#documentation) +- [Further Documentation & Upgrading:](#documentation) + +## Pre-Requisites +If you are unsure you meet the prerequisites for installing LME, please read our [prerequisites documentation](/docs/markdown/prerequisites.md). +The biggest prerequisite is setting up hardware for your Ubuntu server with a minimum of `2 processors`, `16GB RAM`, and `128GB` of dedicated storage for LME's Elasticsearch database. ## Architecture: Ubuntu 22.04 server running podman containers setup as podman quadlets controlled via systemd. @@ -20,10 +55,11 @@ Ubuntu 22.04 server running podman containers setup as podman quadlets controlle ### Required Ports: Ports required are as follows: - Elasticsearch: *9200* - - Kibana: 443 + - Kibana: *443,5601* - Wazuh: *1514,1515,1516,55000,514* - Agent: *8220* +**Kibana NOTE**: 5601 is the default port, and we've set Kibana to listen on 443 as well. ### Diagram: @@ -40,7 +76,7 @@ Podman is more secure (by default) against container escape attacks than Docker. - Elastic agents provide integrations, have more features than winlogbeat. - wazuh-manager: runs the wazuh manager so we can deploy and manage wazuh agents.
- Wazuh (open source) gives EDR (Endpoint Detection Response) with security dashboards to cover the security of all of the machines. - - lme-frontend: will host an api and gui that unifies the architecture behind one interface + - lme-frontend (*coming in a future release*): will host an API and GUI that unifies the architecture behind one interface ### Agents: Wazuh agents will enable EDR capabilities, while Elastic agents will enable logging capabilities. - https://github.com/elastic/elastic-agent ## Installation: - -If you are unsure you meet the pre-requisites to installing LME, please read our [prerequisites documentation](/docs/markdown/prerequisites.md) Please ensure you follow all the configuration steps required below. +**Upgrading**: +If you are a previous user of LME and wish to upgrade from 1.4 -> 2.0, please see our [upgrade documentation](/docs/markdown/maintenance/upgrading.md). + ### Downloading LME: **All steps will assume you start in your cloned directory of LME on your ubuntu 22.04 server** @@ -79,7 +116,7 @@ in `setup` find the configuration for certificate generation and password settin `instances.yml` defines the certificates that will get created. The shellscripts initialize accounts and create certificates, and will run from their respective quadlet definitions `lme-setup-accts` and `lme-setup-certs` respectively. -Quadlet configuration for containers is in: `/quadlet/`. These are mapped to the root's systemd unit files, but will execute as the `lmed` user. +Quadlet configuration for containers is in: `/quadlet/`. These are mapped to the root's systemd unit files, but will execute as a non-privileged user. \***TO EDIT**:\* The only file that really needs to be touched is creating `/config/lme-environment.env`, which sets up the required environment variables @@ -110,7 +147,7 @@ You can run this installer to run the total install in ansible. ```bash sudo apt update && sudo apt install -y ansible # cd ~/LME-PRIV/lme-2-arch # Or path to your clone of this repo -ansible-playbook install_lme_local.yml +ansible-playbook ./scripts/install_lme_local.yml ``` This assumes that you have the repo in `~/LME/`. @@ -120,7 +157,6 @@ ansible-playbook ./scripts/install_lme_local.yml -e "clone_dir=/path/to/clone/di ``` This also assumes your user can sudo without a password. If you need to input a password when you sudo, you can run it with the `-K` flag and it will prompt you for a password. -There is a step that will fail, this is expected, it is checking for podman secrets to see if they exist... on an intial install none will exist :) #### Steps performed in automated install: TODO finalize this with more words 3. Setup Nix 4. set service user passwords 5. Install Quadlets -6. Setup Containers for root +6. Setup Containers for root: The containers listed in `$clone_directory/config/containers.txt` will be pulled and tagged 7. Start lme.service #### NOTES: -1. `/opt/lme` will be owned by the lmed user, all lme services will run and execute as lmed, and this ensures least privilege in lmed's execution because lmed is a non-admin,unprivileged user. +1. `/opt/lme` will be owned by root; all lme services will run and execute as unprivileged users. The active lme configuration is stored in `/opt/lme/config`. -3.
[this script](/scripts/set_sysctl_limits.sh) is executed via ansible AND will change unprivileged ports to start at 80, to allow kibana to listen on 443 from a user run container. If this is not desired, we will be publishing steps to setup firewall rules using ufw//iptables to manage the firewall on this host at a later time. - -4. the master password will be stored at `/etc/lme/pass.sh` and owned by root, while service user passwords will be stored at `/etc/lme/vault/` +2. Other relevant directories are listed here: +- `/root/.config/containers/containers.conf`: LME will set up a custom podman configuration for secrets management via [ansible vault](https://docs.ansible.com/ansible/latest/cli/ansible-vault.html). +- `/etc/lme`: storage directory for the master password and user password vault +- `/etc/lme/pass.sh`: the master password file +- `/etc/containers/systemd`: directory where LME installs its quadlet service files +- `/etc/systemd/system`: directory where lme.service is installed + +3. The master password will be stored at `/etc/lme/pass.sh` and owned by root, while service user passwords will be stored at `/etc/lme/vault/` ### Verification post install: sudo -i journalctl -xu lme.service #try resetting failed: sudo -i systemctl reset-failed lme* sudo -i systemctl restart lme.service -``` -2. Check you can connect to elasticsearch -```bash -#substitute your password below: -curl -k -u elastic:$(sudo -i ansible-vault view /etc/lme/vault/$(sudo -i podman secret ls | grep elastic | awk '{print $1}') | tr -d '\n') https://localhost:9200 +#also try inspecting container logs: +#CONTAINER_NAME=lme-elasticsearch +sudo -i podman logs -f $CONTAINER_NAME ``` -3. Check conatiners are running: +2. Check containers are running and healthy: ```bash sudo -i podman ps --format "{{.Names}} {{.Status}}" ``` lme-kibana Up 2 hours (healthy) lme-wazuh-manager Up About an hour lme-fleet-server Up 50 minutes ``` +We are working on getting health check commands for wazuh and fleet; currently they are not integrated + +3. Check you can connect to elasticsearch +```bash +#substitute your password below: +curl -k -u elastic:$(sudo -i ansible-vault view /etc/lme/vault/$(sudo -i podman secret ls | grep elastic | awk '{print $1}') | tr -d '\n') https://localhost:9200 +``` 4. Check you can connect to kibana +You can use an SSH proxy to forward a local port to the remote Linux host: ```bash -#connect via ssh -ssh -L 8080:localhost:443 [YOUR-LINUX-SERVER] +#connect via ssh if you need to +ssh -L 8080:localhost:5601 [YOUR-LINUX-SERVER] #go to browser: #https://localhost:8080 ``` @@ -246,7 +293,8 @@ systemctl start wazuh-agent From PowerShell with admin capabilities run the following command ``` -Invoke-WebRequest -Uri https://packages.wazuh.com/4.x/windows/wazuh-agent-4.7.5-1.msi -OutFile wazuh-agent-4.7.5-1.msi; Start-Process msiexec.exe -ArgumentList '/i wazuh-agent-4.7.5-1.msi /q WAZUH_MANAGER="IPADDRESS OF WAZUH HOST MACHINE"' -Wait -NoNewWindow +Invoke-WebRequest -Uri https://packages.wazuh.com/4.x/windows/wazuh-agent-4.7.5-1.msi -OutFile wazuh-agent-4.7.5-1.msi;` +Start-Process msiexec.exe -ArgumentList '/i wazuh-agent-4.7.5-1.msi /q WAZUH_MANAGER="IPADDRESS OF WAZUH HOST MACHINE"' -Wait -NoNewWindow ``` Start the service: @@ -265,12 +313,11 @@ NET START Wazuh ## Password Encryption: Password encryption is enabled using ansible-vault to store all lme user and lme service user passwords at rest.
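+As an illustration only, here is a minimal ansible-vault round-trip showing what that at-rest storage looks like. This is a hedged sketch: the master password file path comes from the notes above, but the secret value and the `<secret-id>` filename are placeholders, not LME's actual install code:
+```bash
+# Encrypt a string using the master password stored at /etc/lme/pass.sh
+ansible-vault encrypt_string --vault-password-file /etc/lme/pass.sh 'example-password' --name 'wazuh_api'
+
+# Decrypt and view a vaulted file again (the "grabbing passwords" section below automates this)
+ansible-vault view --vault-password-file /etc/lme/pass.sh /etc/lme/vault/<secret-id>
+```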
We do submit a hash of the password to Have I been pwned to check to see if it is compromised: [READ MORE HERE](https://haveibeenpwned.com/FAQs) + ### where are passwords stored?: ```bash # Define user-specific paths -USER_CONFIG_DIR="/root/.config/lme" -USER_VAULT_DIR="/opt/lme/vault" -USER_SECRETS_CONF="$USER_CONFIG_DIR/secrets.conf" +USER_VAULT_DIR="/etc/lme/vault" PASSWORD_FILE="/etc/lme/pass.sh" ``` @@ -288,29 +335,36 @@ lme-user@ubuntu:~/LME-TEST$ sudo -i ${PWD}/scripts/password_management.sh -h ### grabbing passwords: To view the appropriate service user password use ansible-vault, as root: ``` +#script: +$CLONE_DIRECTORY/scripts/extract_secrets.sh -p #to print + +#add them as variables to your current shell +source $CLONE_DIRECTORY/scripts/extract_secrets.sh #without printing values +source $CLONE_DIRECTORY/scripts/extract_secrets.sh -q #with no output + +## manually: #where wazuh_api is the service user whose password you want: sudo -i ansible-vault view /etc/lme/vault/$(sudo -i podman secret ls | grep wazuh_api | awk '{print $1}') - - # Documentation: ### Logging Guidance - [LME in the CLOUD](/docs/markdown/logging-guidance/cloud.md) - - [Log Retention](/docs/markdown/logging-guidance/retention.md) TODO update to be current + - [Log Retention](/docs/markdown/logging-guidance/retention.md) *TODO*: change link to new documentation - [Additional Log Types](/docs/markdown/logging-guidance/other-logging.md) -### Reference: TODO update these to current - - [FAQ](/docs/markdown/reference/faq.md) - - [Troubleshooting](/docs/markdown/reference/troubleshooting.md) +## Reference: + - [FAQ](/docs/markdown/reference/faq.md) *TODO* + - [Troubleshooting](/docs/markdown/reference/troubleshooting.md) *TODO* - [Dashboard Descriptions](/docs/markdown/reference/dashboard-descriptions.md) - [Guide to Organizational Units](/docs/markdown/chapter1/guide_to_ous.md) - [Security Model](/docs/markdown/reference/security-model.md) - - [DEV NOTES](/docs/markdown/reference/dev-notes) -### Maintenance: - - [Backups](/docs/markdown/maintenance/backups.md) - - [Upgrading](/docs/markdown/maintenance/upgrading.md) - - [Certificates](/docs/markdown/maintenance/certificates.md) - +## Maintenance: + - [Backups](/docs/markdown/maintenance/backups.md) *TODO* change link to new documentation + - [Upgrading 1x -> 2x](/scripts/upgrade/README.md) + - [Certificates](/docs/markdown/maintenance/certificates.md) *TODO* + +## Agents: +*TODO* add in docs in new documentation diff --git a/scripts/install_lme_local.yml b/ansible/install_lme_local.yml similarity index 98% rename from scripts/install_lme_local.yml rename to ansible/install_lme_local.yml index 5ebbff5f..d6e849c9 100644 --- a/scripts/install_lme_local.yml +++ b/ansible/install_lme_local.yml @@ -244,7 +244,6 @@ set_fact: ansible_env: "{{ ansible_env | combine({'PATH': ansible_env.PATH ~ ':/nix/var/nix/profiles/default/bin'}) }}" - - name: Update PATH in user's profile lineinfile: path: "~/.profile" @@ -291,7 +290,9 @@ args: executable: /bin/bash ignore_errors: true - + #only fail on a real error + failed_when: result.rc != 0 and (result.rc == 1 and result.changed == false) + - name: Set podman secret passwords shell: | source /root/.profile @@ -306,7 +307,8 @@ - wazuh_api - wazuh become: yes - when: result is failed + ## only run this when the podman secret check above failed (i.e. the secrets don't exist yet) + when: result.rc == 1 - name: Install Quadlets hosts: localhost diff --git a/ansible/set_fleet.yml b/ansible/set_fleet.yml new file mode 100644 index 00000000..d7839383 --- /dev/null +++ b/ansible/set_fleet.yml @@ -0,0 +1,310 @@
+--- +- name: Set up Fleet + hosts: localhost + become: yes + gather_facts: no + + vars: + headers: + kbn-version: "8.12.2" + kbn-xsrf: "kibana" + Content-Type: "application/json" + max_retries: 60 + delay_seconds: 10 + debug_mode: false + + tasks: + - name: Read lme-environment.env file + ansible.builtin.slurp: + src: /opt/lme/lme-environment.env + register: lme_env_content + + - name: Set environment variables + ansible.builtin.set_fact: + env_dict: "{{ env_dict | default({}) | combine({ item.split('=', 1)[0]: item.split('=', 1)[1] }) }}" + loop: "{{ (lme_env_content['content'] | b64decode).split('\n') }}" + when: item != '' and not item.startswith('#') + + - name: Display set environment variables + debug: + msg: "Set {{ item.key }}" + loop: "{{ env_dict | dict2items }}" + when: item.value | length > 0 + + - name: Source extract_secrets + ansible.builtin.shell: | + set -a + . {{ playbook_dir }}/../scripts/extract_secrets.sh -q + echo "elastic=$elastic" + echo "wazuh=$wazuh" + echo "kibana_system=$kibana_system" + echo "wazuh_api=$wazuh_api" + args: + executable: /bin/bash + register: extract_secrets_vars + no_log: "{{ not debug_mode }}" + + - name: Set secret variables + ansible.builtin.set_fact: + env_dict: "{{ env_dict | combine({ item.split('=', 1)[0]: item.split('=', 1)[1] }) }}" + loop: "{{ extract_secrets_vars.stdout_lines }}" + no_log: "{{ not debug_mode }}" + + - name: Set playbook variables + ansible.builtin.set_fact: + ipvar: "{{ env_dict.IPVAR | default('') }}" + local_kbn_url: "{{ env_dict.LOCAL_KBN_URL | default('') }}" + local_es_url: "{{ env_dict.LOCAL_ES_URL | default('') }}" + stack_version: "{{ env_dict.STACK_VERSION | default('') }}" + cluster_name: "{{ env_dict.CLUSTER_NAME | default('') }}" + elastic_username: "{{ env_dict.ELASTIC_USERNAME | default('') }}" + elasticsearch_username: "{{ env_dict.ELASTICSEARCH_USERNAME | default('') }}" + kibana_fleet_username: "{{ env_dict.KIBANA_FLEET_USERNAME | default('') }}" + indexer_username: "{{ env_dict.INDEXER_USERNAME | default('') }}" + api_username: "{{ env_dict.API_USERNAME | default('') }}" + license: "{{ env_dict.LICENSE | default('') }}" + es_port: "{{ env_dict.ES_PORT | default('') }}" + kibana_port: "{{ env_dict.KIBANA_PORT | default('') }}" + fleet_port: "{{ env_dict.FLEET_PORT | default('') }}" + mem_limit: "{{ env_dict.MEM_LIMIT | default('') }}" + elastic_password: "{{ env_dict.elastic | default('') }}" + wazuh_password: "{{ env_dict.wazuh | default('') }}" + kibana_system_password: "{{ env_dict.kibana_system | default('') }}" + wazuh_api_password: "{{ env_dict.wazuh_api | default('') }}" + + - name: Debug - Display set variables (sensitive information redacted) + debug: + msg: + - "ipvar: {{ ipvar }}" + - "local_kbn_url: {{ local_kbn_url }}" + - "local_es_url: {{ local_es_url }}" + - "elastic_username: {{ elastic_username }}" + - "stack_version: {{ stack_version }}" + - "cluster_name: {{ cluster_name }}" + - "elasticsearch_username: {{ elasticsearch_username }}" + - "kibana_fleet_username: {{ kibana_fleet_username }}" + - "indexer_username: {{ indexer_username }}" + - "api_username: {{ api_username }}" + - "license: {{ license }}" + - "es_port: {{ es_port }}" + - "kibana_port: {{ kibana_port }}" + - "fleet_port: {{ fleet_port }}" + - "mem_limit: {{ mem_limit }}" + - "elastic password is set: {{ elastic_password | length > 0 }}" + - "wazuh password is set: {{ wazuh_password | length > 0 }}" + - "kibana_system password is set: {{ kibana_system_password | length > 0 }}" + - "wazuh_api password is set: {{ 
wazuh_api_password | length > 0 }}" + when: debug_mode | bool + + - name: Wait for Kibana port to be available + wait_for: + host: "{{ ipvar }}" + port: "{{ kibana_port | int }}" + timeout: 300 + register: kibana_port_check + + - name: Wait for Fleet API to be ready + ansible.builtin.shell: | + attempt=0 + max_attempts=30 + delay=10 + while [ $attempt -lt $max_attempts ]; do + response=$(curl -s -o /dev/null -w "%{http_code}" -k -u elastic:{{ elastic_password }} {{ local_kbn_url }}/api/fleet/agents/setup) + if [ "$response" = "200" ]; then + echo "Fleet API is ready. Proceeding with configuration..." + exit 0 + fi + echo "Waiting for Fleet API to be ready..." + sleep $delay + attempt=$((attempt+1)) + done + echo "Fleet API did not become ready within the expected time." + exit 1 + register: fleet_api_check + changed_when: false + no_log: "{{ not debug_mode }}" + + - name: Display Fleet API check result + debug: + var: fleet_api_check.stdout_lines + + - name: Confirm Fleet API is ready + debug: + msg: "Fleet API is ready" + when: "'Fleet API is ready' in fleet_api_check.stdout" + + - name: Fail if Fleet API is not ready + fail: + msg: "Fleet API did not become ready within the expected time." + when: "'Fleet API is ready' not in fleet_api_check.stdout" + + - name: Get CA fingerprint + ansible.builtin.shell: | + sudo bash -c ' + set -a + . {{ playbook_dir }}/../scripts/extract_secrets.sh -q + set +a + /nix/var/nix/profiles/default/bin/podman exec -w /usr/share/elasticsearch/config/certs/ca lme-elasticsearch cat ca.crt | openssl x509 -noout -fingerprint -sha256 | cut -d "=" -f 2 | tr -d : | head -n1 + ' + register: ca_fingerprint + changed_when: false + become: yes + become_method: sudo + no_log: "{{ not debug_mode }}" + + - name: Display CA fingerprint + debug: + var: ca_fingerprint.stdout + when: + - ca_fingerprint is defined + - ca_fingerprint.stdout is defined + - debug_mode | bool + + - name: Set Fleet server hosts + uri: + url: "{{ local_kbn_url }}/api/fleet/settings" + method: PUT + user: "{{ elastic_username }}" + password: "{{ elastic_password }}" + force_basic_auth: yes + validate_certs: no + headers: "{{ headers }}" + body_format: json + body: + fleet_server_hosts: ["https://{{ ipvar }}:{{ fleet_port }}"] + register: fleet_server_hosts_result + no_log: "{{ not debug_mode }}" + ignore_errors: yes + + - name: Debug Fleet server hosts result + debug: + var: fleet_server_hosts_result + when: fleet_server_hosts_result is defined and debug_mode | bool + + - name: Set Fleet default output hosts + uri: + url: "{{ local_kbn_url }}/api/fleet/outputs/fleet-default-output" + method: PUT + user: "{{ elastic_username }}" + password: "{{ elastic_password }}" + force_basic_auth: yes + validate_certs: no + headers: "{{ headers }}" + body_format: json + body: + hosts: ["https://{{ ipvar }}:9200"] + register: fleet_output_hosts_result + no_log: "{{ not debug_mode }}" + ignore_errors: yes + + - name: Debug Fleet default output hosts result + debug: + var: fleet_output_hosts_result + when: fleet_output_hosts_result is defined + + - name: Set Fleet default output CA trusted fingerprint + uri: + url: "{{ local_kbn_url }}/api/fleet/outputs/fleet-default-output" + method: PUT + user: "{{ elastic_username }}" + password: "{{ elastic_password }}" + force_basic_auth: yes + validate_certs: no + headers: "{{ headers }}" + body_format: json + body: + ca_trusted_fingerprint: "{{ ca_fingerprint.stdout }}" + register: fleet_output_fingerprint_result + no_log: "{{ not debug_mode }}" + + - name: Set Fleet default 
output SSL verification mode + uri: + url: "{{ local_kbn_url }}/api/fleet/outputs/fleet-default-output" + method: PUT + user: "{{ elastic_username }}" + password: "{{ elastic_password }}" + force_basic_auth: yes + validate_certs: no + headers: "{{ headers }}" + body_format: json + body: + config_yaml: "ssl.verification_mode: certificate" + register: fleet_output_ssl_result + no_log: "{{ not debug_mode }}" + + - name: Create Endpoint Policy + uri: + url: "{{ local_kbn_url }}/api/fleet/agent_policies?sys_monitoring=true" + method: POST + user: "{{ elastic_username }}" + password: "{{ elastic_password }}" + force_basic_auth: yes + validate_certs: no + headers: "{{ headers }}" + body_format: json + body: + name: "Endpoint Policy" + description: "" + namespace: "default" + monitoring_enabled: ["logs", "metrics"] + inactivity_timeout: 1209600 + timeout: 600 + register: endpoint_policy_result + no_log: "{{ not debug_mode }}" + + - name: Get Endpoint package version + uri: + url: "{{ local_kbn_url }}/api/fleet/epm/packages/endpoint" + method: GET + user: "{{ elastic_username }}" + password: "{{ elastic_password }}" + force_basic_auth: yes + validate_certs: no + headers: "{{ headers }}" + register: endpoint_package_result + no_log: "{{ not debug_mode }}" + + - name: Create Elastic Defend package policy + uri: + url: "{{ local_kbn_url }}/api/fleet/package_policies" + method: POST + user: "{{ elastic_username }}" + password: "{{ elastic_password }}" + force_basic_auth: yes + validate_certs: no + headers: "{{ headers }}" + body_format: json + timeout: 600 + body: + name: "Elastic Defend" + description: "" + namespace: "default" + policy_id: "{{ endpoint_policy_result.json.item.id }}" + enabled: true + inputs: + - enabled: true + streams: [] + type: "ENDPOINT_INTEGRATION_CONFIG" + config: + _config: + value: + type: "endpoint" + endpointConfig: + preset: "EDRComplete" + package: + name: "endpoint" + title: "Elastic Defend" + version: "{{ endpoint_package_result.json.item.version }}" + register: elastic_defend_policy_result + no_log: "{{ not debug_mode }}" + + - name: Display results + debug: + var: "{{ item }}" + loop: + - fleet_server_hosts_result + - fleet_output_hosts_result + - fleet_output_fingerprint_result + - fleet_output_ssl_result + - endpoint_policy_result + - elastic_defend_policy_result diff --git a/config/setup/init-setup.sh b/config/setup/init-setup.sh index c5e9ccc2..9884d2c3 100644 --- a/config/setup/init-setup.sh +++ b/config/setup/init-setup.sh @@ -24,12 +24,13 @@ if [ ! -f "${CERTS_DIR}/certs.zip" ]; then elasticsearch-certutil cert --silent --pem --in "${INSTANCES_PATH}" --out "${CERTS_DIR}/certs.zip" --ca-cert "${CERTS_DIR}/ca/ca.crt" --ca-key "${CERTS_DIR}/ca/ca.key" unzip -o "${CERTS_DIR}/certs.zip" -d "${CERTS_DIR}" cat "${CERTS_DIR}/elasticsearch/elasticsearch.crt" "${CERTS_DIR}/ca/ca.crt" > "${CERTS_DIR}/elasticsearch/elasticsearch.chain.pem" -fi -echo "Setting file permissions... certs" -chown -R elasticsearch:elasticsearch "${CERTS_DIR}" -find "${CERTS_DIR}" -type d -exec chmod 755 {} \; -find "${CERTS_DIR}" -type f -exec chmod 644 {} \; + echo "Setting file permissions... certs" + chown -R elasticsearch:elasticsearch "${CERTS_DIR}" + find "${CERTS_DIR}" -type d -exec chmod 755 {} \; + find "${CERTS_DIR}" -type f -exec chmod 644 {} \; + + echo "Setting file permissions... data" + chown -R elasticsearch:elasticsearch "${DATA_DIR}" +fi -echo "Setting file permissions... 
data" -chown -R elasticsearch:elasticsearch "${DATA_DIR}" diff --git a/config/vault-pass.sh b/config/vault-pass.sh deleted file mode 100755 index b0f7b8b3..00000000 --- a/config/vault-pass.sh +++ /dev/null @@ -1,2 +0,0 @@ -#!/bin/bash -echo $LME_ANSIBLE_VAULT_PASS diff --git a/docs/markdown/logging-guidance/cloud.md b/docs/markdown/logging-guidance/cloud.md index b8da5737..56ad50e5 100644 --- a/docs/markdown/logging-guidance/cloud.md +++ b/docs/markdown/logging-guidance/cloud.md @@ -5,6 +5,7 @@ These docs attempt to answer some FAQ and other documentation around Logging Mad ## Does LME run in the cloud? Yes, Logging Made easy is a simple client-server model, and Logging Made Easy can be deployed in the cloud for cloud infrastructure or in the cloud for on-prem machines. + ### Deploying LME in the cloud for on prem systems: In order for the LME agents to talk to LME in the cloud you'll need to ensure the clients you want to monitor can communicate through: 1) the cloud firewall AND 2) logging Made easy's own server firewall. @@ -12,11 +13,11 @@ In order for the LME agents to talk to LME in the cloud you'll need to ensure th The easiest way is to make sure you can hit these LME server ports from the on-prem client: - WAZUH ([DOCS](https://documentation.wazuh.com/current/user-manual/agent/agent-enrollment/requirements.html)): 1514,1515 - - Agent ([DOCS](https://www.elastic.co/guide/en/elastic-stack/current/installing-stack-demo-self.html#install-stack-self-elastic-agent)): 8220 + - Agent ([DOCS](https://www.elastic.co/guide/en/elastic-stack/current/installing-stack-demo-self.html#install-stack-self-elastic-agent)): 8220 -You'll need to make sure the Cloud firewall is setup to allow those ports. On azure, this is a NSG rule you'll need to set for the LME virtual machine. +You'll need to make sure your Cloud firewall is setup to allow those ports. On azure, network security groups (NSG) run a firewall on your virtual machines network interfaces. You'll need to update your LME virtual machine's rules to allow inbound connections on the agent ports. 
Azure has a detailed guide for how to add security rules [here](https://learn.microsoft.com/en-us/azure/virtual-network/manage-network-security-group?tabs=network-security-group-portal#create-a-security-rule) -Then on LME, you'll want to make sure you have either the firewall disabled (if you're using hte cloud firewall as the main firewall): +Then on LME, you'll want to make sure you have either the firewall disabled (if you're using the cloud firewall as the main firewall): ``` lme-user@ubuntu:~$ sudo ufw status Status: inactive ``` @@ -38,6 +39,64 @@ To Action From 8220 (v6) ALLOW Anywhere (v6) ``` +You can add the above ports to ufw via the following command: +``` +sudo ufw allow 1514 +sudo ufw allow 1515 +sudo ufw allow 8220 +``` + +In addition, you'll need to set up rules to forward traffic to the container network: +``` +ufw allow in on eth0 out on podman1 to any port +``` +There's a helpful Stack Overflow article on why: [LINK](https://stackoverflow.com/questions/70870689/configure-ufw-for-podman-on-port-443) +Your `podman1` interface name may be different; check the output of your network interfaces and see if yours is also called `podman1`: +``` +sudo -i podman network inspect lme | jq 'map(select(.name == "lme")) | map(.network_interface) | .[]' +``` + ### Deploying LME for cloud infrastructure: Every cloud setup is different, but as long as the LME server is on the same network and able to talk to the machines you want to monitor everything should be good to go. + +## Other firewall rules +You may also want to access Kibana from outside the cloud. You'll want to make sure you either allow port `5601` or port `443` inbound from the cloud firewall AND virtual machine firewall. + +``` +root@ubuntu:/opt/lme# sudo ufw allow 443 +Rule added +Rule added (v6) +``` + +``` +root@ubuntu:/opt/lme# sudo ufw status +Status: active + +To Action From +-- ------ ---- +22 ALLOW Anywhere +1514 ALLOW Anywhere +1515 ALLOW Anywhere +8220 ALLOW Anywhere +443 ALLOW Anywhere +22 (v6) ALLOW Anywhere (v6) +1514 (v6) ALLOW Anywhere (v6) +1515 (v6) ALLOW Anywhere (v6) +8220 (v6) ALLOW Anywhere (v6) +443 (v6) ALLOW Anywhere (v6) +``` + +### Don't lock yourself out when enabling the firewall + +You also probably don't want to lock yourself out of ssh, so make sure to allow port 22! +``` +sudo ufw allow 22 +``` + +Enable ufw: +``` +sudo ufw enable +``` + + diff --git a/docs/markdown/maintenance/upgrading.md b/docs/markdown/maintenance/upgrading.md index 5f48ea70..bb947a0e 100644 --- a/docs/markdown/maintenance/upgrading.md +++ b/docs/markdown/maintenance/upgrading.md @@ -1,148 +1,6 @@ # Upgrading -Please see https://github.com/cisagov/LME/releases/ for our latest release. +This page serves as a landing page for future upgrade instructions when we release new versions. -Below you can find the upgrade paths that are currently supported and what steps are required for these upgrades. Note that major version upgrades tend to include significant changes, and so will require manual intervention and will not be automatically applied, even if auto-updates are enabled. -Applying these changes is automated for any new installations. But, if you have an existing installation, you need to conduct some extra steps. **Before performing any of these steps it is advised to take a backup of the current installation using the method described [here](/docs/markdown/maintenance/backups.md).** -## 1.
Finding your LME version (and the components versions) -When reporting an issue or suggesting improvements, it is important to include the versions of all the components, where possible. This ensures that the issue has not already been fixed! - -### 1.1. Windows Server -* Operating System: Press "Windows Key"+R and type ```winver``` -* WEC Config: Open EventViewer > Subscriptions > "LME" > Description should contain version number -* Winlogbeat Config: At the top of the file C:\Program Files\lme\winlogbeat.yml there should be a version number. -* Winlogbeat.exe version: Using PowerShell, navigate to the location of the Winlogbeat executable ("C:\Program Files\lme\winlogbeat-x.x.x-windows-x86_64") and run `.\winlogbeat version`. -* Sysmon config: From either the top of the file or look at the status dashboard -* Sysmon executable: Either run sysmon.exe or look at the status dashboard - -### 1.2. Linux Server -* Docker: on the Linux server type ```docker --version``` -* Linux: on the Linux server type ```cat /etc/os-release``` -* Logstash config: on the Linux server type ```sudo docker config inspect logstash.conf --pretty``` - - -## 2. Upgrade from versions prior to v0.5 -LME does not support upgrading directly from versions prior to v0.5 to v1.0. Prior to switching to CISA's repo, first upgrade to the latest version of LME published by the NCSC (v0.5.1). Then follow the instructions above to upgrade to v1.0. - - -## 3. Upgrade from v0.5 to v1.0.0 - -Since LME's transition from the NCSC to CISA, the location of the LME repository has changed from `https://github.com/ukncsc/lme` to `https://github.com/cisagov/lme`. To obtain any further updates to LME on the ELK server, you will need to transition to the new git repository. Because vital configuration files are stored within the same folder as the git repo, it's simpler to copy the old LME folder to a different location, clone the new repo, copy the files and folders unique to your system, and then optionally delete the old folder. You can do this by running the following commands: - - -``` -sudo mv /opt/lme /opt/lme_old -sudo git clone https://github.com/cisagov/lme.git /opt/lme -sudo cp -r /opt/lme_old/Chapter\ 3\ Files/certs/ /opt/lme/Chapter\ 3\ Files/ -sudo cp /opt/lme_old/Chapter\ 3\ Files/docker-compose-stack-live.yml /opt/lme/Chapter\ 3\ Files/ -sudo cp /opt/lme_old/Chapter\ 3\ Files/get-docker.sh /opt/lme/Chapter\ 3\ Files/ -sudo cp /opt/lme_old/Chapter\ 3\ Files/logstash.edited.conf /opt/lme/Chapter\ 3\ Files/ -sudo cp /opt/lme_old/files_for_windows.zip /opt/lme/ -sudo cp /opt/lme_old/lme.conf /opt/lme/ -sudo cp /opt/lme_old/lme_update.sh /opt/lme/ -``` -Finally, you'll need to grab your old dashboard_update password and add it into the new dashboard_update script: -``` -OLD_Password=[OLD_PASSWORD_HERE] -sudo cp /opt/lme/Chapter\ 3\ Files/dashboard_update.sh /opt/lme/ -sed -i "s/dashboardupdatepassword/$OLD_Password/g" /opt/lme/dashboard_update.sh -``` - - -### 3.1. ELK Stack Update -You can update the ELK stack portion of LME to v1.0 (including dashboards and ELK stack containers) by running the following on the Linux server: - -``` -cd /opt/lme/Chapter\ 3\ Files/ -sudo ./deploy.sh upgrade -``` -**The last step of this script makes all files only readable by their owner in /opt/lme, so that all root owned files with passwords in them are only readable by root. 
This prevents a local unprivileged user from gaining access to the elastic stack.** - -Once the deploy update is finished, next update the dashboards that are provided alongside LME to the latest version. This can be done by running the below script, with more detailed instructions available [here](/docs/markdown/chapter4.md#411-import-initial-dashboards): - -\*\**NOTE:*\*\* *You may need to wait several minutes for Kibana to successfully initialize after the update before running this script during the upgrade process. If you encounter a "Failed to connect" error or an "Entity Too Large" error wait for several minutes before trying again.* - -##### Optional Substep: Clear out old dashboards -**Skip this step if you don't want to clear out the old dashboards** - -The LME team will not be maintaining any old dashboards from the old NCSC LME version, so if you would like to clean up your LME you can remove the dashboards by navigating to: https:///app/management/kibana/objects - -From there select all the dashboards in the search: `type:(dashboard)` and delete them. -Then you can re-import the new dashboards like above. - -If you have any custom dashboards you should download them manually and add them to the repo as discussed in the new dashboard's folder [README](/Chapter 4 Files/dashboards/Readme.md). - -Most data from the old LME should display just fine in the new dashboards, but there could be some issues, so please feel free to file an issue if there are problems. - - -``` -sudo /opt/lme/dashboard_update.sh -``` - -The rules built-in to the Elastic SIEM can then be updated to the latest version by following the instructions listed in [Chapter 4](/docs/markdown/chapter4.md#42-enable-the-detection-engine) and selecting the option to update the prebuilt rules when prompted, before making sure all of the rules are activated: - -![Update Rules](/docs/imgs/update-rules.png) - - - -### 3.2. Winlogbeat Update -The winlogbeat.yml file used with LME v0.5.1 is not compatible with Winlogbeat 8.5.0, the version used with LME v1.0. As such, running `./deploy.sh update` from step 1.1.1 regenerates a new config file. - -**Your client may still authenticate and push logs to elasticsearch, but for both the security of the client and your LME setup we suggest you still update** - -To update Winlogbeat: -1. Copy files_for_windows.zip to the Event Collector, following the instructions listed under [3.2.4 Download Files for Windows Event Collector](/docs/markdown/chapter3/chapter3.md#324-download-files-for-windows-event-collector). -2. From an elevated PowerShell session, navigate to the location of the Winlogbeat executable ("C:\Program Files\lme\winlogbeat-x.x.x-windows-x86_64\") and then run `./uninstall-service-winlogbeat.ps1` -3. Re-install Winlogbeat, using the new copy of files_for_windows.zip, following the instructions listed under [3.3 Configuring Winlogbeat on Windows Event Collector Server](/docs/markdown/chapter3/chapter3.md#33-configuring-winlogbeat-on-windows-event-collector-server) - -### 3.3. Network Share Updates -LME v1.0 made a minor change to the file structure used in the SYSVOL folder, so a few manual changes are needed to accommodate this. -1. Set up the SYSVOL folder as described in [2.2.1 - Folder Layout](/docs/markdown/chapter2.md#221---folder-layout). -2. Replace the old version of update.bat with the [latest version](/Chapter%202%20Files/GPO%20Deployment/update.bat). -3. 
Update the path to update.bat used in the LME-Sysmon-Task GPO (refer to [2.2.3 - Scheduled task GPO Policy](/docs/markdown/chapter2.md#223---scheduled-task-gpo-policy)). - -### 3.4. Checklist -1. Have the ELK stack components been upgraded on the Linux server? While on the Linux server, run `sudo docker ps | grep lme`. Version 8.7.1 of Logstash, Kibana, and Elasticsearch should be running. -2. Has Winlogbeat been updated to version 8.5.0? From Event Collector, using PowerShell, navigate to the location of the Winlogbeat executable ("C:\Program Files\lme\winlogbeat-x.x.x-windows-x86_64") and run `.\winlogbeat version`. -3. Is the LME folder inside SYSVOL properly structured? Refer to the checklist listed at the end of chapter 2. -4. Are the events from all clients visible inside elastic? Refer to [4.1.2 Check you are receiving logs](/docs/markdown/chapter4.md#412-check-you-are-receiving-logs). - -## 4. Upgrade to v1.3.1 - -This is a hotfix to the install script and some additional troubleshooting steps added to documentation on space management. Unless you're encountering problems with your current installation, or if your logs are running out of space, there's no need to upgrade to v1.3.1, as it doesn't offer any additional functionality changes. - -## 5. Upgrade to v1.3.2 - -This is a hotfix to address dashboards which failed to load on a fresh install of v1.3.1. If you are currently running v1.3.0, you do not need to upgrade at this time. If you are running versions **before** 1.3.0 or are running v1.3.1, we recommend you upgrade to the latest version. - -Please refer to the [Upgrading to latest version](/docs/markdown/maintenance/upgrading.md#upgrading-to-latest-version) to apply the hotfix. - -## 6. v1.3.3 - Update on data retention failure during LME install - -This is a hotfix to address an error with data retention failure in the deploy.sh script during a fresh LME install. We recommend you upgrade to the latest version if you require disk sizes of 1TB or greater. - -If you've tried to install LME before, then run the following commands as root: -``` -git pull -git checkout main -cd /opt/lme/Chapter\ 3\ Files/ -sudo ./deploy.sh uninstall -sudo docker volume rm lme-esdata -sudo docker volume rm lme-logstashdata -sudo ./deploy.sh install -``` - -## 7. Upgrade to latest version -To fetch the latest changes, on the Linux server, run the following commands as root: -``` -git pull -git checkout main -cd /opt/lme/Chapter\ 3\ Files/ -sudo ./deploy.sh uninstall -sudo ./deploy.sh install -``` - -The deploy.sh script should have now created new files on the Linux server at location /opt/lme/files_for_windows.zip . This file needs to be copied across and used on the Windows Event Collector server like it was explained in Chapter 3 sections [3.2.4 & 3.3 ](/docs/markdown/chapter3/chapter3.md#324-download-files-for-windows-event-collector). +Currently the only upgrade path is from 1.4 -> 2.0 [HERE](/scripts/upgrade/README.md). diff --git a/docs/markdown/prerequisites.md b/docs/markdown/prerequisites.md index f34e9ed0..039478cc 100644 --- a/docs/markdown/prerequisites.md +++ b/docs/markdown/prerequisites.md @@ -3,26 +3,21 @@ ## What kind of IT skills do I need to install LME? - The LME project can be installed by someone at the skill level of a systems administrator or enthusiast. 
If you have ever… - * Installed a Windows server and connected it to an Active Directory domain -* Ideally deployed a Group Policy Object (GPO) * Changed firewall rules * Installed a Linux operating system, and logged in over SSH. - … then you are likely to have the skills to install LME! -We estimate that you should allow a couple of days to run through the entire installation process, though you can break up the process to fit your schedule. While we have automated steps where we can and made the instructions as detailed as possible, installation will require more steps than simply using an installation wizard. +We estimate that you should allow a couple of hours to run through the entire installation process. While we have automated steps where we can and made the instructions as detailed as possible, installation will require more steps than simply using an installation wizard. ## High level overview diagram of the LME system -![High level overview](/docs/imgs/chapter_overview.jpg) -

-Figure 1: High level overview, linking to documentation chapters

+![diagram](/docs/imgs/lme-architecture-v2.jpg) + +Please see the [main readme](/README.md#Diagram) for a more detailed description. ## How much does LME cost? @@ -44,51 +39,55 @@ Text in **bold** means that you have to make a decision or take an action that n Text in *italics* is an easy way of doing something, such as running a script. Double check you are comfortable doing this. A longer, manual, way is also provided. -``` Text in boxes is a command you need to type ``` - +``` +Text in boxes is a command you need to type +``` You should follow each chapter in order, and complete the checklist at the end before continuing. ## Scaling the solution To keep LME simple, our guide only covers single server setups. It’s difficult to estimate how much load the single server setup will take. -It’s possible to scale the solution to multiple event collectors and ELK nodes, but that will require more experience with the technologies involved. +It’s possible to scale the solution to multiple event collectors and ELK nodes, but that will require more experience with the technologies involved. We plan to publish documentation for scaling LME in the future. ## Required infrastructure To begin your Logging Made Easy installation, you will need access to (or creation of) the following servers: -* A Domain Controller to administer a Windows Active Directory. This is for deploying Group Policy Objects (GPO) * A server with 2 processor cores and at least 8GB RAM. We will install the Windows Event Collector Service on this machine, set it up as a Windows Event Collector (WEC), and join it to the domain. - * If budget allows, we recommend having a dedicated server for Windows Event collection. If this is not possible, the WEC can be setup on an existing server, but consider the performance impacts. - * The WEC server can be Windows Server 2016 (or later) or Windows 8.1 client (or later) -* A Debian-based Linux server. We will install our database (Elasticsearch) and dashboard software on this machine. This is all taken care of through Docker containers. +* An Ubuntu Linux 22.04 server. We will install our database (Elasticsearch) and dashboard software on this machine. This is all taken care of through Podman containers. ### Minimum Hardware Requirements: - - CPU: 2 processor cores, + - CPU: 2 processor cores, 4+ recommended - MEMORY: 16GB RAM, (32GB+ recommended by [Elastic](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-hardware-prereq.html)), - STORAGE: dedicated 128GB storage for ELK (not including storage for OS and other files) - This is estimated to only support ~17 clients of log streaming data/day, and Elasticsearch will automatically purge old logs to make space for new ones. We **highly** suggest more storage than 128GB for any other sized enterprise network. - -### Notes: - * **DO NOT install Docker from the "Featured Snaps" section of the Ubuntu Server install procedure, we install the Docker community edition later.** - * The deploy script has only been tested on Ubuntu: `18.04` Long Term Support (LTS) and `22.04` LTS. + +#### Confirm these settings: +To check memory, run this command and look under the "free" column: +```bash +$ free -h +total used free shared buff/cache available +Mem: 31Gi 6.4Gi 22Gi 4.0Mi 2.8Gi 24Gi +Swap: 0B 0B 0B +``` + +To check the number of CPUs: +```bash +$ lscpu | egrep 'CPU\(s\)' +``` + +To check hardware storage, note that typically `/dev/root` will be your main filesystem.
The number of gigabytes available is in the Avail column: +```bash +$ df -h +Filesystem Size Used Avail Use% Mounted on +/dev/root 124G 13G 112G 11% / +``` ## Where to install the servers Servers can be either on premise, in a public cloud or private cloud. It is your choice, but you'll need to consider how to network between the clients and servers. ## What firewall rules are needed? +TODO -![Overview of Network rules](/docs/imgs/troubleshooting-overview.jpg) -

-Figure 1: Overview of Network rules
-
-| Diagram Reference | Protocol information |
-| :---: |-------------|
-| a | Outbound WinRM using TCP 5985.<br><br>Link is HTTP, underlying data is authenticated and encrypted with Kerberos.<br><br>See [this Microsoft article](https://docs.microsoft.com/en-us/windows/security/threat-protection/use-windows-event-forwarding-to-assist-in-intrusion-detection) for more information |
-| b | Inbound WinRM TCP 5985.<br><br>Link is HTTP, underlying data is authenticated and encrypted with Kerberos.<br><br>See [this Microsoft article](https://docs.microsoft.com/en-us/windows/security/threat-protection/use-windows-event-forwarding-to-assist-in-intrusion-detection) for more information<br><br>(optional) Inbound TCP 3389 for Remote Desktop management |
-| c | Outbound TCP 5044.<br><br>Lumberjack protocol using TLS mutual authentication. |
-| d | Inbound TCP 5044.<br><br>Lumberjack protocol using TLS mutual authentication.<br><br>Inbound TCP 443 for dashboard access<br><br>
(optional) Inbound TCP 22 for SSH management | -## Now move onto [Chapter 1 – Setup Windows Event Forwarding](/docs/markdown/chapter1/chapter1.md) diff --git a/docs/markdown/reference/dev-notes.md b/docs/markdown/reference/dev-notes.md deleted file mode 100644 index b4dfbeba..00000000 --- a/docs/markdown/reference/dev-notes.md +++ /dev/null @@ -1,163 +0,0 @@ -# Dev notes: -TODO update these to be relevant/new - -Notes to convert compose -> quadlet -1. start the containers with compose -2. podlet generate from the containers created - -### compose: -running: -```shell -podman-compose up -d -``` - -stopping: -```shell -podman-compose down --remove-orphans - -#only run if you want to remove all volumes: -podman-compose down -v --remove-orphans -``` - -### install/get podlet: -``` -#https://github.com/containers/podlet/releases -wget https://github.com/containers/podlet/releases/download/v0.3.0/podlet-x86_64-unknown-linux-gnu.tar.xz -#add it to path: -cp ./podlet-x86_64-unknown-linux-gnu/podlet .local/bin/ -``` - -### generate the quadlet files: -[DOCS](https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html), [BLOG](https://mo8it.com/blog/quadlet/) - -``` -cd ~/LME-PRIV/quadlet - -for x in $(podman ps --filter label=io.podman.compose.project=lme-2-arch -a --format "{{.Names}}");do echo $x; podlet generate container $x > $x.container;done -``` - -### dealing with journalctl logs: -https://unix.stackexchange.com/questions/638432/clear-failed-states-or-all-old-logs-from-systemctl-status-service -``` -#delete all logs: -sudo rm /var/log/journal/$STRING_OF_HEX/user-1000* -``` - -### debugging commands: -``` -systemctl --user stop lme.service -systemctl --user status lme* -systemctl --user restart lme.service -journalctl --user -u lme-fleet-server.service -systemctl --user status lme* -cp -r $CLONE_DIRECTORY/config/ /opt/lme && cp -r $CLONE_DIRECTORY/quadlet /opt/lme -systemctl --user daemon-reload && systemctl --user list-unit-files lme\* -systemctl --user reset-failed -podman volume rm -a - -###make sure all ports are free as well: -sudo ss -tulpn -``` - -### password setup stuff: -#### setup the config directory -This will setup the container config so it uses ansible vault for podman secret creation AND sets up the proper ansible-vault environment variables. - -``` -ln -sf /opt/lme/config/containers.conf $HOME/.config/containers/containers.conf -#preserve `chmod +x` executable -cp -rTp config/ /opt/lme/config -#source our password env var: -. 
./scripts/set_vault_key_env.sh -#create the vault directory: -/opt/lme/vault/ -``` - -#### create password file: -This will setup the ansible vault files in the expected paths -``` -ansible-vault create /opt/lme/vault.yml -``` - -### **Manual Install OLD**( optional if not running ansible install): -``` -export CLONE_DIRECTORY=~/LME-PRIV/lme-2-arch -#systemd will setup nix: -#Old way to setup nix if desired: sh <(curl -L https://nixos.org/nix/install) --daemon -sudo apt install jq uidmap nix-bin nix-setup-systemd - -sudo nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs -sudo nix-channel --update - -# Add user to nix group in /etc/group -sudo usermod -aG nix-users $USER - -#install podman and podman-compose -sudo nix-env -iA nixpkgs.podman - -# Set the path for root and lme-user -#echo 'export PATH=$PATH:$HOME/.nix-profile/bin' >> ~/.bashrc -echo 'export PATH=$PATH:/nix/var/nix/profiles/default/bin' >> ~/.bashrc -sudo sh -c 'echo "export PATH=$PATH:/nix/var/nix/profiles/default/bin" >> /root/.bashrc' - -#to allow 443/80 bind and setup memory/limits -sudo NON_ROOT_USER=$USER $CLONE_DIRECTORY/set_sysctl_limits.sh - -#export XDG_CONFIG_HOME="$HOME/.config" -#export XDG_RUNTIME_DIR=/run/user/$(id -u) - -#setup user-generator on systemd: -sudo $CLONE_DIRECTORY/link_latest_podman_quadlet.sh - -#setup loginctl -sudo loginctl enable-linger $USER -``` - -Quadlet configuration for containers is in: `/quadlet/` -1. setup `/opt/lme` thats the running directory for lme: -```bash -sudo mkdir -p /opt/lme -sudo chown -R $USER:$USER /opt/lme -cp -r $CLONE_DIRECTORY/config/ /opt/lme/ -cp -r $CLONE_DIRECTORY/quadlet/ /opt/lme/ - -#setup quadlets -mkdir -p ~/.config/containers/ -ln -s /opt/lme/quadlet ~/.config/containers/systemd - -#setup service file -mkdir -p ~/.config/systemd/user -ln -s /opt/lme/quadlet/lme.service ~/.config/systemd/user/ -``` - -### pull and tag all containers: -This will let us maintain the lme container versions using the `LME_LATEST` tag. Whenever we update, we change the local image to point to the newest update, and run `podman auto-update` to update the containers. - -**NOTE TO FUTURE SELVES: NEEDS TO BE `LOCALHOST` TO AVOID REMOTE TAGGING ATTACK** - -```bash -sudo mkdir -p /etc/containers -sudo tee /etc/containers/policy.json <