Container setup
These variables can be set locally in one of your startup scripts, in ~/.config/profile (sourced at login), or system-wide in /etc/profile.d/custom-profile.sh. Editing this last file is the preferred way to set environment variables, rather than editing /etc/profile.
They are defined in /etc/profile.d/custom-profile.sh. To override them, use your home profile.
/etc/profile.d/custom-profile.sh
----------------------------------------------
# /etc/profile.d/custom-profile.sh @ poppy
# Last edited 01-05-2015
# This is the place to export custom environment variables. Use this file rather than /etc/profile.
# Per-user variables must be written in ~/.profile or $XDG_CONFIG_HOME/profile
### XDG environment variables
export XDG_CONFIG_HOME=${HOME}/.config
export XDG_CONFIG_DIRS=/etc/xdg
export XDG_DATA_DIRS=/usr/local/share:/usr/share
export XDG_DATA_HOME=${HOME}/.local/share
## some default programs
export EDITOR=vim
export VISUAL=vim
export CCACHE_DIR=/storage/.ccache
export USE_CCACHE=1
# change postgresql root data
export PGDATA=/db/postgres/data
export PGROOT=/db/postgres
# prevent bundler from installing gems system-wide
export GEM_HOME=$(ruby -e 'print Gem.user_dir')
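Per-user overrides go in your home profile, which is sourced after the system-wide file. A minimal sketch (nano is just an example value, not the server's actual setting):

```shell
# ~/.config/profile (sketch): per-user overrides,
# sourced after /etc/profile.d/custom-profile.sh, so these values win
export EDITOR=nano
export VISUAL=nano
```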
To list the systemd environment variables:
# systemctl show-environment
To enable user services, run as root:
# loginctl enable-linger username
# systemctl start user@userid
# systemctl enable user@userid
Then, as the user, start or enable any service:
$ systemctl --user start MyService
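A minimal user unit could look like this (a sketch: MyService and the ExecStart path are placeholders, not services deployed on poppy):

```ini
# ~/.config/systemd/user/MyService.service (sketch)
[Unit]
Description=Example user service

[Service]
ExecStart=/usr/bin/mydaemon

[Install]
WantedBy=default.target
```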
By default, these two files are sourced at login: ~/.config/environment and ~/.config/profile. The script sourcing these files is /etc/profile.d/user-env.sh.
Each user can source any other file at startup for local setup.
To list your environment variables:
$ env
Poppy follows the conventions of xdg-user-dirs, and more and more applications follow this configuration. The most important is the $XDG_CONFIG_HOME directory, set to ~/.config. This allows you to gather many package settings in one directory and keep the root of your home directory uncluttered.
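When the variable is not exported, the XDG Base Directory spec defines a fallback; a shell sketch of the resolution:

```shell
# sketch: resolve XDG_CONFIG_HOME with the fallback defined by the XDG spec
unset XDG_CONFIG_HOME   # simulate an environment where it is not exported
config_home="${XDG_CONFIG_HOME:-$HOME/.config}"
echo "$config_home"
```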
Example with zsh:
~/.config/profile
---------------------
export ZDOTDIR=$XDG_CONFIG_HOME/zsh
Now all your zsh settings and folders are in ~/.config/zsh
/etc/profile.d/user-env.sh
-------------------------------------------
# /etc/profile.d/user-env.sh @ poppy
# Last edited 2015-05-08
# define user specific files
# source user environment variable file if it exists
if [[ -e "$XDG_CONFIG_HOME/environment" ]] ; then
. "$XDG_CONFIG_HOME/environment"
fi
# source user shell customization file if it exists
if [[ -e "$XDG_CONFIG_HOME/profile" ]] ; then
. "$XDG_CONFIG_HOME/profile"
fi
To enable colors, add these lines to one of your startup scripts or profiles:
test "$SSH_CONNECTION" && export TERM=xterm
test "$TERM" = screen-256color && export TERM=screen
TIP:
If you run urxvt locally, you may have a broken backspace key. To fix it, add TERM=xterm
to your .zshrc.
Please refer to Wiki Filesystem for global setup.
Systemd has native support for mounts (man systemd.mount): it reads /etc/fstab, generates mount units from it, and mounts the filesystems itself. The container has no /etc/fstab, hence the need to create mount units by hand for /etc, /var and /storage. All mount files follow the same principles.
/etc/systemd/system/etc.mount
------------------------------
[Unit]
ConditionPathIsSymbolicLink=!/etc
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
[Mount]
What=etc
Where=/etc
Type=btrfs
[Install]
WantedBy=local-fs.target
-------------------------------------
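Following the same principles, a unit for /storage might look like this (a sketch: it assumes /storage is, like /etc, a btrfs subvolume named after its mount point):

```ini
# /etc/systemd/system/storage.mount (sketch)
[Unit]
ConditionPathIsSymbolicLink=!/storage
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target

[Mount]
What=storage
Where=/storage
Type=btrfs

[Install]
WantedBy=local-fs.target
```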
USE_CCACHE is set to 1, with the ccache directory in /storage/.ccache.
Two extra partitions are created: /db and /storage.
The /storage directory is for placing anything you want; it saves space in /. The directory is open to any wheel member:
$ ls -al /
.............
drwxrwxrwx 1 root wheel 32 May 7 17:28 storage/
A download directory has been created in /storage, as well as backup_vrac. The latter can be used at your convenience to temporarily store backup files that you will remove when no longer needed.
Please refer to Wiki Networking for global setup.
No DNS server has been deployed on poppy. Hurricane Electric is our free DNS provider. Here are the settings and A records:
Raw AXFR output
thetradinghall.com Dumped Sun May 10 12:01:52 2015
thetradinghall.com. 86400 IN SOA ns1.he.net. hostmaster.he.net. (
2015051017 ;serial
10800 ;refresh
1800 ;retry
604800 ;expire
86400 ) ;minimum
thetradinghall.com 86400 IN NS ns2.he.net.
thetradinghall.com 86400 IN NS ns3.he.net.
thetradinghall.com 86400 IN NS ns4.he.net.
thetradinghall.com 86400 IN NS ns5.he.net.
thetradinghall.com 86400 IN A 212.147.52.214
www.thetradinghall.com 86400 IN A 212.147.52.214
rstudio.thetradinghall.com 86400 IN A 212.147.52.214
mail.thetradinghall.com 86400 IN A 212.147.52.214
thetradinghall.com 86400 IN MX 10 mail.thetradinghall.com.
slacklog.thetradinghall.com 86400 IN A 212.147.52.214
cloud.thetradinghall.com 86400 IN A 212.147.52.214
ftp.thetradinghall.com 86400 IN A 212.147.52.214
wiki.thetradinghall.com 86400 IN A 212.147.52.214
phppgadmin.thetradinghall.com 86400 IN A 212.147.52.214
The /etc/sysconfig/ directory is a location for configuration files and scripts.
Set hostname:
# hostnamectl set-hostname poppy
# hostnamectl status
Static hostname: poppy
Icon name: computer-container
Chassis: container
Machine ID: 59b720b533834a4eafe07a62c2482266
Boot ID: 1e922b562c66451ab844721a5674757c
Virtualization: systemd-nspawn
Operating System: Fedora 23 (Server Edition)
CPE OS Name: cpe:/o:fedoraproject:fedora:23
Kernel: Linux 4.2.5-1-hortensia
Architecture: x86-64
Destination Network Address Translation (DNAT) makes services running in a container with a private IP address reachable from the Internet. All settings redirecting services (ftp, ssh, etc.) to poppy are done on the router.
Assigned protocols are:
HTTP :80
HTTPS :443
SSH :22
SMTP :587 (with authentication)
SMTP :465 (SSL)
Postgresql :5432
Cockpit :9090
Specific iptables rules are configured on the server. In Fedora, iptables rules live in /etc/sysconfig/iptables. iptables is started and enabled via the systemd iptables service.
To list the rules:
# iptables -L
When no rules are at work:
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
To flush all rules:
# iptables -F
To reset iptables completely (flush and delete the chains in every table, and restore the default ACCEPT policies), run the following commands:
# iptables -F
# iptables -X
# iptables -t nat -F
# iptables -t nat -X
# iptables -t mangle -F
# iptables -t mangle -X
# iptables -t raw -F
# iptables -t raw -X
# iptables -t security -F
# iptables -t security -X
# iptables -P INPUT ACCEPT
# iptables -P FORWARD ACCEPT
# iptables -P OUTPUT ACCEPT
GnuPG lets you encrypt and sign your data and communications. It features a versatile key-management system as well as access modules for all kinds of public-key directories.
Install gnupg2. For convenience, we symlinked gpg2 to gpg:
# ln -s /usr/bin/gpg2 /usr/bin/gpg
The home directory is set via the $GNUPGHOME environment variable to ${XDG_CONFIG_HOME}/gnupg. The directory permissions are set to 700, and the directory contents to 600.
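The layout can be prepared by hand; a sketch, following the $GNUPGHOME setting described above:

```shell
# sketch: create $GNUPGHOME with the permissions described above
export GNUPGHOME="${XDG_CONFIG_HOME:-$HOME/.config}/gnupg"
mkdir -p "$GNUPGHOME"
chmod 700 "$GNUPGHOME"          # directory: 700
touch "$GNUPGHOME/gpg.conf"
chmod 600 "$GNUPGHOME/gpg.conf" # contents: 600
stat -c %a "$GNUPGHOME"
```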
1- Generate an RSA key of 2048 bits with no expiration and an alternative cipher.
$ gpg --gen-key --expert
The public key will appear as pubring.gpg
2- List public key
$ gpg --list-keys
pub 2048R/54E2B5A0 2016-07-04
uid Arnaud Gaboury (TTH) <[email protected]>
sub 2048R/5E729BD1 2016-07-04
This gives you the key IDs for the key and its subkey.
3- List private key
$ gpg --list-secret-keys
4- Export the public key: generate an ASCII-armored version of your public key (e.g. to distribute it by e-mail)
$ gpg --output public.key --armor --export <user-id>
5- Create ~/.config/gnupg/gpg.conf and make the key the default one there
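A minimal gpg.conf for this step (the key ID is the one listed in step 2 above):

```
# ~/.config/gnupg/gpg.conf (sketch)
default-key 54E2B5A0
```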
6- Send key to server
gpg --send-keys 92AB63D5
Revocation certificates are automatically generated for newly created keys, although one can also be generated manually by the user later. They are located in $GNUPGHOME/openpgp-revocs.d/
7- Export the key pair to another computer:
gpg --output tth_sec.key --armor --export-secret-key KeyID
gpg --output tth_pub.key --armor --export KeyID
8- Import keys
$ gpg --import public.key
This adds the public key in the file "public.key" to your public keyring.
To import a private key:
gpg --allow-secret-key-import --import private.key
% systemctl status sshd
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2015-05-08 13:01:53 CEST; 23min ago
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 77 (sshd)
CGroup: /system.slice/system-systemd\x2dnspawn.slice/systemd-nspawn@poppy.service/system.slice/sshd.service
└─77 /usr/sbin/sshd -D
May 08 13:01:53 poppy systemd[1]: Started OpenSSH server daemon.
May 08 13:01:53 poppy systemd[1]: Starting OpenSSH server daemon...
May 08 13:01:53 poppy sshd[77]: Server listening on 0.0.0.0 port 22.
May 08 13:01:53 poppy sshd[77]: Server listening on :: port 22.
It is advised to use sshd.socket + sshd@.service, which spawns an on-demand instance of the SSH daemon per connection. This means systemd listens on the SSH socket and only starts a daemon process for each incoming connection.
1- edit sshd.socket :
# systemctl edit sshd.socket
ListenStream=192.168.1.94:42660
FreeBind=true
2- add these lines to:
/etc/pam.d/sshd
----------------
auth include system-remote-login
account include system-remote-login
password include system-remote-login
session include system-remote-login
3- add these two files:
/etc/pam.d/system-remote-login
--------------
#%PAM-1.0
auth include system-login
account include system-login
password include system-login
session include system-login
----------------------
/etc/pam.d/system-login
---------------
#%PAM-1.0
auth required pam_tally.so onerr=succeed file=/var/log/faillog
auth required pam_shells.so
auth requisite pam_nologin.so
auth include system-auth
account required pam_access.so
account required pam_nologin.so
account include system-auth
password include system-auth
session optional pam_loginuid.so
session required pam_env.so
session include system-auth
session optional pam_motd.so motd=/etc/motd
session optional pam_mail.so dir=/var/spool/mail standard quiet
-session optional pam_systemd.so
3- stop/disable sshd.service
4- start/enable sshd.socket
$ systemctl status sshd.socket
● sshd.socket - OpenSSH Server Socket
Loaded: loaded (/usr/lib/systemd/system/sshd.socket; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/sshd.socket.d
└─override.conf
Active: active (listening) since Sat 2016-03-19 15:16:58 CET; 23min ago
Docs: man:sshd(8)
man:sshd_config(5)
Listen: 0.0.0.0:22 (Stream)
192.168.1.94:42660 (Stream)
Accepted: 16; Connected: 0
Mar 19 15:16:58 poppy systemd[1]: Listening on OpenSSH Server Socket.
Mar 19 15:16:58 poppy systemd[1]: Starting OpenSSH Server Socket.
5- to see connection logs, run:
$ journalctl -u sshd.socket
% systemctl status postfix
● postfix.service - Postfix Mail Transport Agent
Loaded: loaded (/usr/lib/systemd/system/postfix.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2015-05-08 13:02:44 CEST; 24min ago
Process: 98 ExecStart=/usr/sbin/postfix start (code=exited, status=0/SUCCESS)
Process: 97 ExecStartPre=/usr/libexec/postfix/chroot-update (code=exited, status=0/SUCCESS)
Process: 78 ExecStartPre=/usr/libexec/postfix/aliasesdb (code=exited, status=0/SUCCESS)
Main PID: 291 (master)
CGroup: /system.slice/system-systemd\x2dnspawn.slice/systemd-nspawn@poppy.service/system.slice/postfix.service
├─291 /usr/libexec/postfix/master -w
├─303 pickup -l -t unix -u
└─304 qmgr -l -t unix -u
May 08 13:01:53 poppy systemd[1]: Starting Postfix Mail Transport Agent...
May 08 13:02:34 poppy postfix/postfix-script[279]: starting the Postfix mail system
May 08 13:02:44 poppy postfix/master[291]: daemon started -- version 3.0.1, configuration /etc/postfix
May 08 13:02:44 poppy systemd[1]: Started Postfix Mail Transport Agent.
The listening port has been changed from the usual 22.
The SSH server is provided by the openssh-server package. The SSH daemon is started at boot via the systemd sshd.service.
-bash-4.3# systemctl status sshd
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2015-05-04 12:09:44 CEST; 1 day 6h ago
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 30 (sshd)
CGroup: /system.slice/system-systemd\x2dnspawn.slice/systemd-nspawn@poppy.service/system.slice/sshd.service
└─30 /usr/sbin/sshd -D
May 04 12:09:44 poppy systemd[1]: Started OpenSSH server daemon.
May 04 12:09:44 poppy systemd[1]: Starting OpenSSH server daemon...
May 04 12:09:44 poppy sshd[30]: Server listening on 0.0.0.0 port XXXX.
May 04 12:09:44 poppy sshd[30]: Server listening on :: port XXXX.
NOTE: for security purposes, telnet, rlogin, rsh and vsftpd are not installed on the server.
- users authorized to ssh are listed in
AllowUsers user1 user2
- PermitRootLogin has been set to no
- the banner is in
/etc/ssh/banner
- PasswordAuthentication is set to no
- UsePAM is set to yes
- X11Forwarding is set to no
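The notes above correspond to a sshd_config fragment like this (a sketch; the changed Port value is elided in this document, so it is left out here too):

```
# /etc/ssh/sshd_config (fragment matching the notes above)
AllowUsers user1 user2
PermitRootLogin no
Banner /etc/ssh/banner
PasswordAuthentication no
UsePAM yes
X11Forwarding no
```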
vsftpd (Very Secure FTP Daemon) is a lightweight, stable and secure FTP server for UNIX-like systems. Most vsftpd settings are configured by editing the file /etc/vsftpd/vsftpd.conf
NOTE:
- anonymous ftp is disabled
- write is enabled
- deny email is enabled. Place banned emails in
/etc/vsftpd/banned_emails
- chroot is disabled unless the user is listed in the
/etc/vsftpd/chroot_list
file.
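The notes above map to a vsftpd.conf fragment like this (a sketch using vsftpd's standard option names; not a copy of the server's actual file):

```
# /etc/vsftpd/vsftpd.conf (fragment matching the notes above)
anonymous_enable=NO
write_enable=YES
deny_email_enable=YES
banned_email_file=/etc/vsftpd/banned_emails
chroot_local_user=NO
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd/chroot_list
```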
The main configuration file for PHP is /etc/php.ini.
PHP dynamic extension modules are in the /etc/php.d directory. They are loaded via the .ini files found there. To enable a new module, create a new /etc/php.d/myModule.ini:
; Enable myModule extension module
extension=myModule.so
TO WRITE