[GH-ISSUE #57] LXC VPS #23

Closed
opened 2026-03-03 13:58:26 +03:00 by kerem · 19 comments

Originally created by @roobyz on GitHub (Dec 13, 2019).
Original GitHub issue: https://github.com/konstruktoid/hardening/issues/57

I'm having a few challenges getting this to work on a VPS running in an LXC container.

  1. First, auditd and AppArmor need to be disabled.
  2. Second, ssh access to the VPS stops working:
  • I have tried disabling f_sshconfig, f_sshdconfig, f_hosts, f_logindconf, and f_sysctl with no luck.

Any suggestions on how I might repurpose your repo to work on a VPS running LXC?
kerem closed this issue 2026-03-03 13:58:27 +03:00

@konstruktoid commented on GitHub (Dec 13, 2019):

Hi @roobyz, can you attach some logs and show me the debug output when connecting with ssh?


@roobyz commented on GitHub (Dec 13, 2019):

Yes... that raises another good point that I missed earlier. In this example, I only disabled auditd and AppArmor.

After running the script, ssh still works in the existing session. After a reboot, ssh stops working right before "debug1: Connection established". The log looks something like this after reboot:

```
OpenSSH_8.1p1, OpenSSL 1.1.1d 10 Sep 2019
debug1: Reading configuration data /home/user/.ssh/config
debug1: Connecting to 111.222.333.444 [111.222.333.444] port 2233.
```


@konstruktoid commented on GitHub (Dec 16, 2019):

Yeah, that doesn't tell me much.
Can you include the output of ssh -vv <HOST>?

Do you have access to the console?


@roobyz commented on GitHub (Dec 17, 2019):

ok, thank you for your patience. I tried again from the beginning and had everything fully functional with complete SSH access. Then I ran your ubuntu.sh script and I still had SSH access upon completion, but after a reboot I was locked out. Ran as requested with ssh -vv:

```
OpenSSH_8.1p1, OpenSSL 1.1.1d  10 Sep 2019
debug1: Connecting to 64.65.66.67 port 2233.
debug1: Connection established.
debug1: identity file /home/ubuntu/.ssh/id_ed25519 type 3
debug1: identity file /home/ubuntu/.ssh/id_ed25519-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.1
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.9p1 Ubuntu-10
debug1: match: OpenSSH_7.9p1 Ubuntu-10 pat OpenSSH* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug1: Authenticating to 64.65.66.67:2233 as 'ubuntu'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: local client KEXINIT proposal
debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c
debug2: host key algorithms: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa
debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,zlib@openssh.com,zlib
debug2: compression stoc: none,zlib@openssh.com,zlib
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug2: peer server KEXINIT proposal
debug2: KEX algorithms: curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
debug2: host key algorithms: rsa-sha2-512,rsa-sha2-256,ssh-rsa,ecdsa-sha2-nistp256,ssh-ed25519
debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
debug2: MACs ctos: hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256
debug2: MACs stoc: hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256
debug2: compression ctos: none
debug2: compression stoc: none
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug1: kex: algorithm: curve25519-sha256@libssh.org
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:XrdoAC6M9X+o9d1eXdYEJoTyT08IxIEZbjk6w1it4pM
debug1: checking without port identifier
debug1: Host '64.65.66.67' is known and matches the ECDSA host key.
debug1: Found key in /home/ubuntu/.ssh/known_hosts:1
debug1: found matching key w/out port
debug2: set_newkeys: mode 1
debug1: rekey out after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug2: set_newkeys: mode 0
debug1: rekey in after 134217728 blocks
debug1: Will attempt key: /home/ubuntu/.ssh/id_ed25519 ED25519 SHA256:NgcyxhOTZrCD9po0uJDFMdtIjl/fsgRz6fd2M9JmeDg explicit agent
debug2: pubkey_prepare: done
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received

Authorized users only. All activity may be monitored and reported.

debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Offering public key: /home/ubuntu/.ssh/id_ed25519 ED25519 SHA256:NgcyxhOTZrCD9po0uJDFMdtIjl/fsgRz6fd2M9JmeDg explicit agent
debug2: we sent a publickey packet, wait for reply
debug1: Server accepts key: /home/ubuntu/.ssh/id_ed25519 ED25519 SHA256:NgcyxhOTZrCD9po0uJDFMdtIjl/fsgRz6fd2M9JmeDg explicit agent
debug1: Authentication succeeded (publickey).
Authenticated to 64.65.66.67 ([64.65.66.67]:2233).
debug1: channel 0: new [client-session]
debug2: channel 0: send open
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: pledge: network
debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
debug1: Remote: /home/ubuntu/.ssh/authorized_keys:2: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding
debug1: Remote: /home/ubuntu/.ssh/authorized_keys:2: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding
debug2: channel_input_open_confirmation: channel 0: callback start
debug2: fd 3 setting TCP_NODELAY
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 1
debug2: channel 0: request shell confirm 1
debug2: channel_input_open_confirmation: channel 0: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel_input_status_confirm: type 99 id 0
debug2: PTY allocation request accepted on channel 0
debug2: channel_input_status_confirm: type 100 id 0
shell request failed on channel 0
```

@konstruktoid commented on GitHub (Dec 17, 2019):

That seems fine, except it won't let you in. If you have access to the server console, could you check with sudo journalctl -r -u ssh after a failed login?
My initial suggestion would be to increase the sshd_config options MaxAuthTries and MaxSessions to 6 (or the number of available keys in use).
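For reference, a minimal sketch of that suggested change (the values are illustrative, and the path assumes a stock Ubuntu layout; the hardening script may manage this file differently):

```
# /etc/ssh/sshd_config -- raise the connection limits per the suggestion above
MaxAuthTries 6
MaxSessions 6
```

followed by a reload of the ssh service for the change to take effect.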


@roobyz commented on GitHub (Dec 18, 2019):

I tried updating the "max" values you specified and nothing changed. So I restored the sshd_config back to the original and restarted ssh.service, and that also had no impact. It seems like some other security setting that affects sshd logins is involved. The journal results show that pam is instantly logging me out (note the reverse chronological order from journalctl -r):

```
Dec 18 06:46:26 rzr-silk sshd[85436]: pam_unix(sshd:session): session closed for user ubuntu
Dec 18 06:46:25 rzr-silk sshd[85436]: pam_unix(sshd:session): session opened for user ubuntu by (uid=0)
Dec 18 06:46:25 rzr-silk sshd[85436]: Accepted publickey for ubuntu from 10.10.10.10 port 49622 ssh2: ED25519 SHA256:NgcyxhOTZrCD9po0uJDFMdtIjl/fsgRz6f
```

@konstruktoid commented on GitHub (Dec 18, 2019):

What version of Ubuntu are you running?
I'll try to replicate this with Vagrant.


@roobyz commented on GitHub (Dec 19, 2019):

When logging in before hardening, the log contains lines similar to the "accepted" and "opened" lines above. The "closed" line only appears after hardening. In either case, the log indicates that access is granted to user 1000 by uid=0 (root). I'm using Ubuntu 19.04 on LXC.


@roobyz commented on GitHub (Dec 19, 2019):

I ferreted out the culprit... f_limitsconf updates /etc/security/limits.conf, which then locks out ssh access. I haven't figured out exactly what caused it yet, but my ssh access is back. Now I need to figure out why "pihole" doesn't work. :) There are a few minor issues (e.g. tmpfs, forwarding), but it mostly works now. My sense is that these may be related to Ubuntu 19.04. Thank you for your help!!
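For anyone hitting the same wall: limits.conf is applied by pam_limits when the session opens, so a too-low nproc value can kill the session immediately after a successful publickey authentication, which matches the journal above. A quick sketch of a check from the console (standard commands; the interpretation for LXC is an assumption based on this thread):

```shell
# Soft and hard "max user processes" limits for the current shell;
# pam_limits applies the limits.conf values at session open.
ulimit -Su
ulimit -Hu

# Processes visible in this namespace; on an LXC VPS whose /proc is not
# fully container-aware this may reflect host activity, not just yours.
ps -e --no-headers | wc -l
```

If the process count is at or above the soft nproc limit, sshd can open the session but the shell fork fails, which looks exactly like "shell request failed on channel 0".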


@konstruktoid commented on GitHub (Dec 19, 2019):

That's interesting, do you run any code when you log in?
And /etc/security/limits.conf is used as a fallback since systemd has taken control, but please check your limits using https://github.com/konstruktoid/hardening/blob/master/misc/proc_check.sh:
sudo bash proc_check.sh


@roobyz commented on GitHub (Dec 19, 2019):

No code runs when I log in. This might be similar to a problem I have with NGINX on this LXC VPS: I had to disable the automatic worker process setting, because NGINX would immediately start 32 worker processes (one per host CPU core) even though my VPS is allotted 1 vCPU. In this case, it seems the limit setting is sensing other code running on the host and then exceeding the limits that your script sets.

If my theory is correct, I would need to multiply your limits by 32 to compensate for LXC. Thoughts? :-)
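If that theory holds, the mismatch should be visible by comparing what the scheduler actually allows with what /proc reports. A small sketch (assumes a Linux guest; nproc derives its count from the process's CPU affinity, while a non-namespaced /proc/cpuinfo shows the host's CPUs):

```shell
# CPUs this process may actually use (affinity/cgroup aware)
nproc

# CPUs reported by the kernel's /proc, which may be the host's view
grep -c '^processor' /proc/cpuinfo
```

On an LXC guest with a shared /proc, the second number can be the host's 32 while the first is 1, which would explain both the NGINX worker count and an nproc limit sized for the wrong machine.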


@konstruktoid commented on GitHub (Dec 20, 2019):

My thought is that it feels very odd.
Checking on an Ubuntu 18.04 after a reboot etc., I get the following:

```
$ systemctl show "user@$(id -u).service" | grep -Ei 'nofile|nproc'
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitNPROC=7872
LimitNPROCSoft=7872
$ ulimit -n -u
open files                      (-n) 1024
max user processes              (-u) 512
```

Could you publish your values?
And nginx settings or limits shouldn't interfere with yours.


@roobyz commented on GitHub (Dec 21, 2019):

Figured out the issue... a soft nproc of 512 was too low and blocked my ssh access.

Default values:

```
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitNPROC=1031023
LimitNPROCSoft=1031023
open files                      (-n) 1024
max user processes              (-u) 1031023
```

After values, updated based on yours:

```
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitNPROC=1031023
LimitNPROCSoft=1031023
open files                      (-n) 1024
max user processes              (-u) 768
```

@konstruktoid commented on GitHub (Dec 21, 2019):

512 soft nproc is a lot of processes just for signing in.


@roobyz commented on GitHub (Dec 22, 2019):

My theory about running on LXC is correct:

  • Currently the proc filesystem is not "container aware" in mount namespaces
  • Tools basing their logic on this will get host-related values instead of container-related values

My container is only running 40 processes, but the host server is running many more. In my NGINX example, because of the /proc limitation, the auto worker feature counts 32 host CPUs rather than 1 container vCPU. It seems the max user processes setting might likewise be accounting for the number of processes on the host system rather than on my container instance.

What do you think?


@konstruktoid commented on GitHub (Dec 25, 2019):

Seems like a reasonable explanation, but is it expected to work like that? Does cat /proc/stat show all host CPUs etc.?


@roobyz commented on GitHub (Dec 25, 2019):

It seems to be correct, in part. In my example, you can see there is one virtual cpu (cpu0), but you can also see that there are 32 cores and 187 partitions. I am using one virtual core and 8 partitions. In addition, I can only see the 40 processes that I'm running; however, compared to the 270 processes on my home system, that number is obviously artificially low.

For example:

```
$ cat /proc/stat | grep cpu
cpu  80802 0 0 288230376151711744 0 0 0 0 0 0
cpu0 80802 0 0 288230376151711744 0 0 0 0 0 0

$ cat /proc/cpuinfo | grep cores
cpu cores : 32

$ cat /proc/partitions | wc -l
187
```


@konstruktoid commented on GitHub (Dec 26, 2019):

Don't know if that is intended or something to notify upstream about, but good thing you found out what the issue was.


@konstruktoid commented on GitHub (Jan 17, 2020):

Closing due to inactivity.
