[GH-ISSUE #57] LXC VPS #23
Originally created by @roobyz on GitHub (Dec 13, 2019).
Original GitHub issue: https://github.com/konstruktoid/hardening/issues/57
I'm having a few challenges getting this to work on a VPS running in an LXC container.
Any suggestions on how I might try to repurpose your repo to work on a VPS running LXC?
@konstruktoid commented on GitHub (Dec 13, 2019):
Hi @roobyz, can you attach some logs and show me the debug output when connecting with ssh?
@roobyz commented on GitHub (Dec 13, 2019):
Yes... that raises another good point that I missed earlier. In this example, I only disabled auditd and AppArmor.
After running the script, upon exiting, ssh actually works. After a reboot, ssh stops working right at the step before "debug1: Connection established". The log looks something like this after the reboot:
OpenSSH_8.1p1, OpenSSL 1.1.1d 10 Sep 2019
debug1: Reading configuration data /home/user/.ssh/config
debug1: Connecting to 111.222.333.444 [111.222.333.444] port 2233.
@konstruktoid commented on GitHub (Dec 16, 2019):
Yeah, that doesn't tell me much.
Can you include the output of
ssh -vv <HOST>?
Do you have access to the console?
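For reference, a minimal way to capture that verbose output to a file looks something like this (a sketch assuming the non-standard port 2233 shown in the log above; the user and host are placeholders):
# Verbose client-side debugging, saved for later comparison
ssh -vv -p 2233 user@111.222.333.444 2>&1 | tee ssh_debug.log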
@roobyz commented on GitHub (Dec 17, 2019):
OK, thank you for your patience. I tried again from the beginning and had everything fully functional with complete SSH access. Then I ran your ubuntu.sh script and I still had SSH access upon completion, but after a reboot it was locked. Ran as requested with ssh -vv:
@konstruktoid commented on GitHub (Dec 17, 2019):
That seems fine except that it won't let you in, but if you have access to the server console, could you check with sudo journalctl -r -u ssh after a failed login?
My initial suggestion would be to increase the sshd_config options MaxAuthTries and MaxSessions to 6 (or the number of available keys in use).
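A sketch of how the suggested check and change could be applied (assuming the stock /etc/ssh/sshd_config path and the ssh.service unit name used on Ubuntu; the value 6 is the suggestion above):
# Most recent sshd journal entries after a failed login
sudo journalctl -r -u ssh
# Raise the limits, validate the config, then restart the daemon
sudo sed -i 's/^#\?MaxAuthTries.*/MaxAuthTries 6/;s/^#\?MaxSessions.*/MaxSessions 6/' /etc/ssh/sshd_config
sudo sshd -t && sudo systemctl restart ssh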
@roobyz commented on GitHub (Dec 18, 2019):
I tried updating the "max" values you specified and nothing changed. So I restored sshd_config back to the original and restarted ssh.service, and that also had no impact. It seems like some other security setting that affects sshd logins is involved. The journal results show that PAM is instantly logging me out.
@konstruktoid commented on GitHub (Dec 18, 2019):
What version of Ubuntu are you running?
I'll try to replicate this with Vagrant.
@roobyz commented on GitHub (Dec 19, 2019):
When logging in pre-hardening, the log contains lines similar to the "accepted" and "opened" lines above. The "closed" line only appears after hardening. In either case, the log indicates that access is granted to user 1000 by uid=0 (root). Using Ubuntu 19.04 on LXC.
@roobyz commented on GitHub (Dec 19, 2019):
I ferreted out the culprit... f_limitsconf updates "/etc/security/limits.conf", which then locks out ssh access. I haven't figured out what in it caused the lockout yet, but my ssh access is back. Now I need to figure out why "pihole" doesn't work. :) There are a few minor issues (i.e. tmpfs, forwarding), but it mostly works now. My sense is that these may be related to Ubuntu 19.04. Thank you for your help!!
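A generic way to confirm that the lockout comes from the limits.conf change rather than from sshd itself (a sketch, assuming console access to the container):
# Show the non-comment entries the hardening run left behind
grep -vE '^(#|$)' /etc/security/limits.conf
# Temporarily comment out the nproc lines, then retry the ssh login
sudo sed -i 's/^\(.*nproc.*\)$/# \1/' /etc/security/limits.conf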
@konstruktoid commented on GitHub (Dec 19, 2019):
That's interesting, do you run any code when you log in?
And /etc/security/limits.conf is used as a fallback since systemd has taken control, but please check your limits using https://github.com/konstruktoid/hardening/blob/master/misc/proc_check.sh: sudo bash proc_check.sh.
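As a quick alternative check (this is not the repo's proc_check.sh, just a generic sketch), the limits that were actually applied to the current session can be read straight from /proc:
# Limits PAM/systemd applied to this shell
grep -E 'processes|open files' /proc/$$/limits
# Per-user process limit as seen by the shell
ulimit -u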
@roobyz commented on GitHub (Dec 19, 2019):
No code is run when logging in. This might be related to a similar problem I have with NGINX on this LXC VPS: I had to disable the automatic worker process setting, because NGINX would immediately assign 32 worker processes (one per CPU core) even though my VPS is allotted 1 vCPU. In this case, it seems that the limit setting is picking up other code running on the VPS and then exceeding the limits that your script sets.
If my theory is correct, I would need to multiply your limits by 32 to compensate for LXC. Thoughts? :-)
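For context, a sketch of the NGINX workaround mentioned above (assuming the stock /etc/nginx/nginx.conf path; the directive is standard, and 1 matches the single vCPU allotted to the container):
# Show how many workers "auto" resolved to
nginx -T 2>/dev/null | grep worker_processes
# Pin the worker count instead of letting NGINX count host cores
sudo sed -i 's/worker_processes auto;/worker_processes 1;/' /etc/nginx/nginx.conf
sudo nginx -t && sudo systemctl reload nginx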
@konstruktoid commented on GitHub (Dec 20, 2019):
My thought is that it feels very odd.
Checking on Ubuntu 18.04 after a reboot etc., I get the following:
Could you publish your values?
And nginx settings or limits shouldn't interfere with yours.
@roobyz commented on GitHub (Dec 21, 2019):
Figured out the issue... soft nproc of 512 was too low and blocked my ssh access.
Default values:
Values after updating, based on your values:
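The value listings themselves did not survive this mirror. Purely to illustrate the format involved, /etc/security/limits.conf entries take the form domain/type/item/value; the numbers below are hypothetical except for the 512 soft nproc figure from this thread:
# <domain>  <type>  <item>   <value>
# *         soft    nproc    512     <- the value that blocked logins here
# *         soft    nproc    4096    <- hypothetical, roomier value for a busy container
grep nproc /etc/security/limits.conf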
@konstruktoid commented on GitHub (Dec 21, 2019):
512 soft nproc is a lot of processes just for signing in.
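Since RLIMIT_NPROC is enforced per real UID and counts threads as well as processes, a rough way to see how close a user sits to the 512 limit is (generic sketch, the username is a placeholder):
# Count processes and threads owned by the real user "user"
ps -eL -o ruser= | grep -cw user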
@roobyz commented on GitHub (Dec 22, 2019):
My theory about running on LXC is correct:
My container is only running 40 processes, but the host server is running many more. In my NGINX example, because of the proc limitations, the auto worker feature counts 32 host CPUs rather than 1 container vCPU. It seems that the max user process setting may be accounting for the number of processes on the host system rather than on my container instance as well.
What do you think?
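One quick way to see the container/host discrepancy described above (a generic sketch: nproc honours the CPUs the container may actually use, while /proc/cpuinfo on this kind of LXC setup still reports the host's cores):
# CPUs the container is allowed to schedule on
nproc
# CPUs visible through the proc filesystem
grep -c ^processor /proc/cpuinfo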
@konstruktoid commented on GitHub (Dec 25, 2019):
Seems like a reasonable explanation, but is it expected to work like that? Does cat /proc/stat show all host CPUs etc.?
@roobyz commented on GitHub (Dec 25, 2019):
It seems to be correct, in part. In my example, you can see there is one virtual CPU (cpu0), but you can also see that there are 32 cores and 187 filesystems. I am using one virtual core and 8 filesystems. In addition, I can only see the 40 processes that I'm running; however, compared to the 270 processes on my home system, that number is obviously artificially low.
For example:
cat /proc/stat | grep cpu
cat /proc/cpuinfo | grep cores
cpu cores : 32
cat /proc/partitions | wc -l
187
@konstruktoid commented on GitHub (Dec 26, 2019):
Don't know if that is intended or something to notify upstream about, but good thing you found out what the issue was.
@konstruktoid commented on GitHub (Jan 17, 2020):
Closing due to inactivity.