[GH-ISSUE #29] [auditd] kernel panics after installation #10

Closed
opened 2026-03-03 13:58:19 +03:00 by kerem · 33 comments

Originally created by @frederikbosch on GitHub (Aug 7, 2018).
Original GitHub issue: https://github.com/konstruktoid/hardening/issues/29

First of all, great package! I have a question around auditd. Directly after installation, my VM kept rebooting. The reason was that the audit log limit was exceeded, which caused a kernel panic. I found out this was caused by our backup application, which was installed before I ran the hardening script.

In order to regain control of the VM, I changed the failure mode from `2` to `0` in recovery mode. Now I want to prevent the kernel panics by adding a rule to auditd, only I have no idea what that rule should be. I already saw in the logs that the backup program (running from `/usr/sbin`) was doing all kinds of operations (e.g. `cp`, `key=tmp`).

What rule should I add to prevent the kernel panic?

kerem closed this issue 2026-03-03 13:58:19 +03:00

@konstruktoid commented on GitHub (Aug 7, 2018):

Hi and thanks @frederikbosch!
The reason for the rebooting might be that auditd is configured to halt the system when it is running low on disk space.
In `/etc/audit/auditd.conf`, change the various `*_left_action` and `max_log_file_action` options from `halt` or `suspend` to `admin_space_left_action = ignore`, `space_left_action = ignore`, and `max_log_file_action = rotate`.
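For reference, the relevant fragment of `/etc/audit/auditd.conf` would then look something like this (an illustrative sketch, not the package's shipped configuration):

```ini
# /etc/audit/auditd.conf (illustrative fragment)
# Rotate logs instead of halting or suspending when limits are hit
max_log_file_action = rotate
space_left_action = ignore
admin_space_left_action = ignore
```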


@konstruktoid commented on GitHub (Aug 7, 2018):

Oh, and the auditd rules are very verbose, so it might be necessary to trim them down in your environment, depending on your ability to parse and analyze the data.


@frederikbosch commented on GitHub (Aug 7, 2018):

@konstruktoid Thanks for your replies. I am struggling with all the rules applied to my VM, auditd and others. I would really like to work with a hardened server, but I am running into all kinds of walls. At the moment I am not even able to execute `sudo nano /etc/audit/auditd.conf`; I am only able to change things on the system in recovery mode. No idea why. I guess I am not experienced enough for these rules.


@konstruktoid commented on GitHub (Aug 7, 2018):

Try removing the auditd packages from the system while you're in single user/recovery mode, to see if that's the issue.


@frederikbosch commented on GitHub (Aug 7, 2018):

That does not bring back the ability to execute a command with sudo. I am on Ubuntu 18.04. `/var/log/auth.log` is telling me the following.

```log
Aug  7 15:47:26 gitlab02 sudo: pam_unix(sudo:auth): conversation failed
Aug  7 15:47:26 gitlab02 sudo: pam_unix(sudo:auth): auth could not identify password for [genkgo]
```

The last command that I was able to execute with sudo was the hardening script. I have removed auditd in the meanwhile, no success.


@frederikbosch commented on GitHub (Aug 7, 2018):

And the password is correct, because I am able to log in and change the password. I also tried to read the password from a file: `sudo -S apt-get update <~/passwd.txt`. That failed too.


@frederikbosch commented on GitHub (Aug 7, 2018):

@konstruktoid It seemed that I had fixed both issues, not being able to run `sudo` commands and the kernel panics caused by auditd. Thanks for your help.

However, I am having the `sudo` problem again. As it seems, I hit the limit of 5 password failures (`deny=5` from `/etc/pam.d/common-auth`). When my shell session gets terminated, e.g. because of inactivity during a long-running script, this also seems to count as a deny. At least that is what I am guessing. I find this in the logs.

machine sudo: pam_tally2(sudo:auth): user genkgo (1000) tally 8, deny 5.

In order to fix it, I have to boot into recovery mode again, remove `/var/log/faillog` and then reboot. I have no clue how to fix it otherwise.


@frederikbosch commented on GitHub (Aug 7, 2018):

I am on Ubuntu 18.04 btw.


@konstruktoid commented on GitHub (Aug 8, 2018):

Great to hear that removing auditd fixed it; reducing the number of audit rules, using syslog/journald to send the logs to a remote server, and replacing the auditd halt actions with ignore might be options as well.

You can reset the pam_tally2 locks with `pam_tally2 --user=genkgo --reset`, but this should reset automatically when you enter the correct username and password.


@frederikbosch commented on GitHub (Aug 8, 2018):

> You can reset the pam_tally2 locks with `pam_tally2 --user=genkgo --reset`

Well, that simply does not seem to be the case. The only thing that works is removing `/var/log/faillog`.

Here is what I did:

  • I created a dev Docker machine (based on [solita/ubuntu-systemd:16.04](https://github.com/solita/docker-systemd)) to start playing with all the scripts before using them in production again.
  • Afterwards I created a user and found out logging in was a problem due to nproc, so I increased nproc to 4096, which [I need anyway](https://www.elastic.co/guide/en/elasticsearch/reference/master/max-number-of-threads.html). Logging in worked.
  • Then I started logging in with a wrong username/password and got blocked after 5 attempts, as expected.
  • Then I tried to reset the counter with `pam_tally2 --user genkgo --reset`, but that had no effect. You can verify this by requesting the number of failures with `pam_tally2 --user genkgo`. That says 0, while at login I get `Account locked due to 9 failed logins`.
  • Finally I removed `/var/log/faillog`, and logging in worked again.

@frederikbosch commented on GitHub (Aug 8, 2018):

And that confirms the behaviour I got yesterday on my VM, which was exactly the same. `pam_tally2 --user genkgo --reset` says OK and `pam_tally2 --user genkgo` says 0. Login says `Account locked due to 9 failed logins`.


@konstruktoid commented on GitHub (Aug 8, 2018):

You're correct, but the issue is that I use

```
sed -i '/^$/a auth required pam_tally2.so file=/var/log/faillog deny=5 unlock_time=900' "$COMMONAUTH"
```

which changes the pam_tally2 log from the default `/var/log/tallylog` to `/var/log/faillog` (which is the pam_tally default...).

Appending `--file /var/log/faillog` to your reset command will fix this issue.

Will fix it so the default file location is used.
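After that fix, the `common-auth` line would simply drop the `file=` option, so pam_tally2 falls back to its compiled-in default `/var/log/tallylog`. A sketch of the resulting PAM configuration line:

```
# /etc/pam.d/common-auth (sketch; the default tally file /var/log/tallylog is used)
auth required pam_tally2.so deny=5 unlock_time=900
```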


@frederikbosch commented on GitHub (Aug 8, 2018):

Thanks!


@frederikbosch commented on GitHub (Aug 9, 2018):

I think I am able to manage. Thanks for your swift replies and help!


@frederikbosch commented on GitHub (Aug 10, 2018):

@konstruktoid I am still playing with the audit backlog issue. For instance, when I ran the command `docker container prune`, the audit backlog limit was exceeded again. This was when I already had the number at `524288`. I guess this is due to the watch on `/var/lib/docker`.

So I am wondering what the best thing to do is. Increase the number even further? Change `-f`? Why would you even want to have it at 2 (panic when a failure occurs)?
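For context, the two knobs being discussed live at the top of the audit rules: `-b` sets the kernel's audit backlog limit, and `-f` selects the failure mode (0 = silent, 1 = printk, 2 = panic). An illustrative fragment, using the number mentioned above:

```
# audit rules header (illustrative fragment)
-b 524288
# log failures via printk instead of panicking
-f 1
```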


@frederikbosch commented on GitHub (Aug 20, 2018):

I have asked a question about this topic on the [Auditd mailing list](https://www.redhat.com/archives/linux-audit/2018-August/msg00026.html). I will let you know the outcome, or maybe I can create a PR to improve some configuration if necessary.


@konstruktoid commented on GitHub (Aug 21, 2018):

Thanks @frederikbosch for the post to the mailing list, an interesting discussion indeed, and you and Steve Grubb (https://www.redhat.com/archives/linux-audit/2018-August/msg00029.html) are both correct.
The rules are aggressive, but they are also a catch-all solution, and if you don't need that kind of logging the auditd settings and rules should be changed accordingly.


@frederikbosch commented on GitHub (Aug 21, 2018):

@konstruktoid Clear. But I would suggest decreasing some of the aggressiveness, because my system became almost immediately unusable. As soon as systemd started my backup app, it hit the backlog limit and consequently restarted. On another machine where I also used this hardening package, I hit the same limit while installing an updated kernel. I guess we should prevent that, even if the rules are aggressive by default.

Maybe we can split the rules: `hardening.rules`, `hardening-aggressive.rules` and `hardening-docker.rules`? I guess auditd initializes these rules in order of filename, so `hardening.rules` would be loaded first and the others would be supplementary. What do you think about that?
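The proposed split could look like the following layout (hypothetical file names taken from the suggestion above; auditd processes `/etc/audit/rules.d/*.rules` in lexical order):

```
/etc/audit/rules.d/
├── hardening.rules             # base rules, loaded first
├── hardening-aggressive.rules  # e.g. tmp, delete, admin_user_home watches
└── hardening-docker.rules      # Docker-related watches
```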


@frederikbosch commented on GitHub (Aug 22, 2018):

The command `aureport --start today --key --summary` gives me the following.

```
Key Summary Report
===========================
total  key
===========================
63164  tmp
16060  docker
7206  delete
6007  admin_user_home
2760  auditlog
1595  specialfiles
675  perm_mod
69  systemd
54  systemd_tools
36  init
15  sshd
12  cron
5  login
5  actions
4  access
3  privileged
1  audit_rules_networkconfig_modification
```

My suggestion would be to move the docker rules to a dedicated `hardening-docker.rules` and at least the following keys to `hardening-aggressive.rules`: `tmp`, `delete` and `admin_user_home`. Maybe we can tighten the `delete` key a little bit more. I will create a PR for this.
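A `hardening-docker.rules` file along the lines of the CIS Docker benchmark watches might contain entries such as these (an illustrative sketch; the exact paths depend on the benchmark version and the rules shipped by this project):

```
# hardening-docker.rules (illustrative)
-w /usr/bin/dockerd -k docker
-w /var/lib/docker -k docker
-w /etc/docker -k docker
-w /etc/default/docker -k docker
```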


@konstruktoid commented on GitHub (Aug 23, 2018):

Ref the PR #30


@frederikbosch commented on GitHub (Aug 23, 2018):

@konstruktoid Based on the suggestion on the audit mailing list, I also updated `/tmp` and `/var/tmp` to be mounted with `noexec`. Then the rules for the `tmp` key are not necessary anymore.

It only gave trouble with docker/compose, for which I found [a solution](https://github.com/docker/compose/issues/1339#issuecomment-415068438).

Is that something you want in this package too, `noexec` for tmp?
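In fstab terms, the mount options being discussed would look roughly like this (an illustrative sketch for systems that mount the tmp directories via `/etc/fstab` rather than systemd mount units):

```
# /etc/fstab (illustrative)
tmpfs  /tmp      tmpfs  rw,nosuid,nodev,noexec  0  0
tmpfs  /var/tmp  tmpfs  rw,nosuid,nodev,noexec  0  0
```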


@frederikbosch commented on GitHub (Aug 23, 2018):

Sorry, I see you already suggest it in the systemd document. Maybe we should stop watching the tmp folders in the audit rules, even in the aggressive rules. The number of log entries is beyond comprehension, in my opinion. I doubt that it is a good thing.


@konstruktoid commented on GitHub (Aug 23, 2018):

I would love to mount `/tmp` and `/var/tmp` with `noexec`, but that would break upgrades and package installations without also modifying the behavior of `dpkg` and `apt`.
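One common workaround is an apt hook that temporarily remounts `/tmp` with `exec` around package operations; a sketch (the file name is hypothetical):

```
// /etc/apt/apt.conf.d/99remount-tmp (illustrative)
DPkg::Pre-Invoke  { "mount -o remount,exec /tmp || true"; };
DPkg::Post-Invoke { "mount -o remount,noexec,nosuid,nodev /tmp || true"; };
```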


@konstruktoid commented on GitHub (Aug 23, 2018):

And (after reading the new messages in the RedHat thread) the Docker rules are based on the CIS Docker Community Edition Benchmark v1.1.0.


@konstruktoid commented on GitHub (Aug 23, 2018):

Will mess around with managing /tmp et al.


@frederikbosch commented on GitHub (Aug 23, 2018):

@konstruktoid I investigated this too. At least on Ubuntu 18.04 there is no problem anymore with `noexec` on tmp filesystems. I have done some updates/upgrades with apt, including a new kernel installation, and that works fine.

I read that on previous versions, 14.04 for instance, there were problems with apt, dpkg and specifically `initramfs`, and that you had to [remount](https://askubuntu.com/questions/574259/will-mounting-tmp-with-noexec-and-nosuid-cause-problems) the tmpfs in order to install updates/upgrades. Unfortunately, I cannot tell you which version fixed that. I have no idea.


@frederikbosch commented on GitHub (Aug 23, 2018):

> And (after reading the new messages in the RedHat thread) the Docker rules are based on the CIS Docker Community Edition Benchmark v1.1.0.

Yes, I found that out too. I read version 1.13.0 and it leaves me with some questions on auditing `/var/lib/docker`. It leads to so many log entries that I wonder how it can be useful. Every single change a container makes will be logged because of the watch on `/var/lib/docker/containers`. Therefore I applied yesterday to the CIS Workbench so I can ask a question about that topic over there.


@konstruktoid commented on GitHub (Aug 23, 2018):

Interesting about `noexec` in 18.04, and you're correct; however, I added the Pre/Post dpkg configuration "just in case": https://github.com/konstruktoid/hardening/commit/f86b0c1233a1850e02f03f96a11bcc10b8ce8a90

But I haven't seen anything regarding when this behavior changed.


@frederikbosch commented on GitHub (Aug 23, 2018):

@konstruktoid Did you test commit f86b0c1? When I was investigating, I tried to execute the `mount` commands manually on my Ubuntu 18.04 system. That failed because `/tmp` and `/var/tmp` are not in fstab. I did not find out how to (re)mount a tmpfs mounted through systemd.


@konstruktoid commented on GitHub (Aug 23, 2018):

You should be able to use the mount command as you normally do.

```
$ mount | grep '/tmp'
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec)
tmpfs on /var/tmp type tmpfs (rw,nosuid,nodev,noexec)
$ sudo mount -oremount,exec /var/tmp
$ mount | grep '/tmp'
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec)
tmpfs on /var/tmp type tmpfs (rw,nosuid,nodev)
$ sudo mount -oremount,exec /tmp
$ mount | grep '/tmp'
tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
tmpfs on /var/tmp type tmpfs (rw,nosuid,nodev)
$ sudo mount -oremount,noexec,nosuid,nodev /tmp
$ mount | grep '/tmp'
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec)
tmpfs on /var/tmp type tmpfs (rw,nosuid,nodev)
$ grep '/tmp' /etc/fstab
$ systemctl -a | grep -i 'tmp.mount'
  tmp.mount        loaded    active   mounted   Temporary Directory
  var-tmp.mount    loaded    active   mounted   Temporary Directory
$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.1 LTS
Release:	18.04
Codename:	bionic
```

@frederikbosch commented on GitHub (Aug 23, 2018):

Ah, cool, I misunderstood the remount within the mount command. Now I get it. My bad. Thanks!


@frederikbosch commented on GitHub (Aug 30, 2018):

Fixed by #30


@frederikbosch commented on GitHub (Sep 3, 2018):

For your info, I have [opened a discussion at the CIS Workbench](https://workbench.cisecurity.org/community/37/discussions/4032) regarding the Docker audit rules, especially auditing `/var/lib/docker`.
