Setup Lab Environment using CentOS Linux and Oracle VirtualBox

Last week I started the Red Hat Server Hardening (RH413) online training from Network Nuts, so I thought of setting up a lab environment to practice the quiz at the end of each unit. I decided to use the open source CentOS Linux and Oracle VirtualBox on my Dell Latitude Windows 7 laptop so it won't cost me anything extra. Since this is a lab environment where we experiment and break things, it is good practice to keep a baseline image of the operating system, which we can use to create multiple identical virtual machines within a few minutes using the Clone feature. VirtualBox also has another great feature: point-in-time snapshots. Let's check out the step-by-step process to build the lab environment,
  1. Download the latest available VirtualBox binaries for our platform from https://www.virtualbox.org/wiki/Downloads and go ahead with the installation. The installation is pretty straightforward: just double-click the binary file and follow the wizard. At the time of writing this article the VirtualBox version was 4.3.6.

  2. Download the "CentOS-6.5-x86_64-minimal.iso" from the CentOS website. We can download it from the nearest mirror to our location.

    i386: http://isoredirect.centos.org/centos/6/isos/i386/
    x86_64: http://isoredirect.centos.org/centos/6/isos/x86_64/

  3. Create a virtual machine with type Linux and version Red Hat (64 bit). All the default installation parameters should be fine, like 512M memory and an 8G hard disk, as we are going to do a minimal installation. Note: I am assuming we have x86_64 capable hardware and downloaded the 64 bit ISO file.

  4. Once the virtual machine is created, configure the following,
    • Click on Storage and under IDE Controller select the Empty CD/DVD drive. On the right side, under Attributes, browse and select the ISO file we downloaded in step #2.
    • Click on Network, change Attached to to Bridged Adapter and Adapter Type to Paravirtualized Network (virtio-net).
    • Also, I prefer to disable any unwanted components like Audio, Serial Ports and USB.

  5. Start the virtual machine and follow the wizard to install the CentOS Linux.

  6. The default CentOS network configuration does not bring the interface up at boot time. To change that, edit the interface configuration file,

    # vi /etc/sysconfig/network-scripts/ifcfg-eth0

    Change the parameter ONBOOT=no to ONBOOT=yes, save the file with :wq!, and restart the network service,

    # service network restart
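
    Alternatively, the same edit can be scripted; the sed one-liner below flips the parameter in place (a sketch assuming the file contains ONBOOT=no exactly as installed),

    # sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0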

  7. Go ahead and apply all the system patches,

    # yum -y upgrade

    And reboot the system,

    # reboot

  8. Before we proceed with the VirtualBox Guest Additions dependencies, we need to configure the Extra Packages for Enterprise Linux (EPEL) repository to install DKMS (Dynamic Kernel Module Support framework), originally developed by Dell. If DKMS is not used, the VirtualBox Guest Additions will need to be re-installed after every kernel update. We can browse the latest available EPEL release at http://dl.fedoraproject.org/pub/epel/6/x86_64/repoview/epel-release.html; at the time of writing this article the latest available package was epel-release-6-8.noarch. To install/configure EPEL,

    # rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

  9. For security reasons we might want to disable the EPEL repository and enable it explicitly only when required. To disable the EPEL repository,

    # vi /etc/yum.repos.d/epel.repo

    Under [epel] change the line enabled=1 to enabled=0 and save the file with :wq!
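
    The same edit can also be scripted; the sed command below flips enabled=1 to enabled=0 only inside the [epel] section (a sketch assuming the stock epel.repo layout), and yum repolist then confirms that epel is no longer enabled,

    # sed -i '/^\[epel\]$/,/^\[epel-/ s/^enabled=1/enabled=0/' /etc/yum.repos.d/epel.repo
    # yum repolist enabled | grep -i epel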

  10. Now install the VirtualBox Guest Additions dependencies,

    # yum -y install kernel-devel gcc make perl dkms --enablerepo epel

  11. On the virtual machine's window, click Devices and then Insert Guest Additions CD image...

  12. To mount the CD drive in CentOS,

    # mount /dev/cdrom /mnt

  13. To install the VirtualBox Guest Additions,

    # /mnt/VBoxLinuxAdditions.run

    The output will be,

    Verifying archive integrity... All good.
    Uncompressing VirtualBox 4.3.6 Guest Additions for Linux............
    VirtualBox Guest Additions installer
    Removing installed version 4.3.6 of VirtualBox Guest Additions...
    Copying additional installer modules ...
    Installing additional modules ...
    Removing existing VirtualBox DKMS kernel modules           [  OK  ]
    Removing existing VirtualBox non-DKMS kernel modules       [  OK  ]
    Building the VirtualBox Guest Additions kernel modules
    Building the main Guest Additions module                   [  OK  ]
    Building the shared folder support module                  [  OK  ]
    Building the OpenGL support module                         [FAILED]
    (Look at /var/log/vboxadd-install.log to find out what went wrong)
    Doing non-kernel setup of the Guest Additions              [  OK  ]
    Installing the Window System drivers                       [FAILED]
    (Could not find the X.Org or XFree86 Window System.)


    The output shows two failures. The second one is for the Window System drivers, which we don't care about because this minimal CentOS installation has no X Window System; the first one, though, failed to build the OpenGL support module.

    OpenGL (Open Graphics Library) is a cross-language, multi-platform application programming interface (API) for rendering 2D and 3D computer graphics. The API is typically used to interact with a Graphics processing unit (GPU), to achieve hardware-accelerated rendering. - Wikipedia

    If we tail the log file /var/log/vboxadd-install.log and look near the bottom, we will notice there are four header files that the VirtualBox Guest Additions build couldn't find,

    include/drm/drmP.h:76:21: error: drm/drm.h: No such file or directory
    include/drm/drmP.h:77:27: error: drm/drm_sarea.h: No such file or directory
    ...
    include/drm/drm_crtc.h:35:26: error: drm/drm_mode.h: No such file or directory
    include/drm/drm_crtc.h:37:28: error: drm/drm_fourcc.h: No such file or directory


    But if we check what provides these files, the package is already installed on the system,

    # yum whatprovides "*drm/drm.h"

    Now let's check if these files are present in the kernel source,

    # ls /usr/src/kernels/$(uname -r)/include/drm | grep -E "^drm_fourcc.h$|^drm.h$|^drm_mode.h$|^drm_sarea.h$"

    If this does not return any output, it means the files are not present in the kernel source tree and we need to create the symbolic links manually,

    # cd /usr/src/kernels/$(uname -r)/include/drm
    # ln -s /usr/include/drm/drm.h drm.h
    # ln -s /usr/include/drm/drm_sarea.h drm_sarea.h
    # ln -s /usr/include/drm/drm_mode.h drm_mode.h
    # ln -s /usr/include/drm/drm_fourcc.h drm_fourcc.h
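
    The same four links can also be created in one small loop (equivalent to the commands above),

    # cd /usr/src/kernels/$(uname -r)/include/drm
    # for f in drm.h drm_sarea.h drm_mode.h drm_fourcc.h; do ln -s /usr/include/drm/$f $f; done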

  14. Install the VirtualBox Guest Additions again,

    # /mnt/VBoxLinuxAdditions.run

    Hurray!! Happy days, everything is OK this time.

  15. When we reboot the system and watch the boot messages, we will notice the following message within the very first few lines,

    Starting udev: piix4_smbus 0000:00:07.0: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr

    This error is caused by the VM having no SMBus, while the operating system always tries to load the module. It doesn't affect anything and can be safely ignored, but it is a bit annoying, so to fix it let's first check if the module is being loaded,

    # lsmod | grep -i piix4

    If yes, then go ahead and blacklist the module,

    # echo "# Blacklist the smbus i2c_piix4 module" >> /etc/modprobe.d/blacklist.conf
    # echo "blacklist i2c_piix4" >> /etc/modprobe.d/blacklist.conf

  16. Finally reboot the system,

    # reboot

  17. Before we lock down this baseline image, let's install three more frequently used packages,

    # yum -y install wget man vim-enhanced
Now we are good to power off the machine; just right click and clone to spin up a new virtual machine. Every time we clone the machine there are two recurring tasks to perform. First, rename the network interface eth1 back to eth0, which I have already documented, but those are manual steps; stay tuned for my upcoming posts on a scripted way to fix the issue. Second, update the host name in the /etc/hosts and /etc/sysconfig/network files. I couldn't find a way to get the VM name from within the CentOS guest, but I am checking at the VirtualBox forum if this is even possible; otherwise stay tuned for my next article on a scripted (less manual) way to update the host name.
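
In the meantime, here is a minimal sketch of both manual fixes, run inside the fresh clone (it assumes the clone received a new MAC address, which is the VirtualBox default; OLDNAME and NEWNAME are placeholders for your actual host names),

# rm -f /etc/udev/rules.d/70-persistent-net.rules
# sed -i '/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-eth0
# sed -i 's/^HOSTNAME=.*/HOSTNAME=NEWNAME/' /etc/sysconfig/network
# sed -i 's/OLDNAME/NEWNAME/g' /etc/hosts
# reboot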

Any feedback will be highly appreciated.


Collect Centrify DirectControl Debug Log

In this post I am going to provide the steps to collect the debug logs that should be provided while reporting an authentication failure issue with Centrify DirectControl. If you have the paid version then you can simply send an email to support@centrify.com with the log files; otherwise the Centrify Forum is a great one-stop shop for you. Make sure you run these steps before fixing the issue, i.e. before restarting the system or the centrifydc service.

These steps are for RHEL (Red Hat Enterprise Linux) users, but other distributions should have similar steps; just the directory structure might differ a little bit (a small wrapper script sketch follows the list),
  1. Login to the host as root.
  2. Make sure /var, /var/log and /tmp have sufficient free space.
  3. Clear debug log: /usr/share/centrifydc/bin/addebug clear
  4. Turn on debug mode: /usr/share/centrifydc/bin/addebug on
  5. Run: adquery user <AD_user> -A > /tmp/adqueryuser.txt
  6. Run: dzinfo <AD_user> -A > /tmp/dzinfo.txt
  7. Reproduce issue
  8. Run: adinfo -t
  9. Turn off debug mode: /usr/share/centrifydc/bin/addebug off
  10. Send these files to support or attach them while asking for help at the forum.
    a) /tmp/adinfo_support.tar.gz (DirectControl 5.0.x or before) or /var/centrify/tmp/adinfo_support.tar.gz (DirectControl 5.1)
    b) /tmp/adqueryuser.txt
    c) /tmp/dzinfo.txt
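
If you collect these logs often, the steps condense into a small wrapper script; the sketch below is hypothetical (not an official Centrify tool), takes the AD user name as its first argument, and pauses while you reproduce the issue,

#!/bin/bash
# collectCentrifyDebug.sh <AD_user> - hypothetical wrapper around steps 3-9 above
ADDEBUG=/usr/share/centrifydc/bin/addebug
$ADDEBUG clear                                   # clear the old debug log
$ADDEBUG on                                      # turn on debug mode
adquery user "$1" -A > /tmp/adqueryuser.txt
dzinfo "$1" -A > /tmp/dzinfo.txt
echo "Reproduce the issue now, then press Enter..."
read
adinfo -t                                        # generate the support tarball
$ADDEBUG off                                     # turn off debug mode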


Python Script: Check Ping Status and Lookup Hostname from IP List

Today I got a request to disable insecure SNMP v1 / v2c on a list of IP addresses. If you are still using SNMP v1 / v2c then it's really time to disable those versions and configure secure SNMP v3. Some time ago I wrote a post on how to install and configure Net-SNMP v3 on Red Hat Enterprise Linux 4; you might want to check that out as well. Now, I had no idea what these servers were, because I memorize my servers by their host names, and I didn't want to run nslookup or host or dig one by one on each IP. Also, I wanted to check whether these IPs were alive / ping-able on my network or not. I could have used a for loop and made one complicated command, but for a couple of reasons I chose not to do that,

  • I might get a long list of IPs again, and writing a long, complicated one-time command using a for loop may not be the best way.

  • I had a list of a few hundred IPs; a for loop is slow and I wanted something fast.

So I decided to write a script, pingIPGetHostname.py, in Python, using its multiprocessing module to make things fast. I encourage you to download it from GitHub and give it a try, and don't hesitate to comment on this post if you have any issue or suggestions to improve the script.


Usage: pingIPGetHostname.py [ -i ip-address] [ -f filename]

You may run the script with option -h or --help to check the usage,

$ pingIPGetHostname.py -h

or,

$ pingIPGetHostname.py --help

You may run the script with option -i or --ip-address by giving one IP address,

$ pingIPGetHostname.py -i 8.8.8.8

or,

$ pingIPGetHostname.py --ip-address 8.8.8.8

If you have a list of IPs then simply put them in a file (one IP per line) and run the script with -f or --filename option by giving the file name and file path,

$ pingIPGetHostname.py -f /home/username/ip-list.txt

or,

$ pingIPGetHostname.py --filename /home/username/ip-list.txt
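
For a quick one-off, the parallel idea can also be approximated in plain bash with GNU xargs; this is just an illustration of the approach (it assumes GNU xargs and the host utility are available), not the script itself,

$ xargs -a /home/username/ip-list.txt -P 20 -n 1 sh -c 'ping -c 1 -W 1 "$0" >/dev/null 2>&1 && echo "$0 is up: $(host "$0" | head -1)" || echo "$0 is down"'

The -P 20 option runs up to 20 pings in parallel, which is where the speed-up comes from.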



adclient: DEBUG util.except (cims::RPC) : NetLogon::authenticate failed: Buffer Overflow

If you are running DirectControl 5.0.x and have addebug enabled, then you may see the below authenticate failed messages pretty much every 30 seconds in your /var/log/centrifydc.log file,

adclient[12219]: DIAG  <bg:updateOS> smb.rpc.netlogon authenticate - useAuthen3=1.

adclient[12219]: DEBUG <bg:updateOS> util.except (cims::RPC) : NetLogon::authenticate failed: Buffer Overflow (reference ../smb/rpcclient/netlogon.cpp:247 rc: -2147483643)

These messages are logged because your computer's samAccountName (in layman's terms, the hostname) is greater than 15 characters long and you are on DirectControl 5.0.x. This has been fixed in 5.1.0 and in the current latest version, which is 5.1.2 at the time of writing this blog post.

Every 30 seconds Centrify's adclient checks to see if the correct OS version and tattoo are set. If they are not, it first tries to update them via the NETLOGON API. If this fails too, it tries LDAP. However, since the computer's samAccountName (in layman's terms, the hostname) is greater than 15 characters long, the NETLOGON API throws an exception, so the LDAP method is never tried. This means the agent process will report a failed event every 30 seconds.
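
A quick way to check whether a host falls into this bucket is to count the characters in its short host name (assuming the computer account follows the host name); anything over 15 indicates the problem,

# hostname -s | awk '{ print length($0) }'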

To check if addebug is enabled,

/usr/share/centrifydc/bin/addebug status

To turn on the addebug,

/usr/share/centrifydc/bin/addebug on

To turn off the addebug,

/usr/share/centrifydc/bin/addebug off

So, if you are still running DirectControl 5.0.x then it's time to upgrade to the latest available version.



VMware vSphere: Design Workshop Class

It was the first day of my VMware vSphere: Design Workshop class and I was quite excited to learn the design material, kind of my second step after VCP (VMware Certified Professional) towards the vSphere Architect role. The class was taught by Patrick Fong, a trainer from www.ivtsys.com, at VMware Singapore, and I found him a straightforward and nice guy. It is a slide-based training class, and one of the good things about that is it covers most of the topics and details, big or small, that are necessary from VMware's perspective, but it leaves less time to discuss real-life designs and experiences. I want to see more production designs to talk about and a little less theory, as VMware has already done a great job on white papers, documentation and the knowledge base.

Although the first half was dry for some of the attendees in the class, I found it useful because I had never thought of using the discussed approach while designing vSphere. It is very crucial to interview the stakeholders and to have proper conceptual and logical designs. This is where we should spend most of our design time, because if we get that right then the engineering design is comparatively easy. The module talks about a lot of things, but in real life it's not always possible to imitate that exactly. We should have some standards based on organization objectives, and following those standards along with VMware best practices is the key to a good design for an organization.

The second half was completely a technical discussion around storage design, which is very important in the virtualization world. As per a rough estimate, more than 50% of VMware support cases are due to storage issues. It's not a bad idea to reserve a good chunk of the budget for efficient storage in small-medium business and enterprise designs. One thing that I like the most is that VMware and the storage vendors are working together to create plugins and provide good information about the storage allocation, RAID groups, disks etc. within the vSphere client, so the virtualization and storage engineers have transparency about the platform while troubleshooting an operational incident.

At the end of the day we talked through a sample scenario, and while it was a little brainstorming, I want to see more of that in the class. Overall I enjoyed the first day of the class and would recommend it to all those who want to jump-start their career from vSphere administration into design.

Any feedback will be highly appreciated.


Setup Net-SNMP v3 on VMware ESX 4.0

Most enterprises have a central monitoring system which is used to monitor pretty much every system and network gear in the infrastructure. In this post we will configure a secure, user-based Net-SNMP v3 agent on VMware ESX 4.0 so these hosts can also be monitored from the same central monitoring system in a secured manner. If you want to install and configure Net-SNMP v3 on RHEL4 systems then check out my other post. You need to perform these steps as the root user, so login to the ESX 4.0 console as root now.

Before making any change let's backup the ESX 4.0 host configuration,

# cp -ap /etc/vmware/esx.conf /etc/vmware/esx.conf.`date +%F-%H%M%S`

To open the port UDP/161 for a specific IP Address,

# esxcfg-firewall --ipruleAdd xxx.xxx.xxx.xxx,161,udp,ACCEPT,"snmpd"

Make sure to replace xxx.xxx.xxx.xxx with the IP address that will be polling the information using the SNMP v3 credentials. If you have more than one IP address, the command can be repeated for each of them, for example in a loop as shown below.
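
For several pollers the rule repeats cleanly in a loop (the two addresses below are placeholders),

# for ip in 192.0.2.10 192.0.2.11; do esxcfg-firewall --ipruleAdd $ip,161,udp,ACCEPT,"snmpd"; done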

To stop the Net-SNMP Agent if already running,

# /etc/init.d/snmpd stop

To move the default file where SNMP v3 user's localized authentication and privacy keys are stored,

# mv /var/net-snmp/snmpd.conf /var/net-snmp/snmpd.conf.`date +%F-%H%M%S`

To move the default Net-SNMP configuration file,

# mv /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.`date +%F-%H%M%S`

We need to start and stop the Net-SNMP Agent once so that it generates its unique Engine ID; after that we can create the SNMP v3 user,

# /etc/init.d/snmpd start
# /etc/init.d/snmpd stop

Now let's create a Net-SNMP v3 read-only user with MD5 authentication and AES encryption. Make sure to replace the xxxxxxxx with your pass phrases and username with whatever name you want to give your SNMP v3 user,

# echo "createUser username MD5 xxxxxxxx AES xxxxxxxx" >> /var/net-snmp/snmpd.conf
# echo "rouser username" > /etc/snmp/snmpd.conf

Note: The minimum pass phrase length is 8 characters, so make sure to choose two different strong alphanumeric pass phrases, one for authentication and the other for encryption.

To start the Net-SNMP Agent at boot time,

# chkconfig snmpd on

To start the Net-SNMP Agent now,

# /etc/init.d/snmpd start

We have successfully completed the Net-SNMP v3 setup on VMware ESX 4.0. Now let's use snmpwalk to test whether we are able to poll the information correctly,

# snmpwalk -v 3 -u username -l authPriv -a MD5 -A xxxxxxxx -x AES -X xxxxxxxx localhost sysDescr

Where,
    xxxxxxxx are your authentication and encryption pass phrases
    username is your SNMP v3 user name

If the hardware vendor of your ESX host is Dell and OMSA is installed on the host, then you have the option to poll hardware events via Net-SNMP v3. To take advantage of this feature you need to enable SNMP in OMSA,

# /etc/init.d/dataeng enablesnmp

This should add below line in Net-SNMP configuration file /etc/snmp/snmpd.conf,

smuxpeer .1.3.6.1.4.1.674.10892.1

But if you don't see the line in /etc/snmp/snmpd.conf, then go ahead and add it manually,

# echo "smuxpeer .1.3.6.1.4.1.674.10892.1" >> /etc/snmp/snmpd.conf

To restart the OMSA services,

# srvadmin-services.sh restart

In most cases srvadmin-services.sh is located at /opt/dell/srvadmin/sbin/srvadmin-services.sh

Now just restart the Net-SNMP Agent and you are done,

# /etc/init.d/snmpd restart

Any feedback will be highly appreciated.


Bash Script: Monitor Inode Usage for UFS in Solaris 10

A file is the smallest unit of storage in the Unix file system (UFS). In Unix and Unix-like operating systems, each file is associated with an inode (index node) that stores attributes like the permissions, owner, group, size, type, timestamps, etc. of file system objects like regular files and directories. Note that an inode contains all the information about a file except its name, which is kept in a directory. The size of an inode is 128 bytes. Inode allocation in UFS is static, unlike in XFS or NTFS, so you can easily run out of inodes if you have a lot of small files. This makes it critical to put inode monitoring in place and do some trend analysis to prevent downtime. So here is a bash shell script to monitor inode usage for UFS in Solaris 10. The script can send you an email notification if the inode usage goes over 90% and also logs the data in CSV format for tracking the growth. You should run the script from cron jobs so that it keeps working even when you are not in front of your server.

I encourage you to download the script inodeUsageCheck.sh from GitHub and give it a try, and don't hesitate to comment on this post if you have any issue or suggestions to improve the script.

You can check the script here:

https://github.com/sumitgoel17/sysAdminScripts/blob/master/inodeUsageCheck.sh

Usage: inodeUsageCheck.sh

You should change the notification email address in the script; multiple space-separated email addresses can be defined. If you want to exclude some of the partitions, update the EXCLUDE_LIST variable; multiple "|" (pipe) separated partitions can be defined. Also, the default path to save the CSV file is /tmp/inode_usage_data.csv, but if you want to change that, simply update the INODE_USAGE_DATA variable.
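
The core of the check is a one-liner around the Solaris df command; the sketch below (not the GitHub script itself) prints any UFS file system whose inode usage exceeds the threshold. nawk is used because the old default Solaris awk lacks -v and sub(),

#!/bin/bash
# Minimal inode usage sketch for Solaris 10 UFS; 90 is the alert threshold in percent.
THRESHOLD=90
df -F ufs -o i | nawk -v t=$THRESHOLD 'NR > 1 { sub(/%/, "", $4); if ($4 + 0 > t) print $5 " is at " $4 "% inode usage" }'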

In a UFS volume, Inodes are numbered sequentially, starting at 0. The first two inodes are reserved for historical reasons, followed by the inode for the root directory, which is always inode 2. - Wikipedia

Any feedback will be highly appreciated.


Bash Script: List Last Modified Files on Linux Server

Today I am going to share a simple bash shell script to list the last modified files in a Linux server directory. If I Google this topic I get several articles / blog posts suggesting to install some tool and then use it to find the information, but I didn't want to install anything on my server, so I went ahead and created a bash shell script that uses the find command and placed it in my cron jobs, so it notifies me via email if the content of a file is changed or a new file is created in the directory.

I encourage you to download the script lastModifiedFiles.sh from GitHub and give it a try, and don't hesitate to comment on this post if you have any issue or suggestions to improve the script.

You can check the script here:

https://github.com/sumitgoel17/sysAdminScripts/blob/master/lastModifiedFiles.sh

Usage: lastModifiedFiles.sh <directory path>

Make sure to update the correct notification email address in the script. If you want to run this script from a cron job, you have the option to hard code the directory path in the script; if you define the path on the command line and also have a hard-coded path, the command line argument is used. Also, keep the script's lookup time in mind while setting up the cron job so the information stays accurate.
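
The underlying idea is a single find invocation; here is a minimal sketch (not the GitHub script itself) that lists files modified in the last 60 minutes and mails the list only when something changed. The email address and the default directory are placeholders,

#!/bin/bash
# List files under a directory modified in the last 60 minutes and mail the result.
DIR=${1:-/etc}    # directory to watch; /etc is just a placeholder default
CHANGED=$(find "$DIR" -type f -mmin -60)
[ -n "$CHANGED" ] && echo "$CHANGED" | mail -s "Modified files in $DIR" admin@example.com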

Any feedback will be highly appreciated.


Use SHRED over RM to Delete Sensitive Data in Linux

Are you looking for a way to securely delete sensitive data (or files) from a hard drive in Linux? If yes, then this post is for you. I want to show you the shred command in Linux, which overwrites the specified file(s) to hide and delete their contents. We are very used to using the rm command in Linux to delete files, but in the background it just unlinks the data blocks from their index number, and the content of the file remains on the hard drive, where it can possibly be recovered using data recovery software or hardware appliances. That makes rm insufficient when it comes to destroying data files. shred provides a mechanism to repeatedly overwrite the data file(s) and optionally delete them, in order to make it harder for even very expensive hardware probing to recover the data.

Important things to know about shred,
  • shred only works on file(s), not on directories (see the find workaround below).
  • shred overwrites the file content 3 times with random patterns by default, but with the -n or --iterations option the number of overwrites can be increased or decreased.
  • shred does not delete file(s) by default, but with the -u or --remove option the specified file(s) can be deleted.
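
Since shred does not recurse into directories, a whole tree is usually handled by pairing it with find. Be careful: the command below is destructive, so test it on throwaway data first,

# find /path/to/dir -type f -exec shred -fzu {} +

The empty directory skeleton that remains can then be removed with a plain rm -rf.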

Let's see it in action,

[root@sldc-lab-lv6-a ~]# dd if=/dev/zero of=./secret.doc bs=1024 count=1024
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.00276469 s, 379 MB/s
[root@sldc-lab-lv6-a ~]# ls -l secret.doc
-rw-r--r--. 1 root root 1048576 Feb  2 02:22 secret.doc
[root@sldc-lab-lv6-a ~]#

We have created a file secret.doc using the dd command; just imagine this is our sensitive data file that we want to destroy completely.

[root@sldc-lab-lv6-a ~]# shred -zuv secret.doc
shred: secret.doc: pass 1/4 (random)...
shred: secret.doc: pass 2/4 (random)...
shred: secret.doc: pass 3/4 (random)...
shred: secret.doc: pass 4/4 (000000)...
shred: secret.doc: removing
shred: secret.doc: renamed to 0000000000
shred: 0000000000: renamed to 000000000
shred: 000000000: renamed to 00000000
shred: 00000000: renamed to 0000000
shred: 0000000: renamed to 000000
shred: 000000: renamed to 00000
shred: 00000: renamed to 0000
shred: 0000: renamed to 000
shred: 000: renamed to 00
shred: 00: renamed to 0
shred: secret.doc: removed
[root@sldc-lab-lv6-a ~]#

In the above command we used three options,
    -z, add a final overwrite with zeros to hide shredding
    -u, truncate and remove file after overwriting
    -v, show progress

But if you want to use the shred command in a shell script, you can skip -v and add -f, which changes the permissions of the file to allow writing if necessary.

# shred -fzu secret.doc

CAUTION from the shred manual page,

Note that shred relies on a very important assumption: that the file system overwrites data in place. This is the traditional way to do things, but many modern file system designs do not satisfy this assumption. The following are examples of file systems on which shred is not effective, or is not guaranteed to be effective in all file system modes:
  • log-structured or journaled file systems, such as those supplied with AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, etc.)
  • file systems that write redundant data and carry on even if some writes fail, such as RAID-based file systems
  • file systems that make snapshots, such as Network Appliance’s NFS server
  • file systems that cache in temporary locations, such as NFS version 3 clients
  • compressed file systems
In the case of ext3 file systems, the above disclaimer applies (and shred is thus of limited effectiveness) only in data=journal mode, which journals file data in addition to just metadata. In both the data=ordered (default) and data=writeback modes, shred works as usual. Ext3 journaling modes can be changed by adding the data=something option to the mount options for a particular file system in the /etc/fstab file.

In addition, file system backups and remote mirrors may contain copies of the file that cannot be removed, and that will allow a shredded file to be recovered later.

Any feedback will be highly appreciated.


Crontab: User not allowed to access to (crontab) because of pam configuration

Cron is a daemon in Linux that executes scheduled commands. Cron looks in the /var/spool/cron directory for crontab files, which are named after the user accounts in the /etc/passwd file; the crontabs found there are loaded into memory. Cron also reads the /etc/crontab file and the files in the /etc/cron.d directory. On Red Hat systems, crond supports access control with PAM (Pluggable Authentication Modules). A PAM configuration file for crond is installed at /etc/pam.d/crond. Crond loads the PAM environment from the pam_env module, but these settings can be overridden by settings in the crontab file.

Today my system user account threw the below error while listing the crontab,

[root@server01 ~]# crontab -l -u sumitgoel

User account has expired
You (sumitgoel) are not allowed to access to (crontab) because of pam configuration.
[root@server01 ~]# su - sumitgoel
sumitgoel@server01 ~ $ crontab -l

User account has expired
You (sumitgoel) are not allowed to access to (crontab) because of pam configuration.
sumitgoel@server01 ~ $

So the first thing to check here is the user account's password expiry information; chage is a nice command to show the account aging information,

# chage -l <username>

Most likely the user account's password has expired, and we just need to reset the password to fix the issue. If this is a service account whose password is used in countless places and you simply cannot change it on the fly, then disable password expiration for the account,

# chage -I -1 -m 0 -M 99999 -E -1 <username>

You should be all good now, but several other things can be checked if you still have this issue (a condensed set of commands follows the list),

  • Make sure crond is running using command: /etc/init.d/crond status
  • Check logs for any errors in /var/log/cron and /var/log/messages files.
  • Make sure the user is not listed in /etc/cron.deny file.
  • If /etc/cron.allow file exists, then username must be listed in there to allow the use of cron jobs.
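
Those checks condense into a few quick commands (username is a placeholder for the affected account),

# service crond status
# grep -i error /var/log/cron /var/log/messages
# grep -w username /etc/cron.deny /etc/cron.allow 2>/dev/null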
