Showing posts with label linux. Show all posts

August 20, 2020

Simple way to Boot up CentOS 7 / RHEL 7 in single user mode

The simplest way I found to boot up CentOS or RHEL version 7 is as follows:

  • Boot up or reboot your server
  • Choose the desired kernel from the Grub boot loader menu by moving with your keyboard's arrow keys
  • Press "e" (for edit), then scroll down until you reach the line that begins with "linux16"
  • Navigate to the end of that line and append "rd.break"
  • Press "Ctrl+x" to boot the kernel you just edited


  • Once boot-up finishes, you should see a message saying "Entering Emergency Mode" and you should be presented with a shell prompt
  • Remount /sysroot in ‘rw’ mode with the following commands:
    • switch_root:/# mount -o remount,rw /sysroot
    • switch_root:/# chroot /sysroot


  • Once the filesystem is remounted successfully, you can proceed with whatever you booted into Single User Mode to do: checking/fixing partitions, resetting root's password, disabling/enabling a service, etc.
Once you are finished with your work in Single User Mode, type exit or press "Ctrl+d" to exit that shell, then reboot your system the usual way (shutdown -r now, reboot, etc.).
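Condensed, the whole sequence looks something like the following transcript sketch. The grub edit itself is interactive, so this is not a runnable script; the kernel arguments are placeholders, the password reset is just one example task, and the /.autorelabel step applies only when SELinux is enforcing:

```
# In the grub editor, append rd.break to the line beginning with linux16:
linux16 /vmlinuz-3.10.0-... root=... ro rd.break

# Ctrl+x boots; at the emergency shell:
switch_root:/# mount -o remount,rw /sysroot
switch_root:/# chroot /sysroot

# Example task: reset root's password
sh-4.2# passwd root
sh-4.2# touch /.autorelabel   # only needed if SELinux is enforcing
sh-4.2# exit
switch_root:/# exit           # boot continues normally
```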

That's all.



July 17, 2020

Error: Rpmdb changed underneath us

Problem:

Ran into this error while attempting to apply a yum update on a CentOS 6 machine to update its kernel packages.
Error: Rpmdb changed underneath us 
The same scenario applies if you get this error as well:
error: can not open Packages database in /var/lib/rpm

Cause:

The problem has to do with corruption in the RPM database files under the /var/lib/rpm directory.

Solution:

  • Check for any processes that might currently be running and holding a lock on the RPM database, and kill them if they exist:
    • ps aux | grep -i rpm
  • Delete the temporary DB files:
    • rm -fv /var/lib/rpm/__db*
  • Rebuild your server's RPM database using the command below:
    • rpm --rebuilddb -v -v

After completing these steps, attempt to run "yum update" again; it will most likely work fine now.
It worked for me.
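As a sanity check on the glob in the delete step, here is a minimal sketch using a scratch directory in place of /var/lib/rpm (so it is safe to run unprivileged): the __db* pattern removes only rpm's temporary Berkeley DB lock files, never the Packages database itself.

```shell
# Simulate the stale lock files rpm leaves behind in /var/lib/rpm
mockdb=$(mktemp -d)
touch "$mockdb/__db.001" "$mockdb/__db.002" "$mockdb/Packages"

# Remove only the temporary __db.* files
rm -fv "$mockdb"/__db*

ls "$mockdb"    # Packages survives
```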



Original Solution Post:

April 29, 2020

Postfix fails with (error: postdrop: warning: mail_queue_enter: create file maildrop/randomfilename.xxxxx: Permission denied)

Problem: Postfix delivery fails with error

(postdrop: warning: mail_queue_enter: create file maildrop/randomfilename.xxxxx: Permission denied)


** see note below

Cause:

/usr/sbin/postdrop has incorrect permissions. 
The correct permissions for /usr/sbin/postdrop are as follows:
# ll /usr/sbin/postdrop

-rwxr-sr-x. 1 root postdrop 180808 Aug 23  2018 /usr/sbin/postdrop

Solution:

Fix permissions with the following:


rpm --setperms postfix
rpm --setugids postfix

To prepare for this next part, you need to make sure that you have the yum-plugin-verify.noarch package installed from your standard CentOS or Red Hat YUM repository. With it you can figure out the default ownership and permissions for any file that may be having a permissions issue on your Linux system:

  • Figure out which package provides the file you are troubleshooting. On my CentOS 6 system, this can be accomplished by running:
    • rpm -q --whatprovides /usr/sbin/postdrop
    • This prints the name of the package that provided our postdrop file when it was installed:
      • postfix-2.6.6-8.el6.x86_64
  • Next, run the yum verify-all command against that package name to see the default ownership and permissions for its files:
    • yum verify-all postfix-2.6.6-8.el6.x86_64
  • Note that on a system where the permissions have already been fixed, this command will mostly complain about checksum and mtime values (because this Postfix install was updated and modified multiple times) rather than the permissions problem itself.


April 27, 2020

resolv.conf reverts to old DNS entries

/etc/resolv.conf keeps reverting back to its old entries after updating your DNS server list whether manually or via the setup front-end tool for setting up your Network, authentication, services etc on RHEL or CentOS version 5,6,7.

The solution comes from Red Hat's KB article entitled "How to make persistent changes to the /etc/resolv.conf?" (https://access.redhat.com/solutions/7412).


The issue is that DNS servers in /etc/resolv.conf changed after a reboot or network service restart.

If a single ifcfg-file both specifies a nameserver using DNS1 and also gets a nameserver via DHCP, both nameservers will be placed in resolv.conf.


Root Cause:

- According to the script /etc/sysconfig/network-scripts/ifdown-post, if "RESOLV_MODS=no" or "PEERDNS=no" is not present in the relevant /etc/sysconfig/network-scripts/ifcfg-* file, the contents of /etc/resolv.conf can get overwritten with /etc/resolv.conf.save.
- The /etc/sysconfig/network-scripts/ifup-post script performs the same check for "RESOLV_MODS=no" or "PEERDNS=no".


Resolution:

In my situation the change was due to the DNS1 and DNS2 directives in the ifcfg-eth0 file, which led to the modification of resolv.conf.

In my particular case, the solution was to mark /etc/resolv.conf as immutable, preventing any tool or configuration from modifying it:

chattr +i /etc/resolv.conf
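Alternatively, and less bluntly than the immutable flag, you can tell the network scripts not to touch resolv.conf at all. A hypothetical /etc/sysconfig/network-scripts/ifcfg-eth0 sketch (the DNS values are example addresses; PEERDNS=no is the directive the ifup-post/ifdown-post scripts check for):

```
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
# Stop the ifup-post/ifdown-post scripts from rewriting /etc/resolv.conf
PEERDNS=no
# Static nameservers (example values); omit these if you manage
# /etc/resolv.conf by hand
DNS1=192.0.2.53
DNS2=198.51.100.53
```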

For diagnosing the issue, look for entries similar to the following in your /var/log/messages:

Oct 14 12:40:52 hostname NET[22961]: /etc/sysconfig/network-scripts/ifdown-post : updated /etc/resolv.conf
Oct 14 12:40:57 hostname NET[23256]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf


December 18, 2018

Can't delete a symbolic link to a Linux directory!!

I once came across this issue where I wanted to delete a symbolic link to a directory (named foo for this example) on a CentOS box using the "rm" command, and I got this error message:

rm: cannot remove `foo/': Is a directory

which was a little frustrating although simple enough to resolve.

The mistake that led to this error message is that I used tab-completion to finish off the name of the directory. That is fine for most purposes, but in this particular instance of removing a symbolic link it introduced an undesirable factor: the forward slash tacked onto the end of the directory name.
So the typed command looked like this: rm foo/ instead of just this: rm foo

With the trailing slash, rm resolved the symbolic link and saw a directory with content within, so it refused to unlink it.

So just in case someone else comes across this same issue, all you need to do is remove the trailing slash from the end of your command and the rm command will successfully remove your symbolically linked directory without any fuss.

Another approach is to use the "unlink" command followed by the name of your symbolic link and it'll also work just fine. Just note that the "unlink" command also will generate an error if it sees a trailing slash after the directory name.

So the correct syntax for removing a symbolically linked directory is this:

rm foo

Or

unlink foo
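The whole failure and fix can be reproduced in a throwaway directory, which is a safe way to convince yourself the trailing slash is the culprit:

```shell
# Reproduce the error and the fix in a scratch directory
cd "$(mktemp -d)"
mkdir realdir
ln -s realdir foo

rm foo/ 2>rm.err || true   # fails: the trailing slash resolves the link
cat rm.err                 # error: Is a directory

rm foo                     # succeeds: removes the link, not the target
ls                         # realdir is untouched
```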


August 01, 2018

Delete Postfix queued emails From/To specific user


mailq | tail -n +2 | awk 'BEGIN { RS = "" } / user@domain\.com$/ { print $1 }' | tr -d '*!' | postsuper -d -

Or, if using sudo, just add sudo to the part immediately preceding "postsuper":


mailq | tail -n +2 | awk 'BEGIN { RS = "" } / user@domain\.com$/ { print $1 }' | tr -d '*!' | sudo postsuper -d -


Note: This command line was run on a CentOS 6.10 box with Postfix version 2.6.6
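To see what each stage of the pipeline contributes, you can feed the awk and tr stages a fabricated mailq listing (the queue IDs and addresses below are made up; postsuper itself needs Postfix and root privileges, so this sketch stops at the queue-ID extraction):

```shell
# Fake mailq output: a header line, then blank-line-separated queue entries.
# The * / ! flags on the queue IDs mark active / held messages.
cat > mailq.txt <<'EOF'
-Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient-------
A1B2C3D4E5*     4638 Wed Aug  1 10:00:00  sender@example.org
                                          user@domain.com

F6G7H8I9J0!     1024 Wed Aug  1 10:05:00  other@example.org
                                          someone@example.net
EOF

# tail strips the header; awk's RS="" treats each entry as one record and
# prints the queue ID of entries whose last line ends in user@domain.com;
# tr strips the * / ! status flags so postsuper would get clean IDs.
tail -n +2 mailq.txt \
  | awk 'BEGIN { RS = "" } / user@domain\.com$/ { print $1 }' \
  | tr -d '*!'
```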

October 25, 2017

How to insert character/text into multiple lines in the Vi Editor

  1. Press <Esc> to enter command mode
  2. Press <Ctrl> + <V> to enter visual block mode
  3. Move <Up> / <Down> to select the columns of text in the lines you want to insert text
  4. Press <Shift> + <i> (capital I) and type the text you want to insert
  5. Press <Esc>, then wait a second and the inserted text will appear on every line you selected in step 3
  6. Type :wq to save and exit Vi, if you're done editing this file
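If you'd rather do the same bulk insert non-interactively (from a script, say), sed can prefix a range of lines in one shot. This is a separate tool, not a Vi feature, shown here on a throwaway file:

```shell
printf 'alpha\nbeta\ngamma\n' > demo.txt

# Prefix "# " to lines 1-3, the batch equivalent of the
# visual-block insert described above (GNU sed's -i edits in place)
sed -i '1,3s/^/# /' demo.txt

cat demo.txt
```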

Credit: Original Tip found at https://stackoverflow.com/a/253391

July 11, 2017

How to Install JAVA 8 (JDK/JRE 8u131) on CentOS/RHEL 7/6 and Fedora

After a long wait, Java SE Development Kit 8 is finally available to download. JDK 8 was released on March 18, 2014 for general availability with many feature enhancements. You can find all the enhancements in JDK 8 here.
This article will help you install JAVA 8 (JDK/JRE 8u131) or update it on your system. Read the instructions carefully before downloading Java from the Linux command line. To install Java 8 on Ubuntu and Linux Mint, read This Article.


Downloading Latest Java Archive

Download the latest Java SE Development Kit 8 release from its official download page, or use the following commands to download it from the shell.

For 64Bit

# cd /opt/
# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz"

# tar xzf jdk-8u131-linux-x64.tar.gz

For 32Bit

# cd /opt/
# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-i586.tar.gz"

# tar xzf jdk-8u131-linux-i586.tar.gz 

Install Java with Alternatives

After extracting the archive file, use the alternatives command to install it. The alternatives command is available in the chkconfig package.
# cd /opt/jdk1.8.0_131/
# alternatives --install /usr/bin/java java /opt/jdk1.8.0_131/bin/java 2
# alternatives --config java


There are 4 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*  1           /opt/jdk1.7.0_71/bin/java
 + 2           /opt/jdk1.8.0_45/bin/java
   3           /opt/jdk1.8.0_91/bin/java
   4           /opt/jdk1.8.0_131/bin/java

Enter to keep the current selection[+], or type selection number: 4

At this point JAVA 8 has been successfully installed on your system. We also recommend setting up the javac and jar command paths using alternatives:
# alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_131/bin/jar 2
# alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_131/bin/javac 2
# alternatives --set jar /opt/jdk1.8.0_131/bin/jar
# alternatives --set javac /opt/jdk1.8.0_131/bin/javac 

Check Installed Java Version

Check the installed Java version on your system using the following command.
root@tecadmin ~# java -version

java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode) 

Configuring Environment Variables

Most Java-based applications use environment variables to work. Set the Java environment variables using the following commands:

  • Setup JAVA_HOME Variable
  • # export JAVA_HOME=/opt/jdk1.8.0_131
    
  • Setup JRE_HOME Variable
  • # export JRE_HOME=/opt/jdk1.8.0_131/jre
    
  • Setup PATH Variable
  • # export PATH=$PATH:/opt/jdk1.8.0_131/bin:/opt/jdk1.8.0_131/jre/bin
    
Also put all of the above environment variables in the /etc/environment file so they are loaded automatically on system boot.
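Another common place for these exports is a small profile script that login shells source at startup. A hypothetical /etc/profile.d/jdk.sh sketch (the /opt/jdk1.8.0_131 path matches the install above):

```shell
# /etc/profile.d/jdk.sh - sourced by login shells at startup
export JAVA_HOME=/opt/jdk1.8.0_131
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
```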

Article Source & Credit:
Original article posted at the following URL: 
https://tecadmin.net/install-java-8-on-centos-rhel-and-fedora/

May 12, 2017

Find O.S Install - Deployment Date

Commands to find out the initial deployment or installation date of your operating system.


Linux Systems:

tune2fs -l /dev/sda1 | grep 'Filesystem created:'
OR
tune2fs -l /dev/sdb1 | grep 'Filesystem created:'


Windows Systems:

systeminfo | find /i "original"



April 07, 2016

Too many files open - RHEL 5-7



RHEL 5,6,7

Issue

  • How to correct the error "Too many open files"
  • Error during login "Too many open files" and the session gets terminated automatically.

Resolution

  • This error is generated when the number of open files for a user or for the whole system exceeds the configured limit.

SYSTEM Wide settings

  • To see the settings for maximum open files for the OS level, use following command:
        # cat /proc/sys/fs/file-max
    
  • This value is the maximum number of files that all processes running on the system can open in total. By default this number varies automatically according to the amount of RAM in the system: as a rough guideline it is approximately 100000 files per GB of RAM, so something like 400000 files on a 4GB machine and 1600000 on a 16GB machine. To change the system-wide maximum open files, as root edit /etc/sysctl.conf and add the following to the end of the file:
        fs.file-max = 495000
    
    Note: The above example will set the maximum number of files to 495,000 and will take effect when the system is rebooted.
  • Then issue the following command to activate this change to the live system:
        # sysctl -p
    

Per USER Settings

  • To see the setting for maximum open files for a user, as root issue the following commands:
        # su - <user>
        $ ulimit -n
    
  • The default setting for this is usually 1024. If more is needed for a specific user then as root modify it in the /etc/security/limits.conf file:
        user - nofile 2048
    
    This will set the maximum open files for the specific "user" to 2048 files.
** WARNING **
The limits module that handles the setting of these values first reads /etc/security/limits.conf and then reads each file matching /etc/security/limits.d/*.conf. This means that any changes you make in /etc/security/limits.conf may be overridden by a subsequent file. These files should be reviewed as a set to ensure they meet your requirements.
  • To do a system wide increase for all users then as root edit /etc/security/limits.conf file and add the following:
        * - nofile 2048
    
  • This sets the maximum open files for ALL users to 2048 files. These settings will require a reboot to become active.
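The read-only checks from both sections can be combined into a quick audit. This sketch only reads the current values (no root needed), so it is safe to run anywhere:

```shell
# System-wide ceiling on open files (all processes combined)
sys_max=$(cat /proc/sys/fs/file-max)

# This shell's per-process soft limit (the one limits.conf raises)
user_max=$(ulimit -n)

echo "system-wide max open files: $sys_max"
echo "per-process soft limit:     $user_max"
```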

January 16, 2015

Header V3 RSA/SHA1 Signature, key ID BAD - Bug Resolved


For CentOS servers an update for the nss-softokn package was released today – nss-softokn-3.14.3-19. However, nss-softokn-3.14.3-19 needs nss-softokn-freebl-3.14.3-19 to operate properly, and vice versa, but those packages do not have checks in place to make sure that a matching version of the other package is also installed.

Thus if your yum update only installed one of the packages, you will end up with a broken YUM and RPM.

You might see error messages like these when trying to run YUM and RPM commands:

error: rpmts_HdrFromFdno: Header V3 RSA/SHA1 Signature, key ID xxx BAD

error: rpmdbNextIterator: skipping h# 1784 Header V3 RSA/SHA1 Signature, key ID xxx BAD


Most of the time you will have had nss-softokn-3.14.3-19 installed but not nss-softokn-freebl-3.14.3-19.

To fix this you have to:

1. Manually download nss-softokn-freebl-3.14.3-19


yumdownloader nss-softokn-freebl

or wget the RPMs

64-Bit servers / x86_64 run

wget ftp://195.220.108.108/linux/centos/6.6/updates/x86_64/Packages/nss-softokn-freebl-3.14.3-19.el6_6.x86_64.rpm

32-Bit Servers / i686 run

wget ftp://195.220.108.108/linux/centos/6.6/updates/i386/Packages/nss-softokn-freebl-3.14.3-19.el6_6.i686.rpm

Note: The FTP IP address above grabs the rpm package from an RPMFIND mirror in France, but you can get it from any other mirror that you usually use.

2. Extract the RPM

64-Bit servers / x86_64 run

rpm2cpio nss-softokn-freebl-3.14.3-19.el6_6.x86_64.rpm | cpio -idmv

32-Bit Servers / i686 run

rpm2cpio nss-softokn-freebl-3.14.3-19.el6_6.i686.rpm | cpio -idmv

3. Copy libfreeblpriv3.* to the correct location

64-Bit servers / x86_64 run

cp ./lib64/libfreeblpriv3.* /lib64

32-Bit Servers / i686 run

cp ./lib/libfreeblpriv3.* /lib


4. Rerun Yum Update to update nss-softokn-freebl and FIX YUM and RPM

yum update

-----------------------------------------------------------------------------------

Bug Report: https://bugzilla.redhat.com/show_bug.cgi?id=1182337



June 25, 2012

Standard I/O redirection

The shell and many UNIX commands take their input from standard input (stdin), write output to standard output (stdout), and write error output to standard error (stderr). By default, standard input is connected to the terminal keyboard and standard output and error to the terminal screen.

The way of indicating an end-of-file on the default standard input, a terminal, is usually <Ctrl-d>.
Redirection of I/O, for example to a file, is accomplished by specifying the destination on the command line using a redirection metacharacter followed by the desired destination.

C Shell Family

Some of the forms of redirection for the C shell family are:
Character   Action
---------   -----------------------------------------------------------------
>           Redirect standard output
>&          Redirect standard output and standard error
<           Redirect standard input
>!          Redirect standard output; overwrite file if it exists
>&!         Redirect standard output and standard error; overwrite file if it exists
|           Redirect standard output to another command (pipe)
>>          Append standard output
>>&         Append standard output and standard error


The form of a command with standard input and output redirection is:
% command -[options] [arguments] < input file  > output file
If you are using csh and do not have the noclobber variable set, using > and >& to redirect output will overwrite any existing file of that name. Setting noclobber prevents this. Using >! and >&! always forces the file to be overwritten. Use >> and >>& to append output to existing files.
Redirection may fail under some circumstances:
1) if you have the variable noclobber set and you attempt to redirect output to an existing file without forcing an overwrite,
2) if you redirect output to a file you don't have write access to, and
3) if you redirect output to a directory.

Examples:

% who > names
Redirect standard output to a file named names

% (pwd; ls -l) > out

Redirect output of both commands to a file named out

% pwd; ls -l > out
Redirect output of ls command only to a file named out

Input redirection can be useful, for example, if you have written a FORTRAN program which expects input from the terminal but you want it to read from a file. In the following example, myprog, which was written to read standard input and write standard output, is redirected to read myin and write myout:

% myprog < myin > myout

You can suppress redirected output and/or errors by sending it to the null device, /dev/null. The example shows redirection of both output and errors:
% who >& /dev/null

To redirect standard error and output to different files, you can use grouping:
% (cat myfile > myout) >& myerror


Bourne Shell Family

The Bourne shell uses a different format for redirection which includes numbers. The numbers refer to the file descriptor numbers (0 standard input, 1 standard output, 2 standard error). For example, 2> redirects file descriptor 2, or standard error. &n is the syntax for redirecting to a specific open file. For example 2>&1 redirects 2 (standard error) to 1 (standard output); if 1 has been redirected to a file, 2 goes there too. Other file descriptor numbers are assigned sequentially to other open files, or can be explicitly referenced in the shell scripts. Some of the forms of redirection for the Bourne shell family are:


Character   Action
---------   -----------------------------------------------------------
>           Redirect standard output
2>          Redirect standard error
2>&1        Redirect standard error to standard output
<           Redirect standard input
|           Pipe standard output to another command
>>          Append to standard output
2>&1|       Pipe standard output and standard error to another command


Note that < and > assume standard input and output, respectively, as the default, so the numbers 0 and 1 can be left off.
The form of a command with standard input and output redirection is:
$ command -[options] [arguments] < input file > output file

Redirection may fail under some circumstances:
1) if you have the variable noclobber set and you attempt to redirect output to an existing file without forcing an overwrite
2) if you redirect output to a file you don't have write access to, and
3) if you redirect output to a directory.

Examples:

$ who > names
Direct standard output to a file named names
$ (pwd; ls -l) > out

Direct output of both commands to a file named out
$ pwd; ls -l > out

Direct output of ls command only to a file named out
Input redirection can be useful if you have written a program which expects input from the terminal and you want to provide it from a file. In the following example, myprog, which was written to read standard input and write standard output, is redirected to read myin and write myout.
$ myprog < myin > myout

You can suppress redirected output and/or error by sending it to the null device, /dev/null. The example shows redirection of standard error only:
$ who 2> /dev/null

To redirect standard error and output to different files (note that grouping is not necessary in Bourne shell):
$ cat myfile > myout 2> myerror
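A quick way to verify the split actually happens is with a command group that writes one line to each stream (the file names match the example above):

```shell
# Emit one line on stdout and one on stderr, then split them into files
{ echo "normal output"; echo "error output" >&2; } > myout 2> myerror

cat myout      # normal output
cat myerror    # error output
```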


Bash and other modern shells provide an I/O redirection facility. There are 3 default standard files (standard streams) open:
[a] stdin - used to get input (keyboard), i.e. data going into a program
[b] stdout - used to write information (screen)
[c] stderr - used to write error messages (screen)
Understanding I/O stream numbers



The Unix / Linux standard I/O streams with numbers:
Handle   Name     Description
------   ------   ---------------
0        stdin    Standard input
1        stdout   Standard output
2        stderr   Standard error

Redirecting the standard error stream to a file

The following will redirect program error message to a file called error.log:
$ program-name 2> error.log
$ command1 2> error.log

Redirecting the standard error (stderr) and stdout to file

Use the following syntax:
$ command-name &>file
OR
$ command > file-name 2>&1
Another useful example:
# find /usr/home -name .profile 2>&1 | more

Redirect stderr to stdout

Use the command as follows:
$ command-name 2>&1
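One pitfall worth a quick demonstration: the order of redirections matters, because 2>&1 duplicates whatever stdout points at at that moment. A small sketch:

```shell
# Correct: stdout goes to the file first, then stderr follows it
ls /nonexistent > both.txt 2>&1 || true
cat both.txt          # the error message landed in the file

# Reversed: 2>&1 copies the original stdout before the file redirect,
# so the error still prints to the screen and the file stays empty
ls /nonexistent 2>&1 > empty.txt || true
```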

June 07, 2012

Yum Error: Unable to read consumer identity

Problem:

Yum update throws the error: "Unable to read consumer identity"


Resolution:

Please note: This solution was applied to fix the above yum update error on a machine running Red Hat Enterprise Linux release 5.8 (Tikanga).

The solution is to use RHN Classic and to disable subscription-manager by editing the katello plugin configuration file, setting the "enabled" value to '0'.

Disable the plugin by editing the file /etc/yum/pluginconf.d/katello.conf, changing the value "enabled=1" to "enabled=0", and saving the file.

Once the change is performed and saved, execute the following commands:

rm -rf /var/cache/yum/*
yum clean all

October 05, 2011

Logout without killing running Linux jobs

Logging out of your *nix bash session without killing your active jobs is simply done by using either the nohup or disown commands, which allow you to leave a job or script running even after you log out.

The syntax is simple:

nohup command-name &

Then type exit or CTRL-D to logout as usual

disown [-ar] [-h] [jobspec ...]
Without options, each jobspec is removed from the table of active jobs. If the -h option is given, each jobspec is not removed from the table, but is marked so that SIGHUP is not sent to the job if the shell receives a SIGHUP. If no jobspec is present, and neither the -a nor the -r option is supplied, the current job is used.

If no jobspec is supplied, the -a option means to remove or mark all jobs; the -r option without a jobspec argument restricts operation to running jobs. The return value is 0 unless a jobspec does not specify a valid job.
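A minimal runnable sketch of both approaches, using a short sleep as the stand-in for a long-running job (disown is a bash builtin, so this assumes bash):

```shell
# Start a job immune to the hangup signal sent at logout
nohup sleep 30 >/dev/null 2>&1 &
pid=$!

# Alternatively (or additionally) drop it from the shell's job table,
# so the shell won't forward SIGHUP to it when you log out
disown "$pid"

# kill -0 delivers no signal; it only checks that the process exists
kill -0 "$pid" && echo "job $pid still running"

kill "$pid"    # clean up the demo job instead of waiting out the sleep
```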
