
Category: Knowledgebase

Linux, Windows, and virtualization-based technical articles, reviews, and how-tos.

How to reset the root password on Red Hat Enterprise Linux 8

So you have ended up here because you lost your root password. On Red Hat Enterprise Linux 8, the scripts that run from the initramfs can pause at certain points, provide a root shell, and then continue when that shell exits. This is mostly meant for debugging, but you can also use this method to reset a lost root password. To access that root shell, follow these steps:

  • Reboot your server.
  • Interrupt the boot loader countdown by pressing any key, except Enter.
  • Move the cursor to the kernel entry to boot.
  • Press e to edit the selected entry.
  • Move the cursor to the kernel command line (the line that starts with linux).
  • Append rd.break. With that option, the system breaks just before the system hands control from the initramfs to the actual system.
  • Press Ctrl+x to boot with the changes.
  • To reset the root password from this point, use the following procedure:
  • Remount /sysroot as read/write.

switch_root:/# mount -o remount,rw /sysroot

  • Switch into a chroot jail, where /sysroot is treated as the root of the file-system tree.

switch_root:/# chroot /sysroot

  • Set a new root password.

sh-4.4# passwd root

  • Ensure that all unlabeled files, including /etc/shadow at this point, get relabeled during boot.

sh-4.4# touch /.autorelabel

  • Type exit twice. The first command exits the chroot jail, and the second command exits the initramfs debug shell.
  • At this point, the system continues booting, performs a full SELinux relabel, and then reboots again.
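The whole recovery session above can be summarized as a single console transcript, with the prompts shown exactly as in the steps:

```
switch_root:/# mount -o remount,rw /sysroot
switch_root:/# chroot /sysroot
sh-4.4# passwd root
sh-4.4# touch /.autorelabel
sh-4.4# exit
switch_root:/# exit
```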
If you are looking to learn Linux or host your website, you can buy VPS Hosting.

What is VPS Hosting?

A VPS is a complete server with its own operating system and virtual hardware built on top of physical server hardware. A Linux or Windows-based operating system known as a hypervisor is used to build virtual servers, datastores, virtual switches, virtual routers, virtual CPUs, and RAM. Some leading hypervisors are VMware ESXi, Citrix XenServer, and KVM. With rapid provisioning of a VPS, you can scale horizontally to handle bursts in computing resources.

An important advantage of a VPS is that you can replicate and clone a VPS easily and within a short time. You can increase resources like CPU, RAM, and storage instantly by asking your VPS hosting provider. With SeiMaxim virtualization technology, you can scale your VPS up to 24TB RAM and 768 vCPUs, leaving our competitors far behind in this field. You can meet the demands of high-performance applications and memory-intensive databases, including SAP HANA and Epic Cache Operational database.

Another advantage of using a hypervisor is in the field of graphics visualization, rendering, and streaming. SeiMaxim VPS offers professional 3-D graphics that include GPU pass-through and hardware-based GPU sharing with NVIDIA vGPU™, AMD MxGPU™, and Intel GVT-g™. A pass-through GPU is not abstracted at all but remains one physical device. Each hosted VPS gets its own dedicated GPU, eliminating the software abstraction and the performance penalty that goes with it. GPU pass-through is ideally suited for graphics power users, such as CAD designers and molecular modelers.

To cut the cost of a single VPS with a dedicated GPU, a shared GPU can be implemented. GPU sharing allows one physical GPU to be used by multiple VPSs at the same time. Because a portion of a physical GPU is used, performance is greater than with emulated graphics, and there is no need for one card per VPS. This feature enables resource optimization and increases the performance of the VPS. The graphics workload of each VPS is passed directly to the GPU without processing by the hypervisor.

How To Monitor a VMware Environment with Grafana

This step-by-step guide uses the official Telegraf vSphere plugin to pull metrics from vCenter, covering compute, network, and storage resources. Before starting this guide, I assume you have a freshly installed Ubuntu 20.04 operating system, so let's begin with our work.

Step: 1 Install Grafana on Ubuntu

This tutorial was tested on a freshly installed Ubuntu 20.04 OS.

  • Start your Grafana installation.

wget https://dl.grafana.com/oss/release/grafana_7.1.3_amd64.deb

sudo dpkg -i grafana_7.1.3_amd64.deb

  • Now start and enable your Grafana service.

sudo systemctl start grafana-server.service

sudo systemctl enable grafana-server.service

  • Check Grafana service status.

sudo systemctl status grafana-server.service

  • At this point, Grafana is installed, and you can log in by browsing to the following URL:

url: http://[your Grafana server ip]:3000

The default username/password is admin/admin

  • Upon the first login, Grafana will ask you to change the password.
  • Be careful: HTTP is not a secure protocol. You can further secure Grafana by configuring SSL certificates.
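As a sketch of that hardening step, HTTPS can be enabled in /etc/grafana/grafana.ini under the [server] section. The certificate paths below are placeholders for your own files:

```ini
[server]
protocol = https
http_port = 3000
# Example paths; point these at your own certificate and key
cert_file = /etc/grafana/ssl/grafana.crt
cert_key = /etc/grafana/ssl/grafana.key
```

Restart the grafana-server service after editing the file so the change takes effect.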

Step: 3 Install InfluxDB

  • Check which InfluxDB version is available in your apt cache with the following command.

sudo apt-cache policy influxdb

This shows the latest version available from the default Ubuntu repositories. We will use the newer InfluxDB 1.8 release, so we first add the InfluxData repository key and source, then update the apt cache.

wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -

source /etc/lsb-release

echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

sudo apt update

sudo apt-cache policy influxdb


sudo apt install influxdb -y

  • Start InfluxDB, check its status, and make sure it persists across reboots.

sudo systemctl start influxdb

sudo systemctl status influxdb

sudo systemctl enable influxdb

  • InfluxDB listens on port 8086. If your server is on the internet then, depending on your existing firewall rules, anybody may be able to query the server using the following URL:

https://[your domain name or ip]:8086/metrics

  • The local machine where I am running this test does not have a firewall enabled, but if you are using public IPs, you can prevent direct access with these commands:

iptables -A INPUT -p tcp -s localhost --dport 8086 -j ACCEPT

iptables -A INPUT -p tcp --dport 8086 -j DROP

Step: 4 Install Telegraf

  • Now we are going to install telegraf.

sudo apt install telegraf -y

  • Start Telegraf and ensure it starts in case of reboot.

sudo systemctl start telegraf

sudo systemctl status telegraf

sudo systemctl enable telegraf

  • Configure Telegraf to pull monitoring metrics from vCenter by editing its main configuration file, /etc/telegraf/telegraf.conf.
  • First, add the output section for InfluxDB.
  • Change the InfluxDB credentials to your own.


[[outputs.influxdb]]
urls = ["http://<Address_of_influxdb_server>:8086"]
database = "vmware"
timeout = "0s"

# Only needed if you are using authentication for the database
#username = "USERNAME_OF_DB"
#password = "PASSWD_OF_DB"


# Read metrics from VMware vCenter
[[inputs.vsphere]]
## List of vCenter URLs to be monitored. These three lines must be uncommented
## and edited for the plugin to work.
vcenters = [ "https://<vCenter_IP>/sdk" ]
username = "administrator@vsphere.local"
password = "PASSWD"
#
## VMs
## Typical VM metrics (if omitted or empty, all metrics are collected)
vm_metric_include = [
"cpu.demand.average",
"cpu.idle.summation",
"cpu.latency.average",
"cpu.readiness.average",
"cpu.ready.summation",
"cpu.run.summation",
"cpu.usagemhz.average",
"cpu.used.summation",
"cpu.wait.summation",
"mem.active.average",
"mem.granted.average",
"mem.latency.average",
"mem.swapin.average",
"mem.swapinRate.average",
"mem.swapout.average",
"mem.swapoutRate.average",
"mem.usage.average",
"mem.vmmemctl.average",
"net.bytesRx.average",
"net.bytesTx.average",
"net.droppedRx.summation",
"net.droppedTx.summation",
"net.usage.average",
"power.power.average",
"virtualDisk.numberReadAveraged.average",
"virtualDisk.numberWriteAveraged.average",
"virtualDisk.read.average",
"virtualDisk.readOIO.latest",
"virtualDisk.throughput.usage.average",
"virtualDisk.totalReadLatency.average",
"virtualDisk.totalWriteLatency.average",
"virtualDisk.write.average",
"virtualDisk.writeOIO.latest",
"sys.uptime.latest",
]
# vm_metric_exclude = [] ## Nothing is excluded by default
# vm_instances = true ## true by default
#
## Hosts
## Typical host metrics (if omitted or empty, all metrics are collected)
host_metric_include = [
"cpu.coreUtilization.average",
"cpu.costop.summation",
"cpu.demand.average",
"cpu.idle.summation",
"cpu.latency.average",
"cpu.readiness.average",
"cpu.ready.summation",
"cpu.swapwait.summation",
"cpu.usage.average",
"cpu.usagemhz.average",
"cpu.used.summation",
"cpu.utilization.average",
"cpu.wait.summation",
"disk.deviceReadLatency.average",
"disk.deviceWriteLatency.average",
"disk.kernelReadLatency.average",
"disk.kernelWriteLatency.average",
"disk.numberReadAveraged.average",
"disk.numberWriteAveraged.average",
"disk.read.average",
"disk.totalReadLatency.average",
"disk.totalWriteLatency.average",
"disk.write.average",
"mem.active.average",
"mem.latency.average",
"mem.state.latest",
"mem.swapin.average",
"mem.swapinRate.average",
"mem.swapout.average",
"mem.swapoutRate.average",
"mem.totalCapacity.average",
"mem.usage.average",
"mem.vmmemctl.average",
"net.bytesRx.average",
"net.bytesTx.average",
"net.droppedRx.summation",
"net.droppedTx.summation",
"net.errorsRx.summation",
"net.errorsTx.summation",
"net.usage.average",
"power.power.average",
"storageAdapter.numberReadAveraged.average",
"storageAdapter.numberWriteAveraged.average",
"storageAdapter.read.average",
"storageAdapter.write.average",
"sys.uptime.latest",
]
# host_metric_exclude = [] ## Nothing excluded by default
# host_instances = true ## true by default
#
## Clusters
cluster_metric_include = [] ## if omitted or empty, all metrics are collected
# cluster_metric_exclude = [] ## Nothing excluded by default
# cluster_instances = false ## false by default
#
## Datastores
datastore_metric_include = [] ## if omitted or empty, all metrics are collected
# datastore_metric_exclude = [] ## Nothing excluded by default
# datastore_instances = false ## false by default for Datastores only
#
## Datacenters
datacenter_metric_include = [] ## if omitted or empty, all metrics are collected
# datacenter_metric_exclude = [ "*" ] ## Datacenters are not collected by default.
# datacenter_instances = false ## false by default for Datastores only
#
## Plugin Settings
## separator character to use for measurement and field names (default: "_")
# separator = "_"
#
## number of objects to retrieve per query for realtime resources (vms and hosts)
## set to 64 for vCenter 5.5 and 6.0 (default: 256)
# max_query_objects = 256
#
## number of metrics to retrieve per query for non-realtime resources (clusters and datastores)
## set to 64 for vCenter 5.5 and 6.0 (default: 256)
# max_query_metrics = 256
#
## number of go routines to use for collection and discovery of objects and metrics
# collect_concurrency = 1
# discover_concurrency = 1
#
## whether or not to force discovery of new objects on initial gather call before collecting metrics
## when true for large environments, this may cause errors for time elapsed while collecting metrics
## when false (default), the first collection cycle may result in no or limited metrics while objects are discovered
# force_discover_on_init = false
#
## the interval before (re)discovering objects subject to metrics collection (default: 300s)
# object_discovery_interval = "300s"
#
## timeout applies to any of the api request made to vcenter
# timeout = "60s"
#
## Optional SSL Config
# ssl_ca = "/path/to/cafile"
# ssl_cert = "/path/to/certfile"
# ssl_key = "/path/to/keyfile"
## Use SSL but skip chain & host verification
insecure_skip_verify = true


  • You only need to change the vCenter and InfluxDB credentials.
  • Restart and enable the Telegraf service after making the changes.

sudo systemctl restart telegraf

sudo systemctl enable telegraf

Step: 4.1 Check InfluxDB Metrics

  • We need to confirm that our metrics are being pushed to InfluxDB and that we can see them.
  • If you are using authentication, open the InfluxDB shell like this:

$ influx -username 'username' -password 'PASSWD'

  • If you are not using authentication, simply run:

$ influx

  • Then select the database vmware:

> USE vmware
Using database vmware

  • Check whether there is an inflow of time-series metrics:

> SHOW MEASUREMENTS

name: measurements
name
----
cpu
disk
diskio
kernel
mem
processes
swap
system
vsphere_cluster_clusterServices
vsphere_cluster_mem
vsphere_cluster_vmop
vsphere_datacenter_vmop
vsphere_datastore_datastore
vsphere_datastore_disk
vsphere_host_cpu
vsphere_host_disk
vsphere_host_mem
vsphere_host_net
vsphere_host_power
vsphere_host_storageAdapter
vsphere_host_sys
vsphere_vm_cpu
vsphere_vm_mem
vsphere_vm_net
vsphere_vm_power
vsphere_vm_sys
vsphere_vm_virtualDisk

Step 5: Add InfluxDB Data Source to Grafana

  • Log in to Grafana and add an InfluxDB data source.
  • Click the configuration icon, then click Data Sources.
  • Click Add data source and select InfluxDB.
  • Insert the relevant connection details under HTTP and the InfluxDB database details.
  • If you set a password for your InfluxDB, enter it here as well.


Step 6: Import Grafana Dashboards

  • The last action is to create or import Grafana dashboards.
  • Building a Grafana dashboard from scratch is a lengthy process, so we are using a community dashboard built by Jorge de la Cruz.
  • We will import the pre-built Grafana dashboard #8159. As soon as the import completes, you will see your metrics on the Grafana dashboard.


RSYNC: File size in destination is larger than source

You may notice that after running rsync, the file size at the destination becomes larger than at the source. This is most likely due to sparse files. To let rsync handle sparse files more efficiently so that they take less space at the destination, use the -S flag.

rsync -S sparse-file /home/sparser-file

After rsync is done, check the destination file with the du command; the file size will be almost the same as the source file.
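As a quick, self-contained illustration of why sparse files behave this way (the /tmp path is only an example), you can create a sparse file and compare its apparent size with its actual disk usage:

```shell
# Create a 100 MB sparse file: the size is recorded, but no blocks are allocated
truncate -s 100M /tmp/sparse-demo

ls -lh /tmp/sparse-demo                  # apparent size: 100M
du -h /tmp/sparse-demo                   # actual disk usage: 0
du -h --apparent-size /tmp/sparse-demo   # apparent size again: 100M

rm /tmp/sparse-demo
```

Copying such a file without -S expands the holes into real zero-filled blocks, which is why the destination can end up larger than the source.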

How to use rsync to backup an entire server?

Yes, it is possible to back up an entire server's files using rsync over the network to another server or to a locally attached disk. rsync is easy to set up, but it is not a complete backup solution; in particular, do not use rsync to back up server files to tape devices. You could completely clone a server, but that is the slowest backup method. You should back up only the data and use third-party cloning tools for system files.

You can use rsync to perform differential backups, so later backups copy only files that have changed since the last backup was done. Note that rsync cannot back up online databases, so use third-party software such as cPanel to back up databases. You should use rsync options that preserve:

ownerships, permissions, timestamps, extended attributes (to preserve SELinux attributes), and ACLs.

Make sure you use the same rsync version on source and destination servers.

  • To back up the server filesystem to an external disk (attached via USB or other hardware) and mounted as /media:

rsync -AXav --progress --del --exclude "/sys/*" --exclude "/media/*" --exclude "/proc/*" --exclude "/selinux/*" --exclude "/mnt/*" / /media/

  • To back up the server filesystem over a network to another server (which is computing.seimaxim.com in this how-to):

rsync -AXavz --progress --del --bwlimit=1000 --exclude "/sys/*" --exclude "/selinux/*" --exclude "/proc/*" --exclude "/media/*" --exclude "/mnt/*" / root@computing.seimaxim.com:/media/

In the above command, the -z option enables compression to increase the data transfer speed.

 

 

How to resolve error "resize2fs: Operation not permitted" while trying to add group [ext4_group_add: No reserved GDT blocks, can't resize]

The root cause of this error is that the pool of reserved GDT blocks is not available, or the filesystem does not support online resizing. Note that the ext3 and ext4 filesystem metadata layout is fixed: mkfs reserves space for future resizing, but only up to 1024 times the filesystem size at creation, or the upper block count limit of 2^32 blocks, whichever is lower. A third root cause of the error is that the journal is too small.
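To make the 2^32 block count limit concrete, here is a quick sketch of the resulting size cap, assuming the common 4 KiB block size:

```shell
# 2^32 blocks at 4096 bytes per block, expressed in TiB
blocks=4294967296   # 2^32
echo $(( blocks * 4096 / 1024 / 1024 / 1024 / 1024 ))   # prints 16
```

So with 4 KiB blocks, the block count limit caps the filesystem at 16 TiB, no matter how many reserved GDT blocks exist.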

  • To resolve this error:

Check whether online resizing is available for the filesystem. You can check this with the resize_inode feature in the dumpe2fs output. If the resize_inode text is not present in the dumpe2fs output given below, the filesystem does not support online resizing; you must then unmount the filesystem before resizing it.

dumpe2fs /dev/vg_test/lv_ext3 | grep -i features
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Journal features: journal_incompat_revoke

  • Check the output of dumpe2fs. If the Reserved GDT blocks line is missing, the reserved GDT block count is 0. The solution in this case is to resize the filesystem offline, i.e., after unmounting the filesystem.
  • The third scenario is when the resize fails in both offline and online mode. The solution in this case is to remove the journal while the filesystem is offline and recreate the journal with a larger size.

First, check the size of the journal with:

dumpe2fs /dev/vg_test/lv_ext3 | grep Journal\ size
Journal size: 32M

Now remove the journal with:

tune2fs -O ^has_journal /dev/vg_test/lv_ext3

Then recreate the journal with a larger size (the size is in megabytes):

tune2fs -j -J size=148 /dev/vg_test/lv_ext3
Creating journal inode: done

Verify the journal size you just created with:

dumpe2fs /dev/vg_test/lv_ext3 | grep Journal\ size
Journal size: 148M

Now resize with resize2fs /dev/vg_test/lv_ext3

If you still get errors, use the option -J size=journal-size, where journal-size is in megabytes.

 

 

How to set up an FTP server on CentOS 7?

  • To set up an FTP server on CentOS 7, perform the following steps:
  • Install the vsftpd package with yum -y install vsftpd
  • Set the range of passive-mode ports to be used by the FTP service in /etc/vsftpd/vsftpd.conf

pasv_min_port=3000
pasv_max_port=3500

  • Use systemctl command to enable vsftpd at boot time:

systemctl enable vsftpd.service
systemctl start vsftpd.service

  • Open ftp port in firewall with:

firewall-cmd --add-port=21/tcp --add-port=3000-3500/tcp --permanent
systemctl restart firewalld.service

  • To configure SELinux so that regular users can get and put files on the server:

setenforce 1
setsebool -P ftpd_full_access 1

Performance testing and benchmarking tools for Linux

Disclaimer: Links given on this page to external websites are provided for convenience only. SeiMaxim has not checked the following external links and is not responsible for their content or link availability. The inclusion of any link on this page to an external website does not imply endorsement by SeiMaxim of the website or their entities, products, or services. You must agree that SeiMaxim is not responsible or liable for any loss or expenses that may result due to your use of the external site or external content.

The following Linux benchmarking and performance tools are available from external sources:

Configure sftp server with restricted chroot users with ssh keys without affecting normal user access

  • Log in to the Linux (SFTP) server as root and create a new user account with the following shell commands:

useradd seimaxim-user
passwd seimaxim-user

  • On the client system copy the ssh keys to the server:

ssh-copy-id seimaxim-user@seimaxim-server

  • On the client system verify the ssh keys so that a password-less login can be made to the server:

ssh seimaxim-user@seimaxim-server

  • Verify sftp connection is working passwordless from the client system to server:

sftp seimaxim-user@seimaxim-server

  • At this stage, seimaxim-user can ssh and sftp from the client system without entering a password and has access to all directories. Now make the necessary changes to chroot (cage) seimaxim-user into a specific directory.
  • On the Linux server, create a new group for the chrooted user with groupadd sftpuser
  • Make a directory for the chrooted user with mkdir /files
  • Make a subdirectory for seimaxim-user, who is to be chrooted, with mkdir /files/seimaxim-user
  • Create a home directory for seimaxim-user with mkdir /files/seimaxim-user/home
  • Add seimaxim-user to the group you created in the previous steps (sftpuser in our case) with usermod -aG sftpuser seimaxim-user
  • Change the ownership of the home directory /files/seimaxim-user/home with chown seimaxim-user:sftpuser /files/seimaxim-user/home
  • Open /etc/ssh/sshd_config in a text editor such as vi, replace the existing Subsystem sftp line, and add the following code:

Subsystem sftp internal-sftp -d /home
Match Group sftpuser
ChrootDirectory /files/%u

  • Restart sshd service with systemctl restart sshd
  • Now try to connect via ssh as user seimaxim-user from the client system to the server. You will not be able to connect via ssh, only through sftp. Then try connecting with sftp, which will connect to the server without any issue. This solution leaves other users' ssh access to the server unaffected.

When connecting to VNC either screen is black or icons are shown but no menu or screen background

  • This issue occurs due to changes in the default service unit file. The changed file that causes the error is given below:

[Service]
Type=forking
User=<USER>

# Clean any existing files in /tmp/.X11-unix environment
ExecStartPre=-/usr/bin/vncserver -kill %i
ExecStart=/usr/bin/vncserver %i
PIDFile=/home/<USER>/.vnc/%H%i.pid
ExecStop=-/usr/bin/vncserver -kill %i

  • The correct file is shown below;

[Service]
Type=forking

# Clean any existing files in /tmp/.X11-unix environment
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/sbin/runuser -l <USER> -c "/usr/bin/vncserver %i"
PIDFile=/home/<USER>/.vnc/%H%i.pid
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'

  • To resolve this error, apply the latest OS updates.

 

How to limit per user VNC sessions count?

You can use the following Perl script to limit VNC sessions per user.

  • Create a new file named vncserver and add the following content.

#!/usr/bin/perl
# Start the VNC server.
# The maximum number of sessions is limited to 2 per user on this server.
&vnclimit();
# After the check passes, start the real VNC server here, for example by
# exec()-ing the renamed original vncserver script.

sub vnclimit {
my $countoutput = `ps -u $ENV{USER} | grep -i Xvnc | wc -l`;
chomp $countoutput;
if ($countoutput >= 2) {
print "You already have $countoutput VNC sessions as user $ENV{USER}. The maximum is limited to 2 sessions per user!\n";
print "Execute 'vncserver -list' to list the current sessions\n\n";
print "Contact your server admin to increase the number of sessions.\n";
exit;
}
}

  • After creating the file, run it as vncserver on the command line.

How to configure virtual network computing [vncserver] in Linux CentOS Server 7 and 8?

  • On Linux CentOS 7 and 8, install the TigerVNC server using yum with yum install tigervnc-server tigervnc
  • Install the X Window System on CentOS 8 with yum group install GNOME base-x or yum groupinstall "Server with GUI". On CentOS 7, install the X Window System with yum groupinstall gnome-desktop x11 fonts or yum groupinstall "Server with GUI"
  • Set the Linux server to boot directly into the graphical user interface with systemctl set-default graphical.target
  • After installing the X Window System, configure the VNC service by creating a VNC user account with useradd <yourusername> and set a password with passwd <yourusername>
  • Log in to the server as that user and create a VNC password with vncpasswd
  • Create a VNC configuration file for the user <yourusername> with cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
  • Edit the /etc/systemd/system/vncserver@:1.service file and replace the "<USER>" option with the VNC user <yourusername>
  • Change the 1 in /etc/systemd/system/vncserver@:1.service for every additional VNC user. You should create one file for each user instance.
  • To change the color depth, resolution, and other remote desktop options, add the required values to ExecStart=, as in ExecStart=/sbin/runuser -l testuser1 -c "/usr/bin/vncserver %i -geometry 1024x768 -depth 24"
  • You must open the VNC port in the firewall with firewall-cmd --permanent --zone=public --add-port 5901/tcp and then reload the firewall with firewall-cmd --reload
  • Reload the systemd configuration with systemctl daemon-reload
  • Enable the VNC service and make sure it starts at the next boot with systemctl enable vncserver@:1.service and systemctl start vncserver@:1.service
  • To configure the desktop environment for VNC on the server, edit the xstartup file in ~/.vnc/xstartup. The following is for the GNOME desktop:

# cat ~/.vnc/xstartup

#!/bin/sh
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
vncconfig -iconic &
dbus-launch --exit-with-session gnome-session &

  • For KDE, xstartup is;

# cat ~/.vnc/xstartup

#!/bin/sh
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
#vncconfig -iconic &
#dbus-launch --exit-with-session gnome-session &
startkde &

  • The last step is to install vncviewer on your local PC and add the IP address::port number of the remote server, as in vncviewer vncserver-ipaddress::59XX, for example vncviewer vncserver-ipaddress::5901
  • Note that the 1 in 5901 has to be changed for each instance of vncserver, as in vncserver@:1.service
  • For any queries, chat with us or leave a comment. We will be happy to help troubleshoot your server.
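For reference, after replacing <USER> as described above (using the example user testuser1 from the ExecStart step), the [Service] section of /etc/systemd/system/vncserver@:1.service would look like this:

```ini
[Service]
Type=forking

# Clean any existing files in /tmp/.X11-unix environment
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/sbin/runuser -l testuser1 -c "/usr/bin/vncserver %i"
PIDFile=/home/testuser1/.vnc/%H%i.pid
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
```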

 

How to Install and Configure VNC on Debian 9 and Kali Linux 2020.2

This is a quick guide to installing VNC on Debian 9 and Kali Linux

  • Log in to your server as root.
  • Install the VNC server with apt-get install tightvncserver
  • If you get the following error, you can install tightvncserver from the Debian or Kali installation ISO image.

root@server:/home/user# apt install tightvncserver -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package tightvncserver is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'tightvncserver' has no installation candidate

  • To install TightVNC from the ISO image, mount the Debian or Kali image on /media/cdrom with mount -t iso9660 /dev/sr0 /media/cdrom -o loop

The tightvnc .deb packages [tightvncserver_1.3.9-9.1_amd64.deb xtightvncviewer_1.3.9-9.1_amd64.deb] are located in /media/cdrom/pool/main/t/tightvnc

  • Change directory to /media/cdrom/pool/main/t/tightvnc with cd /media/cdrom/pool/main/t/tightvnc
  • Install tightvncserver with dpkg -i tightvncserver_1.3.9-9.1_amd64.deb
  • Edit xstartup in /home/youraccount/.vnc/xstartup with vi and add the following code:

#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
startxfce4 &
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey &
vncconfig -iconic &

  • Start vnc server by executing the following command:

vncserver

  • You will be prompted to enter and verify a VNC password. Note that passwords longer than 8 characters are truncated to 8 characters.
  • After the VNC password is set, you will have the option to set an optional view-only password.
  • You may kill any instance of vncserver by executing vncserver -kill :1
  • ~/.vnc/xstartup must have the executable permission set. You may set it with the command chmod +x ~/.vnc/xstartup
  • If you did the above steps correctly, the TightVNC server is now running on your server, waiting for an incoming connection.
  • To connect to the VNC server from your local PC, install TightVNC viewer. Open vncviewer and enter the IP address and listening port of the server, e.g., 85.19.219.89::5906
  • If your vncserver is listening on port :1, you should enter 85.19.219.89::5901
  • If your vncserver is listening on port :2, you should enter 85.19.219.89::5902
  • Make sure to check which port your VNC server is running on and then adjust the port (:5901) in vncviewer on your local PC/server.

How to set up FTP on a Linux-based server

  • Log in to the server as root and install vsftpd with yum install vsftpd ftp -y
  • Use the vi editor to open /etc/vsftpd/vsftpd.conf [vi /etc/vsftpd/vsftpd.conf] and add/change the following options:

anonymous_enable=NO
ascii_upload_enable=YES
ascii_download_enable=YES
use_localtime=YES

  • Enable and start the vsftpd service.

systemctl enable vsftpd
systemctl start vsftpd

  • Allow the ftp service and port 21 via firewall.

firewall-cmd --permanent --add-port=21/tcp
firewall-cmd --permanent --add-service=ftp

  • Reload firewall

firewall-cmd --reload

If you want to restrict users to their home directories, change the permissions of the home directory with

chmod -R go-rx /home/userdirectory
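As a small illustration (the /tmp path is only an example), go-rx removes read and execute permission from group and others, so a typical 755 directory becomes 700:

```shell
mkdir -p /tmp/ftp-home-demo
chmod 755 /tmp/ftp-home-demo
chmod go-rx /tmp/ftp-home-demo       # remove r and x from group and others
stat -c '%a' /tmp/ftp-home-demo      # prints 700
rmdir /tmp/ftp-home-demo
```

Other local users can then no longer list or enter the directory, while the owner retains full access.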

To test FTP server from client-side:

ftp ftp.yourservername.com

How to change the port of the discovery container

  • Use podman to create a new network with podman network create
  • Check under /etc/cni/net.d/; you will find the file /etc/cni/net.d/cni-podman-2.conflist
  • Open /usr/share/containers/libpod.conf in your favorite file editor
  • Change the line cni_default_network = "podman" in the configuration file /usr/share/containers/libpod.conf to cni_default_network = "cni-podman2"
  • Reboot the server
  • Restart the containers with podman start discovery dsc-db
  • Check the network status.
  • A new network, cni-podman2, will be present with a new IP of 192.168.0.1/24

Linux Server crash with general protection fault: 0000 [#1] SMP aio_complete+0xe2/0x310

The server crashes due to an invalid pointer while completing I/O generated by the aio code. A discrepancy in how the kernel updates the tail pointer for memory-mapped aio queues can corrupt the tracking of aio I/O operations, causing aio queues to be killed prematurely while I/O operations are still active.

To resolve this error, update Linux Kernel to kernel-3.10.0-1127.18.2.

How Ansible Manages Configuration Files

This article discusses where the Ansible configuration files are located, how Ansible selects them, and how we can edit the default settings.

Configuring Ansible:

Ansible's behavior can be customized by modifying settings in the Ansible configuration files. Ansible chooses its configuration file from one of several locations on the control node.

  •  /etc/ansible/ansible.cfg
    This file contains the base configuration of Ansible. It is used if no other configuration file is found.
  • ~/.ansible.cfg
    This ~/.ansible.cfg configuration is used instead of /etc/ansible/ansible.cfg because Ansible looks for .ansible.cfg in the home directory of the user.
  • ./ansible.cfg
    If the ansible command is executed in a directory where an ansible.cfg file is also present, ./ansible.cfg will be used.

Recommendations for Ansible configuration files:

Ansible recommends creating the configuration file in the directory from which you run the ansible command.
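A minimal per-project ansible.cfg might look like the following sketch; the inventory path and remote user are placeholder values, not settings from this article:

```ini
[defaults]
# Example values; adjust for your environment
inventory = ./inventory
remote_user = admin
host_key_checking = False

[privilege_escalation]
become = True
become_method = sudo
```

Because this file sits in the project directory, it takes precedence over ~/.ansible.cfg and /etc/ansible/ansible.cfg whenever ansible is run from that directory.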

Variable ANSIBLE_CONFIG

To define the location of the configuration file, Ansible gives you an even handier option: the environment variable ANSIBLE_CONFIG. If you define the ANSIBLE_CONFIG variable, Ansible uses the configuration file that the variable specifies instead of any of the previously mentioned configuration files.

Configuration File Precedence:

Ansible Configuration File Precedence Table
First preference   The environment variable ANSIBLE_CONFIG overrides all other configuration files. If this variable is not set, the second preference is checked.
Second preference  The directory in which the ansible command was run is checked for an ansible.cfg file. If this file is not present, Ansible moves to the third preference.
Third preference   The user's home directory is checked for a .ansible.cfg file.
Fourth preference  The global /etc/ansible/ansible.cfg file is used only if no other configuration file is found.

 

Because Ansible can read its configuration from multiple locations, it is sometimes hard for users to determine which configuration file is active.

So how can you determine which file is active?

How to check which Ansible configuration file is being used?

You can run the ansible --version command to identify which version of Ansible is installed and which configuration file is in use.

[ali@controller /]$ ansible --version
ansible 2.9.16
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ali/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Aug 24 2020, 17:57:11) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
[ali@controller /]$

Do you need servers to practice Ansible or Linux?

SeiMaxim is a leading Dutch web hosting company and provides resources to learn Ansible and Linux. If you want virtual servers to learn Ansible, you can place your order and use code SE-ANSIBLE211 to rent two servers for just 18 USD.

How to configure a bonding device in a Linux server

Multiple bonding modes in a Linux operating system are given below:

  • balance-alb (fault tolerance and load balancing)
  • balance-tlb (fault tolerance and load balancing)
  • active-backup (fault tolerance)
  • broadcast (fault tolerance)
  • balance-rr (fault tolerance and load balancing)
  • 802.3ad (fault tolerance and load balancing)
  • balance-xor (fault tolerance and load balancing)

We will use the NetworkManager CLI (nmcli) to add a bonding device to a Linux server. The device names, connection names, and IP address below (bond0, eth1, eth2, 192.168.1.10/24) are examples; replace them with your own.

  • Run the nmcli command as root on the shell: nmcli con add type bond ifname bond0 mode active-backup
  • Assign an IP address with nmcli connection modify bond-bond0 ipv4.addresses 192.168.1.10/24
  • Make the IP address static with nmcli connection modify bond-bond0 ipv4.method manual
  • Add a bond slave to the bonding device with nmcli con add type bond-slave ifname eth1 master bond0
  • Add the second slave with nmcli con add type bond-slave ifname eth2 master bond0
  • Check the bonding configuration with nmcli connection show
  • Restart the server network with systemctl restart network

YUM error: "Peer cert cannot be verified or peer cert invalid" or "certificate verify failed"

The following error is produced during a yum update:

Error: failed to retrieve repodata/-primary.xml.gz
error was [Errno 14] Peer cert cannot be verified or peer cert invalid

Perform the following steps to resolve the yum error:

  • Check and correct the date and time of the server.
  • Disable SSL verification by adding sslverify=false in /etc/yum.conf
  • Delete all repos and create a new yum repository.
  • Check the /etc/hosts file for any false DNS resolutions of servers.

Kickstart fails to create the boot partition [Not enough space in filesystems for the current software selection]

The kickstart automatic installation of the Linux operating system fails, but a normal install is successful. To resolve this issue, follow the steps given below:

  • Add clearpart --all --drives=${devname} --initlabel in the kickstart disk section. This will delete the partition table of the disk.
  • If the above option does not resolve the issue, add the zerombr option above the clearpart command. The zerombr option will initialize and destroy all invalid partition tables.
  • If both of the above steps do not work, boot into the rescue mode of the Linux OS and use dmraid or wipefs as follows:

dmraid -r -E /dev/sda
wipefs -fa /dev/sda