
LVM - Logical Volume Manager

This post will help you understand the following:

What is the use of LVM?
What is LVM?
How to create LVM?
How to resize an LVM partition?
Why is the root partition created using LVM by default?
How to diagnose and troubleshoot LVM issues?

Let's suppose you have two hard disks of the sizes below:
Hard disk 1 - 100GB
Hard disk 2 - 200GB
A few LVM use cases:
1. The total available storage in your system is 300GB and you need to store a 250GB file without splitting it. Although enough space exists in total, you can't store the file because no single disk gives you a contiguous 250GB.
2. You have a 250GB file to store, but 250GB of free space is not available in the system, so you need to increase the size of a partition.
3. You have two partitions, /dev/sda1 and /dev/sda2, and due to some requirement you want to reduce the size of /dev/sda2 and increase the size of /dev/sda1.
For the hands-on examples in this post, here is the disk layout of my test system:
[root@client2 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0000dc1c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    41943039    19921920   8e  Linux LVM

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdc: 4294 MB, 4294967296 bytes, 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/rhel-root: 18.2 GB, 18249416704 bytes, 35643392 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/rhel-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

How to create LVM?
In my example, I'll create a 7GB LVM setup from two partitions of 4GB and 3GB.
[root@client2 ~]# fdisk -cu <hard disk name> - RHEL6
[root@client2 ~]# fdisk <hard disk name> - RHEL7
[root@client2 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xc345c0e3.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-10485759, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): +4G
Partition 1 of type Linux and of size 4 GiB is set

Command (m for help): p

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xc345c0e3

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     8390655     4194304   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@client2 ~]# partprobe
Warning: Unable to open /dev/sr0 read-write (Read-only file system).  /dev/sr0 has been opened read-only.

[root@client2 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xc345c0e3

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     8390655     4194304   83  Linux
[root@client2 ~]#
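As a side note, the same kind of partition can also be created non-interactively with parted instead of an interactive fdisk session. This is only an illustrative sketch (the device /dev/sdb and the 4GiB end point match this example); adjust the device and sizes before using it on your own system, and note that mklabel wipes the existing partition table.

parted -s /dev/sdb mklabel msdos              # create a new (empty) msdos partition table
parted -s /dev/sdb mkpart primary 1MiB 4GiB   # one 4GiB primary partition
parted -s /dev/sdb print                      # verify the result

After either method, run partprobe (as shown above) so the kernel re-reads the partition table.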

Let's create one more partition.
[root@client2 ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 4294 MB, 4294967296 bytes, 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@client2 ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x2f65f40e.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-8388607, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-8388607, default 8388607): +3G
Partition 1 of type Linux and of size 3 GiB is set

Command (m for help): p

Disk /dev/sdc: 4294 MB, 4294967296 bytes, 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x2f65f40e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     6293503     3145728   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@client2 ~]# partprobe
Warning: Unable to open /dev/sr0 read-write (Read-only file system).  /dev/sr0 has been opened read-only.
[root@client2 ~]#

[root@client2 ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 4294 MB, 4294967296 bytes, 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x2f65f40e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     6293503     3145728   83  Linux
[root@client2 ~]#

So far we have created normal partitions /dev/sdb1 and /dev/sdc1, but we won't format them directly, because we want to combine them so that we have a total of 7GB of storage rather than separate 4GB and 3GB chunks.
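One optional refinement, not shown in the transcripts above: many guides change the partition type to 8e (Linux LVM) instead of the default 83 before running pvcreate. pvcreate also works fine on type 83 partitions, so this is mostly informational, but if you want the type to reflect the usage, the keystrokes inside fdisk are roughly:

fdisk /dev/sdb
  t    -> change the partition's system id
  8e   -> hex code for Linux LVM
  w    -> write the table and exit

Repeat for /dev/sdc, then run partprobe again.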
For that, we will follow the sequence below:
pvcreate -> vgcreate -> lvcreate
[root@client2 ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created.
[root@client2 ~]# pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created.

Check the PV summary using the command below.
[root@client2 ~]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sda2  rhel lvm2 a--  19.00g    0
  /dev/sdb1       lvm2 ---   4.00g 4.00g
  /dev/sdc1       lvm2 ---   3.00g 3.00g
[root@client2 ~]#

Check PV details using the command below.
[root@client2 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               rhel
  PV Size               19.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              4863
  Free PE               0
  Allocated PE          4863
  PV UUID               N3HJV0-NFAD-3Rbp-0emp-5Bsv-ICs3-L28oDZ

  "/dev/sdc1" is a new physical volume of "3.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc1
  VG Name
  PV Size               3.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               K5DYEd-tFqt-kRy6-75M0-vA4g-yNTD-pwUWnv

  "/dev/sdb1" is a new physical volume of "4.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               4.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               hBd7rB-geII-uWCQ-YzRW-2uw4-nyV1-ufVRZA

[root@client2 ~]#

The PVs are created; now let's create a VG.
[root@client2 ~]# vgcreate <vg name> <pv1> <pv2>
[root@client2 ~]# vgcreate myvg /dev/sdb1 /dev/sdc1
  Volume group "myvg" successfully created
[root@client2 ~]#

[root@client2 ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  myvg   2   0   0 wz--n-  6.99g 6.99g
  rhel   1   2   0 wz--n- 19.00g    0
[root@client2 ~]#

[root@client2 ~]# vgdisplay myvg
  --- Volume group ---
  VG Name               myvg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               6.99 GiB
  PE Size               4.00 MiB
  Total PE              1790
  Alloc PE / Size       0 / 0
  Free  PE / Size       1790 / 6.99 GiB
  VG UUID               IHVe3d-rnFY-abEl-H75h-AOn6-Oimj-NfyJw3

[root@client2 ~]#

Have a look at the VG Size: it now appears as if we have a single (virtual) hard disk of around 7GB (6.99 GiB). This pooled space is what's known as a volume group.
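As a quick sanity check, the VG Size can be reconstructed from the extent figures in the same output:

Total PE x PE Size = 1790 x 4 MiB = 7160 MiB ≈ 6.99 GiB

Each PV gives up a small amount of space to LVM metadata, which is why two partitions of 4GB and 3GB add up to slightly less than a full 7GB here.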

Now we can create a 6GB partition (LV).
[root@client2 ~]# lvcreate --name <lv name> --size 6G <vg name(from which vg you want to take the space)>
[root@client2 ~]# lvcreate --name mylv --size 6G myvg
[root@client2 ~]# lvdisplay /dev/myvg/mylv
  --- Logical volume ---
  LV Path                /dev/myvg/mylv
  LV Name                mylv
  VG Name                myvg
  LV UUID                8mDgKj-yQRM-jtXT-DEpx-193t-Sdqy-n9w4vl
  LV Write Access        read/write
  LV Creation host, time client2.example.com, 2019-02-05 13:29:46 +0530
  LV Status              available
  # open                 0
  LV Size                6.00 GiB
  Current LE             1536
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2

[root@client2 ~]#

[root@client2 ~]# mkfs.ext4 /dev/myvg/mylv
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
393216 inodes, 1572864 blocks
78643 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1610612736
48 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@client2 ~]#
[root@client2 ~]# mkdir /media/mylvm
[root@client2 ~]# mount /dev/myvg/mylv /media/mylvm/
[root@client2 ~]# df -hT |grep /media/mylvm
/dev/mapper/myvg-mylv ext4      5.8G   24M  5.5G   1% /media/mylvm

Note: the partition does not show exactly 6GB because part of the space is consumed by filesystem metadata (inode tables, journal) and, on ext4, by the 5% of blocks reserved for the superuser (visible in the mkfs output above). It is the same reason a 16GB pen drive never gives you the full 16GB for your data.
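If you want to reclaim most of that reserved space on a non-root data volume, ext4 lets you lower the reserved percentage. This is optional and only a sketch for the LV used in this example:

tune2fs -m 1 /dev/myvg/mylv                                  # drop superuser-reserved blocks from 5% to 1%
tune2fs -l /dev/myvg/mylv | grep -i 'reserved block count'   # check the current value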

To keep the data accessible and mount the LVM partition automatically at boot, let's make it permanent by adding an entry to /etc/fstab:

vim /etc/fstab
/dev/myvg/mylv  /media/mylvm/   ext4    defaults        0 0
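Before relying on a reboot, it is worth verifying that the new fstab entry actually works; a quick check using the mount point from this example:

umount /media/mylvm          # unmount the LV first
mount -a                     # mount everything listed in /etc/fstab
df -hT | grep /media/mylvm   # confirm the LV is back and mounted as ext4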

These are the steps used to create an LVM partition.
The beauty of LVM is that we can increase and decrease the size of LVM partitions.
Increasing an LVM partition (lvextend) can be done online, in real time, without interruption.

Let's see how to resize LVM partitions.
lvextend when free space is available in the VG:
[root@client2 ~]# lvextend --size <new size of partition> <partition name>
[root@client2 ~]# lvextend --size 6.8G /dev/myvg/mylv
  Rounding size to boundary between physical extents: 6.80 GiB.
  Size of logical volume myvg/mylv changed from 6.00 GiB (1536 extents) to 6.80 GiB (1741 extents).
  Logical volume myvg/mylv successfully resized.
[root@client2 ~]#

Note: if I run the df -hT command to check the partition size, it will still show the older size, even though the LV size has been increased.
[root@client2 ~]# df -hT |grep /media/mylvm
/dev/mapper/myvg-mylv ext4      5.8G   24M  5.5G   1% /media/mylvm
[root@client2 ~]#

Why? Because the extended part of the LV has no filesystem on it yet, so we can't store any data there. We have to grow the filesystem to cover the extended part.
[root@client2 ~]# resize2fs /dev/myvg/mylv
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/myvg/mylv is mounted on /media/mylvm; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/myvg/mylv is now 2621440 blocks long.

[root@client2 ~]# df -hT |grep /media/mylvm
/dev/mapper/myvg-mylv ext4      6.5G   27M  6.3G   1% /media/mylvm
[root@client2 ~]# 
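As a shortcut, lvextend also has a -r/--resizefs option that grows the filesystem in the same step, so a separate resize2fs (or xfs_growfs) call is not required. A minimal sketch, assuming the same LV as above and some free space left in the VG:

lvextend -r --size +1G /dev/myvg/mylv   # extend the LV by 1G and resize its filesystem in one go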

If you don't have unallocated space in your VG or on your existing hard disks, you have to attach a new hard disk. If unallocated space is available, you can create a PV, add the PV to the VG, and finally allocate the VG space to the LV.

lvextend when no free space is available in the VG:
[root@client2 ~]# pvcreate /dev/sdd1
[root@client2 ~]# vgextend myvg /dev/sdd1
  Volume group "myvg" successfully extended
[root@client2 ~]# vgdisplay myvg
  --- Volume group ---
  VG Name               myvg
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               12.99 GiB
  PE Size               4.00 MiB
  Total PE              3325
  Alloc PE / Size       1741 / 6.80 GiB
  Free  PE / Size       1584 / 6.19 GiB
  VG UUID               IHVe3d-rnFY-abEl-H75h-AOn6-Oimj-NfyJw3

[root@client2 ~]#
Now you have free space available in the VG, so you can easily extend your LV with two commands.

LV size before extend
[root@client2 ~]# lvdisplay /dev/myvg/mylv
  --- Logical volume ---
  LV Path                /dev/myvg/mylv
  LV Name                mylv
  VG Name                myvg
  LV UUID                8mDgKj-yQRM-jtXT-DEpx-193t-Sdqy-n9w4vl
  LV Write Access        read/write
  LV Creation host, time client2.example.com, 2019-02-05 13:29:46 +0530
  LV Status              available
  # open                 1
  LV Size                6.80 GiB
  Current LE             1741
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2

[root@client2 ~]# lvextend --size 10G /dev/myvg/mylv
  Size of logical volume myvg/mylv changed from 6.80 GiB (1741 extents) to 10.00 GiB (2560 extents).
  Logical volume myvg/mylv successfully resized.
[root@client2 ~]# resize2fs /dev/myvg/mylv

LV size after extend
[root@client2 ~]# lvdisplay /dev/myvg/mylv
  --- Logical volume ---
  LV Path                /dev/myvg/mylv
  LV Name                mylv
  VG Name                myvg
  LV UUID                8mDgKj-yQRM-jtXT-DEpx-193t-Sdqy-n9w4vl
  LV Write Access        read/write
  LV Creation host, time client2.example.com, 2019-02-05 13:29:46 +0530
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2

[root@client2 ~]# lvextend --size 10G /dev/myvg/mylv -> after lvextend the LV size will be 10G
[root@client2 ~]# lvextend --size +10G /dev/myvg/mylv -> after lvextend the LV size will be increased by 10G; if it was previously 10G, it becomes 10+10=20G
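Besides absolute and relative sizes with --size, lvextend also accepts extent-based sizes with -l, which is handy when you simply want to consume everything left in the VG. An illustrative sketch:

lvextend -l +100%FREE /dev/myvg/mylv   # grow the LV by all remaining free space in myvg
resize2fs /dev/myvg/mylv               # then grow the filesystem as usual (ext4)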

Note: if the partition is formatted with the XFS filesystem, use the xfs_growfs command (typically run against the mounted filesystem) instead of resize2fs.
[root@client2 ~]# xfs_growfs /dev/myvg1/mylv1
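If you are not sure which filesystem an LV carries (and therefore which resize tool applies), check it first; for the LV used in this example:

blkid /dev/myvg/mylv    # prints TYPE="ext4", TYPE="xfs", etc.
df -hT | grep mylvm     # shows the filesystem type if it is mounted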

LVM reduce:-
We use five steps to reduce an LVM partition.
1. umount
[root@client2 ~]# umount <mount point or lvm partition>
[root@client2 ~]# umount /media/mylvm/

2. scan (filesystem check)
[root@client2 ~]# e2fsck -f <lvm partition>
[root@client2 ~]# e2fsck -f /dev/myvg/mylv
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/myvg/mylv: 11/655360 files (0.0% non-contiguous), 80815/2621440 blocks
[root@client2 ~]#

3. resize2fs
[root@client2 ~]# resize2fs <lvm partition> <new size>
[root@client2 ~]# resize2fs /dev/myvg/mylv 8G
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/myvg/mylv to 2097152 (4k) blocks.
The filesystem on /dev/myvg/mylv is now 2097152 blocks long.

4. lvreduce
[root@client2 ~]# lvreduce --size <new size> <lvm partition>
[root@client2 ~]# lvreduce --size 8G /dev/myvg/mylv
  WARNING: Reducing active logical volume to 8.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce myvg/mylv? [y/n]: y
  Size of logical volume myvg/mylv changed from 10.00 GiB (2560 extents) to 8.00 GiB (2048 extents).
  Logical volume myvg/mylv successfully resized.
[root@client2 ~]#

5. mount
Remount the partition using the mount -a command; it will do two things for you:
-> check whether anything is wrong with the fstab entry for the partition
-> mount the partition.
[root@client2 ~]# mount -a
[root@client2 ~]# df -hT |grep mylvm
/dev/mapper/myvg-mylv ext4      7.8G   27M  7.4G   1% /media/mylvm
[root@client2 ~]#

If the partition mounted without any error message, then everything you have done is correct.
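Similar to lvextend, lvreduce also supports -r/--resizefs, which shrinks the filesystem (running the required check) before reducing the LV, collapsing steps 2-4 above into one command. This is only a sketch; take a backup first and keep the filesystem unmounted as in the manual procedure:

umount /media/mylvm
lvreduce -r --size 8G /dev/myvg/mylv   # shrink filesystem and LV together (asks for confirmation)
mount -a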

Why is the root partition created using LVM by default?
By default the root (/) partition is created on LVM because there may be a need to resize it (extend/reduce) later. We rarely run lvreduce on the root partition, but it can be done if required. Resizing plain (non-LVM) partitions is far less flexible.

How to diagnose and troubleshoot LVM issues?
We typically face two kinds of LVM issues: the LVM metadata may get corrupted after a resize (extend/reduce) operation, or an LV/VG may be removed accidentally. If such issues occur, how do we diagnose and troubleshoot them, and how do we restore the data afterwards?

First of all, we have to find out where the LVM backups and archives are stored. For this we can refer to the LVM configuration file.

/etc/lvm/lvm.conf
In this file the backup block is at around line 316 in RHEL6 and line 621 in RHEL7. There is also a lot of important information related to LVM backup and restore, which we can configure as per our requirements.
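Rather than hunting for exact line numbers, you can print the relevant part of the configuration directly; the line numbers vary between releases but the section name does not:

grep -n -A 10 '^backup {' /etc/lvm/lvm.conf   # show the backup/archive settings with their line numbers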
Let's navigate to the archive directory and see what information is stored there.
[root@localhost backup]# cd /etc/lvm/
[root@localhost lvm]# ll
total 52
drwx------. 2 root root  4096 Feb  2 15:04 archive
drwx------. 2 root root  4096 Feb  2 15:04 backup
drwx------. 2 root root  4096 Jan 23  2013 cache
-rw-r--r--. 1 root root 37554 Feb  2 15:21 lvm.conf
[root@localhost lvm]# cd backup/
[root@localhost backup]# ll
total 8
-rw-------. 1 root root 1330 Feb  2 15:04 myvg
-rw-------. 1 root root 1811 Dec  3 18:06 VolGroup
[root@localhost archive]# cd ../archive
[root@localhost archive]# ll
total 12
-rw-------. 1 root root  890 Feb  2 15:03 myvg_00000-977012114.vg
-rw-------. 1 root root  902 Feb  2 15:04 myvg_00001-420855079.vg
-rw-------. 1 root root 1812 Dec  3 18:06 VolGroup_00000-730179541.vg
Loosely speaking, these files are the metadata of each VG (volume group), although "metadata" is not quite the right term here; they are configuration archives. They record which operation was performed on a particular VG and at what time. If you read any of these files, you will also see which PVs the VG was created from and which command was used to create it.

[root@localhost archive]# cat myvg_00001-420855079.vg
# Generated by LVM2 version 2.02.98(2)-RHEL6 (2012-10-15): Sat Feb  2 15:04:07 2019

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing 'lvcreate --name mylv --size 3G myvg'"

creation_host = "localhost.localdomain" # Linux localhost.localdomain 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64
creation_time = 1549100047      # Sat Feb  2 15:04:07 2019

myvg {
        id = "ZJqAlL-9BYC-PKds-saj8-k49X-Owuj-FuazdH"
        seqno = 1
        format = "lvm2" # informational
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "z5jgdw-PYph-3DbD-0aJx-Thmd-icOR-kYt0cF"
                        device = "/dev/sdb1"    # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 8388608      # 4 Gigabytes
                        pe_start = 2048
                        pe_count = 1023 # 3.99609 Gigabytes
                }
        }

}
[root@localhost archive]#
Wow, very interesting information is stored here.
This is just for deeper understanding; there is a command that reads all of this information without having to know these directories.
Run the command below to see the options it supports.
[root@localhost backup]# vgcfgrestore -h
  vgcfgrestore: Restore volume group configuration

vgcfgrestore
        [-d|--debug]
        [-f|--file filename]
        [--force]
        [-l[l]|--list [--list]]
        [-M|--metadatatype 1|2]
        [-h|--help]
        [-t|--test]
        [-v|--verbose]
        [--version]
        VolumeGroupName
[root@localhost backup]#

I am running this command with the -l (list) option, which shows the same information we checked above.
[root@localhost backup]# vgcfgrestore -l myvg

  File:         /etc/lvm/archive/myvg_00000-977012114.vg
  VG name:      myvg
  Description:  Created *before* executing 'vgcreate myvg /dev/sdb1'
  Backup Time:  Sat Feb  2 15:03:42 2019


  File:         /etc/lvm/archive/myvg_00001-420855079.vg
  VG name:      myvg
  Description:  Created *before* executing 'lvcreate --name mylv --size 3G myvg'
  Backup Time:  Sat Feb  2 15:04:07 2019


  File:         /etc/lvm/backup/myvg
  VG name:      myvg
  Description:  Created *after* executing 'lvcreate --name mylv --size 3G myvg'
  Backup Time:  Sat Feb  2 15:04:07 2019

Now we will do an interesting practical on one of the VGs. Below are the VGs available as of now.
[root@localhost backup]# vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup   1   2   0 wz--n- 19.51g       0
  myvg       1   1   0 wz--n-  4.00g 1020.00m
[root@localhost backup]#

Let's suppose I have accidentally removed myvg:
[root@localhost backup]# vgremove myvg
Do you really want to remove volume group "myvg" containing 1 logical volumes? [y/n]: y
  Logical volume myvg/mylv contains a filesystem in use.
[root@localhost backup]# umount /media/mylvm/
[root@localhost backup]# vgremove myvg
Do you really want to remove volume group "myvg" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume mylv? [y/n]: y
  Logical volume "mylv" successfully removed
  Volume group "myvg" successfully removed
[root@localhost backup]# vgdisplay myvg
  Volume group "myvg" not found

[root@localhost backup]#

We know that an archive and a backup were created for it in the /etc/lvm/ directory before it was removed.
[root@localhost backup]# vgcfgrestore -l myvg

  File:         /etc/lvm/archive/myvg_00000-977012114.vg
  VG name:      myvg
  Description:  Created *before* executing 'vgcreate myvg /dev/sdb1'
  Backup Time:  Sat Feb  2 15:03:42 2019


  File:         /etc/lvm/archive/myvg_00001-420855079.vg
  VG name:      myvg
  Description:  Created *before* executing 'lvcreate --name mylv --size 3G myvg'
  Backup Time:  Sat Feb  2 15:04:07 2019


  File:         /etc/lvm/archive/myvg_00002-568209402.vg
  VG name:      myvg
  Description:  Created *before* executing 'vgremove myvg'
  Backup Time:  Sat Feb  2 16:13:36 2019


  File:         /etc/lvm/archive/myvg_00003-1684804794.vg
  VG name:      myvg
  Description:  Created *before* executing 'vgremove myvg'
  Backup Time:  Sat Feb  2 16:13:36 2019

[root@localhost backup]#
Here you can see that someone removed myvg using the 'vgremove myvg' command. With the help of these archives we can restore the deleted VG. Let's see how.
[root@localhost backup]# vgcfgrestore -f /etc/lvm/archive/myvg_00003-1684804794.vg myvg
  Restored volume group myvg
[root@localhost backup]# vgdisplay myvg
  --- Volume group ---
  VG Name               myvg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.00 GiB
  PE Size               4.00 MiB
  Total PE              1023
  Alloc PE / Size       0 / 0
  Free  PE / Size       1023 / 4.00 GiB
  VG UUID               ZJqAlL-9BYC-PKds-saj8-k49X-Owuj-FuazdH

[root@localhost backup]#

Now let's do the same with the LV mylv.
[root@localhost backup]# lvs
  LV      VG       Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao--- 15.57g
  lv_swap VolGroup -wi-ao---  3.94g
  mylv    myvg     -wi-a----  3.00g
[root@localhost backup]# lvremove /dev/mapper/myvg-mylv
Do you really want to remove active logical volume mylv? [y/n]: y
  Logical volume "mylv" successfully removed
[root@localhost backup]# lvdisplay /dev/mapper/myvg-mylv
  One or more specified logical volume(s) not found.
[root@localhost backup]#
So this trouble has hit the system; how do we fix it? Again we will take the help of the VG archive.
[root@localhost backup]# vgcfgrestore -l myvg

  File:         /etc/lvm/archive/myvg_00000-977012114.vg
  VG name:      myvg
  Description:  Created *before* executing 'vgcreate myvg /dev/sdb1'
  Backup Time:  Sat Feb  2 15:03:42 2019


  File:         /etc/lvm/archive/myvg_00001-420855079.vg
  VG name:      myvg
  Description:  Created *before* executing 'lvcreate --name mylv --size 3G myvg'
  Backup Time:  Sat Feb  2 15:04:07 2019


  File:         /etc/lvm/archive/myvg_00002-568209402.vg
  VG name:      myvg
  Description:  Created *before* executing 'vgremove myvg'
  Backup Time:  Sat Feb  2 16:13:36 2019


  File:         /etc/lvm/archive/myvg_00003-1684804794.vg
  VG name:      myvg
  Description:  Created *before* executing 'vgremove myvg'
  Backup Time:  Sat Feb  2 16:13:36 2019


  File:         /etc/lvm/archive/myvg_00004-94774788.vg
  VG name:      myvg
  Description:  Created *before* executing 'vgdisplay myvg'
  Backup Time:  Sat Feb  2 16:23:49 2019


  File:         /etc/lvm/archive/myvg_00005-850255730.vg
  VG name:      myvg
  Description:  Created *before* executing 'lvcreate --name mylv --size 3G myvg'
  Backup Time:  Sat Feb  2 16:31:02 2019


  File:         /etc/lvm/archive/myvg_00006-773367454.vg
  VG name:      myvg
  Description:  Created *before* executing 'lvremove /dev/mapper/myvg-mylv'
  Backup Time:  Sat Feb  2 16:34:08 2019


  File:         /etc/lvm/backup/myvg
  VG name:      myvg
  Description:  Created *after* executing 'lvremove /dev/mapper/myvg-mylv'
  Backup Time:  Sat Feb  2 16:34:08 2019

[root@localhost backup]# vgcfgrestore -f /etc/lvm/archive/myvg_00006-773367454.vg myvg
  Restored volume group myvg
[root@localhost backup]# lvdisplay /dev/myvg/mylv
  --- Logical volume ---
  LV Path                /dev/myvg/mylv
  LV Name                mylv
  VG Name                myvg
  LV UUID                ibGRGN-C2LN-vA3G-4cGj-DduG-gURC-HX5Gyb
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2019-02-02 16:31:02 +0530
  LV Status              NOT available
  LV Size                3.00 GiB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

[root@localhost backup]#
Yes! My deleted LV has been restored, but let's see whether I can remount it.
Not yet. We cannot mount it because the LV Status is "NOT available" (check the output above): the logical volume definition has been restored, but the volume has not been activated yet, so its device is not ready for use. Use the command below to activate the LV.
[root@localhost backup]# lvchange -ay /dev/myvg/mylv
[root@localhost backup]# lvdisplay /dev/myvg/mylv
  --- Logical volume ---
  LV Path                /dev/myvg/mylv
  LV Name                mylv
  VG Name                myvg
  LV UUID                ibGRGN-C2LN-vA3G-4cGj-DduG-gURC-HX5Gyb
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2019-02-02 16:31:02 +0530
  LV Status              available
  # open                 0
  LV Size                3.00 GiB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
Now the state has changed to available, and we can use the partition further.
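As an aside, if a restored VG contains several LVs, you can activate all of them at once instead of one by one; a minimal sketch for this example's VG:

vgchange -ay myvg   # activate every logical volume in the volume group myvg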
[root@localhost backup]# mkfs.ext4 /dev/myvg/mylv
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
196608 inodes, 786432 blocks
39321 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=805306368
24 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost backup]# mount /dev/myvg/mylv /media/mylvm
[root@localhost backup]# cd /media/mylvm/
[root@localhost mylvm]# touch afadsf
[root@localhost mylvm]# mkdir tesing
[root@localhost mylvm]# ll
total 20
-rw-r--r--. 1 root root     0 Feb  2 17:02 afadsf
drwx------. 2 root root 16384 Feb  2 16:53 lost+found
drwxr-xr-x. 2 root root  4096 Feb  2 17:02 tesing
[root@localhost mylvm]#
The testing above shows that the restored partition is working fine and we can perform all operations on it.
