Playing With Oracle ASM and Multipath Disks - Configuring the iSCSI and the ASM Disks

Written by lolima | Published 2023/06/05

Hi, and welcome back to the second part. If you missed the first part, where we configured TrueNAS, please look here. Now we will configure iSCSI, multipath, and the ASM disks.

Oracle Environment

For the sake of simplicity, I will use a VM previously configured with Oracle Restart and a database already used in other tests. I only added two new network interfaces to the VM for multipathing.

I’ve added the TrueNAS VM IPs to /etc/hosts.
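
For reference, the relevant /etc/hosts entries look something like the lines below; the mapping of truenas1/truenas2 to the two portal IPs is my assumption, based on the portals used later.

10.1.1.1    truenas1
10.1.2.1    truenas2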

Configuring the new network interfaces

You must configure the two new internal interfaces with IPs on the same networks as the TrueNAS portal addresses (10.1.1.1 and 10.1.2.1).
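
As a minimal sketch, on Oracle Linux this can be done with nmcli. The interface names (enp0s8/enp0s9) and host IPs (10.1.1.10/10.1.2.10) below are assumptions; adjust them to your VM.

--configure the two new interfaces (assumed names and IPs)
root #> nmcli con add type ethernet ifname enp0s8 con-name iscsi1 ipv4.method manual ipv4.addresses 10.1.1.10/24
root #> nmcli con add type ethernet ifname enp0s9 con-name iscsi2 ipv4.method manual ipv4.addresses 10.1.2.10/24
root #> nmcli con up iscsi1 && nmcli con up iscsi2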

Configuring the iSCSI

--discover the iscsi target
root #> iscsiadm -m discovery -t sendtargets -p truenas1
10.1.2.1:3260,1 iqn.2005-10.org.freenas.ctl:has03
10.1.1.1:3260,1 iqn.2005-10.org.freenas.ctl:has03

--enable automatic login during startup
root #> iscsiadm -m node --op update -n node.startup -v automatic

--log in through both portals
root #> iscsiadm -m node -p truenas1 --login
root #> iscsiadm -m node -p truenas2 --login
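
To confirm that both paths are logged in, list the active sessions; you should see one session per portal. The output below is illustrative.

--verify that there is one session per portal
root #> iscsiadm -m session
tcp: [1] 10.1.1.1:3260,1 iqn.2005-10.org.freenas.ctl:has03 (non-flash)
tcp: [2] 10.1.2.1:3260,1 iqn.2005-10.org.freenas.ctl:has03 (non-flash)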

Configuring the multipath

--enable and start the service
root@has03:/ $> systemctl enable multipathd
root@has03:/ $> systemctl start multipathd

--create the multipath.conf
cat <<EOF >/etc/multipath.conf
defaults {
    find_multipaths yes
    user_friendly_names yes
    failback immediate
}

blacklist {
     devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
     devnode "^hd[a-z][[0-9]*]"
     devnode "^sda[[0-9]*]"
     devnode "^cciss!c[0-9]d[0-9]*"
}
EOF

root@has03:/ $> ll /etc/multipath.conf
-rw-r--r--. 1 root root 849 Apr 30 10:24 /etc/multipath.conf

--reload the service
root@has03:/ $> systemctl reload multipathd
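
If you want to double-check that the daemon picked up the new settings, multipathd can print its effective configuration; grepping for the keys we set is a quick sanity check.

--optional: confirm the settings were applied
root@has03:/ $> multipathd show config | grep -E 'find_multipaths|user_friendly_names|failback'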

--check if the disks exist
root@has03:/ $> ll /dev/mapper
total 0
crw-------. 1 root root 10, 236 Apr 30 09:34 control
lrwxrwxrwx. 1 root root       7 Apr 30 10:24 mpathd -> ../dm-4
lrwxrwxrwx. 1 root root       7 Apr 30 10:24 mpathe -> ../dm-2
lrwxrwxrwx. 1 root root       7 Apr 30 10:24 mpathf -> ../dm-3
lrwxrwxrwx. 1 root root       7 Apr 30 09:34 ol-root -> ../dm-0
lrwxrwxrwx. 1 root root       7 Apr 30 09:34 ol-swap -> ../dm-1

Using the multipath command, you can check if the disks use both networks.

root@has03:/ $> multipath -ll

mpathe (36589cfc000000580ddee277e9cda411e) dm-2 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active   <--- network 1
| `- 7:0:0:2 sdf 8:80  active ready running
`-+- policy='service-time 0' prio=50 status=enabled  <--- network 2
  `- 8:0:0:2 sdi 8:128 active ready running

mpathd (36589cfc00000025b4072a1536ade2f9d) dm-4 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:1 sde 8:64  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:1 sdh 8:112 active ready running

mpathf (36589cfc000000f5feabc996aa1a12876) dm-3 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:3 sdg 8:96  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:3 sdj 8:144 active ready running

Let’s simplify our lives by adding aliases to the disks. I’ve updated the multipath.conf file and reloaded the service.

root@has03:/ $> cat /etc/multipath.conf
defaults {
    find_multipaths yes
    user_friendly_names yes
    failback immediate
}

blacklist {
     devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
     devnode "^hd[a-z][[0-9]*]"
     devnode "^sda[[0-9]*]"
     devnode "^cciss!c[0-9]d[0-9]*"
}

multipaths {
    multipath {
        wwid    36589cfc000000580ddee277e9cda411e
        alias   mpdata1
    }
    multipath {
        wwid    36589cfc00000025b4072a1536ade2f9d
        alias   mpdata2
    }
    multipath {
        wwid    36589cfc000000f5feabc996aa1a12876
        alias   mpfra1
    }
}

root@has03:/ $> systemctl reload multipathd
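
If the new aliases don't show up after the reload, forcing the existing device maps to be rebuilt usually does the trick.

--if needed, force a reload of the existing device maps
root@has03:/ $> multipath -r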

Now we have better names (aliases).

root@has03:/ $> multipath -ll

mpfra1 (36589cfc000000f5feabc996aa1a12876) dm-3 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:3 sdg 8:96  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:3 sdj 8:144 active ready running

mpdata2 (36589cfc00000025b4072a1536ade2f9d) dm-4 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:1 sde 8:64  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:1 sdh 8:112 active ready running

mpdata1 (36589cfc000000580ddee277e9cda411e) dm-2 TrueNAS,iSCSI Disk
size=5.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 7:0:0:2 sdf 8:80  active ready running
`-+- policy='service-time 0' prio=50 status=enabled
  `- 8:0:0:2 sdi 8:128 active ready running

Nice, it worked. Now it’s time to use those disks in our ASM. The VM has03 already has an Oracle Restart installation, so three local ASM disks are already in place.

Let’s add our multipath disks to udev.

--find the disks' UUIDs
root@has03:/ $> udevadm info --query=all --name=/dev/mapper/mpdata1 | grep UUID
E: DM_UUID=mpath-36589cfc000000580ddee277e9cda411e

root@has03:/ $> udevadm info --query=all --name=/dev/mapper/mpdata2 | grep UUID
E: DM_UUID=mpath-36589cfc00000025b4072a1536ade2f9d

root@has03:/ $> udevadm info --query=all --name=/dev/mapper/mpfra1 | grep UUID
E: DM_UUID=mpath-36589cfc000000f5feabc996aa1a12876

--edit the udev rules file to add the new disks, using the UUIDs found above (DM_UUID is simply the WWID prefixed with "mpath-")
root@has03:/ $> cat /etc/udev/rules.d/96-asm.rules
#multipath disks
KERNEL=="dm-*", SUBSYSTEM=="block", ENV{DM_UUID}=="mpath-36589cfc000000580ddee277e9cda411e", SYMLINK+="oracleasm/mpdata1", OWNER="grid", GROUP="asmadmin", MODE="0660", OPTIONS:="nowatch"
KERNEL=="dm-*", SUBSYSTEM=="block", ENV{DM_UUID}=="mpath-36589cfc00000025b4072a1536ade2f9d", SYMLINK+="oracleasm/mpdata2", OWNER="grid", GROUP="asmadmin", MODE="0660", OPTIONS:="nowatch"
KERNEL=="dm-*", SUBSYSTEM=="block", ENV{DM_UUID}=="mpath-36589cfc000000f5feabc996aa1a12876", SYMLINK+="oracleasm/mpfra1", OWNER="grid", GROUP="asmadmin", MODE="0660", OPTIONS:="nowatch"
#local disks
KERNEL=="sd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="VBOX_HARDDISK_VB647c8062-550861fd", SYMLINK+="oracleasm/data1", OWNER="grid", GROUP="asmadmin", MODE="0660", OPTIONS:="nowatch"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="VBOX_HARDDISK_VB84084872-e9ff80ef", SYMLINK+="oracleasm/data2", OWNER="grid", GROUP="asmadmin", MODE="0660", OPTIONS:="nowatch"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="VBOX_HARDDISK_VB9de588bd-fea7d130", SYMLINK+="oracleasm/fra1", OWNER="grid", GROUP="asmadmin", MODE="0660", OPTIONS:="nowatch"

--reload udev
root@has03:/ $> udevadm control --reload-rules && udevadm trigger

--check if the disks exist
root@has03:/ $> ll /dev/oracleasm
total 0
lrwxrwxrwx. 1 root root 6 Apr 30 10:58 data1 -> ../sdb
lrwxrwxrwx. 1 root root 6 Apr 30 10:58 data2 -> ../sdc
lrwxrwxrwx. 1 root root 6 Apr 30 10:58 fra1 -> ../sdd
lrwxrwxrwx. 1 root root 7 Apr 30 10:58 mpdata1 -> ../dm-2
lrwxrwxrwx. 1 root root 7 Apr 30 10:58 mpdata2 -> ../dm-4
lrwxrwxrwx. 1 root root 7 Apr 30 10:58 mpfra1 -> ../dm-3
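
It's also worth confirming that the udev rules applied the right ownership to the underlying devices; ls -lL follows the symlink, and the device should belong to grid:asmadmin. The output below is illustrative.

--confirm the ownership on the underlying device
root@has03:/ $> ls -lL /dev/oracleasm/mpdata1
brw-rw----. 1 grid asmadmin 253, 2 Apr 30 10:58 /dev/oracleasm/mpdata1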

Configuring the Oracle ASM

Now let’s swap our local disks with the multipath disks on the ASM diskgroups.

--check the candidate disks
grid@has03:[GRID]:/home/grid $> asmcmd lsdsk --candidate
Path
/dev/oracleasm/mpdata1
/dev/oracleasm/mpdata2
/dev/oracleasm/mpfra1

--check the current disks' names
--I removed some columns from the result below
grid@has03:[GRID]:/home/grid $> asmcmd lsdsk -k
Total_MB  Free_MB  OS_MB  Name       Failgroup  Path
    5120     3468   5120  DATA_0000  DATA_0000  /dev/oracleasm/data1
    5120     3464   5120  DATA_0001  DATA_0001  /dev/oracleasm/data2
    5120     3323   5120  FRA_0000   FRA_0000   /dev/oracleasm/fra1

--swap the disks
grid@has03:[GRID]:/home/grid $> sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Apr 30 11:05:50 2022
Version 19.14.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.14.0.0.0

SQL> alter diskgroup fra add disk '/dev/oracleasm/mpfra1' drop disk fra_0000 rebalance power 1024;

Diskgroup altered.

SQL> alter diskgroup data add disk '/dev/oracleasm/mpdata1', '/dev/oracleasm/mpdata2' drop disk data_0000, data_0001 rebalance power 1024;

Diskgroup altered.

SQL>

Now we wait until the rebalance ends and validate the new disks.
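
You can follow the rebalance from the same sysasm session with v$asm_operation: while it runs, the view reports the state, power, and an estimated time to completion, and once it finishes, the query returns no rows. Rerunning asmcmd lsdsk -k afterwards should then list only the multipath disks.

--monitor the rebalance; "no rows selected" means it has finished
SQL> select group_number, operation, state, power, est_minutes from v$asm_operation;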

Voilà, everything working like a charm!

That’s the end of the second part. In the next and last part, we will run some tests on our new multipath disks. If you missed the first part, where we configured TrueNAS, please look here.

Until next time.

