Issue with the IBM Storwize V5100 (8.3.1.1) with HyperSwap Topology
Hello,
I found an issue with the IBM Storwize V5100 (firmware 8.3.1.1, dated 29-02-2020) in a HyperSwap topology.
I use STOR2RRD version 2.70-1.
I configured a stretched cluster across two V5100 systems for volume replication with the HyperSwap function.
This topology uses two storage control enclosures to form a single stretched cluster; however, each enclosure keeps its own chassis serial number.
I connected STOR2RRD to this storage while the configuration node was running on a node of the first enclosure, and everything worked fine for several days.
Then I ran a failover test, after which the configuration node moved to a node of the second enclosure.
Since that move, STOR2RRD can no longer acquire data; I suspect this is because the serial number reported is now different.
This is the STOR2RRD log:
error.log-DS20
scp: /dumps/svc.config.cron.xml_XXX02NN-2: No such file or directory
Tue Mar 24 17:35:09 2020 - ERROR - svcconfig.pl: ERROR: Cannot get time from XML file /home/lpar2rrd/stor2rrd/data/DS20/svc.config.cron.xml_XXX02NN-2. Exiting.
- Return code: 1
output.log-DS20
Tue Mar 24 17:35:09 2020 - INFO - svcconfig.pl: (2.1.1) Config XML file:svc.config.cron.xml_XXX02NN-2
Tue Mar 24 17:35:09 2020 - INFO - svcconfig.pl: Config file /home/lpar2rrd/stor2rrd/data/DS20/svc.config.cron.xml_XXX02NN-2 is not readable.
Tue Mar 24 17:35:09 2020 - INFO - svcconfig.pl: Cannot open file /home/lpar2rrd/stor2rrd/data/DS20/svc.config.cron.xml_XXX02NN-2.
/home/lpar2rrd/stor2rrd/bin/svc_stor_load.sh: 2020-03-24_17:35 : Command svcconfig.pl ends with return code 1
ls -l data/DS20/svc*
-rw-r--r-- 1 lpar2rrd lpar2rrd 1289894 Mar 24 02:00 data/DS20/svc.config.backup.xml.check
-rw-r--r-- 1 lpar2rrd lpar2rrd 1317574 Mar 24 02:00 data/DS20/svc.config.cron.xml_XXX02E9-1
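For reference, a quick way to see which node's config XML STOR2RRD actually has on disk is to list the suffix of the svc.config.cron.xml_* files in the data directory. A minimal sketch (the helper name is mine, and the data path is an assumption taken from the paths in the log above):

```shell
# Hypothetical helper: show which node panel name / serial each cached
# config XML belongs to, so a post-failover suffix mismatch is visible.
list_config_nodes() {
  dir="$1"
  for f in "$dir"/svc.config.cron.xml_*; do
    [ -e "$f" ] || continue
    # the suffix after the last underscore identifies the node
    echo "config XML present for node: ${f##*_}"
  done
}

# example: inspect the STOR2RRD data directory for this storage
list_config_nodes /home/lpar2rrd/stor2rrd/data/DS20
```

In my case this shows only the XXX02E9-1 file, while after the failover STOR2RRD looks for XXX02NN-2.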
Can you help me fix this?
Regards.
Comments
-
Status update: tonight at 00:00, STOR2RRD started acquiring data again.
11:00-11:30 -> failover test; stor2rrd stopped data acquisition as described above
00:00 -> stor2rrd started working fine again
If this is normal or by design, no problem: a failover is only a rare event.
-
The storage probably did not provide data; it is definitely not by design.