Comments
-
Sent email with logs.
-
Hello, I've uploaded the logs.
-
Hello, I sent you the outputs a few days ago. Any tips?
-
Hello, I've uploaded logs. Thank you.
-
Hello, Sorry for the delay. ldm list-netstat -p…
-
I'm talking about lpar2rrd and Solaris host monitoring.
-
Hello, Yes, it's already replaced. I'll send you the commands' output when we have another failed ps.
-
Hello, This helped. Thank you.
-
Logs uploaded.
-
Hello, Ok, thank you.
-
Hello, This helped. Was it included in the 6.16 release? Thank you very much.
-
Hello, Any advice on how I can fix it? Or do I need to wait for a fix from you? Thank you.
-
Hello, No, there are no switches in AG mode present in the output of the second command. Only Native mode switches are shown, even though I use an account with the "SAN system administrator" and "ALL fabrics" roles.
-
total 15968
-rw-r--r-- 1 lpar2rrd lpar2rrd       7 Mar 20 11:51 agent.cfg
-rw-r--r-- 1 lpar2rrd lpar2rrd 3825840 Mar 20 18:19 cpu.mmm
-rw-r--r-- 1 lpar2rrd lpar2rrd    1212 Mar 20 17:35 FS.csv
drwxr-xr-x 2 lpar2rrd lpar2rrd   45056 Mar 20 18:06 JOB
-rw-r--r-- 1 lpar2rrd lpar2rrd       0 Mar 20 18:19 ldom
-rw-r--r-- 1 lpar2rrd lpar2rrd 5738368…
-
Yes, this fixed the issue. Thank you.
-
./bin/config_check.sh cluster-rbo-kt
=========================
STORAGE: cluster-rbo-kt: SWIZ : sample rate: 300 seconds
=========================
TCP connection to "10.99.255.122" on port "22" is ok
ssh -o ConnectTimeout=80 -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey -o SendEnv=no stor2rrd@10.99.255.122…
-
I opened a case with IBM support. I will let you know the result.
-
Yeah, I just rebooted the HMC. This error appeared in the HMC Serviceable Events Overview and repeated 6 times over the past 2 days. And one more alert: E212E151 Explanation: Licensed Internal Code failure on the Hardware Management Console (HMC). Response: CPU Alert: The SE HMC overall was way too busy for too long. Error reason =…
-
E332FFFF Explanation: This error occurs when the HMC receives notification that a particular Java code string is corrupted. Problem determination: This is the reason the HMC hung.
-
We have 2 HMCs connected to the same servers, so if one of them takes too long to answer, we get the issue described above.
-
I figured it out. It's one of our HMCs hanging. Sorry to bother you.
-
Hello, Here I am again. We've got a new issue: now we have blanks in the HMC graphs. The logs look like this:
LPARSUTIL2 : tst-rep-ah-72
LPARSUTIL3 : tst-rep-ah-72 ts for 05/20/2019 09:39:30 is OK?
LPARSUTIL2 : tst_rep_ah
LPARSUTIL3 : tst_rep_ah ts for 05/20/2019 09:39:30 is OK?
LPARSUTIL2 : nes-t1a-app6
LPARSUTIL3 : nes-t1a-app6 ts for…
-
This helped, issue solved. Thank you very much.
-
Hello, Thank you for your help. The server appeared on the web interface, and now I can see statistics from the LPAR agents. However, we have a new issue: we are missing most of the statistics from the HMC for all servers since the moment I replaced the file, though the CPU pool is still there. There are messages about invalid JSONs in the logs. last rec…
-
First: today I applied QoS to a set of LUNs with a bandwidth limit of 1.2 GB/s, and this is what I see (the LUNs are in RAID 1). Second: native 3PAR monitoring (SSMC) has 2 types of reports, "Exported Volumes Performance" and "Physical Drive Performance", and as you can see, according to these graphs we have a higher load on the disks than on the LUNs.…
-
Hi, I figured out what the matter is in this case. stor2rrd shows bandwidth on the back-end, so it can be significantly bigger depending on the RAID level: 1 transaction on the front-end = 2 transactions on the back-end for RAID 1.
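A minimal sketch of that back-end arithmetic (my own illustration in Python, not stor2rrd code; the doubling factor is just the usual RAID 1 assumption):

def backend_write_bw(frontend_gb_s, copies=2):
    # For a mirrored (RAID 1) LUN every host write lands on two drives,
    # so the drive side sees roughly twice the front-end write rate.
    return frontend_gb_s * copies

# With the 1.2 GB/s QoS cap mentioned above, the drives would show ~2.4 GB/s:
print(backend_write_bw(1.2))  # 2.4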
-
1. 3PAR 8450, all flash.
2. On the VMware side we used different methods, including vCenter performance statistics. All of them show the same numbers.
3. Tried dd. With a small load all seems OK, but with a high load... As an example, the host in this screenshot has only four 8 Gb ports. It's an AIX host on a POWER8 system. It can't be 4200… (rough port math sketched below)
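A rough sanity check of that port math (my own sketch; the ~800 MB/s of usable payload per 8 Gb FC port per direction is an assumption, the common rule of thumb):

PAYLOAD_MB_S_PER_8G_PORT = 800  # assumed usable payload per 8 Gb FC port, one direction

def max_throughput_mb_s(ports):
    # Upper bound on one-direction throughput for a host with this many 8 Gb ports.
    return ports * PAYLOAD_MB_S_PER_8G_PORT

print(max_throughput_mb_s(4))  # 3200 -> any reported figure well above this is suspect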
-
We've got the same issue with TZ ALMT.