Comments
-
great, thanks for letting us know!
-
There might be a problem with vio_daemon being stuck or not communicating with the HMC (which unfortunately impacts all VIOSes): https://forum.xorux.com/discussion/comment/3450#Comment_3450 https://www.ibm.com/support/pages/node/629995 It can also be related to resolving the VIOS hostname(s) via DNS. This might also help: Please,…
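A quick way to check the DNS angle from the monitoring host, as a hedged sketch (vios1 and vios2 are placeholder hostnames, replace them with your own):

```shell
# Check that each VIOS hostname resolves; report any that do not.
for h in vios1 vios2; do
  getent hosts "$h" >/dev/null 2>&1 || echo "cannot resolve $h"
done
```

If any name fails here, fix /etc/hosts or DNS before digging further into vio_daemon itself.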
-
upgrade to the latest 7.61, it should resolve it; let us know if it does not
-
Just a note for AIX users where Limit.pm does not exist (with perl-5.30.3-2 at least): in the STOR2RRD application directory (usually /home/stor2rrd/stor2rrd) open: vi bin/svcperf.pl and put this behind the line: use Storable qw(retrieve nstore); # Bug: Max. recursion depth with nested structures exceeded…
-
Send us screenshot(s) via support at xorux dot com
-
Thanks for your info. We will test whether cpdumps is done automatically; if so, we will make the necessary updates to remove the need for the restricted admin role.
-
Uncomment this line in lpar2rrd's crontab, wait an hour, then reload the web browser:
#0,5,10,15,20,25,30,35,40,45,50,55 * * * * /home/stor2rrd/stor2rrd/load_vspgperf.sh > /home/stor2rrd/stor2rrd/load_vspgperf.out 2>&1
-->
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /home/stor2rrd/stor2rrd/load_vspgperf.sh >…
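The uncommenting can also be scripted; a hedged sketch, assuming the entry appears exactly once and is commented with a single leading '#':

```shell
# Strip the leading '#' from the load_vspgperf crontab entry and reinstall it
crontab -l | sed 's|^#\(.*load_vspgperf.*\)|\1|' | crontab -
# Verify: the line should now appear without a leading '#'
crontab -l | grep load_vspgperf
```

After the next 5-minute slot, the timestamp of /home/stor2rrd/stor2rrd/load_vspgperf.out should start advancing.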
-
Send us logs. Note a short problem description in the text field of the upload form.
cd /home/stor2rrd/stor2rrd    # or wherever your STOR2RRD working dir is
tar cvhf logs.tar logs tmp/*txt
gzip -9 logs.tar
Send us logs.tar.gz via https://upload.stor2rrd.com
-
Hi, do you mean that Storwize 8.5.x firmware does not work with our monitoring at all? What is the exact problem/error? Restricted admin: it is all about the command "svctask cpdumps" ( https://stor2rrd.com/storwize-rights.php ), which requires admin rights. Let us check whether it can run under the monitoring role on the new firmware.
-
Hi, I was talking about AIX/Linux/Solaris in terms of "CPU OS"; we have no equivalent for IBM i without idle cycles. Thanks for the ideas about the idle cycles, we will try to do something with that.
-
BTW, any experience with how CMC handles capacity allocated to CPU-dedicated LPARs? Does it count the full entitlement (allocation), or actual allocation minus idle? Any change if a CPU-dedicated LPAR is in shared mode (can donate unused CPU cycles to the pool) versus non-shared CPU mode?
-
OK, then the most important metric with regard to CMC and its pools, in terms of IBM billing for exceeding them, is CPU allocation (regardless of idle), which we monitor now. Am I right? On the other hand, it would be handy to have another set of graphs, called something like CPU Utilisation (allocation minus idle), for another admin view to…
-
So they are 2 different things, am I right?
CMC historical report uses allocation minus idle.
CMC pool capacity monitoring uses allocation, for checking whether you reach the allocated limit.
-
Hi, what we present is CPU allocation rather than real CPU utilisation, which is always lower. We take idle into consideration only for CPU-dedicated LPARs, where it is the only way to get any CPU utilisation number. Are you sure that CMC displays CPU utilisation (allocation minus idle)? I am quite surprised then; would not…
-
Hi, this is a new graph with new data. Before, the main CPU graph was CPU pool; however, that does not include the load of CPU-dedicated LPARs. You can see that you have all the history in the CPU pool graph.
-
Hi, yep, that is the bug, already fixed (the UI menu is not refreshed when only Linux agent data is in the tool). Upgrade to: https://www.lpar2rrd.com/download-temp/lpar2rrd-7.61.tar upgrade docu: https://lpar2rrd.com/upgrade.php
-
I did not understand what fixed it; a different PERL5LIB? It is normally set here:
cd /home/stor2rrd/stor2rrd
grep PERL5LIB= etc/stor2rrd.cfg
-
su - stor2rrd
cd /home/stor2rrd/stor2rrd
. etc/stor2rrd.cfg
$PERL -MNet::OpenSSH -e 'print "Version: $Net::OpenSSH::VERSION\n"'
rpm -qa | grep -i openssh
-
Hi, OK, and is it a problem to use "Local user" instead of "Remote user"? Then it works, right? I am not sure where the difference is; not sure what "remote user" means, LDAP ones?
-
Hi,
su - stor2rrd    # lpar2rrd on the virtual appliance
cd /home/stor2rrd/stor2rrd
./bin/config_check.sh <storage alias>
-
Hi, under root:
rpm -e perl-Net-OpenSSH-0.62-1.el7.noarch
It should resolve that. This might happen on our old virtual appliances.
-
Hi, 7.40 introduces a restriction to 12 storage devices and 12 SAN switches; you will be hit by that in case of an upgrade. We support free users only on the latest product version.
-
Sure, any Linux.
-
Hi, httpd upgrade: https://lpar2rrd.com/AIX-yum-upgrade.php Data migration to another platform: it is possible, but we provide the scripts and support only to our customers.
-
I strongly recommend using any Linux for hosting LPAR2RRD. It works on AIX, but there are a lot of compatibility issues with Perl packages, and even when it works, the next AIX TL/SP upgrade might easily break it, which costs more hours of work to get it running again. Let me know if you have no Linux option.
-
Sorry, I missed that you noted the version. Definitely start by upgrading to the latest STOR2RRD level, 7.60. Then kill all hanging processes like san_rrdupdate.pl; there might even be some rrdtool processes connected to them.
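A hedged sketch for finding the hanging processes (assumption: the updater script is named san_rrdupdate.pl, so the pattern below matches on the 'san_rr' prefix; review the list before killing anything):

```shell
# List candidate stor2rrd processes with PID and elapsed run time
ps -eo pid,etime,args | grep -E 'san_rr|rrdtool' | grep -v grep
# After reviewing the output, terminate the stuck ones, e.g.:
# kill <pid>        # or kill -9 <pid> if they ignore TERM
```

Anything with an elapsed time of hours in that list is a likely candidate for a stuck collector.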
-
what is your stor2rrd product version?
-
Hi, I am afraid it would not be easy; any chance to use a stand-alone Linux or our virtual appliance instead?
-
Hi, it is not real-time monitoring. Agents send data randomly once every 20-30 minutes. It is not actually configurable.
-
follow this: https://forum.xorux.com/discussion/comment/5214#Comment_5214