Comments
-
We could just provide some CLI-based commands to remove that particular peak.
-
Hi, we do not know how to do it apart from removing the peak data (automatically??), but then we would not present any peak. How to define such a peak for removal? It could be different for every device, metric ... there is no way to do it automatically. Perhaps we could enable users to remove selected data on their own ... we do not see a way…
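A naive sketch (purely illustrative, not an existing LPAR2RRD feature) of what user-driven peak removal could look like: the threshold has to come from the user, since what counts as a "peak" differs for every device and metric.

```python
# Hypothetical sketch only -- not an existing LPAR2RRD feature.
# The threshold must be user-chosen; it cannot be picked automatically,
# because a "peak" means something different per device and metric.
def cap_peaks(samples, threshold):
    """Cap any sample above the user-chosen threshold."""
    return [min(s, threshold) for s in samples]

cpu_samples = [3.1, 2.9, 98.0, 3.3]   # 98.0 is the spurious spike
print(cap_peaks(cpu_samples, 10.0))   # [3.1, 2.9, 10.0, 3.3]
```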
-
No, use XorMon NG (LPAR2RRD's modern successor), which does that and much more: https://xormon.com/error-log-monitoring.php
-
Send us logs and a screenshot of the CPU OS graph. Note a short problem description in the text field of the upload form. Under the lpar2rrd user:
cd `cat ~/.lpar2rrd_home`
gzip -9 logs.tar
Send us logs.tar.gz via https://upload.lpar2rrd.com
-
Hi,
1. Logs: UI --> Settings --> Logs --> Support Logs: follow the form. Send us the generated file via https://upload.xormon.com. Note a reference, please.
2. Send also some screenshots via the upload form so we can see the storage name and the volumes which are missing.
-
Send us logs: UI --> Settings --> Logs --> Support Logs: follow the form. Send us the generated file via https://upload.xormon.com. Note a reference, please.
-
You must use the TimescaleDB Community edition, as noted in the documentation.
-
Is that about Xormon Original (the lpar2rrd/stor2rrd app front-end) or Xormon Next Generation (NG)?
-
Send us logs: UI --> Settings --> Logs --> Support Logs: follow the form. Send us the generated file via https://upload.xormon.com. Note a reference, please.
-
Send us logs: UI --> Settings --> Logs --> Support Logs: follow the form. Send us the generated file via https://upload.xormon.com. Note a reference, please.
-
Then ignore it if you get data in the graphs.
-
You can have whatever TZ you want in the HMC; just reboot the HMC after each change.
-
This is caused by not rebooting the HMC after the time zone change. It also happens after an HMC upgrade. Reboot the HMC; it should be OK after that.
-
1. Is the connection test OK? UI --> Settings --> Devices --> Storage --> connection test for that storage.
2. Are you on the latest Xormon, 1.9.5? If not, upgrade first.
3. If 1 + 2 are fine, send us logs: UI --> Settings --> Logs --> Support Logs: follow the form. Send us generated file…
-
Under root:
rm /var/tmp/lpar2rrd-agent.out
Then it will run. The problem is that you ran it under root at some point (a test??) and the output file is the same; the lpar2rrd user cannot write to that file, so the agent exits on start.
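The failure mode can be sketched like this (a simulation for illustration, not the real agent code; the real fixed path is /var/tmp/lpar2rrd-agent.out):

```python
# Simulation of the failure mode -- not the real lpar2rrd agent code.
# The agent writes output to one fixed file; if an earlier run as root
# left that file behind, the lpar2rrd user cannot write to it and the
# agent exits immediately on start.
OUTFILE = "/var/tmp/lpar2rrd-agent.out"  # fixed path shared by all runs

def agent_start(outfile_owner, running_user):
    """Return the agent's behavior given who owns a stale output file."""
    if outfile_owner is not None and outfile_owner != running_user:
        return f"exit: cannot write {OUTFILE} (owned by {outfile_owner})"
    return "running"

print(agent_start("root", "lpar2rrd"))  # stale root-owned file -> exits
print(agent_start(None, "lpar2rrd"))    # after 'rm', file recreated -> runs
```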
-
su - lpar2rrd
crontab -l
date
ls -ltr /var/tmp
tail /var/tmp/lpar*err
-
Well, 8.10 is intended for XorMon NG; there are special enhancements (LVM info, errpt log ...) for it, but generally it should work with the LPAR2RRD server, even older versions. Send us via support@xorux.com:
ls -ltr /var/tmp/lpar*
and the logs from the LPAR2RRD server. Note a short problem description in the text field of the…
-
Use the REST API: https://xormon.com/public-api.php
-
We use the volume ID as the unique key for storing in the database. When 2 items have the same unique key, only one is saved; that is why you cannot see both volumes. There is no easy solution here: all volume data storing would have to be re-designed, all volume data thrown away, and some other unique key created based on the volume ID…
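The overwrite can be illustrated with a small sketch (illustrative only, not the actual database schema): keying records by volume ID alone keeps just one of two colliding volumes, while a composite key such as (storage, volume ID) would keep both.

```python
# Illustrative sketch only -- not the actual database schema.
volumes = [
    {"storage": "storA", "volume_id": "0x100", "name": "vol-a"},
    {"storage": "storB", "volume_id": "0x100", "name": "vol-b"},  # same ID
]

db = {}
for v in volumes:
    db[v["volume_id"]] = v        # volume ID alone is the unique key
print(len(db))                    # 1 -- the second volume overwrote the first

# A re-designed composite key would keep both records:
db2 = {(v["storage"], v["volume_id"]): v for v in volumes}
print(len(db2))                   # 2
```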
-
How can 2 different volumes on different storage devices share the same volume ID? We expect that a volume ID is ALWAYS unique.
-
Send us logs and a few screenshots to document the issue (attach even a screenshot from the storage connection test): UI --> Settings --> Logs --> Support Logs: follow the form. Send us the generated file via https://upload.xormon.com. Note a reference, please.
-
The above "volume id" in STOR2RRD does not seem to be the regular volume ID used for host mapping purposes; that is perhaps why XorMon NG does not show it. We will analyze it and let you know more later on.
-
What is your XorMon NG version?
-
Send us logs. Note a short problem description in the text field of the upload form. Under the lpar2rrd user:
cd `cat ~/.lpar2rrd_home`
gzip -9 logs.tar
Send us logs.tar.gz via https://upload.lpar2rrd.com
-
They should be downloaded via the web browser; check your browser's download directory. As a workaround you can generate the logs manually:
cd /home/xormon/xormon-ng # or your Xormon NG dir
cd server-nest
node dist/dump
It will generate the file /home/xormon/xormon-ng/server-nest/files/tmp/logs.tar.gz. Send it via email or…
-
Send us the logs as well if you can; we will check whether we can find anything else. UI --> Settings --> Logs --> Support Logs: follow the form. Send us the generated file via https://upload.xormon.com. Note a reference, please.
-
4 CPUs might not be enough. IBM i OS agents are not as optimized on the XorMon back-end as the AIX/Linux ones, where we expect thousands of them. 250 IBM i agents is quite a lot; we have never seen that. Try doubling the CPUs to 8 and let us know.
-
Another fix from IBM for the same issue: https://www.ibm.com/support/pages/hmc-enhanced-gui-error-occurred-while-querying-sharedethernetadapterthe-system-currently-too-busy-complete-specified-request
-
We did not get the logs; try attaching them to an email: support at xorux dot com.
-
Send us logs: UI --> Settings --> Logs --> Support Logs: follow the form. Send us the generated file via https://upload.xormon.com. Note a reference, please.