
Hi guys! 

Do you have any specific instructions on how to configure a cloned poller? This would be very handy for infrastructure that is distributed across multiple sites. It is also much more convenient to clone a VM than to install everything from scratch. What I’ve learned so far:

  • The usual suspects: IP, hostname, host SSH keys (see the sketch after this list)
  • copy the Gorgone config (from the GUI) and paste it on the poller
  • if it’s a new poller: configure the engine for it on the central
  • if you replace an old poller: clean up the Gorgone history, as mentioned here
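For the first bullet, a minimal sketch of what resetting the clone’s identity can look like (plain hostnamectl/OpenSSH tooling, nothing Centreon-specific; “poller-02” is just an example name):

# hostnamectl set-hostname poller-02    # new poller name
# rm -f /etc/ssh/ssh_host_*             # drop host keys inherited from the clone source
# ssh-keygen -A                         # regenerate fresh host keys
# systemctl restart sshd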

This is probably sufficient for checks to run, but, for example, Platform Status → Poller Statistics stays empty for cloned satellites. Anything else we need to know?

 

Thanks in advance!

Hello

here is what I did in my setup, where I create a new poller for each client in a private cloud (where I can clone):

I have set up a Linux VM (AlmaLinux) with every necessary package installed and the network script config ready to be modified

I usually install a few things like “postfix”, “mailx”, “net-snmp”, “net-snmp-utils”, and “telnet”

then I use the unattended.sh script from the package-based installation documentation

I also make sure the services “gorgoned”, “centengine”, and “centreon” are disabled and do not start on boot
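in shell terms that is simply (standard systemctl usage with the three service names above):

# systemctl disable --now gorgoned centengine centreon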

at this point I make sure the SSH keys are OK and all the users are created; I don’t change the central keys often, so I put the public keys where they need to be.

and it’s done, I have a poller template.

 

when I need a new poller, I simply clone this machine, configure the network, and make sure the firewall/NAT rules are correct so it can communicate with the central server

I next test that the SSH connection is OK (I usually allow SSH from the central to the poller for ease of maintenance)
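as a sketch of those two checks, assuming the default Gorgone ZMQ port 5556 (adjust to your NAT setup; <poller-ip> is a placeholder):

# firewall-cmd --permanent --add-port=5556/tcp   # on the poller, so the central can reach gorgoned
# firewall-cmd --reload
# ssh centreon@<poller-ip> hostname              # from the central; should succeed without a password prompt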

then I use the “Add poller” wizard from the central web UI, entering the name, the IP, and the “IP of the central” (which is often NATed)

I then copy the gorgoned data (from the web UI, you get a script you need to paste on the poller)

enable the 3 centreon services, and reboot.
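in shell terms, the reverse of the template step:

# systemctl enable gorgoned centengine centreon
# systemctl reboot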

I then need to push the config for both the central server and the poller (I usually use the “restart” option at this time for both pollers). It takes a few minutes to connect, but the poller should come online and show green in the poller list on the web UI
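if you prefer doing that export/restart from the command line, CLAPI has an APPLYCFG action that generates, moves, and restarts the configuration in one go (check the exact syntax against your version; the credentials are placeholders):

# centreon -u admin -p '<password>' -a APPLYCFG -v '<poller name or id>'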

 

that is basically what you have to do with your method too, but this way you don’t need to clean up anything, and there is no risk of leftover issues

 

things to do:

  • Test your template

clone it, try to make it work, then delete it from the central configuration

 

  • maintain your template

I usually update the OS with

# yum update -x "centreon*"

this will update the OS/Packages but not the centreon release

(if you have updated your central and your poller to a newer version of centreon, you should also update that in the template)
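on Debian-based pollers (like the one discussed below), a rough equivalent is to hold the Centreon packages before upgrading; dpkg-query accepts glob patterns, so one way to do it (verify the resulting package list on your system):

# apt-mark hold $(dpkg-query -W -f='${Package} ' 'centreon*')
# apt-get update && apt-get upgrade
# apt-mark unhold $(dpkg-query -W -f='${Package} ' 'centreon*')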

 

things not to do

  • preinstall plugin packs; they are deployed automatically when needed via the “automatic installation of plugins” option on the monitoring connector/plugin page (if you have custom scripts/plugins, you can include them in the template)
  • trying to clone an existing working poller ;)

Hi @christophe.niel-ACT! I basically did the same with my second poller (Debian), as the first one is physical:

  • install using Debian Netinst and do basic networking setup
  • install dselect & run dselect update
  • old poller: dpkg --get-selections >sel.txt
  • copy sel.txt to new poller
  • new poller: dpkg --set-selections < sel.txt
  • new poller: apt-get dselect-upgrade
  • do the Centreon stuff

This all works out fine, but you still need to configure A LOT of things, e.g. exim/postfix, Perl libraries that are not packaged, 3rd-party software, etc. Right-click → Clone → Clone to new VM, copy to another ESX host and boot is way faster and more consistent. That is why I was asking. As you clone your VMs, too: can you check Platform Status → Poller Statistics for a cloned VM? Is it empty, or is it just me? I suspect the reason behind it is the Gorgone key pair, which is the same for all cloned pollers.
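One way I plan to test that suspicion is to compare the Gorgone key pair across pollers; assuming the keys live under /var/lib/centreon-gorgone/.keys (the location on my install, adjust if yours differs):

# md5sum /var/lib/centreon-gorgone/.keys/rsa_key.pub

run on each poller; identical checksums on two pollers would confirm they share the cloned key.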

Even though you have written not to clone running pollers (too late, I guess :)): any suggestions on how to fix it on Debian, i.e. how to purge and reinstall Gorgone from scratch? It’s the only component that is actually quite specific to a poller.
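My current plan, unless someone knows better, is roughly the following (the package name centreon-gorgone and the paths are guesses from my install; I believe gorgoned regenerates its key pair on first start, but I haven’t verified that):

# systemctl stop gorgoned
# apt-get purge centreon-gorgone
# rm -rf /etc/centreon-gorgone /var/lib/centreon-gorgone
# apt-get install centreon-gorgone

then re-paste the configuration script from the central web UI and clean up the Gorgone history on the central, as in my first post.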

 


hmm, I have the problem with the statistics as well; I have not opened a support ticket yet, as I can fix it easily by restarting all the services on the central

as I said, I never cloned an existing poller, but sometimes the statistics page stops logging for all pollers except the central poller.

I said it’s not a good practice because I clearly don’t know enough about the internal workings and the content of the various files in /etc/centreon* and /var/lib/centreon* to know whether what you suggested in the original post is enough. (maybe someone from Centreon can tell)

but I’m not sure the statistics issue is correlated to the clones.

unless it is only the cloned poller that has empty statistics; for me it is all pollers at the same time. What is the symptom for you?

 

there are some SSH key requirements between the “centreon-engine” and “centreon” users on the central and the pollers, but that should be for the “broker-stat” service from the plugin pack, and I’m not sure whether this is related to the engine statistics on the statistics page.

 

a quick reboot of the central fixes the problem for me; I haven’t tried restarting the “cbd gorgoned centengine” daemon combo on the master to see if that is enough.
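if it happens again, the lighter restart to try first on the central would simply be:

# systemctl restart cbd gorgoned centengine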

 

I have not yet pinpointed the issue as it doesn’t happen a lot (3 times in the last 6 months), but I have noticed occasional MySQL timeouts (during VM snapshots) which were causing issues with acknowledgements, scheduled maintenance AND statistics.

 

 


Hi! For me, all non-cloned pollers, including the clone source, provide all statistics as expected. Engine statistics run fine for all machines, including the cloned ones. Only the broker statistics are affected for the latter. I think you are right; I will open a support ticket. Will post the result here if any.

