Centreon makes it possible to reverse the connection flow of the Gorgone process by using the Gorgone Pull mode.
Context
Many situations can lead to implementing this feature. Consider a context in which a client has a distributed architecture with a Central server accessible from a public IP address (for example in a public Cloud) and many Pollers configured on different LANs. Naturally, you cannot contact your remote Pollers from your Central server: the Pollers' IP addresses are not reachable from the Internet, and therefore not reachable from the Central server either. In this case, we will ask each Poller to initiate a connection to the public IP of the Central server in order to retrieve its configuration.
Architecture
Consider the following distributed architecture:
To identify incoming and outgoing connections between the two servers, we can run the ss command with the -plantu option:
[root@Centreon_Central centos]# ss -plantu | grep 5669
tcp LISTEN 0 128 *:5669 *:* users:(("cbd",pid=4635,fd=9))
tcp ESTAB 0 0 10.25.10.88:5669 10.25.15.237:60486 users:(("cbd",pid=4635,fd=18))
tcp ESTAB 0 0 127.0.0.1:5669 127.0.0.1:53592 users:(("cbd",pid=4635,fd=17))
tcp ESTAB 0 0 127.0.0.1:53592 127.0.0.1:5669 users:(("centengine",pid=4673,fd=16))
[root@Centreon_Central centos]# ss -plantu | grep 5556
tcp ESTAB 0 0 10.25.10.88:47216 10.25.15.237:5556 users:(("gorgone-proxy",pid=4774,fd=76))
We have two important network flows between the two Centreon servers:
- In order to transfer the collected data, the connection is made by Centreon Engine from the Poller to the Central Server on port 5669.
- In order to export the Centreon configuration and to transfer external commands, the connection is made by Centreon Gorgone from the Central Server to the Poller on port 5556.
Let's go back to our context: the connection from the Central server to the Poller is not possible. We therefore have to initiate the Gorgone connection from the Poller. We will have the following network flows:
From | To | Protocol | Port | Application |
---|---|---|---|---|
Poller | Central server | ZMQ | TCP 5556 | Export of Centreon configuration |
Poller | Central server | BBDO | TCP 5669 | Transfer of collected data |
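Before going any further, it is worth checking that the Central server's public IP is actually reachable from the Poller on these two ports. The following is only a minimal sketch, run from the Poller, using bash's built-in /dev/tcp redirection; 10.25.10.88 is the Central server's public IP from this example. Note that port 5556 will only answer once the Central-side configuration described below is in place.

```bash
# From the Poller: check that the Central server's public ports are reachable
timeout 3 bash -c '</dev/tcp/10.25.10.88/5669' && echo "5669 reachable"   # BBDO: transfer of collected data (already open)
timeout 3 bash -c '</dev/tcp/10.25.10.88/5556' && echo "5556 reachable"   # ZMQ: Gorgone (open once the Central side is configured)
```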
Prerequisite
The remote Poller and Gorgone are already installed (it may not be possible to perform the Register the server step yet, but don't worry).
In our case, we have the following configuration:
- Central server:
- address: 10.25.10.88
- Poller:
- address: 10.25.15.237
Configuration on Poller side
- In the menu Configuration > Pollers > Pollers, edit the Poller configuration, select ZMQ as the Gorgone connection protocol and set the appropriate port (port 5556 is recommended).
- From the Pollers listing, click on the Display Gorgone configuration action icon on the line corresponding to the Poller
- A pop-in will show the configuration to copy into the Poller terminal. Click on Copy to clipboard
- Paste the content of the clipboard directly into the Poller terminal and hit the Enter key for the command to be applied.
- Edit the file /etc/centreon-gorgone/config.d/40-gorgoned.yaml: modify the gorgonecore section and add the pull module, replacing the IP address in the target_path field with the IP address of the Central server:
name: distant-server
description: Configuration for distant server
gorgone:
  gorgonecore:
    id: 3
    privkey: "/var/lib/centreon-gorgone/.keys/rsakey.priv.pem"
    pubkey: "/var/lib/centreon-gorgone/.keys/rsakey.pub.pem"
  modules:
    - name: action
      package: gorgone::modules::core::action::hooks
      enable: true
    - name: engine
      package: gorgone::modules::centreon::engine::hooks
      enable: true
      command_file: "/var/lib/centreon-engine/rw/centengine.cmd"
    - name: pull
      package: "gorgone::modules::core::pull::hooks"
      enable: true
      target_type: tcp
      target_path: 10.25.10.88:5556
      ping: 1
- Run the following command to restart the Gorgone service:
systemctl restart gorgoned
- Make sure it is started by running the following command:
systemctl status gorgoned
- Retrieve the gorgone_key_thumbprint.pl script:
[root@poller tmp]# wget https://github.com/centreon/centreon-gorgone/blob/develop/contrib/gorgone_key_thumbprint.pl
- Edit the file and replace /etc/pki/gorgone/pubkey.pem with /var/lib/centreon-gorgone/.keys/rsakey.pub.pem (see the sketch after this list for a possible one-liner)
- Run perl gorgone_key_thumbprint.pl
The output should look like this:
[root@poller ~]# perl gorgone_key_thumbprint.pl
2022-03-03 14:59:53 - INFO - File '/var/lib/centreon-gorgone/.keys/rsakey.pub.pem' JWK thumbprint: f1QbOx92vskVps1TGxxLOf8JS4Yc3Tak43jgRmSFL3U
- Keep the RSA public key thumbprint: f1QbOx92vskVps1TGxxLOf8JS4Yc3Tak43jgRmSFL3U
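As a convenience, the path replacement and the thumbprint generation can be scripted. This is only a sketch: it assumes the script was downloaded to the current directory and still contains the hard-coded path /etc/pki/gorgone/pubkey.pem.

```bash
# Point the script at the public key actually used by Gorgone on the Poller
sed -i 's#/etc/pki/gorgone/pubkey.pem#/var/lib/centreon-gorgone/.keys/rsakey.pub.pem#' gorgone_key_thumbprint.pl

# Print the JWK thumbprint of the Poller's RSA public key
perl gorgone_key_thumbprint.pl
```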
Configuration on Central side
- Edit the Gorgone configuration file /etc/centreon-gorgone/config.d/40-gorgoned.yaml and add the following lines (key refers to the RSA public key thumbprint retrieved on the Poller):
...
gorgone:
  gorgonecore:
    ...
    external_com_type: tcp
    external_com_path: "*:5556"
    authorized_clients:
      - key: f1QbOx92vskVps1TGxxLOf8JS4Yc3Tak43jgRmSFL3U
    ...
  modules:
    ...
    - name: register
      package: "gorgone::modules::core::register::hooks"
      enable: true
      config_file: /etc/centreon-gorgone/nodes-register-override.yml
    ...
- Then create a new /etc/centreon-gorgone/nodes-register-override.yml file with the following content:
nodes:
  - id: 3
    type: pull
    prevail: 1
The id parameter corresponds to the Poller ID (retrieved from the Poller's Gorgone configuration file).
- Run the following command to restart the Gorgone service:
systemctl restart gorgoned
- Make sure it is started by running the following command:
systemctl status gorgoned
- From the Pollers listing, select the Poller and click on Export configuration. Then check the first four boxes, select the Restart method and click on Export:
The Poller's engine will then start and connect to the Central Broker.
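The resulting flows can be verified from the Central server with the same ss command used earlier. A minimal sketch (the grep pattern simply matches the two ports used in this example, and /var/log/centreon-gorgone/gorgoned.log is assumed to be the default Gorgone log location):

```bash
# Both connections should now be established from the Poller (10.25.15.237) towards the Central server (10.25.10.88)
ss -plantu | grep -E ':5556|:5669'

# Check the Gorgone log on the Central server for any pull-related errors
tail -n 50 /var/log/centreon-gorgone/gorgoned.log
```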
We now have the following network flows:
As expected, we have:
tcp ESTAB 0 0 10.25.10.88:5556 10.25.15.237:60376
tcp ESTAB 0 0 10.25.10.88:5669 10.25.15.237:54180