There are regular questions about best practices when defining a target architecture to deploy Centreon in public cloud environments. This article provides some tips. Feel free to ask questions in the comments below.
Regardless of the cloud provider, certain general considerations are listed in the documentation:
- Check the software compatibility
- Check the hardware requirements
- Calculate the data storage sizing
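As a rough illustration of the sizing exercise, here is a back-of-envelope estimate of metric storage. Every figure below (hosts, services, metrics per service, bytes per data point) is an assumption for illustration only, not a Centreon-published formula; substitute the numbers from your own platform.

```shell
#!/bin/sh
# Back-of-envelope storage estimate for performance data.
# All figures are assumptions -- replace them with your own platform's numbers.
HOSTS=1000
SERVICES_PER_HOST=10
METRICS_PER_SERVICE=3
POINTS_PER_DAY=288        # one data point every 5 minutes
RETENTION_DAYS=365
BYTES_PER_POINT=12        # hypothetical average storage cost per point

awk -v h="$HOSTS" -v s="$SERVICES_PER_HOST" -v m="$METRICS_PER_SERVICE" \
    -v p="$POINTS_PER_DAY" -v d="$RETENTION_DAYS" -v b="$BYTES_PER_POINT" \
    'BEGIN {
        metrics = h * s * m
        bytes   = metrics * p * d * b
        printf "%d metrics, ~%.1f GiB over %d days\n", metrics, bytes / 1024^3, d
    }'
```

Even with these made-up figures, the exercise shows how quickly retention length and collection frequency drive disk requirements.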
For the DBMS, check that its version matches the one required by your Centreon release. For example, Centreon 21.10 requires compatibility with MariaDB 10.5.
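A quick way to verify this is to compare the version reported by the server against the major.minor release your Centreon version requires (10.5 in the 21.10 example above). The sketch below assumes the `mysql` client is installed and that credentials are available (e.g. in `~/.my.cnf`); the parsing helper itself is plain shell.

```shell
#!/bin/sh
# Compare a MariaDB version string against the major.minor release
# required by the target Centreon version (10.5 for Centreon 21.10).
required="10.5"

version_matches() {
    # $1 = reported version (e.g. "10.5.12-MariaDB"), $2 = required major.minor
    reported=$(printf '%s' "$1" | sed 's/^\([0-9]*\.[0-9]*\).*/\1/')
    [ "$reported" = "$2" ]
}

# If the client is installed, ask the server itself.
if command -v mysql >/dev/null 2>&1; then
    current=$(mysql -N -B -e 'SELECT VERSION();')
    if version_matches "$current" "$required"; then
        echo "OK: $current matches required $required"
    else
        echo "WARNING: $current does not match required $required"
    fi
fi
```

This kind of check is worth scripting because managed database services in the cloud sometimes auto-upgrade to versions that are not yet validated by Centreon.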
Disk access is also very important: read and write latency must stay low, and I/O is a critical point. On AWS, prefer "root" or "EBS" volumes over "EFS", which is a network file system and will cause problems under a high collection volume. The disk type also matters: prefer SSD (GP2) to optimize I/O.
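To get a quick feel for synchronous write latency on a given volume (an EBS GP2 volume versus EFS, for instance), a crude probe with `dd` can help; for serious measurements, use a dedicated benchmark tool such as `fio`. The probe file name below is arbitrary.

```shell
#!/bin/sh
# Crude synchronous-write probe: write 1,000 blocks of 4 KiB, forcing a
# sync on each write (oflag=dsync), which roughly mimics the many small
# synchronous writes a busy collection platform generates.
# GNU dd prints the elapsed time and throughput on its last output line.
probe_file="./io_probe.tmp"
dd if=/dev/zero of="$probe_file" bs=4k count=1000 oflag=dsync 2>&1 | tail -n 1
rm -f "$probe_file"
```

Run it once on each candidate volume type: a network file system will typically show dramatically lower throughput on this pattern than a local SSD-backed volume.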
Also pay attention to the network configuration. For example, AWS uses a non-standard MTU for network packets on its instances. This difference does not currently seem to impact Centreon's behaviour.
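You can check which MTU is actually configured on an instance's interfaces straight from Linux sysfs, without any extra tooling:

```shell
#!/bin/sh
# Print each network interface with its configured MTU (Linux sysfs).
# On EC2 instances inside a VPC you will often see a jumbo-frame MTU
# instead of the classic 1500 bytes.
for dev in /sys/class/net/*; do
    printf '%s mtu %s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done
```

Comparing this output between your central server and your pollers is a cheap first step when diagnosing odd network behaviour between cloud and on-premises segments.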
Finally, be careful with network addressing. On AWS, "auto scaling groups" can recreate a machine automatically if it becomes unreachable, but the replacement instance gets a different IP address and the pollers can no longer connect to it. You will then need a service such as Route 53, combined with a script executed via a "hook" when the new server starts, to update the network addressing.
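A startup hook along these lines can re-point a DNS record at the new instance so pollers keep using a stable name. This is only a sketch: the hosted-zone ID and record name are hypothetical placeholders, and the use of the EC2 instance-metadata endpoint to discover the local IP is an assumption about your setup; the `aws route53 change-resource-record-sets` call and its UPSERT change batch come from the AWS CLI.

```shell
#!/bin/sh
# Startup hook sketch: UPSERT a Route 53 A record so pollers can keep
# using a stable DNS name after an auto-scaling replacement.
# ZONE_ID and RECORD are hypothetical placeholders -- adapt them.
ZONE_ID="Z0000000EXAMPLE"
RECORD="central.monitoring.example.com"

# Build the Route 53 change batch JSON for the given IP address.
change_batch() {
    printf '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"%s","Type":"A","TTL":60,"ResourceRecords":[{"Value":"%s"}]}}]}' "$RECORD" "$1"
}

# Only talk to AWS when the CLI is actually available (i.e. on the instance).
if command -v aws >/dev/null 2>&1; then
    # Discover this instance's private IP from the EC2 metadata service.
    ip=$(curl -s --max-time 2 http://169.254.169.254/latest/meta-data/local-ipv4)
    aws route53 change-resource-record-sets \
        --hosted-zone-id "$ZONE_ID" \
        --change-batch "$(change_batch "$ip")"
fi
```

Pollers then target the DNS name with a short TTL rather than the IP address, so a replacement instance becomes reachable again shortly after the hook runs.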
Each cloud provider has its own services and constraints. It is difficult to make an exhaustive list, so feel free to share your experience!