Hey @GHT Armor, In order to help as many users as possible, could you please post your question in English? That way, anyone else who might be experiencing a similar issue can also benefit from the solutions that will be suggested. And this can even help you reach more people in the community. Thanks for your understanding. Cheers
Hello @Fabrix, I will do my best.
We have two Data Domain appliances that are kept in sync for a Cyber Recovery environment.
* There are 2 replication contexts from the production (Veeam) Data Domain to the Cyber Recovery Data Domain. These replications run every day on a specific schedule.
* There is 1 replication context pre-created from the Cyber Recovery Data Domain to the production one. This replication only exists to make the recovery process easier (in case the production Data Domain is still available and does not need to be reconfigured from scratch). This replication context only runs during recovery tests, so its out-of-sync time is huge.
With these 3 replication contexts, we cannot use the “Offset” option of the Centreon plugin to monitor the production Data Domain replications, because it always reports the “recovery replication context” as critical, due to its out-of-sync time.
The goal of this enhancement request is to have the ability to filter replication contexts in the “replication” mode of the Centreon EMC DataDomain plugin (run with an offset threshold). We saw that an “instance” filter is already available in the “FileSystem” mode of the same plugin; we hope the same filtering option can be applied to the “replication” mode.
Since our Cyber Recovery environment was integrated by Dell/EMC, we think we may not be the only ones with an “out-of-sync” recovery replication context, so this enhancement could be reused by several users.
We hope this makes clear what we are expecting.
Here is the result we get without a filtering option:
/usr/lib/centreon/plugins/centreon_emc_datadomain.pl --plugin=storage::emc::DataDomain::plugin --mode=replication --hostname=DD-PROD-IP --snmp-version='2c' --snmp-community='my-community' --warning-status='%{state} =~ /disabledNeedsResync|uninitialized/i' --critical-status='%{state} =~ /initializing|recovering/i' --warning-offset='108000' --critical-offset='172800' --verbose
CRITICAL: Replication 'mtree://DD-CYB/data/col1/veeam_restore/mtree://DD-PROD/data/col1/veeam_restore_from_crs' last time peer sync : 22384919 seconds ago | 'offset_mtree://DD-PROD/data/col1/DD-PROD_SU01/mtree://DD-CYB/data/col1/DD-PROD_SU01_REPL'=24633;0:108000;0:172800;; 'offset_mtree://DD-CYB/data/col1/veeam_restore/mtree://DD-PROD/data/col1/veeam_restore_from_crs'=22384919;0:108000;0:172800;; 'offset_mtree://DD-PROD/data/col1/DD-PROD_SU02/mtree://DD-CYB/data/col1/DD-PROD_SU02_REPL'=19804;0:108000;0:172800;;
Replication 'mtree://DD-PROD/data/col1/DD-PROD_SU01/mtree://DD-CYB/data/col1/DD-PROD_SU01_REPL' status is 'normal', last time peer sync : 24633 seconds ago
Replication 'mtree://DD-CYB/data/col1/veeam_restore/mtree://DD-PROD/data/col1/veeam_restore_from_crs' status is 'normal', last time peer sync : 22384919 seconds ago
Replication 'mtree://DD-PROD/data/col1/DD-PROD_SU02/mtree://DD-CYB/data/col1/DD-PROD_SU02_REPL' status is 'normal', last time peer sync : 19804 seconds ago
[centreon@SRV-CENTREON ~]$
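Until such a filter exists in the plugin itself, the only stopgap we found is post-processing the verbose output to hide the recovery context line. This is a hypothetical workaround sketch (the `EXCLUDE` pattern is our context name, not a plugin option), and it only hides the line; it does NOT change the plugin's exit code, which is exactly why a real filter in the “replication” mode is needed:

```shell
#!/bin/sh
# Workaround sketch only: hide the recovery replication context from the
# verbose output by pattern. The exit code of the plugin is unchanged.
EXCLUDE='veeam_restore_from_crs'   # name of the recovery context to hide

# Sample lines standing in for the plugin output captured above:
printf '%s\n' \
  "Replication 'mtree://DD-PROD/data/col1/DD-PROD_SU01/mtree://DD-CYB/data/col1/DD-PROD_SU01_REPL' status is 'normal'" \
  "Replication 'mtree://DD-CYB/data/col1/veeam_restore/mtree://DD-PROD/data/col1/veeam_restore_from_crs' status is 'normal'" \
  "Replication 'mtree://DD-PROD/data/col1/DD-PROD_SU02/mtree://DD-CYB/data/col1/DD-PROD_SU02_REPL' status is 'normal'" \
  | grep -v "$EXCLUDE"
```

A native filter option (like the “instance” filter in “FileSystem” mode) would instead exclude the context before the thresholds are evaluated, so the critical state would not be raised at all.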
Regards,
GHT Armor
Since this is an enhancement request, I have converted it to an idea. It is now open to votes.