
Introduction

 

Over the years, many features have been added to Centreon to meet the growing needs of users.

Likewise, the need to test and validate these features has grown with each release, in order to avoid bad surprises caused by a patch or a new feature. Validating this whole feature set manually for each new version can therefore take a lot of time.

To avoid this kind of inconvenience, we have been running an automated functional test system for the past five years, to accelerate this validation and confirm that the tested parts are still fully functional.
 

What are we testing?

 

Currently, we test each feature separately: each test runs in an independent environment, which is reset after execution so that the next test starts from a clean state.

 

Many tests allow us to validate that the configuration of the various Centreon objects is correctly saved. For example, a test consists of the following steps (a rough sketch of the matching step definitions is shown after the list):

  1. Going to the form used to add a host

  2. Filling in all the fields of the form with predefined values

  3. Saving the form

  4. Editing the previously added host configuration

  5. Checking that all fields have been correctly saved
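
To make this more concrete, here is a rough sketch of what the step definitions behind such a scenario could look like, in the page-object style of our Behat contexts. The page class, property names, and step wording are illustrative assumptions, not the exact centreon-test-lib API:

    /**
     * @When I create a host with predefined values
     */
    public function iCreateAHostWithPredefinedValues()
    {
        // Hypothetical page object for the host creation form.
        $this->currentPage = new HostConfigurationPage($this);
        $this->currentPage->setProperties([
            'name' => 'test-host',
            'alias' => 'Test host',
            'address' => '127.0.0.1'
        ]);
        $this->currentPage->save();
    }

    /**
     * @Then the host properties are correctly saved
     */
    public function theHostPropertiesAreCorrectlySaved()
    {
        // Re-open the previously created host and compare the saved fields.
        $this->currentPage = new HostConfigurationPage($this, 'test-host'); // hypothetical
        $properties = $this->currentPage->getProperties();
        if ($properties['address'] !== '127.0.0.1') {
            throw new \Exception('The host address was not saved correctly.');
        }
    }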

 

We currently have over 200 tests that cover a large part of Centreon's functionalities.

 

It is important to note that this does not prevent us from running into some bugs, as we essentially cover the nominal cases.

For example, we currently do not test, in an automated way, the monitoring of a service linked to several hosts.

 

 

Test environment

 

A Centreon application

 

The first thing needed to test Centreon is, of course, a running instance of the Centreon application.

To do this, our Jenkins-based continuous integration system builds docker containers with the following components: centreon-web, centreon-engine, centreon-broker and centreon-gorgone.

One of the reasons we have chosen to run the tests in docker environments is that it allows us to quickly instantiate an identical environment from one test to the next.

 

A web browser

 

In order to test the centreon-web interface, it is necessary to simulate user actions. To do so, we use a Google Chrome browser, itself integrated in a docker container.

The container also contains the Selenium webdriver, which allows us to drive the browser directly from a programming language. In our case, we use the Selenium webdriver from PHP code.
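
As an illustration, here is a minimal, self-contained sketch of how a Selenium webdriver can be driven from PHP with the php-webdriver library; the Selenium URL, page URL, and field names are assumptions for the example, not our actual test code:

    <?php

    require 'vendor/autoload.php';

    use Facebook\WebDriver\Remote\DesiredCapabilities;
    use Facebook\WebDriver\Remote\RemoteWebDriver;
    use Facebook\WebDriver\WebDriverBy;

    // Connect to the Selenium server running in the Chrome container.
    $driver = RemoteWebDriver::create('http://chrome:4444/wd/hub', DesiredCapabilities::chrome());

    // Open the Centreon login page and log in.
    $driver->get('http://centreon/centreon/');
    $driver->findElement(WebDriverBy::name('useralias'))->sendKeys('admin');
    $driver->findElement(WebDriverBy::name('password'))->sendKeys('centreon');
    $driver->findElement(WebDriverBy::name('submitLogin'))->click();

    $driver->quit();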

 

A test runner

 

To execute the functional test suite and to make all the needed testing components communicate, a test runner is also necessary.

In our case, we use Behat: this tool is able to interpret the Gherkin syntax and to execute the associated PHP code.

The Gherkin syntax makes it possible to write test cases that are meaningful and understandable by an outside reader.

 

Example: the test for the autologin feature

Feature: Autologin

  As a Centreon Web user
  I want to use the Autologin feature
  So that I can access centreon without using the login page

 

  Scenario: Connection via autologin

    Given I am logged in a Centreon server
    And the user with autologin enabled
    When the user generates autologin key
    Then the user arrives on the configured page for its account


In conjunction with Behat, we use Mink, a library that gives instructions to the Selenium webdriver.
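
For instance, here is a hedged sketch of a Mink session wired to that Selenium webdriver; the container hostname, page URL, and field labels are illustrative:

    <?php

    require 'vendor/autoload.php';

    use Behat\Mink\Driver\Selenium2Driver;
    use Behat\Mink\Mink;
    use Behat\Mink\Session;

    // Wrap the Selenium webdriver in a Mink session.
    $mink = new Mink([
        'selenium2' => new Session(new Selenium2Driver('chrome', null, 'http://chrome:4444/wd/hub'))
    ]);
    $mink->setDefaultSessionName('selenium2');

    $session = $mink->getSession();
    $session->visit('http://centreon/centreon/');

    // Mink exposes high-level helpers instead of raw webdriver calls.
    $page = $session->getPage();
    $page->fillField('useralias', 'admin');
    $page->fillField('password', 'centreon');
    $page->pressButton('Connect');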

 

How it works

 

Architecture schema

 


 

  1. Behat is executed, which will launch the test scenarios listed in the "features" folder.

  2. The first step of the scenario instantiates the "Chrome" and "Centreon" containers through the docker-compose command, which connects the components with each other (a sketch of such a step follows this list).

  3. Behat, through Mink, communicates with Selenium in order to drive the Chrome browser. For example, it will give Chrome the instruction to click on a button.

  4. Google Chrome executes the action requested by Selenium on the Centreon web page.
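
The PHP behind that first step could, for example, simply shell out to docker-compose. This is a hypothetical sketch, with the compose file path assumed for the example (the real centreon-test-lib may wire this differently):

    /**
     * @Given I am logged in a Centreon server
     */
    public function iAmLoggedInACentreonServer()
    {
        // Hypothetical: start the "Centreon" and "Chrome" containers
        // described in a compose file dedicated to this test suite.
        shell_exec('docker-compose -f docker-compose/centreon-web.yml up -d');

        // ...then wait for the web server to answer and log in via Mink.
    }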

 

To go a little further

 

Technically, Gherkin files (features/*.feature) are attached to implementation files (features/bootstrap/*.php) through the behat.yml file:

 

...
    autologin:
      paths: [ "%paths.base%/features/Autologin.feature" ]
      contexts: [ AutologinContext ]
...

 

Here is an example of the implementation of the "Given the user with autologin enabled" step:

    /**
     * @Given the user with autologin enabled
     */
    public function theUserWithAutologinEnabled()
    {
        $this->currentPage = new ParametersCentreonUiPage($this);
        $this->currentPage->setProperties([
            'enable autologin' => true
        ]);

        $this->currentPage->save();
    }

 

When the step is executed by Behat, the associated function will be called automatically thanks to the annotations.

  • ParametersCentreonUiPage tells Selenium to navigate to the “Administration > Parameters > Centreon UI” page

  • setProperties(['enable autologin' => true]) tells it to check the “enable autologin” box

  • save() tells it to save the form

 

Behind all this is the centreon-test-lib git repository, which makes navigating through Centreon with Selenium as easy as possible.
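
To give an idea of what this library provides, here is a heavily simplified, hypothetical sketch of a page object such as ParametersCentreonUiPage; the page URL and the form field name are assumptions for the example, and we assume the context exposes the Mink session:

    class ParametersCentreonUiPage
    {
        protected $context;

        public function __construct($context)
        {
            $this->context = $context;
            // Navigate to Administration > Parameters > Centreon UI
            // (hypothetical page URL).
            $context->getSession()->visit('main.php?p=50110&o=general');
        }

        public function setProperties($properties)
        {
            $page = $this->context->getSession()->getPage();
            foreach ($properties as $property => $value) {
                // Map the logical property name to a concrete form field;
                // only the autologin checkbox is sketched here.
                if ($property === 'enable autologin') {
                    $field = $page->findField('enable_autologin'); // hypothetical field name
                    $value ? $field->check() : $field->uncheck();
                }
            }
        }

        public function save()
        {
            $this->context->getSession()->getPage()->pressButton('Save');
        }
    }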

 

 

What's next?

 

The next step will be to build complete end-to-end scenarios, rather than testing each feature independently. This will automate the entire regression test plan. With these tests implemented and running, we will then be able to release new versions of Centreon much faster, as a lot of previously manual tasks will be automated.

 

The other step, which has already started, consists in setting up functional tests on the new pages created with ReactJS. The technology used is Cypress, which provides a better-suited environment for writing and debugging tests. It also allows us to write tests in JavaScript (instead of PHP), and thus allows the frontend team to implement the tests.

Hi,

Has any progress been made regarding those tests?

It would be nice to also test the APIs: not only the v1 and v2 HTTP REST APIs, but also the legacy command-line API (i.e. CLAPI). Testing and benchmarking them would allow you to avoid some very annoying bugs, and also to notice the performance issues the latest versions have introduced.

My two cents.

 

