Performance tests with Artillery

1. Environment

This tutorial is written using the following environment:

  • Hardware: Slimbook Pro 13.3" (Intel Core i7, 32GB RAM)
  • Operating System: Lubuntu 16.04
  • Visual Studio Code 1.16.1
  • NodeJS v9.3.0
  • artillery 1.6.0-12

2. Introduction

Who, when talking about performance and stress testing of an API or a website, doesn't automatically think of JMeter? In fact, at Adictos we are very fond of JMeter.

But times change, and although JMeter is still a very useful tool for this type of test, other more "modern" tools have emerged strongly, such as the one we present in this tutorial: Artillery.

It is a tool implemented in NodeJS that, through a declarative configuration file, allows you to define, among other things, the number of users and requests to be simulated, and records latency and duration results for the minimum, the maximum, the median, the 95th percentile and the 99th percentile.

We can also configure plugins that generate other metrics and integrate them with visualization tools; it is compatible with technologies and tools such as StatsD, InfluxDB, Graphite, Grafana, ElasticSearch, Kibana…

In this tutorial we will see how to configure the plugin that generates the metrics for StatsD and how to do the integration with Graphite and Grafana for the visualization of results.

3. Let’s get down to business

As in any modern application worth its salt, installation could not be easier thanks to NodeJS.

With an instance of NodeJS and NPM it is as simple as:

$> npm install -g artillery

The tool has the “quick” command, which lets us launch a quick test against a given URL, with the following modifiers:

  • -d, --duration
  • -r, --rate
  • -p, --payload
  • -t, --content-type
  • -o, --output
  • -k, --insecure
  • -q, --quiet

This way, if we want to simulate 10 users sending 20 requests each, we simply execute:

$> artillery quick --count 10 -n 20 target_url

And we will receive output in the console similar to this:

As you can see, it offers a summary in the console showing latency and duration metrics, as well as the total number of requests made and requests per second, among other information.

This is fine for a quick, typical test of an API endpoint that does not require security. For other, more advanced tests we have to resort to the “run” command which, in addition to the test configuration file, supports the following modifiers:

  • -o, --output
  • -k, --insecure disables TLS certificate verification
  • -e, --environment
  • -t, --target
  • -q, --quiet disables console logging
  • --overrides overwrites entire parts of the configuration file

The test is defined in a configuration file with the .yml extension that always starts with “config:”; this property can be followed by:

  • target: defines the URL of the endpoint we want to test. It usually stores the base URL of the server under test.
  • phases: this property defines distinct stages of the test, where we can make the requests behave in a certain way for a set time. Each “phase” can be defined with the following properties:
    • duration: defines the total duration of the “phase”.
    • arrivalRate: the number of requests per second that will arrive during the defined duration.
    • rampTo: defines the arrival rate you want to reach, starting from the one defined in “arrivalRate”, over the established period.
    • arrivalCount: the total number of virtual users to create, spread over the established period.
    • pause: do nothing for the given time.
    • name: optional, but advisable in order to identify the different “phases” in the generated reports.
  • environments: the property that allows defining the particulars of the different execution environments.
  • payload: the property that allows defining a CSV file from which to read different test cases, for example, users and passwords attacking the same endpoint.
  • scenarios: the property that allows defining a flow of requests to different endpoints.
  • defaults: the property that allows establishing the HTTP headers that will apply to all defined requests.
  • tls: the property that configures how self-signed certificates should be handled.

A complete example of a configuration file could be:
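The original example was included as an image; a minimal sketch, assuming a hypothetical local API on port 3001, could look like this:

```yaml
config:
  # Base URL of the server under test (hypothetical endpoint)
  target: "http://localhost:3001"
  phases:
    # Warm up: a constant 5 requests per second for 30 seconds
    - duration: 30
      arrivalRate: 5
      name: "Warm up"
    # Ramp from 5 up to 20 requests per second over 60 seconds
    - duration: 60
      arrivalRate: 5
      rampTo: 20
      name: "Ramp up"
  defaults:
    headers:
      Content-Type: "application/json"
scenarios:
  - flow:
      - get:
          url: "/api/public/users"
```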

Note: As you can see there are many properties, so I advise you to check the official documentation to get to know them all; here only the most basic ones have been described.

4. Practical case and integration with StatsD and Graphite

To make the case study you can use any endpoint that you have, but if you do not have one at hand, you can download the following GitHub project: https://github.com/raguilera82/api-back-nodejs.git

It is an API written in NodeJS using the NestJS framework that exposes several endpoints, among them /api/public/users, which returns a list of users.

To use it we simply “clone” the project, we place ourselves inside the root directory and execute:
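The original commands were shown as an image; assuming the project uses the standard npm scripts of a NestJS application (the script name is an assumption), something like the following should work:

```shell
# Clone the project and enter its root directory
git clone https://github.com/raguilera82/api-back-nodejs.git
cd api-back-nodejs

# Install dependencies and start the server
npm install
npm run start
```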

To verify that the server started correctly, we connect with the browser to http://localhost:3001/api/public/users

Now we are going to install the necessary dependency to integrate StatsD with our tests. We execute:

$> npm install -g artillery-plugin-statsd

We also need a server with StatsD and Graphite; the fastest, most convenient and elegant way to get one is with a Docker container:
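The original command was shown as an image; one commonly used image that bundles both services is graphiteapp/graphite-statsd (the image name and port mapping are assumptions, but they match the ports used later in this tutorial):

```shell
# Graphite web UI on port 80, StatsD listening on UDP port 8125
docker run -d --name graphite \
  -p 80:80 \
  -p 8125:8125/udp \
  graphiteapp/graphite-statsd
```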

Now we prepare the test script, making sure to include the plugins section with the connection information for the StatsD server, which is listening on port 8125 of our machine, and a prefix to identify the metrics within Graphite.

In this case we configure that for 30 seconds we want to reach 20 requests per second.
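The script itself was shown as an image; a sketch matching that description, assuming the API from the previous section and the host/port/prefix options documented by artillery-plugin-statsd, could be:

```yaml
config:
  target: "http://localhost:3001"
  phases:
    # Ramp up to 20 requests per second over 30 seconds
    - duration: 30
      arrivalRate: 1
      rampTo: 20
      name: "Ramp to 20 rps"
  plugins:
    statsd:
      host: "localhost"
      port: 8125
      # Prefix used to locate these metrics in the Graphite tree
      prefix: "artillery"
scenarios:
  - flow:
      - get:
          url: "/api/public/users"
```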

Save the file with whatever name you want, with the .yml extension (for example, test.yml), and execute:

$> artillery run test.yml

Now, if we open the URL http://localhost, we should see the Graphite administration page, with all the metrics on the left in the form of a tree.

And as you can see, here are the metrics we have just generated, so we can use them to create the “dashboards” we need.

5. Conclusions

As you have seen, this tool can be very useful for quick tests in a NodeJS environment, and the ease of integration with other tools in its ecosystem makes it especially useful and simple. For Java environments, however, JMeter remains the undisputed leader, even if Gatling is hot on its heels.

The post Performance tests with Artillery appeared first on Target Veb.
