PHPBench 0.11 (Dornbirn) has been released, and it is now available as a PHAR.

It's been over two months since the last PHPBench release. The delay is largely down to the rather large step of introducing a benchmark storage system, but that is not the only new feature: there is also a better reporting system and baseline benchmarks.

But in this post I will just briefly talk about the storage feature.

Storage

You can store benchmarks using the --store option. By default benchmarks are stored as XML, but you can also use the DBAL extension to store results in a standard DBMS.

$ phpbench --progress=dots --store
PhpBench 0.11.0 (e5cfcfb). Running benchmarks.
Using configuration file: /home/daniel/www/phpbench/phpbench/phpbench.json.dist

............ 

10 subjects, 12 iterations, 12 revs, 0 rejects
(best [mean mode] worst) = 3.000 [31,291.583 31,291.583] 3.000 (μs)
⅀T: 375,499.000μs μSD/r 0.000μs μRSD/r: 0.000%
Storing results ... OK
Run: 1339f4acee5b5fe6930e11a835bfbce4ca679081

Note the last line of the output: the UUID is how we can henceforth reference the results of this run, for example to generate reports:

$ ./bin/phpbench report --uuid=1339f4acee5b5fe6930e11a835bfbce4ca679081 --report=aggregate
Suite: 1339f4acee5b5fe6930e11a835bfbce4ca679081, date: 2016-03-30, stime: 08:40:23
+------------------------+------------+-----------+-----------+-----------+--------+----------+
| subject                | mem        | best      | mode      | worst     | rstdev | diff     |
+------------------------+------------+-----------+-----------+-----------+--------+----------+
| benchInitNoExtensions  | 961,680b   | 0.000249s | 0.000249s | 0.000249s | 0.00%  | +98.80%  |
| benchInitCoreExtension | 2,968,328b | 0.031917s | 0.031917s | 0.031917s | 0.00%  | +99.99%  |
+------------------------+------------+-----------+-----------+-----------+--------+----------+

You can also compare multiple runs by specifying multiple UUIDs. Here we use the meta-UUID "latest" to reference the results of the most recent run:

$ phpbench report --uuid=latest --uuid=1339f4ab46c77b691e5f6d1fb9618161dc32dea3 --report='extends: "compare", compare: "stime"'
benchmark: RunBench, subject: benchRunAndReport
+------+---------------------+---------------------+
| revs | stime:08:45:31:mean | stime:08:45:25:mean |
+------+---------------------+---------------------+
| 1    | 0.109s              | 0.106s              |
+------+---------------------+---------------------+

If you are using the DBAL storage engine, you can also query the stored runs using a MongoDB-like, JSON-based query language:

$ phpbench report --report=aggregate --query='$and: [ { subject: "benchMd5" }, { date: { $gt: "2016-02-09" } } ]'
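For context, the subject names that appear in these queries and reports (benchMd5, benchInitNoExtensions and so on) are ordinary PHPBench benchmark methods. A minimal subject might look something like the following sketch; the class name and annotation values are illustrative and not taken from the suite above:

<?php

class HashingBench
{
    /**
     * @Revs(1000)
     * @Iterations(4)
     */
    public function benchMd5()
    {
        md5('Dornbirn');
    }
}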

You may also view the history using the log command (inspired by git log):

$ phpbench log
run 1339f4a54f575a141ae0feaea3b1872de98e555b
Date:    2016-03-30T08:45:31+02:00
Branch:  master
Context: <none>
Scale:   1 subjects, 4 iterations, 1 revolutions
Summary: (best [mean] worst) = 105,824.000 [109,261.750] 112,228.000 (μs)
         ⅀T: 437,047.000μs μRSD/r: 2.394%

run 1339f4ab46c77b691e5f6d1fb9618161dc32dea3
Date:    2016-03-30T08:45:25+02:00
Branch:  master
Context: <none>
Scale:   1 subjects, 4 iterations, 1 revolutions
Summary: (best [mean] worst) = 105,872.000 [106,263.750] 106,575.000 (μs)
         ⅀T: 425,055.000μs μRSD/r: 0.248%

run 1339f4acee5b5fe6930e11a835bfbce4ca679081
Date:    2016-03-30T08:40:23+02:00
Branch:  master
Context: <none>
Scale:   10 subjects, 12 iterations, 12 revolutions
Summary: (best [mean] worst) = 3.000 [31,291.583] 3.000 (μs)
lines 0-22 any key to continue, <q> to quit

That's it. Storage is not the only new feature in 0.11, but it is certainly the largest; check out the release page for all the new goodies.

What next?

Git Archiving

This release paves the way for allowing results to be archived to a Git branch. This will permit you to permanently and securely store benchmarks in your Git repository alongside your code.

As PHPBench already stores a large amount of environmental information, this will provide interesting statistics about how your application performs on different developer machines.

Counterweighting and normalization

One of the big problems with time-based benchmarking is that the results vary depending on the machine and the load it is currently under.

We already mitigate local fluctuations in various ways, e.g. by increasing the number of iterations, using the mode, and applying the retry-threshold feature.

Counterweighting will approach the problem by running standard micro-benchmarks during the test. When the test is finished we can compare the trend of the standard benchmarks and our "actual" benchmarks for the same iterations and offset one against the other to produce a more stable result.

[Figure: baseline correlation. The red line represents the baseline benchmark, the green line the benchmark under test (scaled down by a factor of 40).]
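
To make the idea more concrete, here is a rough sketch of the principle. This is purely illustrative code, not PHPBench's implementation; the counterweight() function and the numbers in the example are hypothetical:

<?php

/**
 * Scale each measured time by the relative "load factor" observed in the
 * baseline benchmark during the same iteration.
 */
function counterweight(array $actualTimes, array $baselineTimes)
{
    $baselineMean = array_sum($baselineTimes) / count($baselineTimes);

    return array_map(function ($actual, $baseline) use ($baselineMean) {
        // A baseline slower than its own mean suggests the machine was under
        // load during this iteration, so the measurement is discounted.
        $loadFactor = $baseline / $baselineMean;

        return $actual / $loadFactor;
    }, $actualTimes, $baselineTimes);
}

// e.g. counterweight([110, 100, 140], [11, 10, 14]) ≈ [116.7, 116.7, 116.7]

In the example call the raw times fluctuate by around 20%, but because the baseline fluctuates in the same proportion, the corrected times come out flat.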

Of course, benchmarks may be affected by different factors; e.g. one benchmark may be affected by disk I/O, whilst another by the CPU. We would address this by comparing the results first to the CPU baseline, then to the disk I/O baseline, and applying each offset in proportion to how well it fits our actual results. So a benchmark heavy with disk I/O would be offset to a greater degree by the disk I/O baseline.
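
One way that "how well it fits" could be measured is a simple correlation between the per-iteration trend of each baseline and that of the benchmark under test. Again, this is only a sketch of the idea; the correlation() helper is hypothetical, not part of PHPBench:

<?php

/**
 * Pearson correlation between two equally sized series of iteration times.
 * A baseline whose trend closely follows the benchmark under test would be
 * given a proportionally larger share of the offset.
 */
function correlation(array $a, array $b)
{
    $n = count($a);
    $meanA = array_sum($a) / $n;
    $meanB = array_sum($b) / $n;

    $cov = $varA = $varB = 0;
    foreach (array_keys($a) as $i) {
        $cov  += ($a[$i] - $meanA) * ($b[$i] - $meanB);
        $varA += ($a[$i] - $meanA) * ($a[$i] - $meanA);
        $varB += ($b[$i] - $meanB) * ($b[$i] - $meanB);
    }

    return $cov / sqrt($varA * $varB);
}

// e.g. weight the disk I/O baseline by correlation($actualTimes, $ioBaselineTimes)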

Additionally, if we can determine an accurate standard time for our counterweight benchmarks, then we should be able to compare results from different machines more reliably. For example, if the counterweight benchmark is known to take 10μs on a reference machine but takes 20μs on yours, results from the two machines could be scaled onto a common baseline before being compared.

I actually have no idea to what extent this idea will work in practice, but it could be interesting.

Posted on: 2016-03-30 00:00:00
