piglit (II): How to launch a tailored piglit run

In my last post I gave an introduction to piglit, an open-source test suite for OpenGL implementations. In that post, I explained how to compile the source code, how to run the full piglit test suite and how to analyze the results.

However, you can tailor a piglit run to execute the specific tests that match your needs. For example, I might want to check whether a specific subset of tests passes because I am testing a new feature in Mesa as part of my job at Igalia.

Configure the piglit run

There are several parameters that configure the piglit run to match our needs:

  • --dry-run: do not execute the tests. Very useful to check whether the parameters you pass do what you expect.
  • --valgrind: runs Valgrind memcheck on each test program it executes. If Valgrind finds any error, the test fails and the Valgrind output is saved into the results file.
  • --all-concurrent: run all tests concurrently.
  • --no-concurrency: disable concurrent test runs.
  • --sync: sync results to disk after every test so you don’t lose information if something bad happens to your system.
  • --platform {glx,x11_egl,wayland,gbm,mixed_glx_egl}: if you compiled waffle with support for several window systems, this is the name of the one passed to waffle.

There is a help parameter if you want further information:
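A sketch of how to ask for it (the exact option list depends on your piglit version):

```shell
# Print the available options for the run command
./piglit run -h
```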

Skip tests

Sometimes you prefer to skip some tests because they cause a GPU hang or take a long time and their output is not interesting to you. The way to skip a test is the -x parameter:
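For instance, a sketch (the pattern and result directory are made up for illustration):

```shell
# Run the full suite but exclude every test whose name matches "glean"
./piglit run tests/all.py -x glean results/all-but-glean
```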

Run specific tests

You can run a specific subset of tests using a filter by test name with the -t parameter. Remember that it filters by test name, not by functionality, so you might miss some test programs that check the same functionality.

Also, you can concatenate more parameters to add/remove tests from the set that will be executed.
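A hedged sketch of such a combination (the patterns are illustrative):

```shell
# Run only the tests whose name matches "texture", but drop the ones
# that also match "compress"
./piglit run tests/all.py -t texture -x compress results/texture-tests
```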

Run standalone tests

There is another way to run standalone tests apart from using the -t argument to piglit run. You might be interested in this if you want to run GDB on a failing test to debug what is going on.

Recall what the HTML output for a given test looks like:


The command field specifies the program name executed for this test together with all its arguments. Let me explain what they mean in this example.

  • Some binaries receive arguments that specify the data type to test (in this case GL_RGB16_SNORM) or other data: number of samples, msaa, etc.
  • -fbo: draws in an off-screen framebuffer.
  • -auto: automatically runs the test and, when it finishes, closes the window and prints whether the test failed or passed.

Occasionally, you might run the test program without the -fbo and -auto parameters because you want to see what it draws on the window to better understand the bug you are debugging.
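Putting it together, you could run a test binary standalone, or under GDB, like this (the binary name below is a made-up placeholder; copy the real command line from the HTML summary):

```shell
cd bin/
# Run the test off-screen and non-interactively, as piglit does
./texture-format-test GL_RGB16_SNORM -fbo -auto

# Debug a failing test: drop -fbo/-auto to watch it draw in a window
gdb --args ./texture-format-test GL_RGB16_SNORM
```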

Create your own test profile

Besides explicitly adding/removing tests from tests/all.py (or other profiles) in the piglit run command, there is another way of running a specific subset of tests: profiles.

A test profile in piglit is a script written in Python that selects the tests to execute.

There are several profiles already defined in piglit, two of which we already saw in this post series: tests/sanity.tests and tests/all. The first is useful to check whether piglit was correctly compiled together with its dependencies, while the second runs all piglit tests in one shot. Of course there are more profiles inside the tests/ directory: cl.py (OpenCL tests), es3conform.py (OpenGL ES 3.0 conformance tests), gpu.py, etc.

Eventually you will write your own profile, because adding/removing tests on the command line is tiresome and error-prone.

This is an example of a profile based on tests/gpu.py that I was recently using for testing the gallium llvmpipe driver.

It picks the test lists from quick.py but with some changes: it drops all the tests related to a couple of OpenGL extensions (ARB_vertex_program and ARB_fragment_program) and it drops seven other tests because they took too much time when I was testing them with the gallium llvmpipe driver on my laptop.
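Once your profile script is in place, running it is no different from running any built-in profile; for example (the file name is hypothetical):

```shell
# Run a custom profile stored in the tests/ directory
./piglit run tests/my-llvmpipe.py results/my-llvmpipe
```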

I recommend spending some time playing with profiles, as they are a very powerful tool when tailoring a piglit run.

What else?

In some situations, you want to know which test produces kernel error messages in dmesg, or which test is running right now. For both cases, piglit provides parameters for the run command:

  • Verbose (-v): prints a line of output for each test before and after it runs, so you can find which tests take longer or output errors without needing to wait until piglit finishes.
  • Dmesg (--dmesg): saves the difference between the dmesg output before and after each test program is executed. Thanks to that, you can easily find which test produces kernel errors in the graphics device driver.
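Both can be combined in a single run; a sketch (the profile and result directory are illustrative):

```shell
# Verbose progress plus per-test dmesg capture
./piglit run -v --dmesg tests/gpu.py results/gpu-dmesg
```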

Wrapping up

After giving an introduction to piglit in my previous post, this post explains how to configure a piglit run, change the list of tests to execute and run a standalone test. As you see, piglit is a very powerful tool that requires some time to learn how to use appropriately.

In the next post I will talk about a more advanced topic: how to create your own tests.

Piglit, an open-source test suite for OpenGL implementations

OpenGL is an API for rendering 2D and 3D vector graphics, now managed by the non-profit technology consortium Khronos Group. It is a multi-platform API found in devices of different form factors (from desktop computers to embedded devices) and operating systems (GNU/Linux, Microsoft Windows, Mac OS X, etc).

As Khronos only defines the OpenGL API, implementors are free to write the OpenGL implementation they wish. For example, on GNU/Linux systems, NVIDIA provides its own proprietary libraries, while other manufacturers like Intel use Mesa, one of the most popular open-source OpenGL implementations.

Because of this implementation freedom, we need a way to check that implementations follow the OpenGL specifications. Khronos provides its own OpenGL conformance test suite, but your company needs to become a Khronos Adopter member to have access to it. However, there is an unofficial open-source alternative: piglit.


Piglit is an open-source OpenGL implementation conformance test suite created by Nicolai Hähnle in 2007. Since then, it has increased the number of tests covering different OpenGL versions and extensions: today a complete piglit run executes more than 35,000 tests.

Piglit is one of the tools widely used in Mesa to check that commits adding new functionality or modifying the source code don’t break OpenGL conformance. If you are thinking of contributing to Mesa, this is definitely one of the tools you want to master.

How to compile piglit

Before compiling piglit, you need to have the following dependencies installed on your system. Some of them are available in modern GNU/Linux distributions (such as Python, numpy, make…), while others you might need to compile yourself (waffle).

  • Python 2.7.x
  • Python mako module
  • numpy
  • cmake
  • GL, glu and glut libraries and development packages (i.e. headers)
  • X11 libraries and development packages (i.e. headers)
  • waffle

But waffle is not available in Debian/Ubuntu repositories, so you need to compile it manually and, optionally, install it in the system:
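Roughly, the steps look like this (the repository URL and cmake flags are assumptions; check waffle's README for the current ones):

```shell
# Fetch, build and (optionally) install waffle
git clone git://github.com/waffle-gl/waffle.git
cd waffle
cmake -Dwaffle_has_glx=1 .   # enable the window systems you need
make
sudo make install            # optional
```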

Piglit is a project hosted on Freedesktop. To download it, you need to have git installed on your system, then run the corresponding git-clone command:
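Something along these lines (the repository URL may have changed; check the piglit homepage):

```shell
git clone git://anongit.freedesktop.org/git/piglit
```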

Once it finishes cloning the repository, just compile it:
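A minimal sketch of the build, assuming an in-tree cmake build:

```shell
cd piglit
cmake .
make
```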

More info in the documentation.

As a result, all the test binaries are inside the bin/ directory and it’s possible to run them standalone… however, there are scripts to run all of them in a row.

Your first piglit run

After you have downloaded the piglit source code from its git repository and compiled it, you are ready to run the test suite.

First of all, make sure that everything is correctly setup:
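The sanity profile is the quickest check; a sketch:

```shell
# Run the sanity profile and store the results
./piglit run tests/sanity.tests results/sanity.results
```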

The results will be inside the results/sanity.results directory. There is a way to process those results and show them in a human-readable output, but I will talk about that in the next section.

If it fails, it is most likely because libwaffle is not found in the path. If everything went fine, you can execute the piglit test suite against your graphics driver.
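For example (the result directory name is up to you):

```shell
# Run the complete suite (tens of thousands of tests)
./piglit run tests/all.py results/all
```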

Remember that it’s going to take a while to finish, so grab a cup of coffee and enjoy it.

Analyze piglit output

Piglit provides several tools to convert the JSON-format results into a more readable output: the CLI output tool (piglit-summary.py) and the HTML output tool (piglit-summary-html.py). I’m going to explain the latter first because its output is very easy to understand when you are starting to use this test suite.

You can run these scripts standalone, but the piglit binary calls each of them depending on its arguments. I am going to use this binary in all the examples because it’s just one command to remember.

HTML output

In order to create an HTML output of a previously saved run, the following command is what you need:
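A sketch of the invocation (directory names are illustrative; check `./piglit summary -h` for the exact flags in your version):

```shell
# First argument: destination directory for the HTML pages;
# then one or more result sets (the first one is the reference)
./piglit summary html --overwrite summary/all results/all
```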

  • You can append more results at the end if you would like to compare them. The first one is the reference for the others, for example when counting the number of regressions.

  • The --overwrite argument overwrites the summary destination directory contents if they have already been created.

Finally open the HTML summary web page in a browser:
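For instance (path assumed from the previous step):

```shell
xdg-open summary/all/index.html   # or point your favorite browser at it
```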

Each test has a background color depending on the result: red (failed), green (passed), orange (warning), grey (skipped) or black (crashed). If you click the respective link in the right column, you will see the output of that test and how to run it standalone.

There are more pages:

  • skipped.html: it lists all the skipped tests.
  • fixes.html: it lists all the tests that previously failed and now pass.
  • problems.html: it lists all the failing tests.
  • disabled.html: it lists tests that were executed before but are now skipped.
  • changes.html: when you compare two or more different piglit runs, this page shows all the changes in the new results compared with the reference (the first results/ argument):
    • Previously skipped tests that are now executed (although the result could be fail, crash or pass).
    • It also includes all the regressions.html data.
    • Any other change in the tests compared with the reference: crashed tests, passing tests that previously failed or were skipped, etc.
  • regressions.html: when you compare two or more different piglit runs, this page shows the previously passing tests that now fail.
  • enabled.html: it lists all the executed tests.

I recommend exploring which pages are available and what kind of information each one provides. There are more pages, like info, which sits in the first row of each results column at the rightmost part of the screen and gathers all the information about hardware, drivers, supported OpenGL version, etc.

Test details

As I said before, you can see what kind of error output (if any) a test has written, the time spent on its execution and which arguments were given to the binary.

There is also a dmesg field which shows the kernel errors that appeared during each test execution. If these errors are graphics-driver related, you can easily detect which test was guilty. To enable this output, you need to add the --dmesg argument to piglit run, but I will explain this and other parameters in the next post.

Text output

The usage of the CLI tool is very similar to the HTML one, except that its output appears in the terminal.

As its output is not saved to any file, there is no argument to save it in a directory and no overwrite argument either.

Like the HTML-output tool, you can append several result files to compare them. The tool will output one line per test together with its result (pass, fail, crash, skip) and a summary with all the stats at the end.

As it prints the output to the console, you can take advantage of tools like grep to look for specific combinations of results.
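A sketch of both uses (result paths are illustrative):

```shell
# Print one line per test to the terminal...
./piglit summary console results/all
# ...and filter, for example, only the failures
./piglit summary console results/all | grep fail
```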

This is an example of an output of this command:

And this is the output when you compare two different piglit results:

Output for Jenkins-CI

There is another script (piglit-summary-junit.py) that produces results in a format that Jenkins-CI understands, which is very useful when you have this continuous integration suite running somewhere. As I have not played with it yet, I leave it as an exercise for readers.


Piglit is an open-source OpenGL implementation conformance test suite widely used in projects like Mesa.

In this post I explained how to compile piglit, run it and convert the result files to a readable output. This is very useful when you are testing your latest Mesa patch before submission to the development mailing list, or when you are looking for regressions in the latest stable version of your graphics device driver.

The next post will cover how to run specific tests in piglit and explain some arguments that are very useful for specific cases. Stay tuned!

BlinkOn 2: Zürich

Last week I traveled to Zürich to attend BlinkOn 2 at Google's office there. This was the first time the event was held in Europe (the previous one was in the US) and it was a very good opportunity to meet all the people working around Blink, Chromium and Skia.

Zürich photograph

As Igalia has long experience working on browser technologies, it was represented by four Igalians working on different areas: Manuel Rego on CSS, Xabier Rodríguez on multimedia, and Eduardo Lima together with me on graphics and Skia.

The attendees of this conference were not only Googlers; there were people from Opera, Samsung, Intel and many more companies interested in meeting each other and collaborating to improve Blink.

Google Zürich office
Google Zürich office


The talks covered a lot of different areas of a browser: from memory leaks or image filters to security models, to name a few. Worth mentioning is Manuel Rego’s talk about his latest work on CSS Grid Layout (more details in his blog).

BlinkOn 2 photo
BlinkOn 2

I attended the talks most related to graphics in the browser, as this is one of the areas I am currently working on: “Rendering for Glass Time“, “Responsive images“, “Chasing Pixels“, “Filter effects“, “Storage modes for 2D canvas” and “60fps on mobile“.

I would like to highlight the talk given by Justin Novosad (Google), where he explained his proposal of different storage modes for 2D canvas: persistent memory, discardable memory and none. Each one has different use cases and consequences on memory usage, with a focus on mobile devices dealing with very large canvases, and there was a discussion about how to provide a useful API for web developers.

I attended more than the aforementioned talks: the always interesting “Memory leaks in Blink” report, “Web animations“, “Out-of-process iframes“, “Smooth to the touch: challenges in input on mobile“, the opening talk and the “Trace reading clinic”, which was very useful to learn more about the tracing tools available in Chromium.

This conference was also the excuse to meet some Skia developers in person and talk about Igalia’s contributions and future work on this 2D graphics library.

I would like to thank Google for its great job organizing BlinkOn at its office in Zürich and also Igalia for sponsoring my trip.

Logo Igalia


But not everything was work: I also met an old friend who works at Google, who gave me an update on his last few years while we drank good coffee.

On the last day, just before returning home, I bought a much-appreciated gift for my relatives… good Swiss chocolate! Yummy!

Announcement of the AsturLiNUX Ordinary General Members' Assembly

An Ordinary General Assembly of AsturLiNUX members is hereby convened for Saturday, May 17th 2014, at 16:30 on first call and 17:00 on second call. The venue will be the Hotel de Asociaciones in Oviedo.

The agenda is as follows:

1. Reading and approval of the minutes of the previous Assembly
2. State of the association
3. Renewal of the Board of Directors
4. Upcoming activities
5. Questions and requests

Please note the second item, where members will discuss whether they wish to keep the association going or, alternatively, start organizing and planning the dissolution procedures. The Board of Directors will therefore be renewed according to the decisions taken in that item of the agenda.

Given the importance of the topics to be discussed, maximum attendance is requested.

For those who cannot attend in person, a Google Plus event with Hangout will be organized so you can participate remotely. A preliminary discussion of the second agenda item (continuity of the association) has started on the mailing list. Likewise, if any kind of presence at the assembly is impossible, Samuel has volunteered to collect your comments/opinions/thoughts via email and convey them in person during the members' Assembly.

More information on the AsturLiNUX website.

AIME on Technology of Controls for Accelerators and Detectors

Last week, Javier Muñoz and I attended the HEPTech Academia – Industry Matching Event on Technology of Controls for Accelerators and Detectors, hosted at DEMOKRITOS, Athens.

The goal of this event was to bring Academia and Industry together in the same place to share ideas and potential applications around control systems. Igalia was invited to participate as a member of the Industry side, to talk about our collaboration with CERN in the areas of Linux device drivers, KiCAD and HW virtualization.

The conference’s agenda covered very exciting control-system technologies used by High Energy Physics facilities. For example, CERN staff explained the ATLAS and CMS detectors' control systems, what they are planning to do in the coming years and how the next-gen accelerator, CLIC, is being designed.

However, there were also very good talks about other accelerators. One of them explained how proton therapy cures eye cancer with a very high success rate. I recommend checking out all the slides from the agenda, because you will learn how accelerator control systems are made and how companies are collaborating in this area. Very interesting for both scientists and engineers!

Our talk was part of the “Open Hardware vs Conventional Development approach” track, where concepts like Open Hardware and projects like White Rabbit were explained to the audience. We explained our work developing the FMC TDC driver and how QEMU helped us a lot to debug the driver and improve its robustness by using SW techniques such as continuous integration and testing. Our slides are publicly available if you want to read them.

In the same track, we represented Igalia in the round table, giving our opinion about Open Hardware and sharing some advice, based on our experience in the Free Software world, that can be applied to Open Hardware based companies.

I would like to finish this post by saying that the organizers did a great job taking care of the success of the event. Everything was well explained and organized, even though this was the first event of this kind organized around control systems for accelerators and detectors.


Introduction to Linux Graphics drivers: DRM

Linux support for graphics cards is very important for desktop and mobile users: they want to run games, composite their applications and have a nice, modern user experience.

AGP Video Card photo

So it’s usual that all eyes are on this area when you want to optimize your embedded device’s user experience… but how do graphics drivers communicate with user-space applications?

Víctor Jáquez, from Igalia, wrote a very nice introduction to this topic. If you are interested in the Linux graphics stack in general, there is this post written by Jasper St. Pierre.

Direct Rendering Infrastructure schema
Direct Rendering Infrastructure schema (by Víctor Jáquez)

The Direct Rendering Infrastructure (DRI) in Linux is structured as shown in the picture above. Using Xorg as an X server example, we see that it is in charge of X11 drawing commands, along with Mesa or Gallium3D as the OpenGL software implementation for rendering three-dimensional graphics.

How do they communicate with the actual HW? One possibility is using libdrm as a backend to talk to the Direct Rendering Manager (DRM) in the Linux kernel. DRM is divided into two in-kernel drivers: a generic drm driver and another with specific support for the video hardware.

There is a Xorg driver running in user-space which receives the data to be passed to the HW. Using Nouveau as an example, xf86-video-nouveau is the DDX (Device Dependent X) component inside the X stack.

It communicates with the in-kernel driver using ioctls. Some of them are generic for all drivers and are defined in the generic drm driver. Inside the drivers/gpu/drm/drm_drv.c file you have the relationship between each ioctl number, its corresponding function and its capabilities.

However, the driver specific to the video hardware can define its own ioctls in a similar way. For example, here you have the corresponding ones for the Nouveau driver (drivers/gpu/drm/nouveau/nouveau_drm.c file).

As you can see, most of these ioctls can be grouped by functionality:

  • Get information from the drm generic driver: stats, version number, capabilities, etc.
  • Get/set parameters: gamma values, planes, color pattern, etc.
  • Buffer management: ask for a new buffer, destroy a buffer, push a buffer, etc.
  • Memory management: GEM, setup MMIO, etc.
  • Framebuffer management.
  • In the case of Nouveau: channel allocation (for context switching).

Basically, the Xorg user-space driver may prepare a buffer of commands to be executed on the GPU and pass it through an ioctl call. Sometimes it just wants to use the framebuffer to draw on the screen directly. Other times, it uses KMS capabilities to change screen parameters: video mode, resolution, gamma correction, etc.

There are more things DRM can do inside the kernel: it can set up the color pattern used to draw pixels on the screen, select which encoder is going to be used and on which connector (LVDS, D-Sub, HDMI, etc), pick up EDID information from the monitor, manage the vblank IRQ…

At Igalia we have a long background working on the Linux multimedia stack (GStreamer, WebGL, OpenCL, drivers, etc). If you need help to develop or optimize, or you just want advice, please don’t hesitate to contact us.

A great year has passed

Exactly one year ago was my first day as an Igalia employee. I had arrived from Geneva some days before and had rented an apartment here in Coruña the day before. I was very nervous and willing to start.

Today, I can say I am more than happy to be working for this company: I started contributing drivers upstream, became one of their official maintainers, went to LinuxCon Europe, gave talks at OSHWCon Madrid 2012 and the 7th White Rabbit workshop, and learned a lot of things… but the most important thing is that I made real friends here.

Regarding my personal life, everything has gotten better this last year. Living so close to my parents' home gives me the chance to see my relatives and friends more often. I just need to drive 3 hours and I am there!

I don’t know what is going to happen next year, but I am sure it’s going to be awesome :-)

White Rabbit Logo

7th White Rabbit workshop

This week I traveled to Madrid, where the 7th White Rabbit workshop was held. It was a two-day event where the White Rabbit community presented all the latest development efforts, installations, the White Rabbit standardization process and much more. It was amazing to find a broad range of experiments using or considering this technology, the applications, the new PCB designs to come, etc.

Apart from research institutions like CERN, GSI, DESY and NIKHEF, among others, there were a lot of HW companies that design, manufacture and support all the boards required by the experiments. They were showing their products, like the White Rabbit switch and Sevensols‘ White Rabbit starter kit.

I went there to present, together with Javier Muñoz, the latest work done by Igalia‘s OS team. We spoke about testing and how virtual HW helps with this task. We showed our experience with the FMC TDC driver development, the FMC TDC’s virtual model and its integration into a testing suite with continuous integration.

Also, some software companies were there, like Gnudd, represented by Alessandro Rubini, Integrasys and others. They described their work on the software side: FMC bus drivers, the sdbfs filesystem, White Rabbit switch software, etc.

I found this event very interesting to share ideas, talk about technologies and plan new designs to improve all the White Rabbit stack.

I wouldn’t like to finish this post without saying something about the organization. It was a perfect event: CDTI did a great job hosting it at their offices, and the BE/CO/HT section at CERN organized the rest.

FMC TDC board image

FMC TDC driver

During the last few months we have been involved in the development of FMC TDC software support for Linux, i.e., writing a driver for it along with a user-space library and a bunch of test programs.

But first of all, let me show you what a Time-to-Digital Converter (TDC) is:

In electronic instrumentation and signal processing, a time to digital converter (abbreviated TDC) is a device for recognizing events and providing a digital representation of the time they occurred. For example, a TDC might output the time of arrival for each incoming pulse. Some applications wish to measure the time interval between two events rather than some notion of an absolute time.

Source: Wikipedia

In summary, it measures the time of arrival of each incoming pulse and saves the timestamp into memory, ready to be read later by the user. In this case, it is a mezzanine board plugged into a carrier board through the FMC bus, hence its name.

FMC TDC board image

The board was designed by CERN to fulfill the needs of their control system. They published the schematics and all the needed design files under the CERN Open Hardware license. You can check them out from the project page on the OHWR website. Remember that it is still in the prototyping phase.

The FMC TDC driver depends on the ZIO framework and the FMC bus driver to work. Also, as we were developing it using the SPEC board as a carrier, the SPEC driver is needed to perform the I/O operations from/to the FMC TDC board.

There is still work to be done, like adapting the code to the latest ZIO changes and improving the documentation, but it is quite close to what we can call a “1.0” version :-)

All the code is hosted on OHWR, under its own project. If you are interested in following its development, I recommend subscribing to the mailing list.


New GPG key

Hello lazy web,

For a number of reasons, I have set up a new OpenPGP key, and I will be transitioning away from my old one. I created a new 4096-bit key following Debian’s instructions.

At the time of writing, GnuPG unfortunately defaults to a 1024-bit DSA key as the primary, with SHA1 as the preferred hash. Due to weaknesses found in the SHA1 hashing algorithm, Debian prefers to use keys that are at least 2048 bits, preferring SHA2.
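Roughly, the steps I followed look like this (a sketch based on Debian's key transition instructions; adapt to your own setup):

```shell
# Prefer SHA2 digests, as Debian suggests
cat >> ~/.gnupg/gpg.conf <<'EOF'
personal-digest-preferences SHA256
cert-digest-algo SHA256
EOF

# Generate the key interactively: choose "RSA and RSA" and 4096 bits
gpg --gen-key
```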


The old key will continue to be valid for some time, but I prefer all future correspondence to go to the new one. I would also like this new key to be re-integrated into the web of trust.

The old key was:

pub 1024D/F99DF5E2 2006-01-04
Key fingerprint = DC45 A767 9618 ACA8 FDD1 989A 5AB7 DBC9 F99D F5E2

And the new key is:

pub 4096R/F17DC343 2012-11-16
Key fingerprint = 40FF 9902 F697 5A47 EE29 7884 7FF4 BA32 F17D C343

To fetch my new key from a public key server, you can simply do:

gpg --keyserver pgp.mit.edu --recv-key F17DC343

If you already know my old key, you can now verify that the new key is signed by the old one:

gpg --check-sigs F17DC343 | grep F99DF5E2

And if you don’t have my old key then you can check the following link, and see the signatures done with my old key (F99DF5E2):


If you are satisfied that you have got the right key, and the UIDs match what you expect ( gpg --fingerprint F17DC343 ), then I would appreciate it if you would sign my key:

gpg --sign-key F17DC343

Lastly, if you could upload these signatures, I also would appreciate it.

gpg --keyserver pgp.mit.edu --send-key F17DC343

Please let me know if there is any trouble, and sorry for the inconvenience.