
Rollout of the Amazon Alexa Eurovision Song Contest Skill!

by Michael Barroco, EBU on 12 May 2017

The European Broadcasting Union (EBU) has launched a new skill for Amazon's Alexa Voice Service, which allows users to discover and listen to every Eurovision Song Contest winner on devices including Amazon Echo and Echo Dot. Amazon Echo and Echo Dot are voice-controlled speakers powered by Alexa.

The Eurovision Song Contest skill was jointly developed by the EBU's Technology & Innovation and Media departments and allows users in the UK, Germany, Austria and the US to easily discover who has won every Eurovision Song Contest since the event began in 1956.


Users can simply ask in English or German: "Alexa, ask Eurovision who won in" a particular year. Alexa will then ask users if they want to hear the winner and can play the winning entry. Users can also "Ask Eurovision, when did France (or other nations) last win", or "When is the Grand Final" as well as "Who has won the most" and "Which countries have never won", amongst other combinations.

Amazon Echo owners in the UK will also be able to listen to a live stream of the Eurovision Song Contest Grand Final through the skill via EBU Member BBC.

Echo owners in the UK, Germany, Austria and the US with an Amazon account can enable the skill at these links: UK, DE/AT, US.

Original article

alexa eurovision song contest


Behaviour Driven Development (BDD) and testing in Python

by Gil BRECHBUEHLER, EBU on 07 Jul 2016



First we need to introduce the concepts of unit testing and functional testing, along with their differences:

  • Unit testing means testing each component of a feature in isolation.
  • Functional testing means testing a full feature, with all its components and their interactions.

The tests we present here are functional tests.

Behaviour driven development is the process of defining scenarios for your features and testing that each feature behaves correctly under each scenario.
Actually, this is only one part of behaviour driven development; it is also a methodology. Normally, for each feature you have to:

  • write a test scenario for the feature
  • implement the feature
  • run the tests and update the code until the tests on the feature pass

As you can see, the methodology of behaviour driven development is close to that of test driven development.

The goal of this blog however is not to explain behaviour driven development, but to show how it can be implemented in Python. If you want to learn more about behaviour driven development, you can read more here, for example.

Before starting: Python itself does not give us any BDD tools, so to be able to use BDD in Python we use the following packages:

  • pytest, a Python testing framework;
  • pytest-bdd, a pytest plugin that adds BDD support.

Finally, here is the example Python function we will test with BDD:

def foo(a, b):
    if a > 10 or b > 10:
        raise ValueError
    if (a * b) % 2 == 0:
        return "foo"
    return "bar"

It is a simple example, but I think it is enough to explain how to do behaviour driven testing in Python. If you want to follow the BDD methodology strictly, you have to write your functional tests before implementing the feature; for an example, however, it is easier to first introduce the functionality we want to test.

Note: the fact that the function foo does not accept numbers strictly greater than ten is just for example purposes.
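To get a feel for this behaviour, here is a quick hand check of foo against the same values we will later put in the scenario tables (this is just an interactive sanity check, not part of the test suite):

```python
def foo(a, b):
    if a > 10 or b > 10:
        raise ValueError
    if (a * b) % 2 == 0:
        return "foo"
    return "bar"

print(foo(2, 3))   # "foo": 2 * 3 = 6 is even
print(foo(5, 3))   # "bar": 5 * 3 = 15 is odd

try:
    foo(21, 2)     # 21 > 10, so a ValueError is raised
except ValueError:
    print("ValueError")
```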

Gherkin language

BDD has one great feature: it allows you to define tests by writing scenarios using the Gherkin language.

Here are the scenarios we will use for our example:

Feature: Foo function

    A function for foo-bar

    Scenario: Should work
        Given <a> and <b>
        Then foo answers <c>

        Examples:
        | a | b | c   |
        | 2 | 3 | foo |
        | 5 | 3 | bar |

    Scenario: Should raise an error
        Given <a> and <b>
        Then foo raises an error

        Examples:
        | a   | b  |
        | 2   | 15 |
        | 21  |  2 |
        | 45  | 11 |
A feature in Gherkin represents a feature of our project. Each feature has a set of scenarios we want to test. In scenarios we can define variables, such as <a> and <b> here, together with an Examples table that provides values for these variables. Each scenario will be run once for each line of its Examples table.
Given lines allow us to define the context for our scenarios. Then lines define the behaviour our function should have in that context.

Features and scenarios are defined in .feature files.

Tests definition

Scenarios are great for describing functionality in a way almost anyone can understand, but on their own they are not enough to have working tests. Along with our feature file we need a test file in which we define a function for each line of the scenarios. We will first show the full Python file and then explain it in detail:

from moduleA import foo
from pytest_bdd import scenarios, given, then
import pytest

scenarios('foo.feature', example_converters=dict(a=int, b=int, c=str))    

@given('<a> and <b>')    
def numbers(a, b):    
    return [a, b]    

@then('foo answers <c>')    
def compare_answers(numbers, c):    
    assert foo(numbers[0], numbers[1]) == c    

@then('foo raises an error')    
def raise_error(numbers):    
    with pytest.raises(ValueError):    
        foo(numbers[0], numbers[1])    

In our case this file is named test_foo.py. Note that for pytest to be able to automatically find your test files, they have to be named with the pattern test_*.py.

scenarios('foo.feature', example_converters=dict(a=int, b=int, c=str))    

This line tells pytest that the functions defined in this file map to the scenarios in the foo.feature file. The example_converters parameter tells pytest which type each variable from the Examples tables should be converted to. This argument is optional; if omitted, pytest will give us each variable as a string of characters (str).
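Conceptually, each raw string taken from an Examples table is passed through the converter registered for its column before reaching our step functions. The sketch below illustrates that idea with a hypothetical helper, convert_row, which is not part of pytest-bdd's API:

```python
# The same converter mapping we pass to scenarios() in the test file.
converters = dict(a=int, b=int, c=str)

def convert_row(row, converters):
    """Apply the per-column converter to one Examples-table row.
    Hypothetical helper, written only to illustrate example_converters."""
    return {name: converters[name](value) for name, value in row.items()}

# Without converters, our steps would receive "2" and "3" as strings;
# with them, the steps receive proper ints.
print(convert_row({"a": "2", "b": "3", "c": "foo"}, converters))
# {'a': 2, 'b': 3, 'c': 'foo'}
```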

Then:

@given('<a> and <b>')
def numbers(a, b):
    return [a, b]

@then('foo answers <c>')
def compare_answers(numbers, c):
    assert foo(numbers[0], numbers[1]) == c

@then('foo raises an error')
def raise_error(numbers):
    with pytest.raises(ValueError):
        foo(numbers[0], numbers[1])

In these three functions we define what has to be done for each line of the scenarios; the mapping is done through the decorators placed before each function. We get the values of the a, b and c variables by giving the functions arguments with the same names.

Pytest-bdd also makes use of fixtures, a feature of pytest: passing the numbers function as an argument to the compare_answers and raise_error functions gives us direct access to whatever the numbers function returned, here an array containing the two integers to pass to the foo function. For more details on how fixtures work in pytest, see the pytest documentation.
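The injection-by-parameter-name idea can be emulated in a few lines of plain Python. The sketch below is NOT pytest's real implementation; fixture and run_with_fixtures are hypothetical helpers written only to show the mechanism:

```python
import inspect

# Registry mapping fixture names to the functions that produce them.
FIXTURES = {}

def fixture(func):
    """Register func as a fixture under its own name (hypothetical helper)."""
    FIXTURES[func.__name__] = func
    return func

def run_with_fixtures(test_func):
    """Call test_func, filling each parameter from the fixture registry
    by matching parameter names, as pytest does for its fixtures."""
    kwargs = {name: FIXTURES[name]()
              for name in inspect.signature(test_func).parameters}
    return test_func(**kwargs)

@fixture
def numbers():
    return [2, 3]

def check_sum(numbers):
    # `numbers` is injected by name, just like a pytest fixture.
    return numbers[0] + numbers[1]

print(run_with_fixtures(check_sum))  # 5
```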

Running the tests

To run the tests we simply call the py.test command:

$ py.test -v    
============================== test session starts ==============================    
platform linux2 -- Python 2.7.11, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- /home/gil/.pyenv/versions/2.7.11/envs/evaluate-tests/bin/python2.7    
cachedir: .cache    
rootdir: /home/gil/work/blog, inifile:    
plugins: cov-2.2.1, bdd-2.16.1    
collected 5 items    

test_foo.py::test_normal_behavior[2-3-foo] PASSED    
test_foo.py::test_normal_behavior[5-3-bar] PASSED    
test_foo.py::test_should_raise_an_error[2-15] PASSED    
test_foo.py::test_should_raise_an_error[21-2] PASSED    
test_foo.py::test_should_raise_an_error[45-11] PASSED    

============================== 5 passed in 0.02 seconds ==============================    

If we launch pytest without giving it any file, it searches for file names matching the pattern test_*.py in the current folder and, recursively, in any subfolder.
We see that five tests actually ran, one for each line of the Examples tables of the scenarios.


Behaviour Driven Development is a great tool, especially because it lets us define functionality and its expected behaviour in a way almost anyone can understand. Moreover, writing BDD tests in Python is easy with pytest-bdd.

Note that pytest-bdd is not the only package that brings BDD to Python: there is also planterbox, which works with Nose2, another Python testing framework. Behave is yet another framework for behaviour driven development in Python.

BDD python Test Testing


Version support

by Frans De Jong, EBU on 15 Jan 2015

Profiles now support versions

We have added new functionality to the Profiles you can create in EBU.IO/QC.

From now on each Test has a version indicated next to its ID.

If a newer version is available, a little warning sign appears next to it.

This way users can manage their profiles as they like (there is no forced update to the latest Test version), but at the same time they are encouraged to check out newer versions of the Tests they are using.

Version selection

Users can decide the version to use in the Profile Manager, using a simple drop-down list.


The version information is visible to users regardless of their login status, but as there is currently only one publicly published version of each Test, the general audience will not make much use of it yet.

However, for editors (who are working with many different draft versions of the Tests), the new functionality is already practically relevant.

A large batch of updates to all Tests is expected in Q2, when the EBU QC Output subgroup has completed its work.

QC quality Quality Control version versions


Quality Wishes for 2015

by Frans De Jong, EBU on 03 Jan 2015


Adding ranges with Glühwein

by Frans De Jong, EBU on 22 Dec 2014

Developing under the Xmas tree...

With a hot chocolate and a glühwein* next to our laptops, we've been working on improvements to the EBU.IO/QC back-end.

* It is ~+12 degrees Celsius in Geneva...

Linked parameter types

We now have added linked definitions of parameter types:

  • Type (e.g. integer);
  • Representation (e.g. a single digit);
  • Range (e.g. [0-5));
  • Units (e.g. m/s)


The idea is that we want to help EBU QC Test editors be stricter in the way they define parameter inputs and outputs.

By facilitating managed lists of types, representations, ranges and units, we encourage reuse and minimize mistakes.


We've decided to use regexes to help check the correct instantiation of ranges.

For example, [0,4) is a valid instantiation of [a,b).

We also use regexes to check the names of the managed 'types'.

But we did not dare to go so far as to ask editors to specify their ranges as regexes directly... :-o
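As a rough illustration (the actual patterns used in EBU.IO/QC are not shown in this post), a regex accepting integer intervals in [a,b)-style notation could look like this; RANGE_RE and is_valid_range are hypothetical names for this sketch:

```python
import re

# Hypothetical pattern: an opening '[' or '(', two integers separated
# by a comma, and a closing ']' or ')'.
RANGE_RE = re.compile(r"^[\[(]\s*-?\d+\s*,\s*-?\d+\s*[\])]$")

def is_valid_range(text):
    """Return True if text looks like a well-formed integer interval."""
    return bool(RANGE_RE.match(text))

print(is_valid_range("[0,4)"))   # True: a valid instantiation of [a,b)
print(is_valid_range("0,4"))     # False: the brackets are missing
```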

In a next step we plan to check default (and later user) values against the instantiated ranges.

Merry Christmas!

Frans and Julien

Christmas development QC Quality Quality Control range regex software types units