CANDID peer meeting at the Cosmocaixa crossroads

In more than one of his acclaimed product keynotes, Steve Jobs said he considered that the best innovations occur, in a metaphorical sense at least, ‘at the intersection of technology and the liberal arts’.

When presenting the Apple iPad 2 for the first time, he asserted that at Apple “technology is not enough … It’s in our DNA that technology married with the liberal arts is what makes our hearts sing”.

While there may be more than one songsheet from which to sing when it comes to responsible research and innovation, CANDID is working to provide resources to help technologists develop ‘smart’ products and services that resonate better with the market, and with wider society.

The figurative ‘crossroads’ for the CANDID project was our ‘peer workshop’ on 26 September, held, as it happens, at Barcelona’s science museum, the CosmoCaixa, on Carrer d’Isaac Newton.



Some exhibits outside the Salon Agora hinted at slim pickings, but it was soon obvious there would be plenty of meat on the bones of the issues at hand, as discussions and ideas flowed and evolved.

The researchers working on the different aspects of the project were intermingled with, in the language of the project, ‘peers’: a cross-section of ICT and social science practitioners, members of EU working groups, and representatives of societal interests.

These practitioners, academics, and campaigners were experts in such diverse subjects as human rights, economics, computer science, and assistive technologies.


Welcome and presentation

Associate Professor Kjetil Rommetveit, the project coordinator, welcomed the CANDID project team and invited ‘peers’ to Barcelona, “where interesting things are happening”.

Setting the scene for the day, Kjetil promised intense discussions, first on the findings of the four CANDID modules, then on the cross-cutting themes that would emerge from them.

Kjetil – a philosopher specialising in digital technologies – introduced CANDID as a response to a Horizon 2020 call for new ideas from the social sciences and humanities to address important ICT challenges, as “an invitation to challenge assumptions and to come up with interesting proposals”.

Outlining the scope of the project, he stated that it will produce proposals for how to facilitate expanded dialogues between the social sciences and humanities (SSH) and ICT-related professions.

“CANDID is criticising and analysing smartness, and producing insights, making use of insights derived by social scientists in other areas.”

“The Modules were constructed as vehicles for checking assumptions.”

“The peers invited here today were chosen from among those saying really interesting things in the written consultations, and will provide a third check on assumptions.”

Kjetil said there was an expectation that inter-disciplinarity would be applied, so this would be discussed. Summing up the work done to date, he highlighted two broad findings.

Firstly, that “inter-disciplinarity is not a default, it’s not something you can just ask for”.

“You can’t just put various actors together and expect them to come up with something that can be useful across society.”

However, “when inter-disciplinarity works it’s the result of a difficult process, depending on appropriate methods and tools. You need people with experience and you need people willing to listen to each other. You also need a little bit of luck.”

“The second thing is that when looking at discourse about ‘smart’ we find it difficult to come up with an overall definition or framing of the concept, so we’re seeing it more as a rhetorical tool that could enable different types of collaborations.”

The outcome of the peer workshop will be formed into a template or recipe, plus an account of the process of the project, “that can be used by other scholars or other people looking to implement smart technology projects”.

According to Kjetil, “we’ll write recommendations, and produce academic publications. We’re eager to get input from peers to guide us, and to learn from each other.”


Introductory presentations of the Modules

To open up discussion in the workshop, the preliminary findings of each module were summarised in brief presentations:

Module 1: User and Design Configurations – introduced by Antti Silvast, The University of Edinburgh

The focus of Module 1, titled User and Design Configurations, was on ‘Smart Energy’ and ‘Smart Health’.

Speaking for the team of University of Edinburgh researchers working on this module, Antti Silvast, like Kjetil, noted that the concept of ‘smart’ is hard to define, and functions “more like a rhetorical device”.

Antti also highlighted the concept of the ‘user’ as important, since the concept of ‘smartness’ centres on the ‘user’, yet “there are different types of users, and very different ways of being a user”.

Module 1 reported receiving 46 responses to the consultation process.

The Module 1 researchers found the concept of ‘smart’ not easy to define, in part because it encompasses incompatible aims.

“For some people it might mean a revolution in the energy infrastructure or a healthcare system, while for others it might mean just a little bit of tweaking.”

“For example, there’s an aspiration for smart energy systems to become two-way systems, while the objective for smart metering in terms of billing might just be to make payments a little bit more efficient.”

Antti’s second main observation from the peer responses was that, in spite of a lot of talk about users, evidence of benefits to end users (i.e. consumers), for example in respect of cost savings, is inconclusive.

“In actual fact, the evidence is that the main users tend to be professionals. In smart health, for example, hospitals are the end users.”

“Citizens’ initiatives do exist – but the respondents felt these weren’t representative of the general population.”

Module 2: Risk, Fundamental Rights, and Engineering – introduced by Alessia Tanas, Vrije Universiteit Brussel (VUB)

Research for Module 2 focused on risks, rights and engineering, centred on the fundamental rights to privacy and personal data protection. Its peer consultation exercise obtained 23 written responses and included eight interviews.

The main starting point was the adoption in 2016 of the EU General Data Protection Regulation (GDPR, which comes into force in May 2018), dealing with the protection of natural persons with regard to the processing of personal data and with the free movement of such data.

The GDPR is designed to support expansion of the single digital market and the need to provide for protection of personal data, as well as of other fundamental rights and freedoms, when personal data is processed.

In particular, the research examined the ‘operationalization’ of two specific instruments introduced by the GDPR to help organisations that process personal data (i.e. data controllers) to do so in a way that respects fundamental rights and freedoms of individuals (i.e. data subjects). The two instruments are: Data Protection Impact Assessments and Data Protection by Design and by Default.

According to Alessia, “we focused on aspects related to the evaluation of ‘risks to rights’ in contexts of personal data processing, and to the translation of legal safeguards into the design of ICT technology. The aim was to try to understand what is happening when inter-disciplinary teams sit together to enact Data Protection Impact Assessments and Data Protection by Design and by Default”.

“In the course of the research, we looked at relations and associations between the notions of ‘risk’ and ‘engineering’ in the protection of fundamental rights and freedoms. Each of these notions traditionally belongs to very different settings, practices and traditions – managerial, technical, legal and so on – and we were interested in understanding what happens as they encounter one another.”

The University of Brussels researchers also tried to map the views of representative groups and individuals impacted by Data Protection Impact Assessments and Data Protection by Design and by Default. This allowed new and interesting relationships to be outlined between: “risks and rights” and “rights and engineering”.

The key findings of the consultation exercise were that:

  • There is great uncertainty about how to turn privacy and personal data protection rights and fundamental freedoms into risk strategies and engineering goals;
  • There seems to be a lack of organisational uptake of Data Protection Impact Assessments and Data Protection by Design and by Default in the early stages of ICT design processes. In this respect, some peers claimed that a ‘cultural change’ in engineering and management practices may be required;
  • Both the risk-based and the design-based approaches to the protection of fundamental rights and freedoms are difficult to square with rights-based approaches that are typical in legal practice;
  • There is a tendency within current risk-based and design-based approaches to overlook the views, concerns and interests of data subjects and concerned publics;
  • It’s not always clear which conception of ‘rights’ (legal, ethical, public/social, ICT/security) is articulated in the design of ICT technologies and who makes this decision.

Module 3: Sensing Infrastructures – introduced by Maxigas, Universitat Oberta de Catalunya

Introduced by Maxigas, Module 3 on sensing infrastructures focused on smart cities, covering environmental pollution monitoring projects and, to a lesser extent, disaster preparedness and management.

In advance of the consultation the researchers identified three main controversies:

  • Firstly, awareness: sensor infrastructures can change understandings of, and approaches to, important personal and public issues. For instance, measurements by sensors may not match individuals’ own perceptions or experience of pollution levels.
  • Secondly, participation: questions arise such as which social groups benefit and which are disempowered by the deployment of these infrastructures? And what is the role of citizens in these projects?
  • Thirdly, fairness: what ethical values are encoded in autonomous systems, and who should be responsible for their automatic decisions?

The researchers received 24 responses to the consultation, mostly from ICT practitioners, and carried out 15 interviews, some of which were still being analysed.

Characterising the feedback analysed so far, Maxigas said, “Anecdotally, we found Social Sciences and Humanities (SSH) practitioners tended to be more critical about sensor infrastructures than ICT practitioners, perhaps due to social scientists tending to focus on the consequences of technologies while ICT practitioners emphasise the promises and possibilities.”

This wasn’t universal, however, as, “this is an emerging field so there were some overlaps in the responses.”

An illustrative quote in the consultation from an ICT peer held that analysis of data automatically allows understanding of the true causes of issues. By contrast, an SSH scholar was concerned that the unclear relationship between the properties of sensing devices and the object or system sensed may make any inference of causality from the data less than transparent, and therefore less accountable (as happens with air pollution sensing).

“So, it seems the two sides of this debate do not necessarily agree about causality.”

Another ‘nice quote’ was that citizens themselves should be the ultimate sensors in the smart city – a notion Maxigas considered “a nice idea with rich potential”.

By contrast, some engineers expressed concerns about a superficial approach to these issues.

Maxigas introduced an attempt to structure and understand the data, offering for debate what he called ‘smartness regimes’.

“These are different ways of talking about these technologies, that can co-exist, and any one person can use multiple regimes.”

“Maybe”, speculated Maxigas, “the divide is not simply between SSH and ICT practitioners, but within or between particular discourses.”

The problem, therefore, isn’t necessarily that people from different communities cannot understand each other but that sensing infrastructures could be structured in a way that tries to meet the requirements of incompatible objectives.

“For example, in what is called the scientific regime, sensing infrastructures are about finding out some complete truth but if you’re thinking as an activist you want to use that sensing infrastructure to dramatise something.”

“These regimes don’t necessarily refer to each other. We may ask an engineer about sensing infrastructures in terms of the quality of the data supplied, and it will be described as an engineering problem. But then, when the next question asks about privacy, the same engineer can perfectly reproduce a privacy discourse; that doesn’t really mean it changes the way he does his engineering.”

So, in this model, different communities can understand one another but frequently fall back into using different ways of talking that are not compatible with each other.

Maxigas wasn’t sure how many of these different ways of talking might emerge.

“Scientists are interested in a meaningful representation of reality. Typically, engineers are concerned about the quality of the data. In the civic regime, sensors are used to gain an advantage in a debate or to gather evidence: not to find something new about the world, but to provide data that is seen as rational, well-constructed, modern and objective. Sensing infrastructures in the civic regime are used to construct objective facts that can be mobilised in a debate.”

In addition, there is the bureaucratic regime, “that looks for ways to insert or diffuse sensing infrastructures into existing regulatory frameworks. So the bureaucrat wouldn’t care whether an air pollution sensor is well calibrated, but would want to know, if the value is high, what kind of procedures might need to be triggered.”

According to Maxigas, “here there’s a classic tension that’s one of the top five questions in Science and Technology Studies: the tension between democratic participation and technical expertise.”

“Traditionally, bureaucratic procedures aren’t encouraged to apply value judgements, as a safeguard against abuse of position in society. In a sense, this is evident when inserting sensing infrastructures into bureaucratic regimes.”

“We also have the problem that when social scientists come to the innovation table they appear as ‘party-poopers’, talking about the limitations of technologies. So, we would like to find out whether there’s a more positive way for social scientists to participate in these debates, so they are not seen as the annoying party.”

The presentation slides described by Maxigas above can be viewed here: Module 3: Peer consultation – results from peer consultation. A report on the discussions of Module 3 is also available to view.


Commonalities and conclusions

The peers and researchers then discussed these preliminary findings in separate breakout sessions for each module.

The sessions were reported back to the workshop by the respective moderators, who then addressed perceived commonalities.

The afternoon session covered cross-cutting themes, plus an introduction by Maria Xentidou to Module 4: a discourse analysis of the language of the feedback in Modules 1 to 3, as well as of European Commission documentation on smart technologies.

The first public report for this module – Discourse Codification: How Responsible Research and Innovation (RRI), interdisciplinarity and agency are constructed and constituted in discourses on smart technologies, systems and associated developments – was subsequently published on 9 October.

Kjetil rounded off the workshop by summing up the discussions and comments, which will go into the mix in the deliberations for the final report.

Kjetil concluded, “what is clear is that there is still quite a lot to be done”.

For the latest officially published public deliverables of CANDID see
