“AI in the Public Sector” was the first Tailor Theme Development Workshop (TDW). The report from this workshop provides key findings as well as some initial ideas for follow-up activities and further collaborations.
At this two-day workshop, experts from public and governmental institutions, industry and academia jointly developed initial input for the European AI research and innovation roadmap. Several topics were identified and will provide the ‘core’ of the input. However, when the roadmaps are constructed, all inputs from the Theme Development Workshop will be considered.
Education
Due to a lack of proper understanding, new technologies are marginalised in the processes of public administration, and the dialogue with technical developers is complicated, reducing the potential benefits and impact of AI technologies and solutions. Tailored education for civil servants and other public sector workers could be one approach to address this challenge, especially by focussing on: a better understanding of the general framework for the potential introduction of AI in the processes of public administration; balancing expectations; a more concrete view of the limits and capabilities of AI; and increasing acceptance of AI as part of future working activities.
Measuring the performance of AI ecosystems
The deployment of AI in society by governments will have systemic effects. Citizens, companies and other organisations will change their behaviours. In order to detect potentially harmful side-effects, it is necessary to measure the effects of AI in society in a systematic way. One of the underlying aspects that could be measured is the performance of AI ecosystems: clusters of companies that develop AI, (local) governments that stimulate the uptake of AI, and end users (both companies and citizens) that use AI.
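One way to make such measurement concrete is to combine a small set of ecosystem indicators into a comparable score. The sketch below is purely illustrative: the indicators, their normalisation, and the equal weighting are all assumptions, not measures proposed by the workshop.

```python
from dataclasses import dataclass

@dataclass
class EcosystemIndicators:
    """Hypothetical indicators for one regional AI ecosystem."""
    ai_companies: int        # companies in the cluster developing AI
    public_ai_services: int  # government services that deploy AI
    adoption_rate: float     # share of surveyed end users using AI (0..1)

def composite_score(e: EcosystemIndicators,
                    max_companies: int,
                    max_services: int) -> float:
    """Average of min-max normalised indicators (equal weights assumed)."""
    parts = [
        e.ai_companies / max_companies if max_companies else 0.0,
        e.public_ai_services / max_services if max_services else 0.0,
        e.adoption_rate,
    ]
    return sum(parts) / len(parts)
```

In practice, tracking such a score over time, rather than its absolute value, would be the more meaningful signal for detecting systemic effects.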
Algorithm registers
The development and deployment of algorithm registers addresses a number of concerns related to the usage of AI in the public sector. Such a register provides a way to implement transparency and could also be a basis for public accountability. The concept could be extended so that it fosters citizen engagement, for instance by supporting citizen science initiatives.
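To illustrate what one register entry might contain, the following sketch defines a hypothetical record structure. The field names and the example entry are assumptions for illustration only; actual registers (such as those piloted by some European cities) define their own schemas.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AlgorithmRegisterEntry:
    """One entry in a hypothetical municipal algorithm register."""
    name: str
    purpose: str               # plain-language description of what the system supports
    responsible_body: str      # organisation publicly accountable for the system
    data_sources: list = field(default_factory=list)
    human_oversight: str = ""  # how a human can review or override outcomes
    contact: str = ""          # where citizens can ask questions or object

def to_public_record(entry: AlgorithmRegisterEntry) -> dict:
    """Serialise an entry for publication, e.g. on an open-data portal."""
    return asdict(entry)
```

Publishing such records in a machine-readable form is what makes the extension towards citizen engagement feasible: third parties can build on the register rather than merely read it.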
Procurement and market creation
Governments can play a role in market creation and thus influence developments in a favourable way. A number of considerations are relevant here: is there a place for in-house development, and how does this relate to procurement from private companies? How should investments be organised, and what is the role of public-private partnerships? Guidelines for tender processes also need to be updated as a result of these developments.
Data is vital to produce AI solutions, but the availability of large amounts of data is not the only requirement; the quality and accessibility of the data are also key to producing and replicating trustworthy AI solutions, in both the public and private sectors. This includes the need to overcome information silos across different public organisations and (potentially) private actors, and to design both governance models and technologies for a data-sharing infrastructure (as an enabler of trustworthy AI solutions) that ensure the availability, quality and accessibility of the data.
Requirements of AI
The requirements for AI systems as identified by the High-level Expert Group on AI remain valid and relevant, but many aspects still need to be further detailed and made more concrete. A broad and integrated view of these aspects is also important.
An additional consideration is how certification could support the promotion of trust
and adoption of AI systems that are used in the public sector. This could be
organised alongside the previously mentioned algorithm register.
Systemic approach and life-cycle management
A more integrated approach towards the procurement and deployment of AI is necessary. It is not only important to set out clear procurement guidelines; at the same time, it should be ensured that the necessary knowledge and resources to operate and maintain the system are in place. This is particularly important because AI systems can adapt themselves, so monitoring of system performance is needed: is the system still operating within the scope for which it was designed and trained? How does the system react to other systems that are new or have been adapted? When will a system reach end-of-life? There are also links to the previously mentioned topics of ‘measuring systemic effects’, ‘Requirements of AI’ and ‘Education’, which could be further explored.
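The question of whether a system is still operating within its designed scope can be made operational with simple statistical monitoring. The sketch below checks whether live inputs have drifted away from the distribution seen during training; the single-feature scope and the threshold are illustrative assumptions, and real life-cycle management would monitor many features and outcomes.

```python
import statistics

def out_of_scope(training_values, live_values, k=3.0):
    """Flag drift of one numeric input feature away from its training distribution.

    Compares the mean of recent live values against the training mean,
    in units of the training standard deviation. The threshold k and the
    single-feature check are deliberate simplifications for illustration.
    """
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    live_mu = statistics.mean(live_values)
    return abs(live_mu - mu) > k * sigma
```

A check like this would feed the monitoring step of life-cycle management: a flagged drift does not mean the system is wrong, but that it may be operating outside the scope for which it was designed and trained, and warrants human review.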