Daphne Leprince-Ringuet, a reporter at ZDNet, explains: a two-year investigation into the private and public use of AI systems shows that more oversight is needed, particularly in government services like policing.
Empowering algorithms to make potentially life-changing decisions
about citizens still comes with significant risk of unfair
discrimination, according to a new report published by the UK's Centre
for Data Ethics and Innovation (CDEI). In some sectors, the need to
provide adequate resources to make sure that AI systems are unbiased is
becoming particularly pressing – namely, the public sector, and
specifically, policing.
The CDEI spent two years investigating the use of algorithms in both the private and the public sector, and encountered widely varying levels of maturity in managing the risks those algorithms pose. In the financial sector, for example, the use of data for decision-making appears to be much more closely regulated, while local government is still in the early days of grappling with the issue.
Although awareness of the threats that AI might pose is growing across all industries, the report found that there is no particular example of good practice when it comes to building responsible algorithms...
Similar conclusions were reached in a report published earlier this year by the UK's Committee on Standards in Public Life, led by former head of MI5 Lord Evans, who expressed particular concern about the use of AI systems by police forces. Evans noted that there was no coordinated process for evaluating and deploying algorithmic tools in law enforcement, and that it was often left to individual police forces to draw up their own ethical frameworks.
Source: ZDNet