Daphne Leprince-Ringuet, a reporter at ZDNet, explains: "A two-year investigation into the private and public use of AI systems shows that more oversight is needed, particularly in government services like policing."
Photo: Daria Sannikova from Pexels
The CDEI (Centre for Data Ethics and Innovation) spent two years investigating the use of algorithms in both the private and the public sector, and found widely varying levels of maturity in dealing with the risks posed by algorithms. In the financial sector, for example, there seems to be much closer regulation of the use of data for decision-making, while local government is still in the early days of managing the issue.
Although awareness of the threats that AI might pose is growing across all industries, the report found that there is no particular example of good practice when it comes to building responsible algorithms...
Similar conclusions were reached in a report published earlier this year by the UK's Committee on Standards in Public Life, led by former head of MI5 Lord Evans, who expressed particular concern at the use of AI systems by police forces. Evans noted that there was no coordinated process for evaluating and deploying algorithmic tools in law enforcement, and that it is often up to individual police departments to devise their own ethical frameworks.
Source: ZDNet