Claims abound that Artificial Intelligence (AI) can rescue our ailing retail and manufacturing sectors. President Cyril Ramaphosa has even appointed a Commission on the Fourth Industrial Revolution to promote what he calls an “entrepreneurial state… [which will] assist government in taking advantage of the opportunities presented by the digital industrial revolution”.
The one voice largely missing from the noise about AI is that of the users of AI-driven systems, and by now, that includes most of us. Users are an important constituency, as these systems are generally trained on our data, yet the ways in which that data is collected and used remain opaque.
Automated decisions using AI are difficult to challenge, which makes them ripe for abuse in ways that threaten basic rights and freedoms. Elections can be distorted through AI-powered disinformation, and people can be falsely accused of a crime if they are profiled incorrectly.
Yet, despite the dangers, information regulators are struggling to defend users’ rights as AI challenges traditional notions of data protection...
Personalised algorithmic models that rank and curate information can lead to the development of filter bubbles. As things stand, though, the available research points in the opposite direction, with the search engines of companies like Google exposing internet users to a greater diversity of news sources than they would otherwise encounter.
Even social media users can reap the unintended benefits of incidental exposure to news they would otherwise not seek out. Greater AI-enabled content personalisation could still amplify these dangers in time to come, though, so these concerns shouldn't be taken off the table.
Source: Daily Maverick