Row over AI ‘light touch’ turns into chorus of disapproval

Criticism of Government plans for “light touch” regulation of artificial intelligence is turning into a chorus of disapproval, with both publishers and unions joining the battle against ministers’ refusal to follow EU and US moves.

The TUC has now waded into the war by launching an AI taskforce to introduce new legal protections for employers and workers from misuse of the technology, as it warns that the UK is “way behind the curve” on regulation.

The taskforce – including specialists in law, technology, politics, HR and the voluntary sector – will publish a draft AI and employment bill in early 2024, with the aim of getting it enacted into UK law.

The organisation claims that AI is being used to “analyse facial expressions, tone of voice and accents” to assess candidates’ suitability for roles, and warned that if left unchecked it could lead to “greater discrimination, unfairness and exploitation at work across the economy”.

Employers were buying and using systems without fully understanding the implications, such as whether they were “discriminatory”, the TUC claimed, while cautioning that the UK risks becoming an “international outlier” in terms of AI policy.

Meanwhile, the Periodical Publishers Association has joined with the European Publishers Council, Publishers’ Licensing Services and Association of Learned & Professional Society Publishers to write to Prime Minister Rishi Sunak.

The organisations are calling for the implementation of a legal footing for transparency provisions “to ensure that owners of AI systems declare how they have used publishers’ content so that compensation issues can be identified and addressed”.

The letter states: “It is already clear that publishers’ content is being used to train AI tools without permission, or any form of payment. There are already documented cases of AI systems using publishers’ works without citing or crediting them, which in some cases has led to lengthy and costly litigation.

“Without transparency enforcement provisions, rights holders are unable to see how their content is used, which creates challenges to any ability to agree terms, fees and limits on ensuing use through a negotiated licence. Therefore, the Government must take swift action to put the right regulatory mechanisms in place.”

Meanwhile, the knives are also out for the Government’s Global AI Safety Summit, being hosted at Bletchley Park in November. The event, designed to address the global threats to democracy posed by the tech, is in danger of being “captured” by big tech firms, experts have warned.

The gathering was conceived by the Prime Minister to address fears about the threat that AI could pose to humanity and is due to be attended by politicians, civil society groups, tech companies and academics.

Last week, the Government laid out its objectives for the meeting, which will focus on the most powerful AI systems which could, for example, “undermine biosecurity”. However, some industry sources claim it could be little more than a “rubber stamping” exercise for the likes of Facebook, Instagram, Google and TikTok.

Even so, it is not as if the Government has not been warned.

The first signs of discontent emerged in June, when consumer group BEUC, whose UK members include Which? and Citizens Advice, urged all data protection regulators to “launch investigations now” into generative AI and “not wait idly for all kinds of consumer harm to have happened before they take action”.

A month later, the UK’s Ada Lovelace Institute – named after the daughter of poet Lord Byron who was a trailblazer for women in maths and science – called for an urgent rethink, citing severely limited legal protections for consumers to seek redress when AI goes wrong.

The organisation made a total of 18 recommendations it wants included in the Data Protection & Digital Information Bill (No2), which is still going through Parliament.

And last week a group of MPs demanded the Government introduce AI measures in the new data reforms or risk falling behind in the race for regulation.

With the EU continuing to push its AI Act – which will establish requirements for both providers and users depending on the level of risk posed by AI – and the US issuing a blueprint for an AI Bill of Rights to act as a guide to help protect people from these threats, the MPs claim the UK is falling behind.
