Britain’s Information Commissioner, John Edwards, has cautioned companies about the need to prioritize privacy rights when implementing artificial intelligence (AI) technologies. Edwards stressed that failure to do so could result not only in significant fines but also in the erosion of public trust in AI.
During a speech on Wednesday, Edwards emphasized companies’ obligation to safeguard customers’ personal information when utilizing AI in their products or services. “You cannot expect to utilize AI in your products or services without considering privacy, data protection, and how you will safeguard people’s rights,” he asserted.
Addressing organizations directly, Edwards stated, “Our message to those organizations is clear – non-compliance with data protection will not be profitable.” He further warned that fines would be imposed in proportion to any gains obtained through non-compliance with data protection rules.
The warning comes at a time when concerns about the risks associated with rapidly developing AI technologies are mounting globally. The release of ChatGPT by Microsoft-backed OpenAI last year has heightened policymakers’ focus on regulating AI.
The United Kingdom took a proactive stance in addressing AI-related challenges by hosting the world’s first artificial intelligence safety summit in November. Although there was widespread consensus on the necessity of AI regulation, a global plan for overseeing the technology remains in the early stages of development.
The Information Commissioner’s warning serves as a reminder to businesses to prioritize data protection and privacy considerations when implementing AI. As AI continues to play an increasingly integral role in various industries, regulators and policymakers are working to establish comprehensive frameworks to ensure the responsible and ethical use of these technologies on a global scale.