AI to power future data privacy breaches


A new report on the cyber risk outlook by global insurer Allianz Commercial reveals that cyber claims have continued their upward trend over the past year, driven in large part by a rise in data and privacy breach incidents. According to claims analysis, the frequency of large cyber claims (>€1mn) in the first six months of 2024 was up 14%, while severity increased by 17%, following just a 1% increase in severity during 2023. Data and privacy breach-related elements are present in two thirds of these large losses.

One of the leading emerging risk trends examined in the report is artificial intelligence (AI), which has the potential to turbocharge data breach exposures in the future, both by fueling greater processing of personal data and by serving as a powerful tool for threat actors.

The use of AI by businesses and public bodies is growing day by day, with applications in technology, media, healthcare, finance, retail, and logistics. In a recent McKinsey survey, almost two thirds (65%) of organizations say they regularly use AI, nearly double the number from a year ago.

AI relies on the collection and processing of vast amounts of data, including personal, health, and biometric information, for training AI models and making accurate predictions or recommendations. AI is also integral to some technologies, such as personal assistants like Alexa and Siri, surveillance, tracking, and monitoring systems, chatbots, and driverless vehicles.

Given the volume of personal data involved and the black-box nature of many AI systems, AI can create privacy and security risks if not properly managed. With so much data being collected and processed, there is a risk that it could fall into the wrong hands, whether through hacking or other security breaches. There are also concerns around potential breaches of privacy laws, such as whether organizations have proper consent to process data through AI. In February 2024, for example, Air Canada was ordered to pay compensation to a customer who had relied on incorrect information provided by one of the airline’s chatbots.

AI technology and use cases are also developing in an evolving regulatory and legal environment. AI regulation is tightening – the EU is establishing a common regulatory framework under the AI Act and the complementary AI Liability Directive – which will increase complexity and raise the compliance bar for companies.

Different AI applications, however, carry varying degrees of risk. AI use cases that focus on consumer products and services – such as chatbots or AI-generated content – are likely to bring a higher degree of data privacy risk than administrative AI applications, such as automation of internal processes.

Companies should consider the following factors to harness the benefits of AI while mitigating the risks of potential breaches.

Data governance: Establishing robust data governance practices is crucial for ensuring that data is collected, stored, and processed in compliance with privacy regulations and internal policies. This includes defining data ownership, implementing data classification, and setting clear guidelines for data access and usage.
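
For illustration only, the sketch below shows one way a data classification scheme and a role-based access rule might be expressed in code; the classification labels, roles, and the is_access_allowed helper are hypothetical examples rather than anything prescribed in the report.

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3   # e.g. personal data
    RESTRICTED = 4     # e.g. health or biometric data

# Hypothetical policy: the highest classification each role may access.
ROLE_CEILING = {
    "analyst": DataClass.INTERNAL,
    "ml_engineer": DataClass.CONFIDENTIAL,
    "privacy_officer": DataClass.RESTRICTED,
}

def is_access_allowed(role: str, data_class: DataClass) -> bool:
    """Return True if the role's ceiling covers the data classification."""
    ceiling = ROLE_CEILING.get(role)
    return ceiling is not None and data_class.value <= ceiling.value

# Example: an ML engineer may use confidential data for model training,
# but not restricted (health or biometric) data.
assert is_access_allowed("ml_engineer", DataClass.CONFIDENTIAL)
assert not is_access_allowed("ml_engineer", DataClass.RESTRICTED)
```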

Security measures: Implementing strong security measures, such as encryption, access controls, and regular security audits, is essential for protecting the data used by AI systems. This helps prevent unauthorized access and data breaches that could compromise individuals’ privacy.
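
As one concrete, hedged example (not drawn from the report), the snippet below uses the open-source Python cryptography package to encrypt a customer record at rest with symmetric (Fernet) encryption; in a real deployment the key would live in a key management service rather than alongside the data.

```python
from cryptography.fernet import Fernet

# Assumption: in production the key comes from a key management service
# and is never hard-coded or stored next to the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"customer_id": 123, "email": "jane@example.com"}'

token = cipher.encrypt(record)    # ciphertext that is safe to persist
restored = cipher.decrypt(token)  # decryption requires the key

assert restored == record
```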

Privacy regulations compliance: Companies must ensure that their use of AI aligns with relevant privacy regulations, such as the General Data Protection Regulation (GDPR) in Europe or the California Consumer Privacy Act (CCPA) in the US. This includes obtaining explicit consent for data collection and processing, providing individuals with control over their data, and honoring data subject rights.
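
Purely as an illustration of how such obligations can surface in an AI data pipeline (the field names and helper functions below are hypothetical), explicit consent can be enforced as a filter before training data is assembled, and erasure requests applied before the next training run.

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    customer_id: int
    email: str
    consented_to_ai_processing: bool  # hypothetical explicit-consent flag

def training_eligible(records):
    """Keep only records whose owners gave explicit consent to AI processing."""
    return [r for r in records if r.consented_to_ai_processing]

def erase_customer(records, customer_id):
    """Honor a right-to-erasure request before the next training run."""
    return [r for r in records if r.customer_id != customer_id]

records = [
    CustomerRecord(1, "a@example.com", True),
    CustomerRecord(2, "b@example.com", False),
]
print(len(training_eligible(records)))  # 1 -> only consented records are used
print(len(erase_customer(records, 1)))  # 1 -> record removed after an erasure request
```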

Ethical AI practices: Embracing ethical AI practices involves considering the potential impact of AI on individuals’ privacy and well-being. This includes addressing bias and fairness in AI algorithms, being transparent about data usage, and incorporating privacy considerations into the design and deployment of AI systems.
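
One simple, hedged illustration of what addressing bias and fairness can mean in practice is tracking a fairness metric such as the demographic parity gap of a model’s decisions; the groups, predictions, and any review threshold below are made-up examples, not figures from the report.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between groups (0 means parity)."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical model decisions (1 = approved) for two demographic groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# A gap above a chosen threshold would trigger review of the model or its training data.
```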

Privacy-preserving AI techniques: Companies can explore privacy-preserving AI techniques, such as federated learning and differential privacy, to minimize the risk of data privacy breaches. These techniques allow AI models to be trained on decentralized data sources without directly accessing sensitive information, thus reducing privacy concerns.
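
As a minimal sketch of the differential privacy idea mentioned above (not an implementation used by Allianz or any particular vendor), the snippet below releases an aggregate count with Laplace noise calibrated so that any single individual’s presence or absence has only a bounded effect on the published figure.

```python
import numpy as np

def dp_count(records, epsilon=1.0):
    """Release a count with Laplace noise; sensitivity is 1 (one person more or less)."""
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(records) + noise

# Hypothetical query: how many customers in the training set opted in to marketing?
opted_in = ["customer"] * 4203  # placeholder data
print(f"Noisy count (epsilon=1.0): {dp_count(opted_in):.1f}")
```

A smaller epsilon adds more noise and gives stronger privacy; federated learning complements this approach by keeping raw data on local devices and sharing only model updates.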

To read the full Allianz Cyber Risk Trends Report, please visit: cyber-security-trends-2024.pdf (allianz.com)
