AI Use: What's Allowed and What's Not

Which scenario constitutes a prohibited use of AI?

The use of AI is a highly debated topic, with many countries drafting legislation to ensure its safe application. The EU Artificial Intelligence Act (AI Act) prohibits certain AI practices and systems, including those that manipulate people's decisions, exploit their vulnerabilities, or evaluate them based on social behaviour and personal traits. AI systems that employ subliminal techniques, deception, or coercion to alter human behaviour are also prohibited. These practices are considered unacceptably risky because they can harm individuals' physical, psychological, and financial well-being. Additionally, the AI Act prohibits the use of AI to create facial recognition databases through untargeted scraping of facial images, and to infer emotions in workplace and educational contexts, except for medical or safety reasons. Companies must develop guidelines and prioritise ethical design to prevent prohibited practices and ensure compliance with the AI Act.

Characteristic: Status under the AI Act

AI systems that manipulate people's decisions through subliminal, manipulative, or deceptive techniques: Prohibited
AI systems that exploit human vulnerabilities (such as age, disability, or socio-economic situation): Prohibited
AI systems that evaluate or classify people based on their social behaviour or personal traits (social scoring): Prohibited
AI systems that predict a person's risk of committing a crime based solely on profiling or personality traits: Prohibited
AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage: Prohibited
AI systems that infer emotions in the workplace or educational institutions: Prohibited, except for medical or safety reasons
Biometric categorisation systems that use biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation: Prohibited
Real-time remote biometric identification of individuals in publicly accessible spaces for law enforcement: Prohibited, except for narrowly defined purposes such as searching for victims of serious crimes
AI systems that perform multiple functions and serve multiple purposes: Not inherently prohibited, but must be used with care
AI systems that influence critical decisions, perpetuate bias or discrimination, or compromise user privacy: Not automatically prohibited, but typically classed as high-risk and subject to strict requirements
AI systems that support human assessment of a person's involvement in criminal activity based on objective and verifiable facts: Not prohibited
AI systems that infer emotions outside workplace and educational contexts: Not prohibited outright, though they may still be regulated as high-risk
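
The rows above amount to a rule set, and a compliance team might encode them as a first-pass triage checklist. Below is a minimal Python sketch of that idea; the flag names, tier labels, and the triage function are illustrative assumptions, not terms from the AI Act, and any real classification would need case-by-case legal review.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (unacceptable risk)"
    HIGH_RISK = "high-risk (strict requirements apply)"
    REVIEW = "needs case-by-case review"

# Hypothetical internal flags describing an AI system; the names are
# illustrative assumptions, not terms defined in the AI Act.
PROHIBITED_FLAGS = {
    "subliminal_or_manipulative_techniques",
    "exploits_vulnerabilities",
    "social_scoring",
    "crime_prediction_by_profiling_only",
    "untargeted_face_scraping",
    "workplace_or_school_emotion_inference",
    "biometric_sensitive_categorisation",
}

HIGH_RISK_FLAGS = {
    "critical_infrastructure_safety_component",
    "education_access_decisions",
}

def triage(flags: set[str]) -> RiskTier:
    """First-pass triage of a system description against the
    categories summarised in the table above."""
    if flags & PROHIBITED_FLAGS:
        return RiskTier.PROHIBITED
    if flags & HIGH_RISK_FLAGS:
        return RiskTier.HIGH_RISK
    return RiskTier.REVIEW

print(triage({"social_scoring"}))              # RiskTier.PROHIBITED
print(triage({"education_access_decisions"}))  # RiskTier.HIGH_RISK
```

The design choice here is deliberately conservative: anything that does not match a known category falls through to human review rather than being waved through.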

AI systems that manipulate people's decisions

AI tools have the potential to be used to manipulate online audiences into making decisions, as highlighted by researchers at the University of Cambridge. This emerging practice, known as the "intention economy", involves AI-savvy tech companies selling information about users' motivations and intentions to the highest bidder. For example, an AI model could infer a user's intent to book a restaurant, flight, or hotel and sell this information to advertisers.

Additionally, AI systems can learn to identify and exploit vulnerabilities in human habits and behaviours to influence decision-making. This was demonstrated in a series of experiments by Australian researchers, where machines successfully steered participants towards particular actions by targeting vulnerabilities in their decision-making processes. While this technology can be used to defend against influence attacks, proper governance is crucial to ensure responsible implementation and prevent misuse.

The AI Act does not ban the use of AI in advertising, but it draws a fine line between permissible AI-enhanced advertising and forbidden manipulative or deceptive techniques. This distinction requires careful, case-by-case examination to ensure consumer autonomy and decision-making are respected. Overall, the EU AI Act aims to prohibit AI practices that pose an unacceptable level of risk to fundamental rights, health, and safety.

AI systems that exploit human vulnerabilities

The EU Artificial Intelligence Act (AI Act) prohibits AI systems that exploit human vulnerabilities. This includes systems that use subliminal techniques, manipulations, or deceptions to alter human behaviour and coerce individuals into making decisions they would not otherwise consider, especially when these actions could lead to significant harm. These manipulative and deceptive techniques can be delivered through audio, imagery, or video, and are often undetectable to human senses but potent enough to sway behaviour.

Additionally, the AI Act bans AI systems that assess the risk of a natural person committing a criminal offence based solely on profiling or on an assessment of their personality traits and characteristics. This prohibition does not apply to AI systems used to support a human assessment of a person's involvement in criminal activity that is based on objective and verifiable facts directly linked to that activity.

The Act also prohibits the use of AI systems that infer emotions in the workplace or educational institutions, except when used for medical or safety reasons. Furthermore, it bans biometric categorisation systems that categorise individuals based on their biometric data to deduce or infer sensitive personal information such as race, political opinions, religious beliefs, or sexual orientation.

The prohibited practices outlined in the AI Act are not absolute and may have exceptions, particularly in the case of law enforcement. However, non-compliance with these prohibitions can result in significant administrative fines or penalties under the Act.
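
For scale, Article 99(3) of the AI Act sets the ceiling for fines for engaging in a prohibited practice at EUR 35 million or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. A quick sketch of that ceiling calculation:

```python
# Maximum administrative fine for engaging in a prohibited AI practice
# under Article 99(3) of the AI Act: the higher of EUR 35 million or
# 7% of total worldwide annual turnover for the preceding financial year.
def max_fine_eur(annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# For an undertaking with EUR 2 billion in turnover, the ceiling is
# EUR 140 million, well above the fixed EUR 35 million floor.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```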

AI systems that evaluate or classify people based on their social behaviour

The use of AI systems that evaluate or classify people based on their social behaviour is a prohibited practice under the EU AI Act. The Act identifies specific AI systems as prohibited because they pose an unacceptable level of risk to fundamental rights, health, and safety.

The EU AI Act prohibits the use of social scoring AI systems for several reasons. Firstly, these systems can exploit human vulnerabilities to distort behaviour and cause significant harm; for example, by leveraging sensitive information such as age, mental health status, or employment situation, they can steer individuals' choices or purchasing behaviour. Secondly, social scoring can lead to detrimental or unfavourable treatment of individuals or groups in social contexts unrelated to those in which the data was originally generated or collected. In other words, people may face adverse consequences based on social behaviour that has no bearing on the situation at hand.

The EU AI Act's prohibition on social scoring applies to public authorities and private actors alike. Such systems can reinforce systemic biases and exacerbate race-based health disparities, for instance when they rely on self-reported race, ethnicity, and ancestry as biomarkers and risk factors in disease diagnoses. Furthermore, social scoring systems may infer causal relationships from mere correlations, without sufficient statistical grounding, leading to inaccurate evaluations or classifications of individuals.

AI systems that use subliminal techniques to alter human behaviour

The use of AI systems that employ subliminal techniques to alter human behaviour is prohibited under the EU AI Act. This act establishes strict prohibitions against AI systems that utilise subliminal techniques, manipulations, or deceptions to alter human behaviour, coercing individuals into making decisions they wouldn't otherwise consider. These AI systems are considered highly risky as they can potentially undermine personal autonomy and freedom of choice, often without individuals being consciously aware or able to counteract these influences.

Subliminal techniques refer to methods that operate below the threshold of human awareness. While these techniques are not always effective, they can still influence human behaviour and thoughts. For example, a subliminal message could boost a person's motivation or change their current mood; in some cases, such messages can even alter political attitudes and voting intentions.

The EU AI Act specifically prohibits the use of AI systems that deploy subliminal techniques beyond a person's consciousness with the objective or effect of materially distorting the behaviour of an individual or group. This includes employing subtle cues through audio, imagery, or video that, while undetectable to the human senses, are potent enough to sway behaviour. Examples include streaming services embedding unnoticed messages in videos or films, or social media platforms algorithmically promoting emotionally charged content to manipulate user feelings.

The Act also prohibits AI systems that exploit human vulnerabilities to significantly distort behaviour. This includes the use of advanced data analytics to generate highly personalised online ads that leverage sensitive information such as age, mental health status, or employment situation to influence individuals' choices or purchase frequency.

The use of AI systems that employ subliminal techniques to alter human behaviour is a serious concern due to its potential impact on human autonomy and freedom of choice. These practices are prohibited under the EU AI Act to ensure the protection of individuals and to prevent any detrimental outcomes on an individual's physical or psychological health, or financial well-being.

AI systems that cause unacceptable risk

The EU Artificial Intelligence Act (AI Act) prohibits AI systems that cause an unacceptable level of risk to fundamental rights, health, and safety. These prohibitions are not absolute and are assessed on a case-by-case basis. The AI Act identifies and prohibits specific AI systems and practices that fall under this category.

Firstly, the AI Act prohibits AI systems that utilise subliminal techniques, manipulations, or deceptions to alter human behaviour. These systems aim to coerce individuals into making decisions they wouldn't normally make, potentially leading to significant harm to their physical, psychological, or financial well-being. Examples include streaming services embedding hidden messages in videos or social media platforms algorithmically promoting emotionally charged content to manipulate user feelings.

Secondly, the AI Act prohibits AI systems that exploit human vulnerabilities to distort behaviour. This includes leveraging sensitive information such as age, mental health status, or socio-economic circumstances to influence individuals' choices or purchase frequency. The Act emphasises protecting individuals who are particularly susceptible due to factors like age, disability, or belonging to minority groups.

Thirdly, the AI Act prohibits certain specific practices of AI systems that pose unacceptable risks. These include creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage, inferring emotions in workplaces and educational institutions, and biometric categorisation systems that categorise individuals based on sensitive attributes like race, political opinions, religious beliefs, or sexual orientation.

Additionally, the AI Act addresses high-risk AI systems, which are not explicitly prohibited but require careful regulation and oversight. These include AI safety components in critical infrastructure such as transport, where failure could put citizens' lives and health at risk, and AI solutions used in education that may affect access to education and professional opportunities.

While the AI Act aims to mitigate unacceptable risks, it is important to recognise that AI technology is rapidly evolving, and new risks may emerge that are beyond the capacity of private entities to control. Malicious use of AI, such as the development of biological weapons or the autonomous pursuit of harmful goals, poses a significant threat. To address these challenges, measures such as biosecurity enhancements, restricted access to dangerous AI systems, and international coordination are recommended.

Frequently asked questions

Which scenarios constitute a prohibited use of AI?

The use of AI technologies that fall within the "unacceptable risk" category is strictly prohibited. This includes AI systems that manipulate individuals, exploit their vulnerabilities, or lead to physical or psychological harm. It is important to assess the context in which the AI model operates and the potential risks associated with its use.

What practices does the AI Act prohibit?

The AI Act prohibits the use of AI systems that employ subliminal techniques, manipulations, or deceptions to alter human behaviour and coerce individuals into making decisions that could lead to significant harm. It also bans AI systems that exploit human vulnerabilities, evaluate or classify people based on social behaviour or personal traits, and predict the risk of a person committing a crime based solely on profiling.

Does the AI Act ban the use of AI in advertising?

While the AI Act does not ban the use of AI in advertising, it prohibits manipulative or deceptive techniques that undermine consumer autonomy and decision-making. This includes the use of advanced data analytics to generate highly personalised ads by leveraging sensitive information such as age, mental health status, or employment status.

Are there exceptions to the prohibited AI practices?

Yes, there are specific and limited exceptions to the prohibited AI practices. For example, predictive policing and emotion recognition systems are not outright prohibited in every scenario. In the case of law enforcement, the use of real-time remote biometric identification systems in public is prohibited unless it is for searching for victims of serious crimes, preventing imminent threats, or locating suspects or perpetrators of serious crimes.
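
To make the shape of that exception concrete, here is a minimal Python sketch of the rule as described above; the purpose labels and function name are illustrative assumptions rather than statutory wording, and real deployments involve further safeguards such as prior authorisation.

```python
# Illustrative check for the real-time remote biometric identification
# (RBI) exception described above. The purpose labels are assumptions
# for this sketch, not statutory terms.
ALLOWED_RBI_PURPOSES = {
    "search_for_victim_of_serious_crime",
    "prevent_imminent_threat",
    "locate_suspect_or_perpetrator_of_serious_crime",
}

def rbi_use_permitted(purpose: str, law_enforcement: bool) -> bool:
    """Real-time RBI in publicly accessible spaces is prohibited unless
    used by law enforcement for one of the narrowly defined purposes."""
    return law_enforcement and purpose in ALLOWED_RBI_PURPOSES

assert not rbi_use_permitted("targeted_advertising", law_enforcement=False)
assert rbi_use_permitted("prevent_imminent_threat", law_enforcement=True)
```

The point of the sketch is simply that the default is prohibition, with permission only as a narrow carve-out.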
