The impact of AI on home security
Artificial intelligence (AI) is playing an increasingly important role in our daily lives. However, a recent study sheds light on potential problems with using AI in home security. Scientists at the Massachusetts Institute of Technology (MIT) and Penn State University have found that large language models are inconsistent in their assessment of surveillance footage. This raises questions about the reliability and fairness of AI systems in security applications.
Unpredictable decision making
The research shows that AI models sometimes recommend calling the police even when no criminal offence is visible in the surveillance footage. The models are also inconsistent in their judgments: one video showing a car break-in may be flagged for police intervention, while a similar video is not. This inconsistency makes it difficult to predict how AI will behave in different situations.
Unintentional biases
A striking finding is that some AI models are less likely to recommend police intervention in neighborhoods where mostly white people live. This points to unintended biases in the systems: the models' judgments are influenced by demographic factors. The researchers call this phenomenon “norm inconsistency”: the uneven application of social norms to similar situations.
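To make “norm inconsistency” concrete, the minimal sketch below shows one way such behaviour could be probed: the same scene description is submitted twice, with only the neighborhood changed, and the recommendations are compared. This is an illustrative assumption, not the researchers' actual setup; ask_model is a placeholder for a real language-model call, and the scene and neighborhood texts are invented examples.

```python
# Illustrative probe for "norm inconsistency": does the same scene get a
# different recommendation when only the neighborhood description changes?
# ask_model() is a stand-in; a real test would query an actual language model.
from collections import Counter

def ask_model(scene: str, neighborhood: str) -> str:
    """Placeholder for a language-model query; returns 'flag' or 'no flag'."""
    # Stubbed decision so the sketch runs end to end.
    if "forcing a car door" in scene and "low-income" in neighborhood:
        return "flag"
    return "no flag"

scenes = [
    "A person is forcing a car door open at night.",
    "A delivery driver leaves a package on the porch.",
]
neighborhoods = ["majority-white suburb", "low-income urban area"]

# Ask the (stubbed) model about every scene in every neighborhood.
results = {(s, n): ask_model(s, n) for s in scenes for n in neighborhoods}

# Norm inconsistency: the same scene draws different recommendations
# depending only on where it is said to take place.
for scene in scenes:
    decisions = Counter(results[(scene, n)] for n in neighborhoods)
    status = "Inconsistent" if len(decisions) > 1 else "Consistent"
    print(f"{status}: {scene!r} -> {dict(decisions)}")
```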
Lack of transparency
A major obstacle in investigating these AI systems is their lack of transparency. The researchers had no access to the training data or the internal workings of the models, which makes it hard to pinpoint the cause of the inconsistencies and biases, and equally hard to improve the systems or deploy them responsibly.
Broader implications
While large language models are not currently used in real home security systems, they are already used for consequential decisions in other sectors, such as health care, mortgage lending and staff recruitment. The researchers suspect that similar inconsistencies may occur in these applications as well.
Call for caution
The study underlines the need for caution when implementing AI systems, especially in situations where the consequences can be significant. “The rapid introduction of generative AI models, especially in sensitive situations, deserves much more attention because it can be quite harmful,” warns Ashia Wilson, one of the lead researchers.
Future research
The scientists want to continue their research by examining how the normative judgments of AI systems compare with those of humans. They also want to investigate how well the models grasp the facts of complex scenarios. In addition, they are working on a system that makes it easier for people to report AI biases and potential harms to companies and government agencies.
This study highlights the importance of critical research into AI systems before they are widely deployed. Only by understanding the limitations and risks can we use AI responsibly and fairly in our society.