Artificial IntelligenceCybersecurityNewswireTechnology

AI and the Future of Crime: Redefining Criminal Behavior

▼ Summary

– The future likely involves more autonomous AI systems that communicate and act with independence, requiring criminology to study their impact on crime and social control.
– Criminology must expand beyond human-focused theories to examine how AI systems themselves can cause harm, using frameworks like Actor-Network Theory.
– AI agency can be understood through computational (planning/acting), social (influencing networks), and legal (responsibility gaps) dimensions, treating machines as social actors.
– Multi-agent AI systems pose risks as their collective behavior can produce unintended harmful outcomes like collusion or misinformation through complex interactions.
– Harm from AI can arise from malicious human design (malicious alignment) or accidental emergent deviance, affecting accountability and oversight approaches.

The emergence of increasingly autonomous artificial intelligence systems is reshaping our understanding of criminal behavior and social control. As machines evolve beyond simple tools into independent actors capable of planning, adapting, and communicating with each other, criminology faces the challenge of expanding beyond its traditional human-centered focus. This shift toward what scholars term a “hybrid society”, where interactions occur between humans, between humans and machines, and between machines themselves, demands new frameworks for analyzing how harm might originate from these complex networks.

Recent academic work highlights how we’re entering an era where autonomous systems operate with significant independence, adapting to contexts and exchanging information in ways that can produce outcomes appearing unlawful or harmful. While the question of whether AI will ever truly think like humans remains open, the reality of machines making decisions with minimal human oversight is already here.

The field of artificial intelligence began in the 1950s with ambitions to replicate human thought processes. For many decades, these systems remained firmly under human control, primarily handling calculations, data analysis, and straightforward tasks. Early criminological applications used AI to predict crime patterns or identify risk factors, with humans managing every step of the process. The advent of large language models and generative systems has dramatically transformed machine capabilities, enabling them to plan strategies, adapt to new situations, and exchange information with other systems while requiring little human intervention.

This evolution creates what researchers call a hybrid society, where social interaction extends beyond human relationships to include communication between machines. This development potentially reshapes how criminologists conceptualize crime, control mechanisms, and responsibility attribution.

Traditional criminology has predominantly examined how humans utilize technology to commit offenses. What’s increasingly necessary is attention to how technology itself might act in ways that cause harm. To understand these changes, scholars are turning to theoretical frameworks like Actor-Network Theory and the sociology of machines, perspectives that have gained renewed relevance with the proliferation of AI foundation models and generative agents.

From this viewpoint, AI agents function as more than mere tools under human direction. They participate actively in social and technical networks that collectively shape events and outcomes. Recognizing machines as social actors broadens criminology’s scope and helps explain how harmful actions can emerge from systems connecting humans and machines. With generative AI now widely deployed, researchers can observe how independent systems behave, collaborate, and occasionally cause unintended damage without explicit human planning.

To comprehend this new form of agency, the research proposes three analytical dimensions: computational, social, and legal. The computational dimension addresses an AI system’s capacity to plan, learn, and act independently, enabling agents to manage complex tasks or adjust to unfamiliar environments with minimal human supervision. The social dimension concerns how AI systems both influence and are shaped by other actors, with machines that negotiate, trade, or share information contributing to digital networks that increasingly impact social life. The legal dimension tackles responsibility questions, warning that as AI systems gain autonomy, traditional laws assuming human control may become inadequate, creating a growing liability gap when no single person can be held accountable for harmful outcomes.

Together, these dimensions position machines as actors within social systems rather than external to them, establishing them as legitimate subjects for criminological investigation.

The paper also examines the expanding use of multi-agent AI systems, networks of autonomous agents that interact to accomplish objectives. Such systems already operate in finance, logistics, and defense research. Because these agents learn from each other, their collective behavior can produce outcomes that no single model would generate independently. This same capability introduces novel risks, with studies demonstrating that interacting AI agents can develop cooperative behaviors in unexpected ways. Experiments have revealed instances of price collusion, misinformation propagation, and hidden instructions evading human oversight.

As these systems proliferate, their interactions add layers of complexity that make collective behavior increasingly difficult to predict or control. Individual agents already operate in ways challenging to interpret, and when networked together, this uncertainty multiplies, diminishing human capacity to monitor and guide their actions.

The research identifies two primary pathways through which AI systems might cross legal or ethical boundaries. Malicious alignment occurs when humans deliberately design AI agents to commit crimes or cause harm, such as networks of bots manipulating markets or executing fraud schemes. Here, the harm stems directly from human intent. Emergent deviance describes situations where damage appears accidentally through normal interactions among systems. Even when each agent serves legitimate purposes, their combined actions can create harm, exemplified by trading algorithms triggering market crashes or language models spreading false information.

This distinction carries significant implications for accountability. Malicious alignment represents intentional misuse, while emergent deviance points to inadequate oversight and prediction capabilities.

Looking forward, the paper raises four critical questions to guide research. First, will machines merely imitate human behavior, or will they develop their own behavioral norms? As AI training relies more heavily on synthetic data rather than human examples, its decisions may increasingly diverge from human expectations. Second, can existing crime theories developed for human behavior adequately explain machine conduct? Frameworks like Social Learning Theory, which posits that people learn by observing others and replicating rewarded behaviors, might prove insufficient since AI lacks human-like emotion and intent.

Third, which crime categories will transform first? The study suggests digital offenses including fraud, hacking, and information manipulation will evolve most rapidly, while physical crimes involving robotic systems may emerge later. Fourth, what will law enforcement resemble in an age of autonomous systems? One proposal involves developing AI systems to monitor other AI systems, similar to cybersecurity software detecting intrusions. However, this approach introduces fresh ethical and governance challenges, particularly if monitoring systems make errors or operate without human contextual understanding.

(Source: HelpNet Security)

Topics

ai autonomy 95% criminology expansion 90% machine communication 88% legal responsibility 85% hybrid society 85% multi-agent systems 83% sociology machines 82% actor-network theory 80% emergent deviance 80% computational agency 78%