Researchers Find Over 175,000 AI Servers Publicly Exposed Worldwide
A new joint investigation by cybersecurity researchers from SentinelOne’s SentinelLABS and Censys has uncovered a staggering number of publicly accessible artificial intelligence servers — raising serious security concerns for organisations and developers deploying AI systems.
According to the analysis, researchers discovered more than 175,000 unique Ollama AI servers exposed directly to the internet across 130 countries, many of them lacking basic security controls or visibility safeguards. SentinelLABS notes that these open deployments represent an “unmanaged, publicly accessible layer of AI compute infrastructure” which, if left unprotected, could be exploited by attackers, much as exposed compute services have previously been hijacked for cryptocurrency mining.
How AI Deployments Became Exposed
The exposed servers belong to installations of the Ollama framework — an open-source platform that organisations use to host and run large language models (LLMs) on local or cloud infrastructure. Unlike commercial AI services that are protected by built-in authentication and monitoring, these self-hosted instances often run without the guardrails typically enforced by major cloud providers.
Researchers found that these services exist in a variety of environments, spanning both corporate cloud deployments and smaller, residential networks. Because many are connected to the internet without firewalls, access controls, or login restrictions, they can be discovered and interacted with by anyone scanning the public address space.
Countries with the Biggest Footprints
China accounted for the largest share of exposed AI hosts, just over 30% of the total. Other countries with large numbers of accessible instances included the United States, Germany, France, South Korea, India, Russia, Singapore, Brazil, and the United Kingdom.
Risk Implications
Alarmingly, researchers observed that nearly half of these public servers were configured with “tool calling” capabilities — meaning they are not only serving AI responses but can also execute code, access APIs, or interact with external systems. This configuration dramatically increases the risk of misuse if hostile actors gain access.
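Ollama’s chat API accepts a `tools` field describing functions the model may ask the caller to invoke; on a publicly reachable server, anyone on the internet can submit such requests. A hedged sketch of what a tool-enabled request body looks like (the weather function and model name are made-up examples, not details from the report):

```python
import json


def build_tool_request(model: str, prompt: str) -> dict:
    """Assemble an Ollama /api/chat request body with one example tool attached."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_current_weather",  # illustrative tool only
                    "description": "Get the current weather for a location",
                    "parameters": {
                        "type": "object",
                        "properties": {"location": {"type": "string"}},
                        "required": ["location"],
                    },
                },
            }
        ],
    }


body = build_tool_request("llama3.1", "What is the weather in Berlin?")
print(json.dumps(body, indent=2))
```

The danger is not the weather lookup itself but the pattern: whatever tools the operator has wired up behind the model become remotely drivable the moment the API is public.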
As AI systems become more integrated into critical workflows and automation tasks, exposed infrastructure of this scale could serve as a fertile ground for attackers seeking to misuse resources, launch attacks, or pivot into larger networks.
What Organisations Should Do
While the investigation highlights a significant security blind spot, it also serves as a reminder that self-hosted AI deployments require the same level of protection as any other internet-connected service. Security experts recommend:
- Implementing strict access controls such as firewalls and authentication for all AI services
- Regularly auditing network exposure to detect unintended public accessibility
- Monitoring for suspicious activity on self-hosted AI infrastructure
- Updating and patching AI frameworks to ensure known vulnerabilities are closed
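For Ollama specifically, the first two recommendations can be as simple as binding the service to loopback and firewalling its default port. A sketch assuming a Linux host running Ollama under systemd with the ufw firewall; adapt to your own environment:

```shell
# Bind Ollama to loopback only (default port 11434) via a systemd override:
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_HOST=127.0.0.1:11434"
sudo systemctl restart ollama

# Belt and braces: also block the port at the host firewall (assumes ufw)
sudo ufw deny 11434/tcp

# Verify nothing answers from outside (run from a different machine)
curl --max-time 3 http://<server-ip>:11434/api/tags || echo "not reachable (good)"
```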
By taking proactive security measures, organisations can reduce the risk that unprotected AI instances will become entry points for attackers or infrastructure abuse.
Read the full article: https://luckyy.uk/researchers-find-over-175000-ai-servers-publicly-exposed-worldwide/