
machine learning | Breaking Cybersecurity News | The Hacker News

Category — machine learning
Researchers Identify Over 20 Supply Chain Vulnerabilities in MLOps Platforms

Aug 26, 2024 ML Security / Artificial Intelligence
Cybersecurity researchers are warning about security risks in the machine learning (ML) software supply chain following the discovery of more than 20 vulnerabilities that could be exploited to target MLOps platforms. These vulnerabilities, described as inherent- and implementation-based flaws, could have severe consequences, ranging from arbitrary code execution to loading malicious datasets. MLOps platforms offer the ability to design and execute an ML model pipeline, with a model registry acting as a repository used to store and version trained ML models. These models can then be embedded within an application or queried by other clients using an API (aka model-as-a-service). "Inherent vulnerabilities are vulnerabilities that are caused by the underlying formats and processes used in the target technology," JFrog researchers said in a detailed report. Some examples of inherent vulnerabilities include abusing ML models to run code of the attacker's choosing…
The AI Hangover is Here – The End of the Beginning

Aug 12, 2024 AI Technology / Machine Learning
After a good year of sustained exuberance, the hangover is finally here. It's a gentle one (for now), as the market corrects the share price of the major players (like Nvidia, Microsoft, and Google), while other players reassess the market and adjust priorities. Gartner calls it the trough of disillusionment, when interest wanes and implementations fail to deliver the promised breakthroughs. Producers of the technology shake out or fail. Investment continues only if the surviving providers improve their products to the satisfaction of early adopters. Let's be clear: this was always going to be the case. The post-human revolution promised by the AI cheerleaders was never a realistic goal, and the incredible excitement triggered by the early LLMs was not based on market success. AI is here to stay. What's next for AI, then? Well, if it follows the Gartner hype cycle, the deep crash is followed by the slope of enlightenment, where the maturing technology regains its footing and benefits…
CTEM in the Spotlight: How Gartner's New Categories Help to Manage Exposures

Aug 27, 2024 Threat Management / Enterprise Security
Want to know the latest and greatest in SecOps for 2024? Gartner's recently released Hype Cycle for Security Operations report takes important steps to organize and mature the domain of Continuous Threat Exposure Management, aka CTEM. Three categories within this domain are included in this year's report: Threat Exposure Management, Exposure Assessment Platforms (EAP), and Adversarial Exposure Validation (AEV). These category definitions are aimed at providing some structure to the evolving landscape of exposure management technologies. Pentera, listed as a sample vendor in the newly defined AEV category, is playing a pivotal role in increasing the adoption of CTEM, with a focus on security validation. Following is our take on the CTEM-related product categories and what they mean for enterprise security leaders. The industry is maturing: CTEM, coined by Gartner in 2022, presents a structural approach for continuously assessing, prioritizing, validating, and remediating exposures…
Safeguard Personal and Corporate Identities with Identity Intelligence

Jul 19, 2024 Machine Learning / Corporate Security
Learn about critical threats that can impact your organization and the bad actors behind them from Cybersixgill's threat experts. Each story shines a light on underground activities, the threat actors involved, and why you should care, along with what you can do to mitigate risk. In the current cyber threat landscape, the protection of personal and corporate identities has become vital. Once in the hands of cybercriminals, compromised credentials and accounts provide unauthorized access to corporations' sensitive information and an entry point to launch costly ransomware and other malware attacks. To properly mitigate threats stemming from compromised credentials and accounts, organizations need identity intelligence. Understanding the significance of identity intelligence and the benefits it delivers is foundational to maintaining a secure posture and minimizing risk. There is a perception that security teams and threat analysts are already overloaded by too much data…

SaaS Attacks Report: 2024 Edition

Push Security – SaaS Security / Offensive Security
Offensive security drives defensive security. Learn about the SaaS Attack Matrix – compiling the latest attack techniques facing SaaS-native and hybrid organizations.
Summary of "AI Leaders Spill Their Secrets" Webinar

Jul 19, 2024 Technology / Artificial Intelligence
Event Overview: The "AI Leaders Spill Their Secrets" webinar, hosted by Sigma Computing, featured prominent AI experts sharing their experiences and strategies for success in the AI industry. The panel included Michael Ward from Sardine, Damon Bryan from Hyperfinity, and Stephen Hillion from Astronomer, moderated by Zalak Trivedi, Sigma Computing's Product Manager.

Key Speakers and Their Backgrounds:
1. Michael Ward – Senior Risk Data Analyst at Sardine. Over 25 years of experience in software engineering. Focuses on data science, analytics, and machine learning to prevent fraud and money laundering.
2. Damon Bryan – Co-founder and CTO at Hyperfinity. Specializes in decision intelligence software for retailers and brands. Background in data science, AI, and analytics, transitioning from consultancy to a full-fledged software company.
3. Stephen Hillion – SVP of Data and AI at Astronomer. Manages data science teams and focuses on the development and scaling of…
The Emerging Role of AI in Open-Source Intelligence

Jul 03, 2024 OSINT / Artificial Intelligence
Recently, the Office of the Director of National Intelligence (ODNI) unveiled a new strategy for open-source intelligence (OSINT) and referred to OSINT as the "INT of first resort." Public and private sector organizations are realizing the value the discipline can provide, but are also finding that the exponential growth of digital data in recent years has overwhelmed many traditional OSINT methods. Thankfully, Artificial Intelligence (AI) and Machine Learning (ML) are starting to have a transformative impact on the future of information gathering and analysis. What is Open-Source Intelligence (OSINT)? Open-Source Intelligence refers to the collection and analysis of information from publicly available sources. These sources can include traditional media, social media platforms, academic publications, government reports, and any other data that is openly accessible. The key characteristic of OSINT is that it does not involve covert or clandestine methods of information gathering…
Google Introduces Project Naptime for AI-Powered Vulnerability Research

Jun 24, 2024 Vulnerability / Artificial Intelligence
Google has developed a new framework called Project Naptime that it says enables a large language model (LLM) to carry out vulnerability research, with the aim of improving automated discovery approaches. "The Naptime architecture is centered around the interaction between an AI agent and a target codebase," Google Project Zero researchers Sergei Glazunov and Mark Brand said. "The agent is provided with a set of specialized tools designed to mimic the workflow of a human security researcher." The initiative is so named because it allows humans to "take regular naps" while it assists with vulnerability research and automates variant analysis. The approach, at its core, seeks to take advantage of advances in code comprehension and the general reasoning ability of LLMs, allowing them to replicate human behavior when it comes to identifying and demonstrating security vulnerabilities. It encompasses several components, such as a Code Browser tool…
Critical RCE Vulnerability Discovered in Ollama AI Infrastructure Tool

Jun 24, 2024 Artificial Intelligence / Cloud Security
Cybersecurity researchers have detailed a now-patched security flaw affecting the Ollama open-source artificial intelligence (AI) infrastructure platform that could be exploited to achieve remote code execution. Tracked as CVE-2024-37032, the vulnerability has been codenamed Probllama by cloud security firm Wiz. Following responsible disclosure on May 5, 2024, the issue was addressed in version 0.1.34, released on May 7, 2024. Ollama is a service for packaging, deploying, and running large language models (LLMs) locally on Windows, Linux, and macOS devices. At its core, the issue relates to a case of insufficient input validation that results in a path traversal flaw an attacker could exploit to overwrite arbitrary files on the server and ultimately achieve remote code execution. The shortcoming requires the threat actor to send specially crafted HTTP requests to the Ollama API server for successful exploitation. It specifically takes advantage of the API endpoint "/api/pull"…
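The underlying flaw class (a client-supplied name joined to a server-side path without validation) can be sketched in a few lines. This is a generic illustration with hypothetical names and paths, not Ollama's actual code:

```python
import os

def safe_join(base_dir: str, name: str) -> str:
    """Join a client-supplied name to a base directory, rejecting
    path traversal payloads such as '../../etc/passwd'."""
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, name))
    # After resolving symlinks and '..' segments, the result must
    # still live inside the base directory.
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"path traversal attempt: {name!r}")
    return candidate

print(safe_join("/var/lib/models", "sha256-abcdef"))  # resolves normally
try:
    safe_join("/var/lib/models", "../../etc/passwd")  # rejected
except ValueError as exc:
    print("blocked:", exc)
```

Servers that skip this kind of check let a crafted request write outside the intended directory, which is the general pattern the Probllama report describes.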
New Attack Technique 'Sleepy Pickle' Targets Machine Learning Models

Jun 13, 2024 Vulnerability / Software Security
The security risks posed by the Pickle format have once again come to the fore with the discovery of a new "hybrid machine learning (ML) model exploitation technique" dubbed Sleepy Pickle. The attack method, per Trail of Bits, weaponizes the ubiquitous format used to package and distribute machine learning (ML) models to corrupt the model itself, posing a severe supply chain risk to an organization's downstream customers. "Sleepy Pickle is a stealthy and novel attack technique that targets the ML model itself rather than the underlying system," security researcher Boyan Milanov said. While pickle is a serialization format widely used by ML libraries like PyTorch, it can be used to carry out arbitrary code execution attacks simply by loading a pickle file (i.e., during deserialization). "We suggest loading models from users and organizations you trust, relying on signed commits, and/or loading models from [TensorFlow] or Jax formats with the from_…
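The deserialization risk described above is inherent to pickle's design: loading a stream can invoke arbitrary callables via the `__reduce__` protocol. A minimal, benign illustration of that mechanism (not the Sleepy Pickle payload-injection technique itself):

```python
import pickle

class SleepyDemo:
    """A class whose mere *deserialization* runs code: pickle calls
    the callable returned by __reduce__ when loading the stream."""
    def __reduce__(self):
        # Benign stand-in; a real payload could open a shell or
        # silently patch model weights instead.
        return (print, ("code ran inside pickle.loads()",))

blob = pickle.dumps(SleepyDemo())
pickle.loads(blob)  # loading alone triggers the embedded call
```

Because loading an untrusted file is enough to execute the embedded call, the researchers' advice to rely on trusted sources, signed commits, or safer serialization formats follows directly.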
AI Company Hugging Face Detects Unauthorized Access to Its Spaces Platform

Jun 01, 2024 AI-as-a-Service / Data Breach
Artificial Intelligence (AI) company Hugging Face on Friday disclosed that it detected unauthorized access to its Spaces platform earlier this week. "We have suspicions that a subset of Spaces' secrets could have been accessed without authorization," it said in an advisory. Spaces offers a way for users to create, host, and share AI and machine learning (ML) applications. It also functions as a discovery service to look up AI apps made by other users on the platform. In response to the security event, Hugging Face said it is revoking a number of HF tokens present in those secrets and notifying users whose tokens were revoked via email. "We recommend you refresh any key or token and consider switching your HF tokens to fine-grained access tokens, which are the new default," it added. Hugging Face, however, did not disclose how many users are impacted by the incident, which is currently under further investigation…
How to Build Your Autonomous SOC Strategy

May 30, 2024 Endpoint Security / Threat Detection
Security leaders are in a tricky position trying to discern how much new AI-driven cybersecurity tools could actually benefit a security operations center (SOC). The hype about generative AI is still everywhere, but security teams have to live in reality. They face a constant stream of alerts from endpoint security platforms, SIEM tools, and phishing emails reported by internal users. Security teams also face an acute talent shortage. In this guide, we'll lay out practical steps organizations can take to automate more of their processes and build an autonomous SOC strategy. This should help address the acute talent shortage: by employing artificial intelligence and machine learning with a variety of techniques, these systems simulate the decision-making and investigative processes of human analysts. First, we'll define objectives for an autonomous SOC strategy and then consider key processes that could be automated. Next, we'll consider different AI and automation…
Experts Find Flaw in Replicate AI Service Exposing Customers' Models and Data

May 25, 2024 Machine Learning / Data Breach
Cybersecurity researchers have discovered a critical security flaw in the artificial intelligence (AI)-as-a-service provider Replicate that could have allowed threat actors to gain access to proprietary AI models and sensitive information. "Exploitation of this vulnerability would have allowed unauthorized access to the AI prompts and results of all Replicate's platform customers," cloud security firm Wiz said in a report published this week. The issue stems from the fact that AI models are typically packaged in formats that allow arbitrary code execution, which an attacker could weaponize to perform cross-tenant attacks by means of a malicious model. Replicate makes use of an open-source tool called Cog to containerize and package machine learning models that can then be deployed either in a self-hosted environment or to Replicate. Wiz said that it created a rogue Cog container and uploaded it to Replicate, ultimately employing it to achieve remote code execution…
(Cyber) Risk = Probability of Occurrence x Damage

May 15, 2024 Threat Detection / Cybersecurity
Here's how to enhance your cyber resilience with CVSS. In late 2023, the Common Vulnerability Scoring System (CVSS) v4.0 was unveiled, succeeding the eight-year-old CVSS v3.0, with the aim of enhancing vulnerability assessment for both industry and the public. This latest version introduces additional metrics, like safety and automation, to address criticism of lacking granularity, while presenting a revised scoring system for a more comprehensive evaluation. It further emphasizes the importance of considering environmental and threat metrics alongside the base score to assess vulnerabilities accurately. Why does it matter? The primary purpose of the CVSS is to evaluate the risk associated with a vulnerability. Some vulnerabilities, particularly those found in network products, present a clear and significant risk, as unauthenticated attackers can easily exploit them to gain remote control over affected systems. These vulnerabilities have frequently been exploited over the years…
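The headline formula, risk as probability of occurrence times damage, can be made concrete with a toy calculation (illustrative figures only, not CVSS methodology):

```python
def cyber_risk(probability: float, damage: float) -> float:
    """Risk = probability of occurrence x expected damage."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    return probability * damage

# A flaw with an estimated 30% chance of exploitation and a
# $200,000 impact carries an expected annual loss of $60,000.
print(f"${cyber_risk(0.30, 200_000):,.0f}")  # prints $60,000
```

Scores like CVSS feed the probability side of this estimate; the damage side comes from the organization's own environmental context, which is exactly why the article stresses environmental and threat metrics alongside the base score.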
6 Mistakes Organizations Make When Deploying Advanced Authentication

May 14, 2024 Cyber Threat / Machine Learning
Deploying advanced authentication measures is key to helping organizations address their weakest cybersecurity link: their human users. Having some form of two-factor authentication in place is a great start, but many organizations may not yet be in that spot, or may lack the level of authentication sophistication needed to adequately safeguard organizational data. When deploying advanced authentication measures, organizations can make mistakes, and it is crucial to be aware of these potential pitfalls. 1. Failing to conduct a risk assessment. A comprehensive risk assessment is a vital first step in any authentication implementation. An organization leaves itself open to risk if it fails to assess current threats and vulnerabilities, its systems and processes, or the level of protection required for different applications and data. Not all applications demand the same level of security. For example, an application that handles sensitive customer information or financials may require stronger…