Category — artificial intelligence
Google Introduces Project Naptime for AI-Powered Vulnerability Research

Jun 24, 2024 Vulnerability / Artificial Intelligence
Google has developed a new framework called Project Naptime that it says enables a large language model (LLM) to carry out vulnerability research, with the aim of improving automated discovery approaches. "The Naptime architecture is centered around the interaction between an AI agent and a target codebase," Google Project Zero researchers Sergei Glazunov and Mark Brand said. "The agent is provided with a set of specialized tools designed to mimic the workflow of a human security researcher." The initiative is so named because it allows humans to "take regular naps" while it assists with vulnerability research and automates variant analysis. At its core, the approach seeks to take advantage of LLMs' advances in code comprehension and general reasoning, allowing them to replicate human behavior when it comes to identifying and demonstrating security vulnerabilities. It encompasses several components such as a Code Browser tool
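
The excerpt describes the architecture only at a high level. Purely as a rough illustration of what a tool-driven analysis loop of this kind can look like, here is a minimal sketch; the ToolCall/run_agent names and the "code_browser"/"report" tool labels are hypothetical stand-ins, not Google's actual interfaces.

```python
# Hypothetical sketch of a tool-using analysis loop; not Project Naptime's code.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ToolCall:
    name: str       # which tool the model asked for, e.g. "code_browser"
    argument: str   # free-form argument, e.g. a function name or a test input

def run_agent(next_action: Callable[[str], ToolCall],
              tools: Dict[str, Callable[[str], str]],
              max_steps: int = 20) -> str:
    """Feed each tool's output back to the model until it reports a finding."""
    transcript = "Task: look for memory-safety issues in the target codebase."
    for _ in range(max_steps):
        action = next_action(transcript)
        if action.name == "report":        # the model believes it has demonstrated a bug
            return action.argument
        tool = tools.get(action.name)
        observation = tool(action.argument) if tool else f"unknown tool: {action.name}"
        transcript += f"\n[{action.name}({action.argument})] -> {observation}"
    return "no finding within the step budget"
```

The key idea this mirrors is that the model, not a fixed script, decides which tool to invoke next, and each tool observation is appended to the running transcript the model reasons over.
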
Critical RCE Vulnerability Discovered in Ollama AI Infrastructure Tool

Jun 24, 2024 Artificial Intelligence / Cloud Security
Cybersecurity researchers have detailed a now-patched security flaw affecting the Ollama open-source artificial intelligence (AI) infrastructure platform that could be exploited to achieve remote code execution. Tracked as CVE-2024-37032, the vulnerability has been codenamed Probllama by cloud security firm Wiz. Following responsible disclosure on May 5, 2024, the issue was addressed in version 0.1.34, released on May 7, 2024. Ollama is a service for packaging, deploying, and running large language models (LLMs) locally on Windows, Linux, and macOS devices. At its core, the issue stems from insufficient input validation that results in a path traversal flaw, which an attacker could exploit to overwrite arbitrary files on the server and ultimately achieve remote code execution. Successful exploitation requires the threat actor to send specially crafted HTTP requests to the Ollama API server. It specifically takes advantage of the API endpoint "/api/pull"
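
Wiz's write-up has the specifics; the sketch below illustrates only the general bug class, a client-supplied name joined onto a storage path without validation, and one common way to confine writes to a base directory. The directory, function names, and request field are hypothetical and do not reflect Ollama's actual code.

```python
# Illustrative sketch of the path traversal bug class, not CVE-2024-37032's actual code.
import os

MODELS_DIR = os.path.realpath("/var/lib/models")   # hypothetical storage root

def unsafe_store(name_from_request: str, blob: bytes) -> None:
    # If the server trusts a client-supplied name such as "../../etc/ld.so.preload",
    # os.path.join walks right out of MODELS_DIR and the write clobbers an
    # arbitrary file on the host.
    path = os.path.join(MODELS_DIR, name_from_request)
    with open(path, "wb") as f:
        f.write(blob)

def safe_store(name_from_request: str, blob: bytes) -> None:
    # Resolve the final path and refuse anything that escapes the base directory.
    path = os.path.realpath(os.path.join(MODELS_DIR, name_from_request))
    if os.path.commonpath([path, MODELS_DIR]) != MODELS_DIR:
        raise ValueError("path traversal attempt rejected")
    with open(path, "wb") as f:
        f.write(blob)
```

The contrast is the whole point: canonicalizing with realpath and checking the result stays under the intended root closes the "../" escape the unsafe version permits.
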
CTEM in the Spotlight: How Gartner's New Categories Help to Manage Exposures

Aug 27, 2024 Threat Management / Enterprise Security
Want to know the latest and greatest in SecOps for 2024? Gartner's recently released Hype Cycle for Security Operations report takes important steps to organize and mature the domain of Continuous Threat Exposure Management, aka CTEM. Three categories within this domain are included in this year's report: Threat Exposure Management, Exposure Assessment Platforms (EAP), and Adversarial Exposure Validation (AEV). These category definitions are aimed at providing some structure to the evolving landscape of exposure management technologies. Pentera, listed as a sample vendor in the newly defined AEV category, is playing a pivotal role in increasing the adoption of CTEM, with a focus on security validation. Following is our take on the CTEM-related product categories and what they mean for enterprise security leaders. The Industry is Maturing: CTEM, coined by Gartner in 2022, presents a structured approach for continuously assessing, prioritizing, validating, and remediating exposures
Meta Pauses AI Training on EU User Data Amid Privacy Concerns

Jun 15, 2024 Artificial Intelligence / Privacy
Meta on Friday said it's delaying its efforts to train the company's large language models (LLMs) using public content shared by adult users on Facebook and Instagram in the European Union, following a request from the Irish Data Protection Commission (DPC). The company expressed disappointment at having to put its AI plans on pause, stating it had taken into account feedback from regulators and data protection authorities in the region. At issue is Meta's plan to use personal data to train its artificial intelligence (AI) models without seeking users' explicit consent, instead relying on the legal basis of 'Legitimate Interests' for processing first- and third-party data in the region. These changes were expected to come into effect on June 26, before which the company said users could opt out of having their data used by submitting a request "if they wish." Meta is already utilizing user-generated content to train its AI in other markets such
Google's Privacy Sandbox Accused of User Tracking by Austrian Non-Profit

Jun 14, 2024 Privacy / Ad Tracking
Google's plans to deprecate third-party tracking cookies in its Chrome web browser with Privacy Sandbox have run into fresh trouble after Austrian privacy non-profit noyb (none of your business) said the feature can still be used to track users. "While the so-called 'Privacy Sandbox' is advertised as an improvement over extremely invasive third-party tracking, the tracking is now simply done within the browser by Google itself," noyb said. "To do this, the company theoretically needs the same informed consent from users. Instead, Google is tricking people by pretending to 'Turn on an ad privacy feature.'" In other words, by making users agree to enable a privacy feature, they are still being tracked by consenting to Google's first-party ad tracking, the Vienna-based non-profit founded by activist Max Schrems alleged in a complaint filed with the Austrian data protection authority. Privacy Sandbox is a set of proposals put forth by the i
Microsoft Delays AI-Powered Recall Feature for Copilot+ PCs Amid Security Concerns

Jun 14, 2024 Artificial Intelligence / Data Protection
Microsoft on Thursday revealed that it's delaying the rollout of the controversial artificial intelligence (AI)-powered Recall feature for Copilot+ PCs. To that end, the company said it intends to shift from general availability to a preview available first in the Windows Insider Program (WIP) in the coming weeks. "We are adjusting the release model for Recall to leverage the expertise of the Windows Insider community to ensure the experience meets our high standards for quality and security," it said in an update. "This decision is rooted in our commitment to providing a trusted, secure and robust experience for all customers and to seek additional feedback prior to making the feature available to all Copilot+ PC users." First unveiled last month, Recall was originally slated for a broad release on June 18, 2024, but has since waded into controversial waters after it was widely panned as a privacy and security risk and an alluring target for threat actors
Apple Launches Private Cloud Compute for Privacy-Centric AI Processing

Jun 11, 2024 Cloud Computing / Artificial Intelligence
Apple has announced the launch of a "groundbreaking cloud intelligence system" called Private Cloud Compute (PCC) that's designed for processing artificial intelligence (AI) tasks in a privacy-preserving manner in the cloud. The tech giant described PCC as the "most advanced security architecture ever deployed for cloud AI compute at scale." PCC coincides with the arrival of new generative AI (GenAI) features – collectively dubbed Apple Intelligence, or AI for short – that the iPhone maker unveiled in its next generation of software, including iOS 18, iPadOS 18, and macOS Sequoia. All of the Apple Intelligence features, both the ones that run on-device and those that rely on PCC, leverage in-house generative models trained on "licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot." With PCC, the idea is to essentially offload complex requests that requir
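
Apple's own documentation is the authoritative description of PCC; the fragment below is only a hypothetical sketch of the on-device versus cloud split the excerpt describes, with made-up function names and a crude size heuristic standing in for the real capability check.

```python
# Hypothetical illustration of the on-device vs. cloud routing idea; none of
# these names come from Apple's actual APIs.

def estimate_complexity(prompt: str) -> int:
    # Crude stand-in for a real capability check: word count as a proxy.
    return len(prompt.split())

def run_on_device_model(prompt: str) -> str:
    return f"[on-device] answered: {prompt[:40]}"

def send_to_private_cloud_compute(prompt: str) -> str:
    # Apple says requests sent to PCC are processed transiently and the user
    # data is not retained after the response is returned.
    return f"[PCC] answered: {prompt[:40]}"

def handle_request(prompt: str, on_device_budget: int = 512) -> str:
    if estimate_complexity(prompt) <= on_device_budget:
        return run_on_device_model(prompt)        # simple requests never leave the device
    return send_to_private_cloud_compute(prompt)  # heavier requests are offloaded
```
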
Microsoft Revamps Controversial AI-Powered Recall Feature Amid Privacy Concerns

Jun 08, 2024 Artificial Intelligence / Privacy
Microsoft on Friday said it will disable its much-criticized artificial intelligence (AI)-powered Recall feature by default and make it opt-in. Recall, currently in preview and coming exclusively to Copilot+ PCs on June 18, 2024, functions as an "explorable visual timeline" by capturing screenshots of what appears on users' screens every five seconds, which are subsequently analyzed and parsed to surface relevant information. But the feature, meant to serve as a sort of AI-enabled photographic memory, was met with instantaneous backlash from the security and privacy community, which excoriated the company for not thinking it through sufficiently and for failing to implement adequate safeguards that could prevent malicious actors from easily gaining a window into a victim's digital life. The recorded information could include screenshots of documents, emails, or messages containing sensitive details that may have been deleted or shared temporarily using disappearing messages
The AI Debate: Google's Guidelines, Meta's GDPR Dispute, Microsoft's Recall Backlash

Jun 07, 2024 Artificial Intelligence / Privacy
Google is urging third-party Android app developers to incorporate generative artificial intelligence (GenAI) features in a responsible manner. The new guidance from the search and advertising giant is an effort to combat problematic content, including sexual content and hate speech, created through such tools. To that end, apps that generate content using AI must ensure they don't create Restricted Content, have a mechanism for users to report or flag offensive information, and be marketed in a manner that accurately represents the app's capabilities. App developers are also advised to rigorously test their AI models to ensure they respect user safety and privacy. "Be sure to test your apps across various user scenarios and safeguard them against prompts that could manipulate your generative AI feature to create harmful or offensive content," Prabhat Sharma, director of trust and safety for Google Play, Android, and Chrome, said. The development com
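
The guidance is policy rather than code, but the guardrails it asks for map onto a few concrete hooks: screen prompts before generation, screen outputs before display, and give users a way to flag results. The sketch below is a hypothetical illustration of that shape; the keyword "classifier", blocklist, and report queue are stand-ins, not a Google Play API.

```python
# Hypothetical guardrail shape: pre-generation check, post-generation check,
# and a user-facing report hook. A real app would use a proper safety
# classifier/model instead of this keyword placeholder.

BLOCKED_TOPICS = {"hate speech", "sexual content"}

def classify(text: str) -> set:
    # Placeholder: return the set of policy violations detected in the text.
    return {topic for topic in BLOCKED_TOPICS if topic in text.lower()}

def generate_safely(prompt: str, generate) -> str:
    if classify(prompt):
        return "Request declined: it conflicts with the app's content policy."
    output = generate(prompt)                     # call the underlying GenAI model
    if classify(output):
        return "The generated result was withheld by the safety filter."
    return output

flag_queue = []

def report_content(user_id: str, content_id: str, reason: str) -> None:
    # Minimal "report or flag" mechanism: queue the complaint for human review.
    flag_queue.append({"user": user_id, "content": content_id, "reason": reason})
```
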
Unpacking 2024's SaaS Threat Predictions

Jun 05, 2024 SaaS Security / Artificial Intelligence
Early in 2024, Wing Security released its State of SaaS Security report, offering surprising insights into emerging threats and best practices in the SaaS domain. Now, halfway through the year, several SaaS threat predictions from the report have already proven accurate. Fortunately, SaaS Security Posture Management (SSPM) solutions have prioritized mitigation capabilities to address many of these issues, ensuring security teams have the necessary tools to face these challenges head-on. In this article, we will revisit our predictions from earlier in the year, showcase real-world examples of these threats in action, and offer practical tips and best practices to help you prevent such incidents in the future. It's also worth noting the overall trend of an increasing frequency of breaches in today's dynamic SaaS landscape, leading organizations to demand timely threat alerts as a vital capability. Industry regulations with upcoming compliance deadlines are demanding similar time-sens
AI Company Hugging Face Detects Unauthorized Access to Its Spaces Platform

Jun 01, 2024 AI-as-a-Service / Data Breach
Artificial Intelligence (AI) company Hugging Face on Friday disclosed that it detected unauthorized access to its Spaces platform earlier this week. "We have suspicions that a subset of Spaces' secrets could have been accessed without authorization," it said in an advisory. Spaces offers a way for users to create, host, and share AI and machine learning (ML) applications. It also functions as a discovery service to look up AI apps made by other users on the platform. In response to the security event, Hugging Face said it is taking the step of revoking a number of HF tokens present in those secrets and that it's notifying users who had their tokens revoked via email. "We recommend you refresh any key or token and consider switching your HF tokens to fine-grained access tokens which are the new default," it added. Hugging Face, however, did not disclose how many users are impacted by the incident, which is currently under further investigation. It has
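
For affected users, the recovery steps amount to revoking the exposed token in the Hugging Face settings UI, issuing a new fine-grained token, and confirming the replacement works before redeploying. Below is a minimal sketch of that last verification step, assuming the new token is exported as the HF_TOKEN environment variable (the huggingface_hub convention); the function name is ours.

```python
import os
from huggingface_hub import HfApi

def verify_rotated_token() -> None:
    # Load the freshly issued fine-grained token from the environment rather
    # than hard-coding it; the old, revoked token should no longer be present.
    token = os.environ["HF_TOKEN"]
    api = HfApi(token=token)
    identity = api.whoami()   # fails with an authentication error if the token is invalid or revoked
    print(f"Token accepted for account: {identity['name']}")

if __name__ == "__main__":
    verify_rotated_token()
```
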
New Tricks in the Phishing Playbook: Cloudflare Workers, HTML Smuggling, GenAI

May 27, 2024 Phishing Attack / Artificial Intelligence
Cybersecurity researchers are warning of phishing campaigns that abuse Cloudflare Workers to serve phishing sites that are used to harvest users' credentials associated with Microsoft, Gmail, Yahoo!, and cPanel Webmail. The attack method, called transparent phishing or adversary-in-the-middle (AitM) phishing, "uses Cloudflare Workers to act as a reverse proxy server for a legitimate login page, intercepting traffic between the victim and the login page to capture credentials, cookies, and tokens," Netskope researcher Jan Michael Alcantara said in a report. A majority of phishing campaigns hosted on Cloudflare Workers over the past 30 days have targeted victims in Asia, North America, and Southern Europe, spanning the technology, financial services, and banking sectors. The cybersecurity firm said that an increase in traffic to Cloudflare Workers-hosted phishing pages was first registered in Q2 2023, noting it observed a spike in the total number of distinct domains
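
One way defenders reason about this pattern: because the victim's browser is talking to the phishing host, the Origin/Referer it sends on the credential POST will not match the real identity provider unless the proxy rewrites it. The snippet below is a rough, framework-agnostic sketch of that server-side check; the origin value and helper name are hypothetical, and the robust answer remains phishing-resistant authentication such as passkeys/FIDO2, which bind credentials to the genuine origin.

```python
# Hedged sketch: flag login POSTs whose browser-reported origin does not match
# the real identity provider. A capable AitM proxy can rewrite these headers,
# so treat this as a signal, not a complete defense.

EXPECTED_ORIGINS = {"https://login.example-idp.com"}   # hypothetical IdP origin

def looks_proxied(headers: dict) -> bool:
    origin = headers.get("Origin") or headers.get("Referer", "")
    return not any(origin.startswith(expected) for expected in EXPECTED_ORIGINS)

# Example: step up to stronger verification, or block and alert, instead of
# accepting the password when looks_proxied(request_headers) is True.
print(looks_proxied({"Origin": "https://login.example-idp.com"}))  # False
print(looks_proxied({"Origin": "https://lure.workers.dev"}))       # True
```
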
Five Core Tenets Of Highly Effective DevSecOps Practices

May 21, 2024 DevSecOps / Artificial Intelligence
One of the enduring challenges of building modern applications is making them more secure without disrupting high-velocity DevOps processes or degrading the developer experience. Today's cyber threat landscape is rife with sophisticated attacks aimed at all parts of the software supply chain, and the urgency for software-producing organizations to adopt DevSecOps practices that deeply integrate security throughout the software development life cycle has never been greater. However, how organizations go about it is of critical importance. For example, locking down the development platform, instituting exhaustive code reviews, and enforcing heavyweight approval processes may improve the security posture of pipelines and code, but don't count on application teams operating fluidly enough to innovate. The same goes for application security testing; uncovering a mountain of vulnerabilities does little good if developers have inadequate time or guidance to fix them. At a high