#1 Trusted Cybersecurity News Platform
Followed by 4.50+ million
The Hacker News Logo
Subscribe – Get Latest News
Cybersecurity

artificial intelligence | Breaking Cybersecurity News | The Hacker News

Category — artificial intelligence
Webinar: Learn to Boost Cybersecurity with AI-Powered Vulnerability Management

Webinar: Learn to Boost Cybersecurity with AI-Powered Vulnerability Management

Sep 02, 2024 Vulnerability Management / Webinar
The world of cybersecurity is in a constant state of flux. New vulnerabilities emerge daily, and attackers are becoming more sophisticated. In this high-stakes game, security leaders need every advantage they can get. That's where Artificial Intelligence (AI) comes in. AI isn't just a buzzword; it's a game-changer for vulnerability management. AI is poised to revolutionize vulnerability management in the coming years. It enables security teams to: Identify risks at scale: AI can analyze massive amounts of data to identify vulnerabilities that humans might miss. Prioritize threats: AI helps focus on the most critical vulnerabilities, ensuring resources are used effectively. Remediate faster: AI automates many tasks, allowing for quicker and more efficient remediation. AI isn't just about technology; it's about people. This webinar will delve into how security leaders can leverage AI to empower their teams and foster a culture of security. Learn how to tur
Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot

Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot

Aug 27, 2024 AI Security / Vulnerability
Details have emerged about a now-patched vulnerability in Microsoft 365 Copilot that could enable the theft of sensitive user information using a technique called ASCII smuggling. " ASCII Smuggling is a novel technique that uses special Unicode characters that mirror ASCII but are actually not visible in the user interface," security researcher Johann Rehberger said . "This means that an attacker can have the [large language model] render, to the user, invisible data, and embed them within clickable hyperlinks. This technique basically stages the data for exfiltration!" The entire attack strings together a number of attack methods to fashion them into a reliable exploit chain. This includes the following steps - Trigger prompt injection via malicious content concealed in a document shared on the chat to seize control of the chatbot Using a prompt injection payload to instruct Copilot to search for more emails and documents, a technique called automatic too
CTEM in the Spotlight: How Gartner's New Categories Help to Manage Exposures

CTEM in the Spotlight: How Gartner's New Categories Help to Manage Exposures

Aug 27, 2024Threat Management / Enterprise Security
Want to know what's the latest and greatest in SecOps for 2024? Gartner's recently released Hype Cycle for Security Operations report takes important steps to organize and mature the domain of Continuous Threat Exposure Management, aka CTEM. Three categories within this domain are included in this year's report: Threat Exposure Management, Exposure Assessment Platforms (EAP), and Adversarial Exposure Validation (AEV). These category definitions are aimed at providing some structure to the evolving landscape of exposure management technologies. Pentera, listed as a sample vendor in the newly defined AEV category, is playing a pivotal role in increasing the adoption of CTEM, with a focus on security validation. Following is our take on the CTEM related product categories and what they mean for enterprise security leaders. The Industry is Maturing CTEM, coined by Gartner in 2022, presents a structural approach for continuously assessing, prioritizing, validating, and remediating expo
Researchers Identify Over 20 Supply Chain Vulnerabilities in MLOps Platforms

Researchers Identify Over 20 Supply Chain Vulnerabilities in MLOps Platforms

Aug 26, 2024 ML Security / Artificial Intelligence
Cybersecurity researchers are warning about the security risks in the machine learning (ML) software supply chain following the discovery of more than 20 vulnerabilities that could be exploited to target MLOps platforms. These vulnerabilities, which are described as inherent- and implementation-based flaws, could have severe consequences, ranging from arbitrary code execution to loading malicious datasets. MLOps platforms offer the ability to design and execute an ML model pipeline, with a model registry acting as a repository used to store and version-trained ML models. These models can then be embedded within an application or allow other clients to query them using an API (aka model-as-a-service). "Inherent vulnerabilities are vulnerabilities that are caused by the underlying formats and processes used in the target technology," JFrog researchers said in a detailed report. Some examples of inherent vulnerabilities include abusing ML models to run code of the attacker
cyber security

Saas Attacks Report: 2024 Edition

websitePush SecuritySaaS Security / Offensive Security
Offensive security drives defensive security. Learn about the SaaS Attack Matrix – compiling the latest attack techniques facing SaaS-native and hybrid organizations.
New UULoader Malware Distributes Gh0st RAT and Mimikatz in East Asia

New UULoader Malware Distributes Gh0st RAT and Mimikatz in East Asia

Aug 19, 2024 Threat Intelligence / Cryptocurrency
A new type of malware called UULoader is being used by threat actors to deliver next-stage payloads like Gh0st RAT and Mimikatz . The Cyberint Research Team, which discovered the malware, said it's distributed in the form of malicious installers for legitimate applications targeting Korean and Chinese speakers. There is evidence pointing to UULoader being the work of a Chinese speaker due to the presence of Chinese strings in program database (PDB) files embedded within the DLL file. "UULoader's 'core' files are contained in a Microsoft Cabinet archive (.cab) file which contains two primary executables (an .exe and a .dll) which have had their file header stripped," the company said in a technical report shared with The Hacker News. One of the executables is a legitimate binary that's susceptible to DLL side-loading, which is used to sideload the DLL file that ultimately loads the final stage, an obfuscate file named "XamlHost.sys" that&#
OpenAI Blocks Iranian Influence Operation Using ChatGPT for U.S. Election Propaganda

OpenAI Blocks Iranian Influence Operation Using ChatGPT for U.S. Election Propaganda

Aug 17, 2024 National Securit / AI Ethics
OpenAI on Friday said it banned a set of accounts linked to what it said was an Iranian covert influence operation that leveraged ChatGPT to generate content that, among other things, focused on the upcoming U.S. presidential election. "This week we identified and took down a cluster of ChatGPT accounts that were generating content for a covert Iranian influence operation identified as Storm-2035," OpenAI said . "The operation used ChatGPT to generate content focused on a number of topics — including commentary on candidates on both sides in the U.S. presidential election – which it then shared via social media accounts and websites." The artificial intelligence (AI) company said the content did not achieve any meaningful engagement, with a majority of the social media posts receiving negligible to no likes, shares, and comments. It further noted it had found little evidence that the long-form articles created using ChatGPT were shared on social media platforms.
How to Set up an Automated SMS Analysis Service with AI in Tines

How to Set up an Automated SMS Analysis Service with AI in Tines

Jul 22, 2024 Threat Detection / Employee Security
The opportunities to use AI in workflow automation are many and varied, but one of the simplest ways to use AI to save time and enhance your organization's security posture is by building an automated SMS analysis service. Workflow automation platform Tines provides a good example of how to do it. The vendor recently released their first native AI features , and security teams have already started sharing the AI-enhanced workflows they've built using the platform.  Tines' library of pre-built workflows includes AI-enhanced pre-built workflows for normalizing alerts, creating cases, and determining which phishing emails require escalations.  Let's take a closer look at their SMS analysis workflow, which, like all of their pre-built workflows, is free to access and import, and can be used with a free Community Edition account.  Here, we'll share an overview of the workflow, and a step-by-step guide for getting it up and running. The problem - SMS scam messages targeted at employees
Summary of "AI Leaders Spill Their Secrets" Webinar

Summary of "AI Leaders Spill Their Secrets" Webinar

Jul 19, 2024 Technology / Artificial Intelligence
Event Overview The " AI Leaders Spill Their Secrets " webinar, hosted by Sigma Computing, featured prominent AI experts sharing their experiences and strategies for success in the AI industry. The panel included Michael Ward from Sardine, Damon Bryan from Hyperfinity, and Stephen Hillian from Astronomer, moderated by Zalak Trivedi, Sigma Computing's Product Manager. Key Speakers and Their Backgrounds 1. Michael Ward Senior Risk Data Analyst at Sardine. Over 25 years of experience in software engineering. Focuses on data science, analytics, and machine learning to prevent fraud and money laundering. 2. Damon Bryan Co-founder and CTO at Hyperfinity. Specializes in decision intelligence software for retailers and brands. Background in data science, AI, and analytics, transitioning from consultancy to a full-fledged software company. 3. Stephen Hillion SVP of Data and AI at Astronomer. Manages data science teams and focuses on the development and scaling of
Meta Halts AI Use in Brazil Following Data Protection Authority's Ban

Meta Halts AI Use in Brazil Following Data Protection Authority's Ban

Jul 18, 2024 Artificial Intelligence / Data Protection
Meta has suspended the use of generative artificial intelligence (GenAI) in Brazil after the country's data protection authority issued a preliminary ban objecting to its new privacy policy. The development was first reported by news agency Reuters. The company said it has decided to suspend the tools while it is in talks with Brazil's National Data Protection Authority (ANPD) to address the agency's concerns over its use of GenAI technology. Earlier this month, ANPD halted with immediate effect the social media giant's new privacy policy that granted the company access to users' personal data to train its GenAI systems. The decision stems from "the imminent risk of serious and irreparable damage or difficult-to-repair damage to the fundamental rights of the affected data subjects," the agency said. It further set a daily fine of 50,000 reais (about $9,100 as of July 18) in case of non-compliance. Last week, it gave Meta "five more days to p
U.S. Seizes Domains Used by AI-Powered Russian Bot Farm for Disinformation

U.S. Seizes Domains Used by AI-Powered Russian Bot Farm for Disinformation

Jul 12, 2024 Disinformation / Artificial Intelligence
The U.S. Department of Justice (DoJ) said it seized two internet domains and searched nearly 1,000 social media accounts that Russian threat actors allegedly used to covertly spread pro-Kremlin disinformation in the country and abroad on a large scale. "The social media bot farm used elements of AI to create fictitious social media profiles — often purporting to belong to individuals in the United States — which the operators then used to promote messages in support of Russian government objectives," the DoJ said . The bot network, comprising 968 accounts on X, is said to be part of an elaborate scheme hatched by an employee of Russian state-owned media outlet RT (formerly Russia Today), sponsored by the Kremlin, and aided by an officer of Russia's Federal Security Service (FSB), who created and led an unnamed private intelligence organization. The developmental efforts for the bot farm began in April 2022 when the individuals procured online infrastructure while anon
Brazil Halts Meta's AI Data Processing Amid Privacy Concerns

Brazil Halts Meta's AI Data Processing Amid Privacy Concerns

Jul 04, 2024 Artificial Intelligence / Data Privacy
Brazil's data protection authority, Autoridade Nacional de Proteção de Dados (ANPD), has temporarily banned Meta from processing users' personal data to train the company's artificial intelligence (AI) algorithms. The ANPD said it found "evidence of processing of personal data based on inadequate legal hypothesis, lack of transparency, limitation of the rights of data subjects, and risks to children and adolescents." The decision follows the social media giant's update to its terms that allow it to use public content from Facebook, Messenger, and Instagram for AI training purposes. A recent report published by Human Rights Watch found that LAION-5B , one of the largest image-text datasets used to train AI models, contained links to identifiable photos of Brazilian children, putting them at risk of malicious deepfakes that could place them under even more exploitation and harm. Brazil has about 102 million active users, making it one of the largest ma
The Emerging Role of AI in Open-Source Intelligence

The Emerging Role of AI in Open-Source Intelligence

Jul 03, 2024 OSINT / Artificial Intelligence
Recently the Office of the Director of National Intelligence (ODNI) unveiled a new strategy for open-source intelligence (OSINT) and referred to OSINT as the "INT of first resort". Public and private sector organizations are realizing the value that the discipline can provide but are also finding that the exponential growth of digital data in recent years has overwhelmed many traditional OSINT methods. Thankfully, Artificial Intelligence (AI) and Machine Learning (ML) are starting to provide a transformative impact on the future of information gathering and analysis.  What is Open-Source Intelligence (OSINT)? Open-Source Intelligence refers to the collection and analysis of information from publicly available sources. These sources can include traditional media, social media platforms, academic publications, government reports, and any other data that is openly accessible. The key characteristic of OSINT is that it does not involve covert or clandestine methods of information gather
The Secrets of Hidden AI Training on Your Data

The Secrets of Hidden AI Training on Your Data

Jun 27, 2024 Artificial Intelligence / SaaS Security
While some SaaS threats are clear and visible, others are hidden in plain sight, both posing significant risks to your organization. Wing's research indicates that an astounding 99.7% of organizations utilize applications embedded with AI functionalities. These AI-driven tools are indispensable, providing seamless experiences from collaboration and communication to work management and decision-making. However, beneath these conveniences lies a largely unrecognized risk: the potential for AI capabilities in these SaaS tools to compromise sensitive business data and intellectual property (IP). Wing's recent findings reveal a surprising statistic: 70% of the top 10 most commonly used AI applications may use your data for training their models. This practice can go beyond mere data learning and storage. It can involve retraining on your data, having human reviewers analyze it, and even sharing it with third parties. Often, these threats are buried deep in the fine print of Term
Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Jun 27, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a high-severity security flaw in the Vanna.AI library that could be exploited to achieve remote code execution vulnerability via prompt injection techniques. The vulnerability, tracked as CVE-2024-5565 (CVSS score: 8.1), relates to a case of prompt injection in the "ask" function that could be exploited to trick the library into executing arbitrary commands, supply chain security firm JFrog said . Vanna is a Python-based machine learning library that allows users to chat with their SQL database to glean insights by "just asking questions" (aka prompts) that are translated into an equivalent SQL query using a large language model (LLM). The rapid rollout of generative artificial intelligence (AI) models in recent years has brought to the fore the risks of exploitation by malicious actors, who can weaponize the tools by providing adversarial inputs that bypass the safety mechanisms built into them. One such prominent clas
Expert Insights
Cybersecurity Resources