
AI Ethics | Breaking Cybersecurity News | The Hacker News

Category — AI Ethics
OpenAI Blocks Iranian Influence Operation Using ChatGPT for U.S. Election Propaganda

Aug 17, 2024 National Security / AI Ethics
OpenAI on Friday said it banned a set of accounts linked to what it said was an Iranian covert influence operation that leveraged ChatGPT to generate content that, among other things, focused on the upcoming U.S. presidential election. "This week we identified and took down a cluster of ChatGPT accounts that were generating content for a covert Iranian influence operation identified as Storm-2035," OpenAI said. "The operation used ChatGPT to generate content focused on a number of topics — including commentary on candidates on both sides in the U.S. presidential election — which it then shared via social media accounts and websites." The artificial intelligence (AI) company said the content did not achieve any meaningful engagement, with a majority of the social media posts receiving negligible to no likes, shares, and comments. It further noted it had found little evidence that the long-form articles created using ChatGPT were shared on social media platforms.
Meta Halts AI Use in Brazil Following Data Protection Authority's Ban

Jul 18, 2024 Artificial Intelligence / Data Protection
Meta has suspended the use of generative artificial intelligence (GenAI) in Brazil after the country's data protection authority issued a preliminary ban objecting to its new privacy policy. The development was first reported by news agency Reuters. The company said it has decided to suspend the tools while it is in talks with Brazil's National Data Protection Authority (ANPD) to address the agency's concerns over its use of GenAI technology. Earlier this month, ANPD halted with immediate effect the social media giant's new privacy policy that granted the company access to users' personal data to train its GenAI systems. The decision stems from "the imminent risk of serious and irreparable damage or difficult-to-repair damage to the fundamental rights of the affected data subjects," the agency said. It further set a daily fine of 50,000 reais (about $9,100 as of July 18) in case of non-compliance. Last week, it gave Meta "five more days to p
CTEM in the Spotlight: How Gartner's New Categories Help to Manage Exposures

Aug 27, 2024 Threat Management / Enterprise Security
Want to know what's the latest and greatest in SecOps for 2024? Gartner's recently released Hype Cycle for Security Operations report takes important steps to organize and mature the domain of Continuous Threat Exposure Management, aka CTEM. Three categories within this domain are included in this year's report: Threat Exposure Management, Exposure Assessment Platforms (EAP), and Adversarial Exposure Validation (AEV). These category definitions are aimed at providing some structure to the evolving landscape of exposure management technologies. Pentera, listed as a sample vendor in the newly defined AEV category, is playing a pivotal role in increasing the adoption of CTEM, with a focus on security validation. Following is our take on the CTEM-related product categories and what they mean for enterprise security leaders.

The Industry is Maturing

CTEM, coined by Gartner in 2022, presents a structural approach for continuously assessing, prioritizing, validating, and remediating exposures.
From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Mar 19, 2024 Generative AI / Incident Response
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News. The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which are already being experimented with by threat actors to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets. The cybersecurity firm said it submitted to an LLM a known piece of malware called STEELHOOK that's associated with the APT28 hacking group, alongside its YARA rules, asking it to modify the source code to sidestep detection such that the original functionality remained intact and the generated source code wa
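
To make the string-matching angle concrete, here is a minimal, hypothetical sketch (not taken from the Recorded Future report) of why rules built on literal strings are brittle: a YARA rule keyed to an identifier and a hard-coded URL path matches the original snippet, but misses a functionally identical variant whose strings have been renamed and reassembled. The rule, strings, and samples are invented for illustration, and the snippet assumes the yara-python package is installed.

```python
import yara

# Hypothetical string-based rule: fires when either literal appears in a sample.
RULE_SRC = r"""
rule Demo_Stealer_Strings
{
    strings:
        $s1 = "harvest_browser_creds"   // function name seen in the "original" sample
        $s2 = "/api/exfil?session="     // hard-coded exfiltration path
    condition:
        any of them
}
"""

# Stand-ins for malware source code: the original, and an LLM-style rewrite
# with the same behaviour but a renamed identifier and a reassembled URL string.
ORIGINAL_SAMPLE = b"def harvest_browser_creds(): post('/api/exfil?session=' + sid)"
REWRITTEN_SAMPLE = (
    b"def collect_saved_logins(): "
    b"post('/api/' + 'ex' + 'fil' + '?session=' + sid)"
)

rules = yara.compile(source=RULE_SRC)

for name, sample in (("original", ORIGINAL_SAMPLE), ("rewritten", REWRITTEN_SAMPLE)):
    matches = rules.match(data=sample)
    print(f"{name}: {'detected' if matches else 'no match'}")

# Expected output:
#   original: detected
#   rewritten: no match  (same behaviour, but the literals the rule keys on are gone)
```

This is only a toy contrast, but it mirrors the report's point: literal strings are cheap for a generative model to mutate, which is why defenders typically layer string rules with behavioral and code-similarity detections.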

SaaS Attacks Report: 2024 Edition

website | Push Security | SaaS Security / Offensive Security
Offensive security drives defensive security. Learn about the SaaS Attack Matrix – compiling the latest attack techniques facing SaaS-native and hybrid organizations.