<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Cyber Security &#8211; Cyberwire Daily</title>
	<atom:link href="https://cyberwiredaily.com/category/cyber-security/feed/" rel="self" type="application/rss+xml" />
	<link>https://cyberwiredaily.com</link>
	<description></description>
	<lastBuildDate>Fri, 24 Apr 2026 08:30:19 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://cyberwiredaily.com/wp-content/uploads/2025/09/icon-150x150.png</url>
	<title>Cyber Security &#8211; Cyberwire Daily</title>
	<link>https://cyberwiredaily.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Npm Supply Chain Attack Uses Worm-Like Propagation</title>
		<link>https://cyberwiredaily.com/npm-supply-chain-attack-uses-worm-like-propagation/</link>
					<comments>https://cyberwiredaily.com/npm-supply-chain-attack-uses-worm-like-propagation/#respond</comments>
		
		<dc:creator><![CDATA[Team-CWD]]></dc:creator>
		<pubDate>Fri, 24 Apr 2026 08:30:19 +0000</pubDate>
				<category><![CDATA[Cyber Security]]></category>
		<guid isPermaLink="false">https://cyberwiredaily.com/npm-supply-chain-attack-uses-worm-like-propagation/</guid>

					<description><![CDATA[Malicious npm packages have been identified distributing malware that steals credentials and attempts to spread across developer ecosystems. According to new research from Socket, the activity mirrors earlier worm-style supply chain attacks that used blockchain-hosted infrastructure, including Internet Computer Protocol (ICP) canisters, for command and control (C2). Impacted packages include multiple versions of @automagik/genie and [...]]]></description>
					<content:encoded><![CDATA[
<div id="layout-164af9a8-5957-43c8-a1b1-475264bceca6" data-layout-id="2" data-edit-folder-name="text" data-index="0">
<p>Malicious npm packages have been identified distributing malware that steals credentials and attempts to spread across developer ecosystems.</p>
<p>According to new <a href="https://socket.dev/blog/namastex-npm-packages-compromised-canisterworm" style="text-decoration:none;" target="_blank">research</a> from Socket, the activity mirrors earlier worm-style supply chain attacks that used blockchain-hosted infrastructure, including Internet Computer Protocol (ICP) canisters, for command and control (C2).</p>
<p>Impacted packages include multiple versions of @automagik/genie and pgserve, both linked to developer tooling workflows. Researchers found the malware executes during installation, harvesting sensitive data and attempting to republish compromised packages using stolen credentials.</p>
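<p>Because the payload executes during installation, a useful first check for defenders is simply enumerating which installed packages declare npm lifecycle hooks at all. The sketch below is illustrative only, assumes a standard node_modules layout, and is not the researchers&#8217; tooling:</p>

```python
import json
from pathlib import Path

# npm lifecycle hooks that run automatically during `npm install`.
HOOKS = ("preinstall", "install", "postinstall")

def find_install_scripts(node_modules):
    """Map package name -> install-time scripts declared in its package.json."""
    hits = {}
    for manifest in Path(node_modules).glob("**/package.json"):
        try:
            pkg = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue
        if not isinstance(pkg, dict):
            continue
        scripts = pkg.get("scripts") or {}
        if not isinstance(scripts, dict):
            continue
        hooked = {h: scripts[h] for h in HOOKS if h in scripts}
        if hooked:
            hits[pkg.get("name", str(manifest))] = hooked
    return hits
```

<p>Install hooks are legitimate and widely used, so the output is a review list rather than a verdict; it narrows attention to the packages that would run code the moment they are installed.</p>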
<h3><strong>Malware Focuses on Sensitive Data</strong></h3>
<p>The payload scans <a href="https://www.infosecurityeurope.com/en-gb/blog/threat-vectors/guide-infostealer-malware.html" target="_self">infected systems for secrets</a> stored in environment variables and configuration files. Targeted data includes cloud credentials, CI/CD tokens, SSH keys and local developer artifacts such as .npmrc and shell histories.</p>
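<p>Defenders can run the same enumeration in reverse: auditing a workstation for credential-shaped environment variables and the kinds of developer artifacts named above. A minimal sketch, in which the name patterns and file paths are assumptions for illustration rather than the malware&#8217;s actual target list:</p>

```python
import os
import re
from pathlib import Path

# Name patterns that commonly indicate credentials (illustrative, not exhaustive).
SECRET_NAME_RE = re.compile(r"TOKEN|SECRET|KEY|PASSWORD|CREDENTIAL", re.I)

# Developer artifacts of the kind the research says the payload targets
# (paths are assumptions for this sketch).
ARTIFACTS = (".npmrc", ".ssh/id_rsa", ".ssh/id_ed25519",
             ".bash_history", ".zsh_history")

def audit_exposure(environ=None, home=None):
    """Return (env_hits, file_hits): credential-shaped env var names and
    sensitive files that exist under the home directory."""
    environ = os.environ if environ is None else environ
    home = Path.home() if home is None else Path(home)
    env_hits = sorted(n for n in environ if SECRET_NAME_RE.search(n))
    file_hits = [a for a in ARTIFACTS if (home / a).exists()]
    return env_hits, file_hits
```

<p>Anything it reports is material such a payload could harvest on that machine, which helps prioritize what to rotate after a suspected compromise.</p>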
<p>It also attempts to access browser-stored data and cryptocurrency wallets, including Chrome profiles and extensions like MetaMask and Phantom.</p>
<p>Exfiltration occurs through two channels: a standard HTTPS webhook and an ICP endpoint. Exfiltrated data can be encrypted using AES-256 and RSA, though a plaintext fallback is possible.</p>
<h2><strong>Self-Propagation and Possible Repository Compromise</strong></h2>
<p>A key feature of the malware is its ability to spread: it extracts npm tokens, identifies accessible packages, injects malicious code and republishes them, enabling further compromise across the ecosystem.</p>
<p>It also includes functionality to propagate via Python&#8217;s PyPI repository by generating malicious packages using .pth file injection when credentials are present.</p>
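<p>The .pth technique mentioned here relies on documented CPython behavior: when the interpreter initializes site-packages, any line in a .pth file that begins with &#8220;import&#8221; is executed. A hedged sketch of how a defender might enumerate such lines (the function name is illustrative):</p>

```python
import site
from pathlib import Path

def executable_pth_lines(site_dirs=None):
    """List (pth_file, line) pairs for .pth lines Python executes at startup.

    CPython runs any .pth line beginning with 'import' when it initialises
    site-packages -- the startup-execution mechanism described above.
    """
    dirs = site.getsitepackages() if site_dirs is None else site_dirs
    findings = []
    for d in dirs:
        for pth in Path(d).glob("*.pth"):
            text = pth.read_text(encoding="utf-8", errors="replace")
            for line in text.splitlines():
                if line.startswith(("import ", "import\t")):
                    findings.append((str(pth), line.strip()))
    return findings
```

<p>Hits are not automatically malicious: editable installs and coverage tools use the same hook legitimately, so findings need manual review rather than deletion.</p>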
<p><em>Read more on similar threats: Malicious Machine Learning Model Attack Discovered on PyPI</em></p>
<p>Researchers observed similarities with prior TeamPCP-linked campaigns, including the use of post-install scripts and canister-based infrastructure. However, the exact source of the compromise remains under investigation.</p>
<p>Evidence suggests legitimate projects may have been hijacked. Some affected packages have active usage, with one showing over 6,700 weekly downloads. Inconsistencies between npm releases and Git tags further raise suspicion.</p>
<p>Socket said the situation is still evolving, with additional malicious versions continuing to emerge and the full scope of the attack not yet confirmed.</p>
</div>
<p><a href="https://www.infosecurity-magazine.com/news/npm-supply-chain-worm-canister/" style="font-size: 11px;color:#D5DBDB">Source</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://cyberwiredaily.com/npm-supply-chain-attack-uses-worm-like-propagation/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Rising Risk Landscape for Critical National Infrastructure</title>
		<link>https://cyberwiredaily.com/the-rising-risk-landscape-for-critical-national-infrastructure/</link>
					<comments>https://cyberwiredaily.com/the-rising-risk-landscape-for-critical-national-infrastructure/#respond</comments>
		
		<dc:creator><![CDATA[Team-CWD]]></dc:creator>
		<pubDate>Thu, 23 Apr 2026 20:59:32 +0000</pubDate>
				<category><![CDATA[Cyber Security]]></category>
		<guid isPermaLink="false">https://cyberwiredaily.com/the-rising-risk-landscape-for-critical-national-infrastructure/</guid>

					<description><![CDATA[The risks facing industrial organisations are growing in both scale and variety while many critical national infrastructure operators are being asked to stretch budgets beyond what feels safe. Organisations responsible for energy, transport, water and manufacturing are tasked with protecting increasingly complex operations from attackers who are using a much wider range of techniques than [...]]]></description>
					<content:encoded><![CDATA[
<div id="layout-29585171-3e13-439d-a1cf-3fff46e670db" data-layout-id="2" data-edit-folder-name="text" data-index="0">
<p>The risks facing industrial organisations are growing in both scale and variety while many critical national infrastructure operators are being asked to stretch budgets beyond what feels safe.</p>
<p>Organisations responsible for <a href="https://www.infosecurityeurope.com/en-gb/blog/threat-vectors/ot-security-rising-industrial-cyber-threats.html">energy, transport, water and manufacturing</a> are tasked with protecting increasingly complex operations from attackers who are using <a href="https://www.infosecurityeurope.com/en-gb/blog/threat-vectors/how-cisos-can-defend-against-the-rise-of-ai-powered-cybercrime.html">a much wider range of techniques</a> than even a few years ago.</p>
<p>These organisations often find themselves defending essential systems while justifying every item of spend, causing some to cut back on security because the benefits are not always immediately visible.</p>
<p>But the consequences of those decisions aren’t always immediate, often surfacing only when disruption is deeper and recovery slower. Once an industrial environment is interfered with, bringing it back online isn’t always instantaneous, and safety considerations can escalate well beyond what any budget expected.</p>
<p>And with <a href="https://www.infosecurityeurope.com/en-gb/blog/future-thinking/digital-transformation-requires-cybersecurity-strategy.html">digital transformation</a>, attackers can and now do move laterally from the information technology side of a business or organisation into planning systems, supplier connections, cloud interfaces and remote access points until they reach the technology that keeps production running.</p>
<p>This is why cyber resilience in critical infrastructure is essential to prevent breakdowns, protect uptime and keep critical services running.</p>
<h2><strong>Pressure Points That Attackers Understand</strong></h2>
<p>Recent rises in industrial ransomware show how urgently a stronger mindset is needed. According to the Dragos 2026 OT/ICS Cybersecurity Report, ransomware remained the most impactful threat to industrial organisations, with attacks increasing 64 percent year-over-year.</p>
<p>Dragos tracked 119 ransomware groups targeting industrial organisations in 2025, up from 80 the year before, collectively impacting 3,300 organisations. Manufacturing accounted for more than two-thirds of all victims, showing how attackers deliberately focus on sectors where disruption creates immediate pressure and quick leverage.</p>
<p>Most attacks in industrial environments still begin with predictable weaknesses. Exposed remote access tools, forgotten third-party accounts and <a href="https://www.infosecurityeurope.com/en-gb/blog/guides-checklists/patch-management-best-practices-vulnerability.html">unpatched systems</a> give attackers simple points of entry. Incidents throughout 2025 showed how quickly these gaps escalate.</p>
<p>For example, in the UK, Jaguar Land Rover saw operational disruption because of a ransomware attack, while further afield, Asahi’s operations in Japan were also severely impacted by ransomware.</p>
<p>In each case, what started as a technical breach developed into an operational stoppage, then a supply chain delay and finally lost revenue and a hit to the company’s brand and reputation amongst its customers.</p>
<p>Sectors where downtime causes the greatest impact are feeling this pressure most sharply. Manufacturing still sees the highest number of incidents, but key industries ranging from transport and logistics to telecoms and government face significant threats too.</p>
<p>Attackers are also targeting engineering partners and suppliers, knowing that a single compromise further up the supply chain can open the door to many organisations at once.</p>
<p>At the same time, budget constraints are prompting teams to relax vendor checks or ease remote access rules, widening exposure precisely when it should be narrowing. Cutting corners may feel practical in the moment, but in today’s environment it is increasingly detrimental.</p>
<h2><strong>A Practical Path to Lowering Risk </strong></h2>
<p>There are steps that security leaders can take to ensure their organisation is best prepared for whatever threats may be around the corner.</p>
<p>Most crucially, they need to develop a clear understanding of the systems they depend on, who can reach them and which assets they can never afford to lose. This includes engineering workstations, remote access points and the business systems that keep operations moving at all times.</p>
<p>Strengthening access controls, patching exposed systems and tightening privileges removes many attacker entry routes. Network segmentation can limit how far an intruder can move, while reliable backups, tested properly, ensure recovery is possible even when there is a security breach.</p>
<p>Now is also the time for security leaders to increase their organisation’s readiness outside normal working hours and ensure both IT and OT teams work from clear, well-practiced playbooks that reflect current geopolitical threats.</p>
<p>They also need to act on real‑time threat intelligence, vet the resilience of suppliers and apply the SANS ICS Five Critical Controls consistently across all operations to reinforce their overall security.</p>
<p>Across the globe, boards now recognise that ransomware and other threats to industrial operations can have serious consequences for their organisations. Whether it’s grounded fleets, stalled production lines or major delays in services or operations, they can all bring about serious financial costs and reputational damage.</p>
<p>As a result, it’s crucial that leaders, both in security teams and all the way up to board level, <a href="https://www.infosecurityeurope.com/en-gb/blog/threat-vectors/ot-security-rising-industrial-cyber-threats.html">understand the fundamentals of OT security</a>. Being aware of which assets or suppliers might be most at risk and having incident response plans set in stone can be critical in getting operations up and running again quickly should they become a target.</p>
<p>Industrial organisations operating critical national infrastructure are facing a period unlike any we have previously seen. <a href="https://www.infosecurityeurope.com/en-gb/blog/threat-vectors/tackle-evolving-email-based-attacks.html">Existing cyber threats are evolving</a> and new ones continue to emerge in differing forms, all at the same time as budgets continue to tighten.</p>
<p>However, the most effective defences remain the steady, uncomplicated practices that strengthen visibility, control and recovery. This will be the key to maintaining resilience amid a rising risk landscape in the years ahead.</p>
</div>
<p><a href="https://www.infosecurity-magazine.com/opinions/rising-risk-landscape-for-critical/" style="font-size: 11px;color:#D5DBDB">Source</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://cyberwiredaily.com/the-rising-risk-landscape-for-critical-national-infrastructure/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>NCSC Unveils SilentGlass Device to Protect Monitors from Cyber-Attacks</title>
		<link>https://cyberwiredaily.com/ncsc-unveils-silentglass-device-to-protect-monitors-from-cyber-attacks/</link>
					<comments>https://cyberwiredaily.com/ncsc-unveils-silentglass-device-to-protect-monitors-from-cyber-attacks/#respond</comments>
		
		<dc:creator><![CDATA[Team-CWD]]></dc:creator>
		<pubDate>Wed, 22 Apr 2026 20:43:21 +0000</pubDate>
				<category><![CDATA[Cyber Security]]></category>
		<guid isPermaLink="false">https://cyberwiredaily.com/ncsc-unveils-silentglass-device-to-protect-monitors-from-cyber-attacks/</guid>

					<description><![CDATA[The UK National Cyber Security Centre (NCSC) has unveiled a new technology designed to protect video connections from cyber-attacks. The device, dubbed SilentGlass, was launched on April 22 at CYBERUK, the UK government’s flagship annual cybersecurity conference. SilentGlass is a plug-and-play device designed to actively block anything unexpected or malicious between HDMI or DisplayPort [...]]]></description>
					<content:encoded><![CDATA[
<div id="layout-2dd5d97c-4227-4acf-b66d-1db6e6ff2e36" data-layout-id="2" data-edit-folder-name="text" data-index="0">
<p>The UK National Cyber Security Centre (NCSC) has unveiled a new technology designed to protect video connections from cyber-attacks.</p>
<p>The device, dubbed SilentGlass, was launched on April 22 at CYBERUK, the UK government’s flagship annual cybersecurity conference.  </p>
<p>SilentGlass is a plug-and-play device designed to actively block anything unexpected or malicious between HDMI or DisplayPort connections and monitor screens. It is approved for use in even the most high-threat cybersecurity environments.</p>
<p>The device has already been successfully deployed on government estates, and now SilentGlass has been released for anyone to buy and use. </p>
<p>The NCSC has partnered with Goldilock Labs and Sony UK to manufacture and sell SilentGlass globally. </p>
<h2><strong>Monitors a Target For Cyber-Attacks</strong></h2>
<p>The NCSC has warned that monitors can be an attractive target for malicious threat actors because they are used to hold and process valuable, sensitive or personal data.</p>
<p>According to the agency, cyber threat actors are likely to abuse access to monitors to infiltrate networks for malicious activities such as disruption or financial gain, taking advantage of a lack of mitigations in this area.</p>
<p>SilentGlass has been developed to help shut down this attack vector.</p>
<p>“Display screens and monitors are everywhere in modern business environments, and the SilentGlass device will help protect previously vulnerable IT infrastructure with unprecedented ease,” said Ollie Whitehouse, CTO at NCSC.</p>
<p>“Its development and commercialization shows the impact that the NCSC can have, alongside industry partners, with an affordable and effective product now globally available. By helping to launch a UK company onto the global market with this world-class innovation, we are breaking new ground and helping to strengthen national prosperity,” he added.</p>
<p>Goldilock Labs, a UK-based small business described as experts in cyber security innovation, was awarded the contract license to manufacture SilentGlass, after what the NCSC described as “a competitive process.”</p>
<p>“SilentGlass addresses a gap that has been widely overlooked. The hardware interfaces people rely on every day have rarely been treated as security boundaries, despite being exposed to risk through supply chains, third-party servicing, and direct physical access,” said Stephen Kines, co-founder of Goldilock Labs.</p>
<p>“What was once confined to national security environments is now being applied with a low-cost, easy to deploy solution for CNI and businesses where the same risks exist.”</p>
<p>Goldilock, Sony and the NCSC said they expect rapid global adoption of SilentGlass by governments and risk-conscious organizations. The NCSC pointed to SilentGlass as an example of how government intellectual property can be successfully commercialized.</p>
<p>Now in its tenth year, with the 2026 edition hosted in Glasgow, Scotland, CYBERUK 26 kicked off with a speech by Richard Horne, CEO of the NCSC. He warned that a “perfect storm” of new technologies and geopolitical risks has created the risk of unprecedented cyber threats for the UK.</p>
</div>
<p><a href="https://www.infosecurity-magazine.com/news/ncsc-silentglass-a-plugin-stop/" style="font-size: 11px;color:#D5DBDB">Source</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://cyberwiredaily.com/ncsc-unveils-silentglass-device-to-protect-monitors-from-cyber-attacks/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>FIRST CEO Calls for CVE Collaboration amid AI Vulnerability Tsunami</title>
		<link>https://cyberwiredaily.com/first-ceo-calls-for-cve-collaboration-amid-ai-vulnerability-tsunami/</link>
					<comments>https://cyberwiredaily.com/first-ceo-calls-for-cve-collaboration-amid-ai-vulnerability-tsunami/#respond</comments>
		
		<dc:creator><![CDATA[Team-CWD]]></dc:creator>
		<pubDate>Sat, 18 Apr 2026 07:31:12 +0000</pubDate>
				<category><![CDATA[Cyber Security]]></category>
		<guid isPermaLink="false">https://cyberwiredaily.com/first-ceo-calls-for-cve-collaboration-amid-ai-vulnerability-tsunami/</guid>

					<description><![CDATA[The cybersecurity community is facing a rapidly evolving software vulnerability landscape, with the mean time to exploit plummeting from weeks to mere hours. As artificial intelligence fundamentally alters how security flaws are both discovered and weaponized, industry leaders are grappling with how to manage an unprecedented explosion of vulnerabilities. Chris Gibson, CEO of global incident response alliance [...]]]></description>
					<content:encoded><![CDATA[
<div>
<div id="layout-b2234c51-e46c-402e-ae30-5245bf79fbc1" class="content-module " data-layout-id="2" data-edit-folder-name="text" data-index="0">
<p>The cybersecurity community is facing a rapidly evolving software vulnerability landscape, with the mean time to exploit plummeting from weeks to mere hours.</p>
<p>As artificial intelligence fundamentally alters how security flaws are both discovered and weaponized, industry leaders are grappling with how to manage an unprecedented explosion of vulnerabilities.</p>
<p>Chris Gibson, CEO of global incident response alliance FIRST, believes the answer lies in strict global cooperation rather than fragmented, regional efforts.</p>
<p>Pointing to the European Union Agency for Cybersecurity’s (ENISA) decision to integrate with CISA and MITRE as a Top-Level Root CNA, Gibson says it is crucial to maintain a unified, federated vulnerability database instead of siloing vital threat intelligence.</p>
<p>In conversation with <em>Infosecurity</em> following VulnCon26, the global conference focused on improving vulnerability management and disclosure, Gibson spoke about the disruption caused by frontier AI models from companies like Anthropic and OpenAI.</p>
<p>While these autonomous tools present a major challenge to traditional coordinated disclosure timeframes, he argued that the cybersecurity community must adapt by bringing these AI firms inside the tent, ideally integrating them as Common Vulnerabilities and Exposures (CVE) Numbering Authorities (CNAs) to help stabilize the ecosystem.</p>
</div>
<div id="layout-a956c15b-d13b-41f0-8d22-3aec06967f73" class="content-module " data-layout-id="2" data-edit-folder-name="text" data-index="2">
<p><strong>Infosecurity Magazine:</strong>  <strong>FIRST hosts the annual VulnCon summit and the 2026 event has now finished. What were the main highlights of this edition?</strong></p>
<p>Chris Gibson: To me, the one that really stood out was the announcement by ENISA that they’re going to be working with the US Cybersecurity and Infrastructure Security Agency (CISA) and MITRE on the CVE program.</p>
<p>One thing that had me worried last year was when ENISA announced their EU Vulnerability Database (EUVD). It felt like they would go off and do their own thing.</p>
<p>The fact that they&#8217;re coming together now and ENISA is going to become a Top-Level Root CNA for the CVE program is really positive.</p>
<p>It demonstrates that the cybersecurity community is going to work together to fix the vulnerability explosion problem, and that no one is going to go and build their own system with their own data.</p>
<p><strong>IM: How would you evaluate the magnitude of the ‘vulnerability explosion?’</strong></p>
<p>CG: We&#8217;ve seen a steady increase in the number of vulnerabilities reported over the years. That&#8217;s logical: there&#8217;s more software out there, both new software being written and existing software getting older.</p>
<p>More recently, of course, I&#8217;ve heard about the ‘tsunami’ coming towards us of all these vulnerabilities being found very quickly through AI, and questions about our ability to react to them. I looked at a zero-day vulnerability tracking website recently and saw that the mean time to exploit is now measured in hours. It&#8217;s not the months, weeks or days it used to be.</p>
<p>The ability for the bad guys to come in, find vulnerabilities and then exploit them within hours is more than concerning, and how we manage our way out of that is a real challenge.</p>
<p>Vulnerability management is not something many companies do properly. Most don&#8217;t have enough staff; they have enough trouble just hiring information security people. Having a whole team working on vulnerabilities is a real challenge.</p>
<p>The fact that companies develop ways and systems to automate vulnerability management, especially using AI, is a good thing. We need AI to fight AI. But then, as a company, you&#8217;re giving your controls to a machine and that&#8217;s slightly concerning as well.</p>
<p>I also worry that AI fighting AI means that you can sort of game the system and therefore lose the understanding of how the AI on the defending side is working.</p>
</div>
<figure id="layout-9ebba81f-502b-42a3-bc63-7210a4c595fe" class="content-module blockquote" data-layout-id="6" data-edit-folder-name="quote" data-index="3">
<blockquote>
<p>&#8220;I would be surprised if Anthropic and OpenAI weren&#8217;t CVE Numbering Authorities by the end of 2026.&#8221;</p>
</blockquote>
</figure>
<div id="layout-4f55eb42-839e-4ee1-ad3c-bd773a614d67" class="content-module " data-layout-id="2" data-edit-folder-name="text" data-index="4">
<p><strong>IM: Do you always need AI to fight AI?</strong></p>
<p>CG: If we step back to the good old cyber hygiene that we banged on about for years, if we segmented networks properly, had decent passwords, patched our systems in a timely manner, no exploits would be quite so bad. But we don&#8217;t.</p>
<p>I would posit that very few companies have a very good idea of what their information systems actually look like. There are old machines, edge devices, personal devices and more.</p>
<p>My worry is that, if AI becomes as ubiquitous as some people think it will and as powerful as Anthropic’s Mythos announcement seems to claim, the sheer speed of being attacked is going to be immense. Are humans going to be able to respond quickly enough?</p>
<p><strong>IM:  What do you make of Anthropic’s announcement of Claude Mythos and OpenAI&#8217;s GPT-5.4-Cyber, two frontier models that promise to autonomously find and fix cybersecurity vulnerabilities at scale, but launched only to a limited user group?</strong></p>
<p>CG: I guess that&#8217;s them being responsible. Yet I think the genie is out of the bottle. At some point, such a model is going to leak out. Whenever humans have tried to keep things to a small, trusted community over time, that&#8217;s where we have failed.</p>
<p>At some point, these models or others with the same capabilities are going to hit the market or be available for people to use. Are we not better off facing that now, rather than later?</p>
<p><strong>IM: </strong><strong>How should vulnerability disclosures be handled when considering traditional coordinated vulnerability disclosure models versus sharing information with a limited set of organizations?</strong></p>
<p>CG: The old way of finding vulnerabilities, the responsible disclosure process, proved useful in the past for individual vulnerabilities, giving people X weeks to deal with a vulnerability you have found.</p>
<p>On the one hand, yes, I&#8217;d love for Anthropic and OpenAI to have done that. On the other hand, I don’t know how it would work for the scale and speed we’re talking about.</p>
<p>Also, at the end of the day, Anthropic and OpenAI are looking for market share. These companies are not in it for the good of their health, so to speak, so showing what they can do and how good they are at it is clearly a benefit.</p>
<p>Most of us use AI for free, because some of the models are free, but they&#8217;ve got to make their money somewhere, and this is presumably how they&#8217;re going to do it.</p>
<p><strong>IM: Do you think AI companies should be better integrated into the vulnerability disclosure ecosystem? </strong></p>
<p>CG: I would say so. At FIRST, the organization hosting VulnCon, we believe in the community. We want to get people together and talk about things. I&#8217;d rather have AI companies inside that tent talking about it than outside, producing information that is going to make our lives challenging. Let&#8217;s have them inside and talk about how we do that.</p>
<p>I would be surprised if Anthropic and OpenAI weren&#8217;t CVE Numbering Authorities by the end of 2026. But I guess that&#8217;s a decision they&#8217;re going to have to make.</p>
</div>
<figure id="layout-83b647c1-57c6-448d-9d39-6e03f6007388" class="content-module blockquote" data-layout-id="6" data-edit-folder-name="quote" data-index="5">
<blockquote>
<p>&#8220;We&#8217;ve got some absolute geniuses at VulnCon, but we need more of them.&#8221;</p>
</blockquote>
</figure>
<div id="layout-4817a96a-cb06-43df-aebf-9929f98ac90a" class="content-module " data-layout-id="2" data-edit-folder-name="text" data-index="6">
<p><strong>IM: Is traditional vulnerability research “cooked” because of AI, as some people suggest?</strong></p>
<p>CG: Clearly, it&#8217;s another tool in the armory for finding vulnerabilities, and probably a game changer in many ways.</p>
<p>Will it, over time, calm down as we move forward? Possibly, but it&#8217;s clearly going to change the way we work and the way we have to respond to vulnerabilities and exploits.</p>
<p><strong>IM: Do we have the vulnerability disclosure infrastructure to deal with this new technology?</strong></p>
<p>CG: When I look around the world, I clearly worry we&#8217;re not ready for it. We see government programs that are facing funding challenges. We see the sheer complexity of analyzing vulnerabilities using a common language, and the lack of good vulnerability researchers to fill those roles. We’ve got some absolute geniuses here, but we need more of them, and it&#8217;s a very niche topic for most people.</p>
<p>As an organization, FIRST aims to represent incident responders, and they want as simple a landscape as possible. That’s why the C in CVE stands for ‘common.’ We need a common language, and end users need to deal with coordinated organizations.</p>
<p>I&#8217;d like to see more announcements like ENISA joining the CVE program as opposed to specific countries launching their own initiatives.</p>
<p>If the NIST National Vulnerability Database (NVD) and the ENISA EU Vulnerability Database (EUVD) stay as two separate tracks, not really talking to each other, that will be unhelpful, because vulnerability consumers will have to track multiple systems and multiple identifiers. No one wants that.</p>
<p>However, if they can work together so that we&#8217;ve got a federated global system while providing what their stakeholders need, then it could be a working system. We bring them all together at VulnCon to nudge them in the right direction.</p>
<p><strong>IM: With both the CVE and NVD being US-funded programs, people fear a possible lack of funding. Are you worried about this?</strong></p>
<p>CG: We&#8217;re clearly concerned that there could be a cut in funding, but when I talk to CISA people here, they are adamant that it is not going to happen. They are absolutely categorical: the CVE program is well funded, and we will continue to fund it.</p>
<p>Nevertheless, I personally think that diversifying that funding would be a good thing, because it would take that uncertainty away – especially after last year, when a memo was made public about the CVE program’s funding about to end.</p>
<p>If such a CVE program shutdown were to happen – a worst-case scenario – I’m sure our industry would rebuild it in some way, because the industry cannot live without it. There would be a massive pull to reproduce it, as ENISA did in the EU. So, there is a bit of me that thinks, even if it did crash, there would be a period of challenge, but we would rebuild it in some form.</p>
<p>However, I remember that at that point in 2025, we saw various initiatives spin up, and that started to fragment the ecosystem. It is not the way forward from our point of view.</p>
<p>Now, if Europeans come and take part in the program, it could bring enough diversification that we can start putting things back together.</p>
</div></div>
<p><a href="https://www.infosecurity-magazine.com/interviews/first-ceo-cve-collaboration-ai/" style="font-size: 11px;color:#D5DBDB">Source</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://cyberwiredaily.com/first-ceo-calls-for-cve-collaboration-amid-ai-vulnerability-tsunami/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>OpenClaw Exposes the Real Cybersecurity Risks of Agentic AI</title>
		<link>https://cyberwiredaily.com/openclaw-exposes-the-real-cybersecurity-risks-of-agentic-ai/</link>
					<comments>https://cyberwiredaily.com/openclaw-exposes-the-real-cybersecurity-risks-of-agentic-ai/#respond</comments>
		
		<dc:creator><![CDATA[Team-CWD]]></dc:creator>
		<pubDate>Fri, 17 Apr 2026 19:53:20 +0000</pubDate>
				<category><![CDATA[Cyber Security]]></category>
		<guid isPermaLink="false">https://cyberwiredaily.com/openclaw-exposes-the-real-cybersecurity-risks-of-agentic-ai/</guid>

					<description><![CDATA[One of today’s hot topics in infosec is Agentic AI.  For senior leaders it looks like magic – reduce your headcount, be more efficient and move more quickly.  But does the hype match the reality?  And do business leaders understand the security risks? Agentic AI typically involves one AI system orchestrating multiple other tools or [...]]]></description>
										<content:encoded><![CDATA[<p> <br />
</p>
<div id="layout-4f1edb47-f87f-4d62-910d-423eeac62a9d" data-layout-id="2" data-edit-folder-name="text" data-index="0">
<p>One of today’s hot topics in infosec is Agentic AI.  For senior leaders it looks like magic – reduce your headcount, be more efficient and move more quickly.  But does the hype match the reality?  And do business leaders understand the security risks?</p>
<p>Agentic AI typically involves one AI system orchestrating multiple other tools or agents to execute a chain of tasks. In more advanced deployments, agents operate autonomously, selecting which tools to use and how to complete an objective without human intervention. While this architecture can drive efficiency, it also introduces <a href="https://www.infosecurityeurope.com/en-gb/conference-programme/session-details.4886.262836.the-age-of-the-understatement-threat-hunting-the-agentic-ai-attack-surface-with-ai.html">a fragmented and dynamic attack surface</a>, and in some organizations a loss of control.</p>
<p>Without effective governance, visibility and control, risks can escalate rapidly. Until recently, these risks were largely theoretical; however, the OpenClaw investigation shows how quickly those concerns can become real.  And how quickly regulators can get involved too.</p>
<h2><strong>The OpenClaw Exposure</strong></h2>
<p>OpenClaw was built in late 2025 as a “weekend project” by its author, Peter Steinberger, and quickly gained traction. Steinberger said that his GitHub repository attracted around 2 million visitors in a single week, with many developers incorporating the code into their Agentic AI infrastructure.</p>
<p>However, on 9 February 2026, a report identified significant vulnerabilities. Researchers discovered more than 42,000 unique IP addresses hosting exposed OpenClaw control panels across 82 countries, many with full system access.</p>
<p>The report identified almost 50,000 instances where devices appeared vulnerable to remote code execution (RCE). In practical terms, this could allow an attacker to exploit the OpenClaw gateway to take control of the affected system.</p>
<p>OpenClaw deployments were heavily concentrated across major cloud and hosting providers. Depending on configuration, these vulnerabilities could also allow threat actors to access connected third-party services, including email, calendars, chat applications, social media and browser sessions.</p>
<p>Further concerns emerged when a cybersecurity investigation reportedly identified a misconfigured database exposing approximately 1.5 million authentication tokens, around 35,000 email addresses, and private communications between AI agents. Taken together, these issues point not just to isolated weaknesses, but to broader challenges around access control, credential management and system design.</p>
<h2><strong>Regulatory Concerns Are Emerging</strong></h2>
<p>Regulators have already begun to respond. On 12 February 2026, the Dutch data protection authority, Autoriteit Persoonsgegevens (AP), warned users and organizations against using OpenClaw and similar experimental systems. The AP noted that what it called “open-source tools” may not meet basic security requirements and advised against deploying them on systems containing sensitive or confidential data. </p>
<p>The AP reminded organizations that it has powers to get involved under GDPR: regulators can suspend processing, launch dawn raids or levy fines.  We’ve seen all three options used to police AI.</p>
<p>The AP’s warning covers the use of tools like OpenClaw in environments holding access credentials, financial information, employee data, private documents or identity records. The AP also emphasized that local deployment does not guarantee security, a point that remains widely misunderstood in practice.</p>
<h2><strong>Why Uninstalling OpenClaw is Not a Solution</strong></h2>
<p>For many organizations, fixing risks associated with OpenClaw is not as simple as uninstalling the software. One challenge is visibility: some companies may not know whether OpenClaw has been deployed, as the tool may have been adopted by developers or staff experimenting with AI tools without formal approval or oversight.</p>
<p>Shadow AI risk is already significant. A Microsoft study suggested that 71% of UK employees admitted using unapproved AI tools at work. Given the rapid adoption of AI since then, the true figure could now be higher.</p>
<p>OpenClaw also integrates with widely used communication and collaboration platforms, including WhatsApp, Telegram, Discord, Slack and Teams. If OpenClaw has been linked to multiple applications, manually resetting credentials and access tokens across those services could be a substantial task.</p>
<h2><strong>Practical Steps Organizations Should Consider</strong></h2>
<p>For many organizations, the OpenClaw case is a reminder that AI innovation must be matched with appropriate risk management. Some practical steps include:</p>
<ul>
<li><strong>Looking at Technical Settings</strong>:  Organizations need to restrict the use of applications like OpenClaw on their networks. Tools are available to assess Shadow AI risk; organizations that have them should add OpenClaw to the list of prohibited applications.  It has been reported that it is currently not possible for users to delete an OpenClaw account, at least through common settings, so organizations that think they have been exposed may want to take specialist advice.</li>
<li><strong>Check Your Socials</strong>:  It has been reported that OpenClaw collects X (formerly Twitter) user names, display names and passwords. A threat actor could therefore use OpenClaw to gain access to the organization’s social networking output, which can create reputational risks and expose the organization to phishing attacks.</li>
<li><strong>Literacy is Key</strong>.  AI literacy has become a regulatory expectation, including under the EU AI Act, and staff need to understand both the opportunities and risks of AI systems.</li>
<li><strong>Take Measures to Protect Against Shadow AI</strong>: Whilst a literacy program will be part of this, organizations may want to include traditional software solutions like data loss prevention (DLP) software, and specialist Shadow AI monitoring and blocking services. </li>
<li><strong>Look at Contracts and Developer Due Diligence</strong>: For some organizations the issue might stem from sub-contracted developers. Therefore, they need to ensure contractual protections are in place to meet their compliance and regulatory obligations.  This might also include specific insurance policies, since small developers with only a handful of employees are unlikely to have the financial means to pay out when things go wrong. </li>
<li><strong>Do a Proper Data Protection Impact Assessment</strong>: This isn’t just common sense but may well be a legal requirement.  Whilst organizations want to move quickly in the new AI world, sometimes it is necessary to step back and see if an organization’s legal and compliance obligations are being considered.</li>
</ul>
<h2><strong>A Broader Lesson</strong></h2>
<p>Agentic AI has the potential to transform the way organizations operate. However, the OpenClaw exposure highlights how quickly innovation can outpace governance. For security professionals, the issue is not just a single vulnerable tool, but a broader shift towards autonomous, highly integrated systems operating with extensive permissions and limited oversight. Without appropriate controls, these systems can introduce significant and systemic risk.</p>
<p>Organizations that improve visibility, strengthen governance and invest in AI literacy will be far better placed to realize the benefits of Agentic AI while managing its risks effectively.</p>
<p>Image credit: Stockinq / Shutterstock.com</p>
</div>
<p><br />
<br /><a href="https://www.infosecurity-magazine.com/opinions/openclaw-exposes-real-security/" style="font-size: 11px;color:#D5DBDB">Source</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://cyberwiredaily.com/openclaw-exposes-the-real-cybersecurity-risks-of-agentic-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Systemic Flaw in MCP Protocol Could Expose 150 Million Downloads</title>
		<link>https://cyberwiredaily.com/systemic-flaw-in-mcp-protocol-could-expose-150-million-downloads/</link>
					<comments>https://cyberwiredaily.com/systemic-flaw-in-mcp-protocol-could-expose-150-million-downloads/#respond</comments>
		
		<dc:creator><![CDATA[Team-CWD]]></dc:creator>
		<pubDate>Fri, 17 Apr 2026 07:19:38 +0000</pubDate>
				<category><![CDATA[Cyber Security]]></category>
		<guid isPermaLink="false">https://cyberwiredaily.com/systemic-flaw-in-mcp-protocol-could-expose-150-million-downloads/</guid>

					<description><![CDATA[Security researchers have warned of a “critical, systemic” vulnerability in the model context protocol (MCP) which could have a significant impact on the AI supply chain. MCP is a popular open source standard created by Anthropic which allows AI models to connect to external data and systems. However, in a report published on April 15, researchers at [...]]]></description>
										<content:encoded><![CDATA[<p> <br />
</p>
<div id="layout-2da84f92-c3fc-4c02-9608-b1a4792b303c" data-layout-id="2" data-edit-folder-name="text" data-index="0">
<p>Security researchers have warned of a “critical, systemic” vulnerability in the model context protocol (MCP) which could have a significant impact on the AI supply chain.</p>
<p>MCP is a popular open source standard created by Anthropic which allows AI models to connect to external data and systems.</p>
<p>However, <a href="https://www.ox.security/blog/the-mother-of-all-ai-supply-chains-critical-systemic-vulnerability-at-the-core-of-the-mcp/?_gl=1*4551xv*_up*MQ..*_ga*MzcwNjU4OTgwLjE3NzYzMzM3NTc.*_ga_BEXTPVWPX8*czE3NzYzMzM3NTUkbzEkZzAkdDE3NzYzMzM3NTUkajYwJGwwJGgw" target="_self">in a report published on April 15</a>, researchers at Ox Security claimed that a flaw in the protocol could enable arbitrary command execution on any vulnerable system, handing attackers access to sensitive user data, internal databases, API keys, and chat histories.</p>
<p>“This is not a traditional coding error,” warned the vendor.</p>
<p>“It is an architectural design decision baked into Anthropic’s official MCP SDKs across every supported programming language, including Python, TypeScript, Java, and Rust. Any developer building on the Anthropic MCP foundation unknowingly inherits this exposure.”</p>
<p>It said that over 200 open source projects, 150 million downloads, 7,000+ publicly accessible servers and up to 200,000 vulnerable instances in total could be exposed by the vulnerability.</p>
<p><em>Read more on MCP: Hundreds of MCP Servers at Risk of RCE and Data Leaks.</em></p>
<p>According to Ox Security, the exploit mechanism is fairly straightforward.</p>
<p>“MCP’s STDIO interface was designed to launch a local server process. But the command is executed regardless of whether the process starts successfully,” it explained. “Pass in a malicious command, receive an error – and the command still runs. No sanitization warnings. No red flags in the developer toolchain. Nothing.”</p>
<p>In effect, this could result in complete takeover of a target’s system.</p>
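<p>The failure mode described in the report can be sketched generically. The following Python snippet is an illustration only, not the actual Anthropic SDK code; the function name and flow are assumptions. It shows how a launcher that executes a caller-supplied command without sanitization runs that command even when the caller only intended to start a server:</p>

```python
import subprocess
from typing import Optional

def launch_stdio_server(command: str, args: list) -> Optional[subprocess.Popen]:
    """Hypothetical STDIO-style launcher: spawns `command` as a local
    server process and wires up its stdin/stdout pipes."""
    try:
        # Nothing validates `command` before execution. If an attacker
        # controls this value (e.g. via a crafted server configuration),
        # their command is spawned verbatim.
        return subprocess.Popen(
            [command, *args],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
        )
    except OSError:
        # The caller sees an error only if the OS cannot spawn the
        # process at all; a successfully spawned malicious command has
        # already run by the time any protocol-level error surfaces.
        return None

# An attacker-influenced value is executed verbatim:
proc = launch_stdio_server("echo", ["attacker-controlled"])  # any command runs
if proc:
    proc.communicate()
```

<p>In this sketch, the missing check is the sanitization that, per Anthropic's position as reported, is left to the developer: validating or allowlisting <code>command</code> before it reaches the process launcher.</p>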
<h2><strong>Who’s to Blame?</strong></h2>
<p>Ox Security said it had repeatedly tried to persuade Anthropic to patch the vulnerability. However, according to the report, the AI giant said this was “expected behavior.”</p>
<p>“Anthropic confirmed the behavior is by design and declined to modify the protocol, stating the STDIO execution model represents a secure default and that sanitization is the developer’s responsibility,” Ox Security said.</p>
<p>The company argued that pushing responsibility onto developers for securing their code, instead of securing the infrastructure it runs on, is dangerous given the community’s track record on security.</p>
<p>In the meantime, Ox Security has issued over 30 responsible disclosures and discovered over 10 high- or critical-severity CVEs to help patch individual open source projects.</p>
<p>Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, said the research exposed “a shocking gap in the security of foundational AI infrastructure” and that the researchers did the right thing.</p>
<p>&#8220;We are trusting these systems with increasingly sensitive data and real-world actions. If the very protocol meant to connect AI agents is this fragile, and its creators will not fix it, then every company and developer building on top of it needs to treat this as an immediate wake-up call,&#8221; he added.</p>
</div>
<p><br />
<br /><a href="https://www.infosecurity-magazine.com/news/systemic-flaw-mcp-expose-150/" style="font-size: 11px;color:#D5DBDB">Source</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://cyberwiredaily.com/systemic-flaw-in-mcp-protocol-could-expose-150-million-downloads/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Cookeville Hospital Discloses Rhysida Breach Hitting 337,917</title>
		<link>https://cyberwiredaily.com/cookeville-hospital-discloses-rhysida-breach-hitting-337917/</link>
					<comments>https://cyberwiredaily.com/cookeville-hospital-discloses-rhysida-breach-hitting-337917/#respond</comments>
		
		<dc:creator><![CDATA[Team-CWD]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 19:50:12 +0000</pubDate>
				<category><![CDATA[Cyber Security]]></category>
		<guid isPermaLink="false">https://cyberwiredaily.com/cookeville-hospital-discloses-rhysida-breach-hitting-337917/</guid>

					<description><![CDATA[More than 337,000 patients of Cookeville Regional Medical Center (CRMC) in Tennessee have been notified that their personal and medical data was compromised in a July 2025 ransomware attack, the hospital confirmed this week. The 309-bed facility began mailing breach notification letters on April 14, 2026, roughly nine months after the intrusion was detected. Files [...]]]></description>
										<content:encoded><![CDATA[<p> <br />
</p>
<div id="layout-e3c4fb4f-6eea-4378-935e-8f085cd00489" data-layout-id="2" data-edit-folder-name="text" data-index="0">
<p>More than 337,000 patients of Cookeville Regional Medical Center (CRMC) in Tennessee have been notified that their personal and medical data was compromised in a July 2025 ransomware attack, the hospital confirmed this week.</p>
<p>The 309-bed facility began mailing breach notification letters on April 14, 2026, roughly nine months after the intrusion was detected.</p>
<p>Files were accessed or acquired by an unauthorized party between July 11 and July 14, 2025, according to a<a href="https://www.maine.gov/agviewer/content/ag/985235c7-cb95-4be2-8792-a1252b4f8318/fb04ea66-92bb-4a15-b02c-8d1a9f783461.html" style="text-decoration:none;" target="_blank"> filing</a> with the Maine Attorney General&#8217;s Office. A total of 337,917 individuals have been affected. </p>
<h2><strong>Inside the Rhysida Attack on CRMC</strong></h2>
<p>Rhysida, a ransomware-as-a-service operation linked to Russia and active since May 2023, claimed responsibility on August 2, 2025. The gang demanded a ransom of 10 Bitcoin, worth roughly $1.15m at the time, and posted sample files on its dark web leak site. It is unclear whether any ransom was paid.</p>
<p>Information accessed may include names, addresses, dates of birth, Social Security numbers, driver&#8217;s license numbers, financial account details, medical record numbers, treatment information and health insurance data.</p>
<p>CRMC, which serves around 250,000 patients annually across 14 counties in the Upper Cumberland region, is offering 12 months of free identity theft protection through Experian.</p>
<p><em>Read more on Rhysida&#8217;s healthcare targeting: Rhysida Ransomware Analysis Reveals Vice Society Connection</em></p>
<h2><strong>A Year of Pressure on US Healthcare</strong></h2>
<p>The CRMC incident ranks as the eighth-largest US healthcare ransomware breach of 2025 by records compromised, according to<a href="https://www.comparitech.com/news/cookeville-regional-medical-center-warns-338000-people-of-data-breach/" style="text-decoration:none;" target="_blank"> Comparitech</a>, which logged 134 confirmed attacks on US healthcare providers last year, exposing 11.7 million records.</p>
<p>Rhysida alone claimed 91 attacks across all sectors in 2025, with 23 confirmed and an average demand of $1.2m.</p>
<p>Other recent Rhysida healthcare victims include:</p>
<ul>
<li>Florida Lung, Asthma &amp; Sleep Specialists (FL), May 2025, $639,000 demand</li>
<li>MedStar Health (MD), September 2025, $3.09m demand</li>
<li>Spindletop Center (TX), September 2025, $1.65m demand</li>
<li>MACT Health Board (CA), November 2025, $662,000 demand</li>
<li>Heart South Cardiovascular Group (AL), November 2025, $630,000 demand</li>
</ul>
<p>Rebecca Moody, head of data research at Comparitech, said the lengthy investigation timeline reflects the scale of forensic work required after a hospital ransomware hit.</p>
<p>&#8220;It can take a considerable amount of time for organizations to investigate what data has been impacted in these breaches,&#8221; Moody explained.</p>
<p>&#8220;While some organizations avoid using the word &#8216;ransomware&#8217; and don&#8217;t issue any form of data breach notification for months,&#8221; she added, &#8220;this lack of clarity and confirmation can leave those affected open to identity theft and phishing campaigns.&#8221;</p>
<p>Ransomware incidents at US hospitals routinely force extended downtime, canceled appointments and patient diversions even where clinical systems hold up. CRMC said it has put additional security measures in place since the attack.</p>
</div>
<p><br />
<br /><a href="https://www.infosecurity-magazine.com/news/cookeville-medical-center-data/" style="font-size: 11px;color:#D5DBDB">Source</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://cyberwiredaily.com/cookeville-hospital-discloses-rhysida-breach-hitting-337917/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI Companies To Play Bigger Role in CVE Program, Says CISA</title>
		<link>https://cyberwiredaily.com/ai-companies-to-play-bigger-role-in-cve-program-says-cisa/</link>
					<comments>https://cyberwiredaily.com/ai-companies-to-play-bigger-role-in-cve-program-says-cisa/#respond</comments>
		
		<dc:creator><![CDATA[Team-CWD]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 07:02:21 +0000</pubDate>
				<category><![CDATA[Cyber Security]]></category>
		<guid isPermaLink="false">https://cyberwiredaily.com/ai-companies-to-play-bigger-role-in-cve-program-says-cisa/</guid>

					<description><![CDATA[AI companies like OpenAI and Anthropic should play a bigger role in software vulnerability disclosures in the future, according to a leader of the world’s largest vulnerability disclosure scheme. Speaking at the opening of VulnCon26 in Scottsdale, Arizona, on April 14, Lindsey Cerkovnik said AI companies “should be better represented&#8221; in the Common Vulnerabilities and [...]]]></description>
										<content:encoded><![CDATA[<p> <br />
</p>
<div id="layout-3e0d934d-5b3c-4c45-ab37-e834bea8e6ea" data-layout-id="2" data-edit-folder-name="text" data-index="0">
<p>AI companies like OpenAI and Anthropic should play a bigger role in software vulnerability disclosures in the future, according to a leader of the world’s largest vulnerability disclosure scheme.</p>
<p>Speaking at the opening of VulnCon26 in Scottsdale, Arizona, on April 14, Lindsey Cerkovnik said AI companies “should be better represented&#8221; in the Common Vulnerabilities and Exposures (CVE) program.</p>
<p>As chief of the Vulnerability Response &amp; Coordination (VRC) Branch at the US Cybersecurity and Infrastructure Security Agency (CISA), sole sponsor of the MITRE-run CVE program, Cerkovnik and her team manage coordinated vulnerability disclosures for the CVE program.</p>
<p>She acknowledged that the program has seen rapid growth in reported vulnerabilities over the past year and that the evolution of AI platforms will likely accelerate that growth.</p>
<p>“With the arrival of new AI tools, some helping discover valid vulnerabilities, others perhaps finding things with less value, we’re at a turning point,” Cerkovnik said.</p>
<h2><strong>Anthropic, OpenAI Speed Up on AI-Powered Vulnerability Research</strong></h2>
<p>Cerkovnik’s VulnCon speech came just a few days after the launch of Claude Mythos Preview, Anthropic’s new large language model (LLM) that promises to autonomously find and fix cybersecurity vulnerabilities at scale.</p>
<p>Today, Mythos is only available to the 40 members of Project Glasswing. </p>
<p>In testing, the model allegedly discovered thousands of zero-day vulnerabilities which had not previously been identified.</p>
<p>The model also autonomously found and chained several vulnerabilities in the Linux kernel, the software that runs most of the world’s servers, which would allow an attacker to escalate from ordinary user access to complete control of a machine.</p>
<p>Upon testing Mythos Preview in a simulation environment, researchers at the UK’s AI Security Institute (AISI) said they “cannot say for sure” whether Mythos Preview would be able to successfully attack “well-defended systems.” </p>
<p>On April 14, <a href="https://x.com/OpenAI/status/2044161906936791179?s=20" target="_blank">OpenAI launched GPT-5.4-Cyber</a>, a version of GPT-5.4 fine-tuned for cybersecurity use cases and only available to members of its &#8220;Trusted Access for Cyber Defense&#8221; program.</p>
<h2><strong>50,000 to 70,000 Expected CVEs in 2026</strong></h2>
<p>Notably, the speed of vulnerability disclosures was already accelerating long before the launch of Mythos and OpenAI&#8217;s GPT-5.4-Cyber.</p>
<p>The CVE program counts 327,000 unique CVE records to date. Of those, Jerry Gamblin, principal engineer at Cisco Threat Detection &amp; Response, observed that 18,247 were reported in 2026, <a href="https://cve.icu/index.html" target="_blank">a 27.9% growth from the same period in 2025</a>.</p>
<p>Additionally, Gamblin calculated an average of 174 CVEs reported daily this year, compared to 132 in 2025.</p>
<p>In February 2026, the Forum of Incident Response and Security Teams (FIRST), which co-hosts VulnCon with the CVE program, forecast a record-breaking 50,000 additional CVEs to be reported in 2026.</p>
<p>Gamblin expects even bigger growth, forecasting<a href="https://cveforecast.org/" target="_blank"> 70,135 CVEs by the end of this year</a>. This would represent a 45.6% growth rate compared to 48,171 in 2025.</p>
<h2><strong>AI Companies Could Become Official Vulnerability Reporters</strong></h2>
<p>Cerkovnik’s call for closer integration of AI companies into the CVE program aligns with the program’s broader diversification strategy.</p>
<p>This strategy was illustrated by the launch of two new forums in July 2025, the CVE Consumer Working Group (CWG) and the CVE Researcher Working Group (RWG).</p>
<p>One of the main objectives is to grow the number of CVE Numbering Authorities (CNAs), <a href="https://www.infosecurityeurope.com/en-gb/blog/guides-checklists/how-to-disclose-software-vulnerability.html" target="_blank">organizations that are allowed to publicly disclose a vulnerability</a> and assign it a CVE identifier.</p>
<p>At the end of March 2026, the CVE program announced it had reached over 500 contributors, with 502 CNAs now registered.</p>
</div>
<div id="layout-dea21d3d-6409-44f8-91bd-ad1adb6eb238" data-layout-id="2" data-edit-folder-name="text" data-index="2">
<p>Diversification of the CVE program also means internationalization of the program, with more Europe-based CNAs to be vetted in the future, commented Nuno Rodrigues Carvalho, head of sector for Incidents and Vulnerability Services at the European Cybersecurity Agency (ENISA).</p>
<p>Speaking to <em>Infosecurity</em>, his colleague, Johannes Kaspar Clos, a responsible disclosure expert at ENISA, said he would welcome AI companies to also become CNAs.</p>
<p>“We need to include a diverse crowd of cybersecurity practitioners, from product and national computer emergency response teams (CERTs) and computer security incident response teams (CSIRTs) to researchers and vulnerability finders. Anthropic is one example of a company that identified vulnerabilities and is therefore rightfully mentioned as a potential CNA,” Clos said.</p>
<p>While he welcomed the launch of Claude Mythos and other AI-powered tools allowing researchers to find more vulnerabilities, Clos added that he would have preferred such models&#8217; capabilities to be disclosed &#8220;before the products are pushed to the market.&#8221;</p>
<p>&#8220;Security testing should be implemented before users are put at risk,&#8221; he added.</p>
<h2><strong>CVE Program: A “Top Priority” for CISA</strong></h2>
<p>Finally, Cerkovnik said the CVE program is “a top priority” for CISA and its parent administration, the US Department of Homeland Security (DHS), and that the security agency will continue funding the program in the future.</p>
<p><em>Read now: CISA Launches Roadmap for the CVE Program</em></p>
<p>While she declined to provide any specifics, she said, “Contracts and funding for the CVE program are secure. Funding has never been an issue.”</p>
<p>However, she highlighted that DHS was still technically in a shutdown situation and that this currently complicates decision-making at CISA, including around spending for outreach opportunities such as her attendance at VulnCon.</p>
<p><em>Image credits: Koshiro K / gguy / Shutterstock.com</em></p>
</div>
<p><br />
<br /><a href="https://www.infosecurity-magazine.com/news/ai-companies-to-play-bigger-role/" style="font-size: 11px;color:#D5DBDB">Source</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://cyberwiredaily.com/ai-companies-to-play-bigger-role-in-cve-program-says-cisa/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Malicious Chrome Extensions Campaign Exposes User Data</title>
		<link>https://cyberwiredaily.com/malicious-chrome-extensions-campaign-exposes-user-data/</link>
					<comments>https://cyberwiredaily.com/malicious-chrome-extensions-campaign-exposes-user-data/#respond</comments>
		
		<dc:creator><![CDATA[Team-CWD]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 06:54:04 +0000</pubDate>
				<category><![CDATA[Cyber Security]]></category>
		<guid isPermaLink="false">https://cyberwiredaily.com/malicious-chrome-extensions-campaign-exposes-user-data/</guid>

					<description><![CDATA[A large-scale campaign involving 108 malicious Chrome extensions has been uncovered, affecting roughly 20,000 users. The extensions, spread across categories such as gaming, social media tools and translation utilities, appear legitimate but secretly collect sensitive data. All are linked to a single command-and-control (C2) infrastructure to enable operators to aggregate stolen information in one place. [...]]]></description>
										<content:encoded><![CDATA[<p> <br />
</p>
<div id="layout-b0d0b62c-a665-4815-9f94-3edbe55b4a0f" data-layout-id="2" data-edit-folder-name="text" data-index="0">
<p>A large-scale campaign involving 108 malicious Chrome extensions has been uncovered, affecting roughly 20,000 users.</p>
<p>The extensions, spread across categories such as gaming, social media tools and translation utilities, appear legitimate but secretly collect sensitive data. All are linked to a single command-and-control (C2) infrastructure to enable operators to aggregate stolen information in one place.</p>
<p>The campaign,<a href="https://socket.dev/blog/108-chrome-ext-linked-to-data-exfil-session-theft-shared-c2" style="text-decoration:none;" target="_blank"> identified</a> by security researchers at Socket, stands out for its breadth and coordination. Although the extensions were published under five separate developer identities, the team found consistent backend systems and shared operational patterns across all of them.</p>
<h2><strong>Several Attack Techniques</strong></h2>
<p>The research highlighted several distinct attack techniques deployed simultaneously. Among the most serious is a Telegram-focused extension that captures active web sessions every 15 seconds, allowing full account access without passwords or <a href="https://www.infosecurityeurope.com/en-gb/blog/future-thinking/phishing-resistant-mfa-explained.html" target="_self">multi-factor authentication</a> (MFA).</p>
<p>Other extensions harvest Google account details using OAuth2 permissions, inject ads by bypassing browser security protections or open arbitrary web pages through hidden backdoors. Many operate continuously in the background, even if users never actively interact with them.</p>
<p>Key behaviors identified include:</p>
<ul>
<li>
<p>54 extensions collecting Google profile data</p>
</li>
<li>
<p>45 extensions containing a persistent backdoor triggered at browser start-up</p>
</li>
<li>
<p>Multiple tools injecting scripts or ads into popular platforms like YouTube and TikTok</p>
</li>
<li>
<p>One extension acting as a translation proxy through attacker-controlled servers</p>
</li>
</ul>
<h2><strong>Dual Behavior Complicates Detection</strong></h2>
<p>According to Socket, the extensions often deliver on their advertised functionality, such as games or messaging tools, while masking malicious activity running in the background. This dual behavior makes detection difficult for users.</p>
<p><em>Read more on browser extension security risks: Experts Sound Alarm Over &#8220;Prompt Poaching&#8221; Browser Extensions</em></p>
<p>The infrastructure also supports a Malware-as-a-Service (MaaS) model, where stolen data and active sessions can be accessed by third parties. Researchers linked the entire operation to a single operator through shared cloud resources, reused code and overlapping account identifiers.</p>
<p>All 108 extensions were still available at the time of discovery. The appropriate security teams have been notified, and takedown requests have been submitted.</p>
<p><em>Infosecurity </em>contacted Google for comment, but has not yet received a response. </p>
<p><em>Image credit: Mijansk786 / Shutterstock.com</em></p>
</div>
<p><br />
<br /><a href="https://www.infosecurity-magazine.com/news/chrome-extensions-expose-user-data/" style="font-size: 11px;color:#D5DBDB">Source</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://cyberwiredaily.com/malicious-chrome-extensions-campaign-exposes-user-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Microsoft 365 Tenant Security: How to Stay in Control of Your Data</title>
		<link>https://cyberwiredaily.com/microsoft-365-tenant-security-how-to-stay-in-control-of-your-data/</link>
					<comments>https://cyberwiredaily.com/microsoft-365-tenant-security-how-to-stay-in-control-of-your-data/#respond</comments>
		
		<dc:creator><![CDATA[Team-CWD]]></dc:creator>
		<pubDate>Fri, 10 Apr 2026 05:59:11 +0000</pubDate>
				<category><![CDATA[Cyber Security]]></category>
		<guid isPermaLink="false">https://cyberwiredaily.com/microsoft-365-tenant-security-how-to-stay-in-control-of-your-data/</guid>

					<description><![CDATA[When something goes wrong in Microsoft 365, it’s rarely a single clean “incident.” It’s a chain: a credential reused, a misconfigured Conditional Access policy, a risky mailbox rule, a guest with too much access. Tenant resilience is about making sure that, even when those chains start forming, you stay in control of identity, configuration and [...]]]></description>
<content:encoded><![CDATA[
<div id="layout-90cbd54a-9504-4627-971a-cd0dd38739bb" data-layout-id="2" data-edit-folder-name="text" data-index="0">
<p>When something goes wrong in Microsoft 365, it’s rarely a single clean “incident.” It’s a chain: a credential reused, a misconfigured Conditional Access policy, a risky mailbox rule, a guest with too much access. Tenant resilience is about making sure that, even when those chains start forming, you stay in control of identity, configuration and collaboration – and can recover quickly without guesswork.</p>
<p>In this article we’ll look at what actually breaks tenants under pressure and how to harden Microsoft 365 against those failures.</p>
<h2><strong>Three Ways Microsoft 365 Fails Under Pressure</strong></h2>
<p>Most Microsoft 365 “bad days” fall into three patterns. Understanding them gives you a concrete blueprint for resilience.</p>
<h3><strong>1. Identity Goes Sideways</strong></h3>
<p>This is the classic story: a phishing email slips through, a user accepts a fake prompt, or an attacker buys leaked credentials on an underground market. They sign in legitimately, often bypassing poorly enforced policies.</p>
<p>Identity failures typically look like:</p>
<ul>
<li>Inconsistent MFA: high‑risk users and admins not fully covered</li>
<li>Excessive standing privileges: too many global admins, or broad “just in case” admin roles</li>
<li>Over‑trusted applications: Entra (Azure AD) apps and OAuth grants with powerful permissions no one is actively watching</li>
</ul>
<p>From there, attackers hunt for:</p>
<ul>
<li>Ways to escalate privileges (unused admin roles, poorly scoped permissions)</li>
<li>Weak Conditional Access coverage they can slip through</li>
<li>Mailboxes or Teams spaces that give them useful intelligence</li>
</ul>
<p>A resilient tenant assumes identity will be attacked constantly and designs controls to (a) make successful compromise less likely, and (b) ensure that, if it happens, the blast radius stays small.</p>
<p><strong>Resilience moves:</strong></p>
<ul>
<li>Require MFA for all interactive users and admins, and retire legacy authentication wherever possible.</li>
<li>Reduce standing admin privileges; move to scoped, task‑based roles and just‑in‑time access where you can.</li>
<li>Inventory Entra apps and OAuth consents; retire unused apps and trim over‑privileged ones.</li>
</ul>
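<p>The app and consent inventory above can be sketched as a simple triage pass. A minimal sketch: the scope names below are real Microsoft Graph permission names, but the sample grant list, the risk set, and the <code>risky_grants</code> helper are illustrative assumptions, not a live tenant export or an official API.</p>

```python
# Sketch: flag OAuth permission grants that carry high-impact scopes.
# Scope strings are real Microsoft Graph permission names; the grant
# records and the helper below are hypothetical, for illustration only.

HIGH_RISK_SCOPES = {
    "Mail.ReadWrite", "Mail.Send", "Directory.ReadWrite.All",
    "RoleManagement.ReadWrite.Directory", "Files.ReadWrite.All",
}

def risky_grants(grants):
    """Return grants whose space-separated scopes intersect the risk set."""
    flagged = []
    for g in grants:
        hits = set(g["scope"].split()) & HIGH_RISK_SCOPES
        if hits:
            flagged.append({"app": g["app"], "risky_scopes": sorted(hits)})
    return flagged

# Illustrative inventory, shaped loosely like a consent export
inventory = [
    {"app": "Contoso Mail Helper", "scope": "User.Read Mail.ReadWrite Mail.Send"},
    {"app": "Lunch Poll", "scope": "User.Read"},
]
print(risky_grants(inventory))
# → [{'app': 'Contoso Mail Helper', 'risky_scopes': ['Mail.ReadWrite', 'Mail.Send']}]
```

<p>In practice the grant list would come from the Entra admin center or a Graph export; the point is that triage is a mechanical set intersection once the inventory exists.</p>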
<h3><strong>2. Configuration Drifts into Dangerous Territory</strong></h3>
<p>Microsoft 365 tenants are never static. Admins make changes to fix issues or support projects. Microsoft ships new features and adjusts defaults. Service owners tweak policies under pressure from the business. Over months and years, the environment can drift far from the “standard” that security teams think they’re enforcing.</p>
<p>Configuration failures often look like:</p>
<ul>
<li>Conditional Access policies quietly loosened for a specific project – then never tightened again</li>
<li>Mailbox forwarding rules and transport rules changed, but not logged against a change ticket</li>
<li>Sharing settings relaxed for one department and unknowingly applied more broadly</li>
<li>Security features turned off temporarily to troubleshoot, and forgotten in that state</li>
</ul>
<p>Individually, these changes might seem minor. Cumulatively, they can create:</p>
<ul>
<li>Hidden backdoors for attackers</li>
<li>Unexpected lockouts when a later change interacts badly with an old exception</li>
<li>Unclear responsibility – no one can say exactly when or why a risky change happened</li>
</ul>
<p><strong>Resilience moves:</strong></p>
<ul>
<li>Define a practical baseline for critical settings (identity, mail, sharing, retention) and treat deviations as risk.</li>
<li>Implement continuous configuration monitoring so that high‑impact changes trigger alerts, not surprises.</li>
<li>Tie configuration changes to tickets or documented approvals so you can distinguish legitimate work from tampering.</li>
</ul>
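<p>The "baseline plus deviation alerts" idea can be sketched in a few lines. The setting names and values below are hypothetical placeholders; real tenants expose these through Graph or admin-center exports, not this dict shape.</p>

```python
# Sketch: treat any deviation from a documented baseline as a finding.
# Setting names/values are illustrative assumptions, not real tenant keys.

BASELINE = {
    "legacy_auth_enabled": False,
    "external_mail_forwarding": "Off",
    "sharepoint_default_sharing": "ExistingGuests",
}

def drift_report(current):
    """Return settings whose current value differs from the baseline."""
    return [
        {"setting": k, "expected": v, "actual": current.get(k)}
        for k, v in BASELINE.items()
        if current.get(k) != v
    ]

current = {
    "legacy_auth_enabled": False,
    "external_mail_forwarding": "On",  # loosened for a project, never reverted
    "sharepoint_default_sharing": "ExistingGuests",
}
print(drift_report(current))
# → [{'setting': 'external_mail_forwarding', 'expected': 'Off', 'actual': 'On'}]
```

<p>Each finding carries the expected and actual value, which is exactly what an alert (or a change ticket lookup) needs to distinguish legitimate work from tampering.</p>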
<h3><strong>3. Recovery is Slow, Manual and Uncertain</strong></h3>
<p>Many organizations have invested in backing up their Microsoft 365 data – such as mailboxes, SharePoint, OneDrive and Teams. That’s vital, but it addresses only one part of the recovery story.</p>
<p>When an attacker (or an admin mistake) disrupts tenant configuration, the real questions are:</p>
<ul>
<li>Can you regain control of admin and identity quickly?</li>
<li>Can you roll back risky configuration changes without breaking everything else?</li>
<li>Do you know which policies, rules, and settings changed – and what “good” looked like beforehand?</li>
</ul>
<p>Without configuration resilience, recovery often comes down to:</p>
<ul>
<li>Manually clicking through portals to compare settings</li>
<li>Trying to reconstruct policies from old screenshots, export files, or documentation</li>
<li>Accepting “close enough” because no one can be sure what the exact previous state was</li>
</ul>
<p>That’s slow, stressful, and risky, especially under regulatory or customer pressure.</p>
<p><strong>Resilience moves:</strong></p>
<ul>
<li>Back up tenant configuration, not just data, across Entra ID, Exchange, SharePoint/OneDrive, Teams, and Purview.</li>
<li>Practice restoring configurations in non‑production tenants so you understand what will happen when you roll back.</li>
<li>Separate “emergency restore” scenarios (e.g., lockout, mass misconfiguration) from routine drift correction and prepare playbooks for both.</li>
</ul>
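<p>"Back up configuration, not just data" can be sketched as versioned snapshots plus a diff against the saved state. A minimal sketch under stated assumptions: the storage layout, setting names, and both helpers are illustrative, not a product feature.</p>

```python
# Sketch: timestamped JSON snapshots of tenant configuration, plus a diff
# that shows what changed since the last known-good state. The config
# shape and file naming are illustrative assumptions.
import json, datetime, pathlib, tempfile

def snapshot(config, directory):
    """Write a timestamped JSON snapshot and return its path."""
    ts = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = pathlib.Path(directory) / f"tenant-config-{ts}.json"
    path.write_text(json.dumps(config, indent=2, sort_keys=True))
    return path

def diff_against(snapshot_path, current):
    """Compare current settings against a saved snapshot."""
    saved = json.loads(pathlib.Path(snapshot_path).read_text())
    return {
        k: {"saved": saved.get(k), "current": current.get(k)}
        for k in saved.keys() | current.keys()
        if saved.get(k) != current.get(k)
    }

with tempfile.TemporaryDirectory() as d:
    good = {"ca_policy_count": 12, "mail_forwarding": "Off"}
    p = snapshot(good, d)
    changed = {"ca_policy_count": 11, "mail_forwarding": "Off"}
    print(diff_against(p, changed))
# → {'ca_policy_count': {'saved': 12, 'current': 11}}
```

<p>The same diff output that drives drift correction also answers the "emergency restore" question: it tells you exactly which settings to roll back and what their known-good values were.</p>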
<h2><strong>Copilot and AI: Accelerators of Both Value and Risk</strong></h2>
<p>Microsoft 365 Copilot changes the resilience equation by making content more discoverable to users – and, by extension, to anyone who compromises those users.</p>
<p>If a user has access to overly broad SharePoint sites, old Teams workspaces, or loosely governed OneDrive content, Copilot simply makes it faster to find and recombine that information. The same is true for compromised accounts: attackers can use AI‑driven search to map your environment far more quickly than before.</p>
<p>To keep Copilot from becoming a force multiplier for exposure:</p>
<ul>
<li>Tighten access and sharing before rollout: clean up over‑permissioned sites, groups, and Teams.</li>
<li>Enforce sensitivity labels and DLP policies so that even discoverable content can’t be automatically exfiltrated.</li>
<li>Review and govern plugins and connectors; treat them as additional integration points that must be vetted.</li>
<li>Plan monitoring around Copilot usage to spot unusual patterns and potential misuse.</li>
</ul>
<p>Resilient tenants treat Copilot as a reason to accelerate least privilege and data governance – not as an add-on feature.</p>
<h2><strong>A Practical Roadmap to a More Resilient Tenant</strong></h2>
<p>Rather than chasing every new feature or control, focus on a small set of high‑leverage steps:</p>
<p><strong>Clamp down on identity risk</strong></p>
<ol style="list-style-type:lower-alpha">
<li>Enforce MFA and modern auth widely.</li>
<li>Reduce the number and scope of standing admin roles.</li>
<li>Review and clean up high‑impact Entra apps and OAuth consents.</li>
</ol>
<p><strong>Stabilize configuration</strong></p>
<ol style="list-style-type:lower-alpha">
<li>Decide what “good” looks like for key policies (CA, sharing, retention, DLP, external collaboration).</li>
<li>Implement continuous monitoring for changes to those policies.</li>
<li>Make sure ownership and approval paths for high‑risk settings are clear.</li>
</ol>
<p><strong>Add configuration‑level backup and restore</strong></p>
<ol style="list-style-type:lower-alpha">
<li>Introduce regular, comprehensive backup of tenant configuration.</li>
<li>Test “roll back to yesterday’s known‑good” scenarios in lower environments.</li>
<li>Clarify who declares a configuration emergency and who executes the restore.</li>
</ol>
<p><strong>Prepare for AI‑driven exposure</strong></p>
<ol style="list-style-type:lower-alpha">
<li>Clean up access and sharing; remove stale and excessive permissions.</li>
<li>Enforce labels and DLP where sensitive data lives.</li>
<li>Build Copilot and AI usage into your monitoring and incident response thinking.</li>
</ol>
<p>These steps give you a foundation that makes future improvements easier and more reliable.</p>
<h2><strong>How CoreView Helps You Reach Your Tenant Resilience Goals</strong></h2>
<p>You can build resilience with native tools and custom automation, but it often requires stitching together exports, scripts, logs, and multiple admin portals. CoreView is designed to bring those resilience capabilities into a single operational model for Microsoft 365 teams.</p>
<p>With CoreView, organizations can:</p>
<ul>
<li><strong>See and control who really has power in the tenant</strong>
<ul style="list-style-type:circle">
<li>Analyze admin roles and privileges, then define “just enough” permissions with fine‑grained scope.</li>
<li>Segment administration by region, business unit, or function so that no single admin has unnecessary global reach.</li>
</ul>
</li>
<li><strong>Detect risky configuration changes before they become incidents</strong>
<ul style="list-style-type:circle">
<li>Continuously monitor changes across Entra ID, Exchange, SharePoint/OneDrive, Teams and Purview.</li>
<li>Focus alerts on high‑impact changes – like Conditional Access adjustments, mailbox forwarding rules, or sharing policy updates.</li>
</ul>
</li>
<li><strong>Back up and restore tenant configuration, not just data</strong>
<ul style="list-style-type:circle">
<li>Capture configuration state across critical workloads on a regular cadence.</li>
<li>Restore configuration to a known‑good state after misconfigurations, attacks, or failed change deployments – without manually rebuilding policies.</li>
</ul>
</li>
<li><strong>Govern collaboration and external access at scale</strong>
<ul style="list-style-type:circle">
<li>Get a clear view of guests, external users and high‑risk sharing patterns.</li>
<li>Apply consistent controls to keep collaboration productive without losing control of who can see what.</li>
</ul>
</li>
<li><strong>Stay audit‑ready across multiple frameworks</strong>
<ul style="list-style-type:circle">
<li>Map real configuration and change history to requirements from NIST, CIS, CMMC, HIPAA and others.</li>
<li>Produce evidence of how your tenant is configured today, how it changed over time and how you would restore it after a disruption.</li>
</ul>
</li>
</ul>
<p>The result is a Microsoft 365 tenant that’s not just secure on paper, but actually resilient in practice: you know who has access, you know when important settings change and you have a reliable way to get back to a trusted state when something goes wrong.</p>
</div>
<p><br />
<br /><a href="https://www.infosecurity-magazine.com/blogs/microsoft-365-tenant-security/" style="font-size: 11px;color:#D5DBDB">Source</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://cyberwiredaily.com/microsoft-365-tenant-security-how-to-stay-in-control-of-your-data/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
