
#ciso


Happy Canada Day! In this episode of the Chasing Entropy Podcast, I speak with Mark Hillick, CISO at Brex, about the changing role of security leaders in a world shaped by AI, rapid innovation, and shifting business expectations. From building security culture at Riot Games to navigating Silicon Valley’s AI gold rush, Hillick offers grounded insight into what it takes to lead a modern, business-aligned security team.

Link: buzzsprout.com/2497520/episode  #AI #CISO #XAM #AgenticAI #Podcast #Infosec #Cybersecurity @1password

Hundreds of Brother printer models are affected by a critical, unpatchable vulnerability (CVE-2024-51978) that allows attackers to generate the default admin password using the device’s serial number—information that’s easily discoverable via other flaws.

748 total models across Brother, Fujifilm, Ricoh, Toshiba, and Konica Minolta are impacted, with millions of devices at risk globally.

Attackers can:
• Gain unauthenticated admin access
• Pivot to full remote code execution
• Exfiltrate credentials for LDAP, FTP, and more
• Move laterally through your network

Brother says the vulnerability cannot be fixed in firmware and requires a change in manufacturing. For now, mitigation = change the default admin password immediately.

Our pentest team regularly highlights printer security as a critical path to system compromise—and today’s news is another example that underscores this risk. This is your reminder: Printers are not “set-and-forget” devices. Treat them like any other endpoint—monitor, patch, and lock them down.
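
If you want a quick first pass before bringing in a pentest team, the minimal sketch below (an illustration, not LMG's tooling) walks a subnet and flags hosts listening on common printer ports so you can verify each one's admin password has actually been changed. The subnet, port list, and timeout are assumed example values; adjust them for your environment.

# Minimal inventory sketch: flag hosts on an assumed subnet that listen on
# common printer ports (raw 9100, IPP 631, LPD 515, web admin 80/443) so each
# one's admin password can be checked. It only detects listeners; it does not
# test credentials. SUBNET, PRINTER_PORTS, and the timeout are assumptions.
import ipaddress
import socket

SUBNET = "192.168.1.0/24"                  # assumed example range; use your own
PRINTER_PORTS = [9100, 631, 515, 80, 443]

def open_ports(host, ports, timeout=0.5):
    """Return the subset of ports accepting a TCP connection on host."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

for addr in ipaddress.ip_network(SUBNET).hosts():
    ports = open_ports(str(addr), PRINTER_PORTS)
    if ports:
        print(f"{addr}: printer-like services on {ports}; verify the admin password is not the default")

A sequential scan like this is slow on a large subnet; the point is simply how little it takes to build the printer inventory you need.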

Need help testing your network for exploitable print devices? Contact us and our pentest team can help!

Read the Dark Reading article for more details on the Brother Printers vulnerability: darkreading.com/endpoint-secur

AI security risks are no longer hypothetical. From blackmail to shutdown resistance, high-agency AI models are pushing the limits of trust and control.

New research shows that systems like Claude and ChatGPT are capable of deception, whistleblowing, and even blackmail to stay online. These aren’t future threats—they’re happening now.

Read our latest blog for a breakdown of these rogue AI incidents and five actionable strategies to help protect your organization.

Read now: lmgsecurity.com/ai-security-ri

LMG Security: "AI Security Risks: When Models Lie, Blackmail, and Refuse to Shut Down." AI security risks are evolving quickly. We share alarming real-world AI issues, including deception, blackmail, and shutdown resistance, and how to protect your organization.

🔐 Cybersecurity is now core to every technical role. DevOps. AppDev. SRE. Architects. Watch "Cybersecurity Skills: A Framework That Works" -- an on-demand webinar -- to learn how to close key security skill gaps for you and your teams.

🎥 Watch now: training.linuxfoundation.org/r

Linux Foundation - Education: "Cybersecurity Skills, Simplified: A Framework That Works." Learn how you can leverage the cybersecurity skills framework for your team.

“You think it’s just a light bulb—but it’s not off. It’s watching, listening… maybe even hacking.”

LMG Security’s @tompohl revealed how $20 smart outlets and light bulbs can be exploited for WiFi cracking, evil twin attacks, and stealth monitoring—turning everyday gadgets into real-world threats.

In our latest blog, we’ll share:

▪ How attackers can exploit everyday IoT gadgets to breach your organization
▪ Advice on how to lock down your smart tech
▪ Tips on segmentation, firmware auditing, and red teaming
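
As a rough illustration of the segmentation tip above (not taken from the blog), the sketch below tries to reach a handful of assumed IoT addresses from a corporate workstation; any connection that succeeds is a hole in the boundary you thought you had. The addresses and ports are placeholders, not guidance specific to your network.

# Rough segmentation check (illustrative only). IOT_TARGETS and PORTS are
# assumptions: substitute addresses from your own IoT VLAN and the services
# those devices actually expose. Run it from the corporate segment; every
# successful connection is a path through the boundary you meant to close.
import socket

IOT_TARGETS = ["192.168.50.10", "192.168.50.11"]   # assumed example IoT addresses
PORTS = [80, 443, 23, 1883]                        # web admin, HTTPS, Telnet, MQTT

for host in IOT_TARGETS:
    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            if s.connect_ex((host, port)) == 0:
                print(f"Reachable from this segment: {host}:{port}")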

Read the blog: lmgsecurity.com/i-have-the-pow

LMG Security: "I Have the Power: IoT Security Challenges Hidden in Smart Bulbs and Outlets." Did you know smart bulbs and outlets could be spying, attacking, or failing silently? Read our advice on how to tackle IoT security challenges in your network!

What Happens When AI Goes Rogue?

From blackmail to whistleblowing to strategic deception, today's AI isn't just hallucinating — it's scheming.

In our new Cyberside Chats episode, LMG Security’s @sherridavidoff and @MDurrin share new AI developments, including:

• Scheming behavior in Apollo’s LLM experiments
• Claude Opus 4 acting as a whistleblower
• AI blackmailing users to avoid shutdown
• Strategic self-preservation and resistance to being replaced
• What this means for your data integrity, confidentiality, and availability

📺 Watch the video: youtu.be/k9h2-lEf9ZM
🎧 Listen to the podcast: chatcyberside.com/e/ai-gone-ro


AI is the new attack surface—are you ready?

From shadow AI to deepfake-driven threats, attackers are finding creative ways to exploit your organization’s AI tools, often without you realizing it.

Watch our new 3-minute video, How Attackers Target Your Company’s AI Tools, for advice on:

▪️ The rise of shadow AI (yes, your team is probably using it!)
▪️ Real-world examples of AI misconfigurations and account takeovers
▪️ What to ask vendors about their AI usage
▪️ How to update your incident response plan for deepfakes
▪️ Actionable steps for AI risk assessments and inventories

Don’t let your AI deployment become your biggest security blind spot.

Watch now: youtu.be/R9z9A0eTvp0


Only one week left to register for our next Cyberside Chats Live event! Join us June 11th to discuss what happens when an AI refuses to shut down, or worse, starts blackmailing users to stay online.

These aren’t science fiction scenarios. We’ll dig into two real-world incidents, including a case where OpenAI’s newest model bypassed shutdown scripts and another where Anthropic’s Claude Opus 4 generated blackmail threats in an alarming display of self-preservation.

Join us as we unpack:
▪ What “high-agency behavior” means in cutting-edge AI
▪ How API access can expose unpredictable and dangerous model actions
▪ Why these findings matter now for security teams
▪ What it all means for incident response and digital trust

Stick around for a live Q&A with LMG Security’s experts @sherridavidoff and @MDurrin. This session will challenge the way you think about AI risk!

Register today: lmgsecurity.com/event/cybersid

LMG Security: "Cyberside Chats: Live! When AI Goes Rogue: Blackmail, Shutdowns, and the Rise of High-Agency Machines." In this quick, high-impact session, we'll dive into the top three cybersecurity priorities every leader should focus on. From integrating AI into your defenses to tackling deepfake threats and tightening third-party risk management, this discussion will arm you with the insights you need to stay secure in the year ahead.