
How Your Ads Will Win in 2026
Great ads don’t happen by accident. And in a world flooded with AI-generated content, the difference between “nice idea” and “real impact” matters more than ever.
Join award-winning creative strategist Babak Behrad and Neurons CEO Thomas Z. Ramsøy for a practical, science-backed webinar on what actually drives performance in modern advertising.
They’ll break down how top campaigns earn attention, stick in your target’s memory, and build brands people remember.
You’ll see how to:
Apply neuroscience to creative decisions
Design branding moments that actually land
Make ads feel instantly relevant to real humans
In 2026, you have to earn attention. This webinar will show you exactly how to do it.
This week we’re talking about something that feels like science fiction but is actually a preview of enterprise reality.
A viral AI agent network called Moltbook blew up online and immediately exposed a hard truth. We are watching machines move faster than our governance models can keep up. This edition is about what that means for identity leaders. Not someday. Now.
Let’s get into it.
Moltbook Isn’t Just Weird. It’s a Warning.
A viral AI social network exploded onto the internet and suddenly we’re watching machines socialize faster than enterprises can govern them.
The platform is called Moltbook. It’s essentially a Reddit‑style environment where AI agents interact with each other while humans mostly observe. On the surface it’s entertaining. Underneath it’s a flashing red warning light for identity and security teams. Security researchers were able to breach Moltbook’s backend in minutes. The exposure included tens of thousands of emails, private messages, and roughly 1.5 million API authentication tokens. That means attackers could impersonate AI agents or inject malicious instructions directly into an autonomous ecosystem.
This is not a quirky bug. It is a preview of what happens when agent identities exist without mature governance.
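One reason the leaked tokens matter so much is that static, long-lived API keys are bearer credentials: whoever holds one *is* the agent. A common mitigation is short-lived, signed tokens, so a leaked token expires quickly and any tampering breaks the signature. Here's a minimal sketch of that idea using only the Python standard library; the function names, the token format, and the placeholder signing key are all illustrative assumptions, not Moltbook's actual design.

```python
import base64, hashlib, hmac, json, time

# Hypothetical sketch: short-lived, HMAC-signed agent tokens limit the
# blast radius of a credential leak. A stolen token dies at expiry, and
# any modification invalidates the signature.

SECRET = b"server-side-signing-key"  # placeholder; keep real keys in a KMS


def mint_token(agent_id: str, ttl_seconds: int = 300) -> str:
    """Issue a token carrying the agent's identity and an expiry time."""
    payload = json.dumps({"agent": agent_id, "exp": time.time() + ttl_seconds})
    body = base64.urlsafe_b64encode(payload.encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"


def verify_token(token: str):
    """Return the claims if the token is authentic and unexpired, else None."""
    try:
        body, sig = token.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged
    claims = json.loads(base64.urlsafe_b64decode(body))
    if time.time() >= claims["exp"]:
        return None  # leaked-but-expired tokens are dead on arrival
    return claims


token = mint_token("agent-42", ttl_seconds=60)
print(verify_token(token))           # valid: returns the claims dict
print(verify_token(token + "00"))    # corrupted signature: returns None
```

The point isn't this exact scheme (in practice you'd reach for something like OAuth-issued JWTs); it's that 1.5 million *long-lived* keys sitting in a database is a design choice, and expiry plus signing is table stakes for non-human credentials.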
Here’s the part that matters for enterprise.
First, identity is no longer human‑first. The “user” might be an AI agent executing actions across systems. If it has permissions, it carries risk. The difference is speed and scale. Agents act faster than any human approval loop was designed to handle.
Second, governance is not optional anymore. Enterprises are already deploying AI assistants with access to email, documents, workflows, and APIs. Without guardrails, you are creating shadow identities that operate outside visibility.
Third, our threat models are outdated. Agent‑to‑agent communication, prompt injection, and credential leakage are not edge cases. They are emerging attack surfaces that traditional IAM tooling does not fully see.
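Those three points translate into something concrete: treat agent identities as first-class principals with explicit scopes, short expiry, and a machine-speed rate limit, and deny by default. Here's a rough sketch of what that authorization gate could look like; the class and field names are hypothetical, and a real deployment would sit this behind your IAM or policy engine rather than in application code.

```python
import time
from dataclasses import dataclass, field

# Hypothetical sketch: a deny-by-default gate for non-human identities.
# Scopes enforce least privilege, expiry keeps credentials short-lived,
# and a sliding window caps how fast an agent can act.

@dataclass
class AgentIdentity:
    agent_id: str
    scopes: set                 # actions this agent is explicitly granted
    expires_at: float           # epoch seconds; short-lived by design
    max_actions_per_min: int = 60
    _window: list = field(default_factory=list)


def authorize(agent: AgentIdentity, action: str) -> bool:
    """Allow only in-scope, unexpired, rate-limited agent actions."""
    now = time.time()
    if now >= agent.expires_at:
        return False            # credential has aged out
    if action not in agent.scopes:
        return False            # least privilege: not granted, not allowed
    # Keep only the last minute of successful actions, then check the cap.
    agent._window = [t for t in agent._window if now - t < 60]
    if len(agent._window) >= agent.max_actions_per_min:
        return False            # machine-speed burst: throttle it
    agent._window.append(now)
    return True


agent = AgentIdentity("crawler-7", {"read:docs"},
                      expires_at=time.time() + 900, max_actions_per_min=2)
print(authorize(agent, "read:docs"))    # True: in scope, fresh, under limit
print(authorize(agent, "send:email"))   # False: never granted
print(authorize(agent, "read:docs"))    # True: second action in the window
print(authorize(agent, "read:docs"))    # False: rate limit hit
```

Nothing here is exotic; it's the same access-control logic we've applied to humans for decades. The shift is that the rate limit and the expiry do the work a human approval loop used to do.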
Moltbook did not invent the problem. It exposed it publicly. Autonomous agents are moving into real environments faster than our frameworks are evolving.
Identity leaders now have a choice. Treat this as internet theater or recognize it as an early signal that identity security must expand to govern non‑human actors with real access and real impact.
This is the next frontier of identity. It is not coming. It is here.
News
Security researchers from Wiz breached Moltbook’s infrastructure and uncovered exposed databases containing emails, private messages, and approximately 1.5 million API keys. The incident highlights how fragile AI agent ecosystems can be when identity controls and key management are immature. For identity leaders, it is a real‑world example of agent identity risk moving from theory to practice.
Varonis announced the acquisition of AllTrue, an AI trust and security specialist, signaling that enterprises are beginning to treat AI governance and risk as core security priorities. The deal reflects a broader shift. Data security and identity risk are converging as organizations deploy autonomous tools that require tighter oversight.
The Identity Theft Resource Center reported over 3,300 U.S. data compromises last year, the highest on record. At the same time, fewer breach disclosures include meaningful technical details. That combination creates a harder environment for defenders trying to understand identity‑driven risk. Rising incident volume plus shrinking transparency means organizations must rely more on proactive identity controls rather than post‑incident learning.
Podcasts
Announcements
A couple of announcements: if you haven't already, check out Identity Season One, a little mini-series blog about starting an identity program. It was really fun to put together. We're up to Episode 4, and new episodes drop on Mondays. You can catch up here: Identity Season One
Also, an idea I've been noodling on for Insider Access: let's vibe code a modern identity security platform and open source it. I'm thinking about taking this on as a project and inviting the premium subscribers to take part in it. Mostly because I think it would be fun to do, but also because the community needs an open source tool to learn and practice on. Thoughts?
Season Four of the IDJ Podcast is also officially underway. We are scheduling interviews and signing up sponsors, so if you're interested in being a guest on the podcast, or in sponsoring, hit the inbox. We have limited slots, as we're only doing 25 episodes per season. We're also planning a couple of live shows, so be on the lookout for those.
Finally, if you only read this in email, I'd encourage you to check out the website. We've got all kinds of good stuff available in our new Digital Products section.
The Last Word
If you think your IAM model was ready for AI, Moltbook just proved otherwise. We are entering a world where a user is not always human. It might be an autonomous agent making decisions, executing workflows, and touching sensitive systems at machine speed. And we are still trying to govern that world with assumptions built for human logins and static roles.

Yes, on one hand this is exciting. Hell, I'm considering buying a Mac Mini, downloading OpenClaw, and seeing if I can finally build my version of Jarvis (I'd call it Darius). But the security person in me can't help but cringe in terror. We are at a precipice, folks, and the real lesson is simple. Every identity with access carries risk. It does not matter whether that identity has a face or a model architecture. If it can act, it must be governed.

AI is not a future problem. It is an identity problem happening in real time. The teams that recognize that early will define the next generation of security. The ones that do not will spend the next decade reacting to incidents they never modeled. The frontier is open. Choose whether you are mapping it or chasing it.
Be good to each other, be kind to each other, love each other





