ActiveFence is now Alice
Alice - Blog

Written in the Cards

In a landscape that shifts as quickly as AI Wonderland, we’re here to help you read the signs. Explore fresh tales, industry trends, and timely dispatches from the heart of Alice.


The 7 Subtle Sins of Agentic AI: Behavioral risks in autonomous systems

Mar 17, 2026

Explore the 7 subtle “sins” of agentic AI—behavioral patterns that quietly introduce risk. Learn how autonomy can drift and how to keep systems aligned.

Learn More

Curiouser Soundbites: AI Risk Is Compounding and the Window to Act Is Closing

Mar 12, 2026 - 3 min read

Most AI governance conversations sound like they were written by a compliance team. This one doesn't.

Learn More

Building Boldly, Responsibly: How Lovable is Strengthening Safety in the Era of AI-Powered Creation

Mar 2, 2026 - 2 min read

What we learned from partnering with Lovable to strengthen safety in AI-powered website creation.

Learn More

Don't Let AI Experiments Become Business Risk

Mar 2, 2026 - 3 min read

As AI experiments scale, so does risk. Discover how to build continuous AI governance and security into production systems.

Learn More

The Rise and Risk of Reasoning Agents

Feb 18, 2026 - 6 min read

As AI agents gain the ability to reason, plan, and act autonomously, their internal thinking becomes a new attack surface that must be protected just as carefully as the tools they use.

Learn More

Securing Agentic AI: Meeting the 2026 Federal Assurance Bar

Feb 11, 2026 - 4 min read

Learn how the FY2026 NDAA and new NIST frameworks are shifting agentic AI from experimental to regulated. Master the security controls and Zero Trust principles required to win federal AI contracts this year.

Learn More

The Art of the Unseen: Why We Built Caterpillar

Feb 4, 2026 - 3 min read

Learn how to secure your AI agent ecosystem with Caterpillar, a free open-source tool designed to unmask hidden vulnerabilities, injection paths, and unsafe configurations in AI skills.

Learn More

Trusted by security and product teams in the world's most regulated industries

Alice brings years of adversarial intelligence expertise to AI security. We give enterprise teams the coverage that generic guardrails and one-time audits can't match.

Get a Demo