Welcome to my AI Lab Notebook

This is where I study AI not as a product, but as a system shaping human life.

Over time, three themes have defined my work:

1. AI Governance as Architecture: I build frameworks like the AI OSI Stack, persona architecture, and semantic version control because AI needs scaffolding, not slogans.

2. The Human Meaning Crisis in Machine Time: I explore how AI destabilizes identity, trust, and authenticity as machine speed outpaces human comprehension.

3. Power, Distribution, and Responsibility: I examine who benefits from AI, who is displaced, and how governance, economics, and control shape outcomes.

These pillars guide everything I write here. AI’s future won’t be determined by capability alone; it will be determined by the structures, meanings, and power dynamics we build around it.

Thanks for reading.

The Year Compute Broke Governance: Why Google’s Six-Month Doubling Cycle Signals the Collapse of Human-Time Oversight
Critique & Commentary · Dan

The moment that may define the next decade of AI governance arrived quietly inside a Google all-hands meeting. A single slide, delivered without drama, stated that Google must now double its compute every six months and pursue a thousandfold increase within five years. This is more than an engineering target. It signals a shift into a form of acceleration that human institutions are not built to track.
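The two numbers on that slide are actually one number: five years contain ten six-month periods, and ten doublings compound to 2^10 = 1024, so "double every six months" and "a thousandfold in five years" are the same target. A back-of-envelope sketch, assuming steady doubling (the function name and interface here are illustrative, not from the slide):

```python
def compute_growth(years: float, doubling_months: float = 6.0) -> float:
    """Return the multiplicative growth factor after `years`,
    assuming compute doubles once every `doubling_months` months."""
    doublings = years * 12 / doubling_months
    return 2.0 ** doublings

print(compute_growth(5))  # ten doublings -> 1024.0, i.e. the "thousandfold"
```

The same arithmetic shows why oversight lags: a governance cycle that takes eighteen months spans three doublings, meaning the system under review is roughly eight times larger by the time the review concludes.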

Read More
When Fraud Has Infinite Bandwidth: AI-Driven Espionage
Critique & Commentary · Dan

Something fundamental shifted in late 2025. A quiet crack formed in the global cybersecurity order, and most people have not yet realized what slipped through it. For the first time, an AI system did not simply assist an attacker. It became the attacker. The discovery that Claude executed the majority of a state-backed espionage campaign raises a deeper question: what happens when fraud, manipulation, and intrusion occur at machine time while society still responds at human time? This excerpt explores why the old governance assumptions have collapsed, why scams now scale to millions for pennies, and why the future of safety must operate at the infrastructural layer rather than the human layer.

Read More
Michael Burry Did Not Quit: He Stepped Out of a Market That No Longer Knows What It Is
Critique & Commentary · Dan

Michael Burry did not walk away from markets out of fatigue or frustration. He stepped out because the market’s reasoning layer has collapsed. AI-shaped disclosures, synthetic earnings, and automated sentiment have produced an environment where value is no longer measurable in human terms. This essay explores why Burry’s exit is not a market call but a warning about the breakdown of governance and meaning inside modern finance. It also reveals why the AI OSI Stack has become an unexpected map for rebuilding an interpretable market before more investors follow him out the door.

Read More
How I Could Help BlackRock, Vanguard, and State Street Survive the Coming Governance Shock
Critique & Commentary · Dan

Something unusual is happening inside the machinery of American capitalism. What looks like a routine regulatory debate is beginning to reveal the outlines of a much larger struggle for control. The White House is quietly exploring moves that could rewrite how shareholder voting works, and the entire governance system is starting to tremble. If proxy advisers and index giants lose the ability to steer corporate decisions, the balance of power inside public markets could shift overnight. And all of this unfolds at the same time that AI is transforming trust, disclosure, and the very meaning of fiduciary judgment.

Read More
Why Community, Culture, and Local AI Will Define the Next Decade
Critique & Commentary · Dan

Artificial intelligence is accelerating faster than human comprehension, yet the real crisis is not technical. It is cultural. It is civic. Beneath the AI OSI Stack sits a missing layer that determines who shapes the future and who benefits from it. I explore how local AI, community compute, Indigenous governance models, and decentralized cultural logic can create the civic commons layer that modern AI has lacked. This is a blueprint for reclaiming agency in an era where average is free and acceleration never rests.

Read More
How NACD Leaders Are Validating the AI OSI Stack

The moment a senior advisor from the National Association of Corporate Directors (NACD) reached out, everything shifted. This was not another academic conversation or a passing industry nod. This was someone who has shaped national cybersecurity strategy at the SEC and now influences how corporate boards understand systemic risk. Our worlds rarely meet. Yet here we are, preparing to talk about how a civic protocol I built might become part of the fiduciary vocabulary that guides real institutions.

Read More
The Work That Found Me

In the rapidly evolving field of AI governance, the need for transparent, accountable structures has never been more urgent. Here’s a look at my journey, how it’s shaping the way AI systems are governed, and why it matters so much to me. From the earliest sparks of an idea to a full-fledged framework, my work is about more than just building systems — it’s about ensuring that technology serves humanity, not the other way around. In this post, I reflect on my personal journey, the values that drive me, and where I hope to take this work in the future.

Read More
Update — The AI OSI Stack: A Governance Blueprint for Scalable and Trusted AI

Following my September 9, 2025 post on the AI OSI Stack, this update expands the conversation with the release of the AI OSI Stack’s canonical specification and GitHub repo. It marks a shift from concept to infrastructure: transforming the Stack into a working blueprint for accountable intelligence. Each layer, spanning civic mandate, compute, data stewardship, and reasoning integrity, turns trust into something structural and verifiable.

Read More
Quiet on the Outside, Building on the Inside

In October, I went a little quiet. The lab went quiet. But that quiet was full of motion. What began as loose sketches of AI philosophy solidified into the AI OSI Stack: a structured architecture linking human judgment, governance logic, and technical standards like ISO 42001 and NIST’s AI RMF. It now has formal papers and a GitHub repo. Alongside it, a new agent prototype, GERDY, began reasoning through compliance tasks autonomously, showing that governance can be both automated and transparent.

Read More
Power, Psychology, and the New Governance Frontier

OpenAI’s Sora 2 is a mirror held up to civilization itself. With text-to-video realism approaching cinematic fidelity, Sora 2 forces us to confront a new kind of truth crisis: one where faces, voices, and histories can be reconstructed with perfect accuracy. The question is no longer “Is this fake?” but “Can society survive when everything looks real?” This essay explores how Sora 2 blurs the boundary between content and identity, reshapes the psychology of belief, and challenges governance to evolve faster than innovation.

Read More
My Journey Through the Berghain Challenge

When a mysterious billboard appeared in San Francisco showing only strings of numbers, few realized it hid an invitation to an underground coding arena: the Berghain Challenge. Designed by Listen Labs, the game asked players to become the bouncer at Berlin’s most exclusive club—only this time, the line outside was made of data. What follows is a personal reflection on that experiment, and how stepping up to the algorithmic door became a lesson in creativity, probability, and self-trust.

Read More
The Warning from Deutsche Bank: What Survives After the Hype?

Deutsche Bank has warned that the U.S. economy is being held aloft by AI capital spending. Billions are flowing into data centers, GPUs, and infrastructure, creating a temporary economic lift. Yet these gains are less about AI services and more about the labor of construction and deployment. Markets are already dangerously overexposed, with projections of an $800 billion revenue shortfall by 2030. Baidu’s Robin Li has gone so far as to predict that 99 percent of AI firms will not survive. The question is: what happens when this wave of investment slows, and what remains after the hype fades?

Read More