Hi, I'm Milene

If you've made it this far, I suspect you already know what I'm talking about.

You're not worried AI is going to replace you. You're worried it's already changing how you think and you haven't fully decided to let it.


That's exactly the right thing to be paying attention to. And it's exactly what this work is for.


A bit of background, if you want it:

  • Advisor to senior leaders on judgment, decision-making, and cognitive authorship in AI-driven organizations.

  • Researcher at the intersection of cognitive science, automation bias, and what AI does to professional expertise.

  • MBA in Decision Science (Psychology, Economics & Consumption) · Postgraduate degree in Anthropology · Bachelor of Laws, specialized in Digital Law

  • Machine Learning Specialization (Stanford / DeepLearning.AI) · Innovation Management (Erasmus University Rotterdam)

  • 12 years in law and business strategy · 6 years in digital product leadership

  • Led Condé Nast's first AI innovation program

  • Writing on AI and human cognition since 2019

Now let me explain what the CV doesn't.

I grew up reading the future

Every book that mattered asked me to imagine a future and interrogate it.

Is this a future I want? Who benefits from this?

What kind of person would I be, living inside it?

YES, I'M A SCI-FI READER

Science fiction, at its best, has never really been about spaceships or alien civilisations.

Ursula K. Le Guin described science fiction better than almost anyone, not merely as extrapolation but as something closer to a mental experiment. A metaphor. A way of borrowing the dominant languages of a civilisation — science, technology, history, anthropology — and using them to ask the questions that ordinary conversation tends to avoid.


Isaac Asimov asked a different version of the same question. Not what does it mean to be human but what happens when you build a system designed to serve human values, and the system turns out to be more rigorous than the values it was built on? His robots followed their rules perfectly. That was always the problem.

Together, those two lenses gave me a way of looking at any system — social, technological, institutional — and asking not just how it works, but what it does to the people inside it.

Every major technology in history displaced physical or mechanical effort. This one is different. When AI takes a cognitive task — analysis, synthesis, reasoning, judgment — it doesn't just do the work. Over time, it quietly changes your relationship with the work.


Leaders built authority through a specific loop: think hard → make a call → learn from it → build judgment. That loop is how expertise accumulates and how confidence becomes legitimate.

AI compresses that loop. Pressure to move fast. A result in 30 seconds. You evaluate, approve, move on. It feels like thinking. It is not quite the same thing.


The outputs stay high. The dashboards stay green. But nobody announces that judgment has left the building, and that's exactly what makes this dangerous.


This is what I call a cognitive authorship problem. And it is one of the most significant leadership challenges of this decade — precisely because it doesn't look like a problem until it already is one.

That lens took me through some unusual places.

Law first.

Which, if you think about it, isn’t so far from science fiction. Both are attempts to answer the same question: how do we build systems that protect human dignity and keep behaviour accountable to human values?


One uses institutions. The other uses imagination.

I was drawn to both for the same reason.

Then anthropology

The study of how cultures make meaning, how groups build shared reality, how identity forms and deforms under pressure.

Then cognitive science and decision science

The specific, strange, beautiful mechanics of how individual minds actually work when they have to choose.

Each field was, in its own way, science fiction's question asked in technical language.

What are we, really? What do we do under pressure? What do we become when the conditions change?

AND THEN THE CONDITIONS CHANGED

Generative AI arrived, not as a distant future, but as the present.
And it did exactly what the best science fiction had warned it would.

Not with robots. Not with superintelligence.


It arrived quietly, inside the workflows of people who were good at their jobs, taking, for the first time, not just the effort but the thinking those people used to do themselves.

If this sounds familiar, you're in the right place.

A Curiosity

Let me tell you something about myself. When I was younger, immersed in science fiction, there was one character I wanted to become, not as a career plan, but as a way of being in the world.

Susan Calvin. Asimov's robopsychologist.

The scientist who spent her career studying artificial minds to understand what they revealed about human ones. She never looked at the problem from the outside, never in alarm. She looked at it with precise, patient attention to what was actually happening between the human and the machine. What the machine made possible.

What the machine, quietly and without announcement, took away.

That's the posture I've brought to this work — yes, inspired by my favourite sci-fi character. Not the commentator watching from a distance but the person inside the mechanism, asking what it does to the people who depend on it.

Hi, I'm Milene. I help senior leaders reclaim their judgment in a world where AI is already in the room.

So I went and looked.

I studied, ran research, and interviewed people from different backgrounds. I talked to senior leaders across Europe and Brazil, and I listened for what they couldn't quite say. The hesitation that didn't have a name yet. The quiet discomfort underneath the productivity gains.


What emerged was consistent, serious, and almost entirely absent from the existing conversation.


Leaders weren't losing their jobs. They were losing something harder to name:

The sense that their judgment was still genuinely theirs. That their decisions reflected their thinking, their experience, their accountability, not a polished output they'd approved on the way to the next meeting.

I called it a cognitive authorship problem.

And I built this practice to close it.

This is the conversation most leaders haven't had yet.

A first conversation costs nothing except honesty. We start with what you're already feeling — and see where it leads.
