AI is not the enemy. It’s an invitation.

There is a lot of noise about artificial intelligence at the moment.

Some of it is breathless excitement.
Some of it is apocalyptic fear.

Both miss the point.

AI is not coming for our humanity. It is forcing us to use it properly.

I am firmly in the camp that believes AI is good. Not perfect. Not neutral. Not harmless. But good – if we approach it with agency and responsibility.

In a previous CN&CO blog on breadth versus depth, I argued that while depth matters enormously, there is a growing premium on those who can connect disciplines, translate between silos and see patterns across fields. That argument feels even more relevant now.

AI compresses depth. It accelerates access to specialised knowledge. In doing so, it increases the value of synthesis.

Recently I was told that project management is one of the hottest skills in Silicon Valley. Not pure coding. Not just building models. But managing complexity. Coordinating humans. Translating technical capability into real-world execution.

That should make us pause.

You need technical literacy. You need to understand what AI is doing and where its limits are. But domain knowledge and emotional intelligence are becoming more valuable, not less.

Domain knowledge is deep contextual understanding of a field – law, finance, governance, education, medicine, art. AI can generate an answer. It cannot understand the lived nuance of a regulatory boardroom, the politics of a community project or the ethical trade-offs embedded in public policy.

Judgement still sits with us.

Satya Nadella, CEO of Microsoft, has repeatedly spoken about the future belonging to “learn-it-alls” rather than “know-it-alls”. That is not a tech slogan. It is a humanities mindset. Curiosity. Adaptability. Humility.

Reid Hoffman, co-founder of LinkedIn, argues that in an AI world the competitive advantage shifts to those who can ask better questions and build better networks. Again, that is not about syntax. It is about thinking.

Ethan Mollick at Wharton has written extensively about how AI works best as a co-pilot, not a replacement. His research shows that people who actively collaborate with AI tools outperform those who ignore them. The key difference is not raw intelligence. It is willingness to experiment.

Azeem Azhar, in his work on exponential technologies, makes a similar point: technological acceleration rewards those who adapt culturally, not just technically.

For those of us who studied literature, history, politics or philosophy, this moment is not a threat. It may be our time. (So yes, reading novels printed on paper is still a vital skill!)

For years we were told generalists would struggle. That you had to specialise narrowly to survive. And yes, depth still matters. But in a world where AI can draft contracts, summarise research, write first versions of code and generate financial models, the differentiator becomes judgement.

Judgement is formed through reading widely. Through understanding history. Through exposure to ambiguity and contradiction.

Homer did not write about neural networks. But The Odyssey is about navigating uncertainty, facing unknown forces and adapting strategically. Odysseus survives because he is agile and reflective, not because he is the strongest man in the room.

Across Africa, traditional art forms encode systems of meaning. Beadwork patterns signal identity. Sculptures carry spiritual narratives. Textiles communicate lineage and belonging. These are data systems of culture. AI, at its core, is also a pattern recognition engine. When we understand that link, the future does not feel alien. It feels like another chapter in our long history of building tools that extend human capability.

From oral storytelling to written language.
From manuscript to printing press.
From typewriter to internet.
Now to AI.

Each wave disrupted labour markets. Each wave created new roles.

Yes, jobs will disappear. Some already have. Routine cognitive work is increasingly automated. But new work will emerge: AI integrators, ethicists, trainers, cross-functional strategists, domain translators. All the possibilities are bloody exciting!

The people who will thrive are those who are agile, curious and open to learning at any age.

AI helps us outsource the mundane. It drafts the first version. It cleans up the spreadsheet. It summarises the 200-page document. That is not laziness. It is leverage.

It also helps with creative exploration. When I say AI can make creative work “better”, I mean it enables faster iteration, broader exploration and sharper synthesis. It can generate ten alternative framings in seconds. It can surface references you had forgotten. It can challenge your blind spots.

But it cannot replace taste. It cannot replace moral courage. It cannot replace lived experience.

And that is where literature, art and philosophy matter more, not less.

If anything, the AI era may drive a renewed love of the humanities. As routine tasks become automated, originality of perspective becomes scarce. Cultural literacy becomes strategic. Ethical reasoning becomes commercial advantage.

If you want to explore this intersection practically, start experimenting. Every day I learn by testing AI tools, and I am just getting started. You could, for example:

Use ChatGPT or Claude to analyse a poem and then critique the analysis yourself.
Upload a speech and ask for alternative rhetorical structures.
Use Midjourney or DALL·E to reinterpret an African sculpture style in a modern visual language.
Try Perplexity to compare historical events across centuries.

Do not outsource your thinking. Augment it.

Uncertainty is real. But uncertainty is not the same as danger, and it need not be feared.

We are not passive recipients of this wave. We are participants.

The question is simple: will we cling to what is fading, or will we build with what is emerging?

I am optimistic.

Not because disruption is painless.
But because human creativity, across millennia from Homer to contemporary African artists, has always found a way to adapt.

AI is not the end of human relevance.

It is the invitation to deepen it.

Carel is an investor in people and businesses, believing that 1+1 = (at least) 22. Working with a few basic concepts – best encapsulated in his belief that unless we are dead, anything is possible – Carel aims to build long-term sustainable value with like-minded individuals and companies, while having (a lot of!) fun.