Interesting episode on GlobalCapital’s Podcast (link below) that dives deep into the hype and limits of AI in investment banking, and leaves an open question worth thinking about: Can generative AI tools like ChatGPT really make bankers smarter, or just faster?
Banks are exploring how AI can streamline repetitive, data-heavy tasks: building pitch decks, summarizing financials, or even drafting basic client notes. The promise? Freeing up junior bankers from soul-crushing grunt work. But here’s the twist: automation alone doesn’t create intelligence.
The hosts reflect on how investment banking has drifted away from its roots, epitomised by legendary financier Siegmund Warburg, who emphasized deep client relationships, tailored judgment, and original thinking. The hope is that AI, paradoxically, could revive those values by allowing bankers to think more, not less.
A lively example comes up: Section 899, a bizarre tax clause from Trump’s old “One Big Beautiful Bill.” It’s vaguely worded, politically motivated, and nearly impossible to interpret cleanly. Even the best LLMs struggle here. Why? Because AI lacks judgment, intuition, and an understanding of political context.
The discussion also touches on the real-world market backdrop: frenzied SSA bond issuance and yield curve anomalies in FIG credit, where, again, AI can’t yet grasp the “why” behind the moves.
So, can AI revive real intelligence? Only if humans reengage with the work AI can’t do.
I wanted to open the floor with a few questions for everyone in this forum, which should spark an interesting discussion, as some of us are experiencing this day-to-day as juniors in banking, or at least I am.
1. How can banks repurpose junior bankers’ time post-AI automation?
2. Where exactly does AI fall short in advisory/analysis roles?
3. Can AI be trained to flag ambiguity or jurisdictional complexity?
4. How should investment banks manage the transition?
5. Could hybrids—“AI + human”—be the new model?
Link to podcast: https://open.spotify.com/episode/3qarKmh81cTkGtDdjnJ8V7?si=-n8Gwne2Qlqf5hGc7zbFcg
Jose Antonio, thanks for sharing this insightful and timely post. You raise an important point: AI might make bankers faster, but real intelligence still depends on human judgment. I especially liked the reference to Warburg’s legacy — it’s a great reminder that relationships and deep thinking are still at the core of good advisory work. If AI can take care of the repetitive tasks, maybe it can help us return to that more thoughtful model. Looking forward to hearing more perspectives on the questions you raised. Here are my thoughts on each of them:
1. How can banks repurpose junior bankers’ time post-AI automation?
Banks should shift junior roles toward higher-value tasks: learning client strategy, developing market insight, and contributing to tailored solutions. With AI doing the grunt work, juniors could spend more time shadowing senior bankers, engaging in client meetings, and building the commercial judgment that AI can’t replicate.
2. Where exactly does AI fall short in advisory/analysis roles?
AI struggles with ambiguity, political context, and nuanced interpretation — especially in areas like regulation, complex legal structures, or reading between the lines in client conversations. It also lacks the emotional intelligence needed to build trust, understand client intent, or handle sensitive negotiations.
3. Can AI be trained to flag ambiguity or jurisdictional complexity?
To an extent, yes — LLMs can be trained to highlight unusual phrasing, inconsistencies, or legal red flags. But recognizing why something is ambiguous, or what the implications are in a specific jurisdiction, still requires legal and cultural judgment that goes beyond pattern recognition.
4. How should investment banks manage the transition?
By embedding AI as a collaborative tool, not a replacement. That means investing in training, adjusting evaluation metrics (so juniors aren’t rewarded just for long hours), and creating career paths that focus more on analytical and interpersonal skills. Change management will be critical.
5. Could hybrids — “AI + human” — be the new model?
Absolutely — and arguably the most powerful one. AI can handle speed and scale, while humans provide context, creativity, and judgment. The firms that thrive will be those that master this symbiosis — where humans learn to ask better questions, and AI helps them move faster toward the answers.