Discussion about this post

Pierre

Your insights are super valuable, Nicolas. Many thanks for sharing. Keep them coming ✊

Irina Malkova

I’ve been thinking about this a lot too.

The core assumption is that application builders don’t have much influence on model performance - so it has to be good enough when the model comes out of the lab.

I think it’s a correct assumption - in-context learning is not really learning at all - but what a weird world this creates!

For example, you rightly point out that models are bad at financial workflows, and hence the whole sector of applications is stalled. But when OpenAI gets around to prioritizing training GPT for financial workflows, all it takes is hiring a hundred investment bankers to label data for fine-tuning (https://fortune.com/2025/10/22/sam-altman-openai-wall-street-junior-bankers-ai-entry-level-jobs/).

Doesn’t that blow your mind? There are 400,000 investment bankers in the US. Why is the whole sector of innovation in financial AI stalled, waiting for OpenAI to hire 100 of them?

Somebody needs to go disrupt this nonsense.
