What We Learnt Building the Largest GPT-Telegram Bot
From building search engines to chatbots: navigating the Internet's evolution through Lou, the most popular GPT-4 Telegram bot.
I co-founded Doctrine, one of the largest legal search engines, and despite working on a search product for years, ChatGPT blew my mind. The underlying technology, commonly referred to as a large language model (LLM), is as revolutionary as the printing press or the internet.
Thanks for reading Nicolas Bustamante! Subscribe for free to receive new posts and support my work.
I was initially skeptical about yet another wave of AI hype, but the fusion of chat interfaces with LLMs got me excited.
To understand the technology, my Blocktool co-founder Edouard and I built Lou, the most popular GPT-4 powered chatbot on Telegram Messenger. With thousands of active users posing tens of thousands of questions daily, it became the ideal platform to understand the current state of the technology and explore potential use cases.
Let me tell you what I have learned.
Chat-based interfaces are the future of the web. In most cases, it's easier to ask a chatbot a question and get an answer than to browse the web and read websites. It's a paradigm shift.
Old paradigm: keyword-based search -> click on links -> read webpages
New paradigm: question -> answer
This means most users no longer need to go to Google or visit a website. Google! Websites! It's the end of the internet as we know it. There are days when I don't search at all; I chat. I ask Lou all my questions, such as:
Show me the most popular endpoints of the Telegram Bot API.
Write a short text message to my landlord to give him my notice.
Recommend me a good book about Charlie Munger.
Furthermore, Lou offers a more intimate experience compared to Google. We discovered that some users even refer to Lou as their "best friend." Essentially, it's like having a brilliant friend available to help you around the clock. As a result, information retrieval has become a deeply personal experience.
It wouldn't be surprising if, in the near future, people forge strong friendships or even romantic connections with their AI companions. As voice and image generation technologies continue to advance, the possibilities are virtually limitless.
Operating an LLM-powered chatbot has led me to believe that people will increasingly rely on chat interfaces rather than traditional Google searches. Chatting effectively consolidates keyword searching, link clicking, and website browsing into a single process. This approach is not only faster and more personalized but also delivers higher-quality results.
Naturally, chat models have some limitations at present. They lack access to live data, possess no memory, exhibit poor formatting, may generate irrelevant information, and do not suggest follow-up questions. However, these issues are solvable. We plan to release an updated version of Lou that enables users to access news, make purchases, check stock prices, and explore a host of other capabilities.
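A minimal sketch of how such capabilities can be wired in: classify the user's intent, then dispatch to an external tool instead of (or before) the model. This is an illustrative assumption, not Lou's actual implementation; the keyword router is deliberately crude (a production bot would more likely ask the LLM itself to classify intent), and the tool functions are stubs standing in for real news or stock APIs.

```python
# Hypothetical intent-routing layer that gives a chatbot "live" capabilities.
# The tools dict maps intents to handler functions; anything unmatched falls
# back to the plain LLM.

def detect_intent(text):
    """Crude keyword routing; a real bot would ask the LLM to classify."""
    lowered = text.lower()
    if "stock" in lowered or "price" in lowered:
        return "stocks"
    if "news" in lowered:
        return "news"
    return "chat"

def answer(text, tools, fallback):
    """Dispatch to a matching tool, else fall back to the plain LLM call."""
    intent = detect_intent(text)
    handler = tools.get(intent, fallback)
    return handler(text)
```

The same dispatch shape extends to purchases, memory lookups, or any other capability: each one is just another entry in the tools dict.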
As a result, I foresee chat-based interfaces capturing a substantial portion of the market share from Google. This shift is already evident, as ChatGPT reached 100 million active users within just a few weeks. For context, Bing, which launched in 2009, only reached 100 million daily active users last month.
Who will become the next Google?
On one side, OpenAI holds all the cards. However, they may choose to concentrate on building an infrastructure company that enables artificial general intelligence (AGI) rather than pursuing a B2C startup. On the other side, the MAMAA tech giants face a daunting innovator's dilemma due to their bureaucratic nature. Embracing the chat interface could significantly reduce their search ad revenue. Nevertheless, they possess a captive user base, control distribution channels and operating systems, and even produce hardware!
It's hard to tell who will do it, but it will transform the web.
The global, horizontal chat interface is poised to dominate the internet in ways Google could never have imagined. This chat will serve as a super aggregator, maintaining direct relationships with users and enjoying near-zero marginal costs for onboarding new users while commoditizing suppliers. User interactions with the internet will increasingly occur via chat, compelling suppliers (all websites) to adapt their architecture to align with chat APIs.
Why would anyone visit Zillow to find an apartment, Booking to reserve a hotel, or NerdWallet to compare insurance when the super-chat can provide answers and facilitate direct purchases? Just as these services previously optimized their products to fit Google's algorithms, they will now tailor their offerings to suit the chat interface.
Commoditization will reach unprecedented levels, as, in many cases, websites will no longer differentiate value propositions. The super-chat will prioritize the fastest, most affordable, and highly-rated options, driving commoditization and reducing profit margins to benefit consumers. Only the best will withstand this shift.
I also anticipate a gradual transition from text-based to voice-based interfaces. Why type when you can converse with your AI assistant? In the long run, we may not even need phones, as earbuds and smart glasses could suffice.
All right! Moving away from speculative ideas, let me share our insights from a technical perspective.
The most remarkable experience is that GPT generates a significant amount of code, shortening our product development cycle. You can literally ask it to describe the Telegram API and then write Python code to create a bot. How wild is that? We currently, dramatically, underestimate the productivity boost this technology will bring to humanity.
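To give a flavor of what GPT produces when asked, here is a minimal sketch of the kind of Telegram bot loop it can draft. This is an assumption-laden illustration, not Lou's codebase: `BOT_TOKEN` is a placeholder, and the update-parsing helper is kept pure so the message-handling logic works without touching the network.

```python
# Minimal sketch of a Telegram bot built on the raw Bot API, the kind of
# code GPT can draft on request. Requires a real bot token to actually run
# the polling loop; the parsing helper below is pure.
import json
import urllib.request

API = "https://api.telegram.org/bot{token}/{method}"

def extract_message(update):
    """Pull (chat_id, text) out of a raw getUpdates item, or None."""
    msg = update.get("message") or {}
    chat = msg.get("chat") or {}
    if "text" in msg and "id" in chat:
        return chat["id"], msg["text"]
    return None

def call(token, method, **params):
    """POST a JSON payload to the Bot API (needs a real token)."""
    req = urllib.request.Request(
        API.format(token=token, method=method),
        data=json.dumps(params).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def run(token, reply_fn, offset=0):
    """Long-poll for updates and answer each text message via reply_fn."""
    while True:
        updates = call(token, "getUpdates", offset=offset, timeout=30)
        for update in updates["result"]:
            offset = update["update_id"] + 1
            parsed = extract_message(update)
            if parsed:
                chat_id, text = parsed
                call(token, "sendMessage", chat_id=chat_id, text=reply_fn(text))
```

Plugging an LLM in is then just a matter of passing a `reply_fn` that forwards the text to the model.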
Another great thing is that GPT models are excellent at various NLP tasks, from coding to translating to building a recommendation system. Instead of using several machine learning models, we can use one API for almost everything. GPT outperforms most of the models out there, regardless of their specialization. For instance, GPT-4 outperforms Codex, an OpenAI model fine-tuned to write code.
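In practice, "one API for everything" means the request shape never changes; only the instructions do. A sketch under that assumption, where `complete` stands in for the real chat-completion call (the task prompts here are illustrative, not Lou's):

```python
# One chat-completion endpoint covering several NLP tasks: only the system
# prompt varies per task. `complete` is a stand-in for the real API call.
TASK_PROMPTS = {
    "translate_fr": "Translate the user's message into French.",
    "classify": "Label the user's message as 'question' or 'statement'. "
                "Reply with the label only.",
    "code": "Write Python code that does what the user asks. "
            "Reply with code only.",
}

def build_messages(task, user_text):
    """Same request shape for every task; only the system prompt changes."""
    return [
        {"role": "system", "content": TASK_PROMPTS[task]},
        {"role": "user", "content": user_text},
    ]

def run_task(task, user_text, complete):
    """`complete` maps a message list to the model's reply string."""
    return complete(build_messages(task, user_text))
```

Replacing a fleet of specialized models with a dict of prompts is exactly the consolidation described above.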
You might think it's expensive to run all your backend tasks on GPT, and you're partially correct. Yes, it's expensive, but not for long. It's a contrarian take, but I think that LLMs will quickly be commoditized.
The model's performance tends to plateau at a certain point. For tasks like finding an entity in a document or classifying questions, GPT excels, but so do numerous open-source models. As time goes on, the quality and performance of these freely available open-source models keep improving, steadily narrowing the gap between them and their GPT counterparts. This progress promotes a competitive environment where cutting-edge technology becomes increasingly accessible to a wider audience.
Consequently, the cost of using such models is expected to decline over time. OpenAI's recent substantial price reduction for its GPT-3.5 API serves as an example of this trend. Moreover, each day sees the rise of open-source models achieving GPT-like performance in specialized areas. It's likely that, in the near future, most chat interfaces will employ multiple models concurrently, directing queries to those that provide the most accurate responses at the most competitive rates.
I foresee that most tasks performed by large language models (LLMs) will be available at no cost except for highly complex tasks. The crucial factor will be maintaining a direct relationship with users and having access to a comprehensive, private dataset.
Ok, now, something weird!
My most peculiar experience involved prompt engineering. Giving the model guidelines, such as specifying a particular formatting type, is done not through code but with plain English instructions. You communicate with the model in the same manner you would with a human, not a machine! For example, our prompt related to our "code assistant" might be something like:
"As an advanced chatbot Code Assistant, your primary goal is to assist users to write code. This may involve designing/writing/editing/describing code or providing helpful information. Where possible you should provide code examples to support your points and justify your recommendations or solutions. Make sure the code you provide is correct and can be run without errors. Be detailed and thorough in your responses. Your ultimate goal is to provide a helpful and enjoyable experience for the user. The Format output in Markdown."
However, I should note that I'm not entirely convinced about the long-term potential of prompt engineering in its current form. We extensively used prompt engineering with GPT-3.5 but later discovered that GPT-4 was so proficient that much of the prompt engineering proved unnecessary. In essence, the better the model, the less you need prompt engineering or even fine-tuning on specific data.
What I find even more intriguing is the idea that the model could auto-correct and improve itself, much like a living organism.
As LLMs evolve, they have the potential to become increasingly autonomous, enabling them to auto-correct and improve themselves over time. One way this could be achieved is through continuous learning and adaptation, where LLMs refine their responses based on user feedback and real-time data. By giving them access to APIs, they will interact with various information sources to expand their knowledge base and maintain up-to-date information.
Over time, these advancements could result in self-sufficient AI agents capable of proactively learning from their environment and autonomously enhancing their performance, thereby transforming how we interact with technology and the digital world. Please note that this is not merely science fiction but rather an engineering challenge poised to be solved in the coming months.
We live in such an exciting time!
In conclusion, building Lou, the largest GPT-4 powered chatbot on Telegram, has provided invaluable insights into the potential of large language models and chat-based interfaces. The paradigm shift from keyword-based search to chat-based interactions is imminent, and it will redefine the way users engage with the internet.
Thanks for reading Nicolas Bustamante! Subscribe for free to receive new posts and support my work.
Once again very insightful, thanks Nicolas
Would enjoy a quick catch up on your thoughts re-the article below and a couple of more mundane items... Cheers
Regarding the ad search revenue, don't you think it could switch to a ad chat revenue?
I totally agree with the voice-based interfaces. That's actually how I used Lou : speaking to it and reading the answers (that are still to slow to arrive).
An other tech improvement I would like to see is to have a LLM plugged with other intelligent tools (such as a calculator for instance, as GPT4 is still not able to do a correct multiplication between two big numbers...)