The coming AI landscape

May 13, 2024

It's been an incredible few years for AI. There's never been a moment like this in the industry, and it's exciting, unnerving, and overwhelming all at the same time. The following are some brief thoughts on where I think we could be headed.

On Product Design

I think AI will begin to unlock fundamentally new paradigms for product design. For example, organising information through traditional UX patterns (folders, lists, etc.) will, I think, fundamentally change. A GPT-5 class system (or whatever that may look like) might be able to generate, on the fly, a very relevant folder structure or information hierarchy that a user can navigate, without the user having to build and maintain that hierarchy themselves (like, say, a Google Drive folder). In a more advanced use case, it may be able to generate an entirely new information navigation system for a user to interact with their information, tuned and personalised for each particular user.

Product design, I think, will definitely be a differentiator. Who does the design, whether it's an AI or a human, will be an interesting revelation in the future.
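To make that first idea concrete, here's a minimal sketch of what on-the-fly hierarchy generation could look like, using OpenAI's chat completions API. The model name, the prompt, and the suggest_hierarchy helper are all illustrative assumptions, not a real product:

```python
# A minimal sketch of on-the-fly hierarchy generation. The prompt and
# model name are illustrative; any capable LLM endpoint would work.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_hierarchy(file_names: list[str]) -> dict:
    """Ask the model to propose a folder structure for a flat list of files."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Group the given files into a sensible folder "
                        "hierarchy. Reply with only JSON, mapping folder "
                        "names to lists of file names."},
            {"role": "user", "content": json.dumps(file_names)},
        ],
    )
    # Assumes the model complies with the JSON-only instruction.
    return json.loads(response.choices[0].message.content)

print(suggest_hierarchy(["q1-report.pdf", "logo-draft.png", "invoice-042.pdf"]))
```

The point isn't the plumbing; it's that the hierarchy is regenerated per user and per moment, rather than built and maintained by hand.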

On Open Source

The gap between open-source models and closed-source models like GPT-4 is narrowing, yet I think GPT-4 remains a standout across a broad range of LLM tasks. Despite the progress within the open-source community, which has seen models like Mistral, Llama, and more recently Gemma make significant strides, GPT-4 seems, at least in my own experience, a clear leader in output accuracy for the time being.

Perhaps with more fine-tuning, or with a finely tuned RAG system alongside, I could achieve better results. Right now, as much as I'd like to move to using an open-source model, GPT-4 out of the box has just been so good.
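For reference, this is roughly what I mean by a RAG system alongside an open-source model: retrieve the most relevant passages, then prepend them to the prompt. A minimal sketch follows; the embedding model, the example documents, and the final generation step are all assumptions, so swap in whichever open-source stack you prefer:

```python
# A minimal RAG sketch: embed documents once, retrieve by cosine
# similarity at query time, and stuff the hits into the prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Mistral 7B is an open-weights model released in 2023.",
    "RAG retrieves relevant passages and adds them to the prompt.",
    "Fine-tuning adapts a base model to a narrower task.",
]
# Normalised embeddings make the dot product equal cosine similarity.
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "How does RAG work?"
context = "\n".join(retrieve(query))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
# The prompt would then go to a local open-source model,
# e.g. served via llama.cpp or vLLM.
```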

On Scale and Compute

Right now I think the big players are getting orders-of-magnitude improvements in capability by throwing more GPUs and more compute at these models. OpenAI is betting big that these scaling laws will hold for a while (Sam Altman's trillion-dollar fundraise is insane), and NVIDIA's market cap is a leading indicator that the industry as a whole is betting big on this too. I do wonder, though, if there is a computational discovery around the corner that would herald a leap in model performance beyond evolutions of the transformer architecture, without the need for scaled compute.
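For a sense of what "betting on scaling laws" means numerically, here's a toy calculation using the Chinchilla loss fit from Hoffmann et al. (2022). The constants are their published fit; the parameter/token pairs are illustrative, not any real model's training run:

```python
# Chinchilla scaling fit: L(N, D) = E + A / N**alpha + B / D**beta,
# where N = parameters and D = training tokens. Constants are the
# published fit from Hoffmann et al. (2022); treat the outputs as
# rough intuition, not predictions.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# Each row scales both parameters and data by 10x.
for n, d in [(7e9, 1.4e11), (70e9, 1.4e12), (700e9, 1.4e13)]:
    print(f"N={n:.0e}, D={d:.0e} -> loss ~ {loss(n, d):.2f}")
```

Each 10x in parameters and data shaves a smaller slice off the loss, which is exactly why staying on this curve demands ever more compute, and why an architectural breakthrough that sidesteps it would be so disruptive.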

The human brain uses far fewer resources and far less energy than our current AI systems do, so it's reasonable to extrapolate that an advancement in model design could trigger even larger leaps than the current transformer architectures are providing. That would change the game in a big way. If this scenario eventuates, will it mean that smaller players can compete with the bigger players on model performance?

GPT-5 class and beyond

One thing I don't think any of us are prepared for is how good these models could get. Right now we are building systems using RAG, fine-tuning, and so on, trying to fit things into a context window limit (even with Gemini 1.5's insanely large context window). All of that might actually get thrown out the window when more powerful AIs come online that don't have any of these limitations.
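To illustrate the sort of plumbing that might disappear, here's a naive token-budgeted chunker of the kind many of us write today. The tiktoken encoding and the 8,000-token budget are illustrative assumptions:

```python
# Naive context-window chunking: split a document into pieces that
# each fit within a token budget before sending them to the model.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

def chunk_text(text: str, max_tokens: int = 8000) -> list[str]:
    """Split text into consecutive pieces under the token budget."""
    tokens = enc.encode(text)
    return [enc.decode(tokens[i:i + max_tokens])
            for i in range(0, len(tokens), max_tokens)]

chunks = chunk_text(open("big_document.txt").read())
print(f"{len(chunks)} chunks, each under the context budget")
```

A model without a meaningful context limit would make this, and a lot of the retrieval machinery above, simply unnecessary.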