The Battle Between Proprietary and Open Source AI


Brace yourselves folks, AI is taking over the world!

Well, maybe it is not quite there yet. But still, you can’t deny the level of impact it has had over the past 12 months.

The weird part of it all is that “artificial intelligence” as a concept and as a field of study is not that new, but it “feels” new with all the chatbots and AI-powered tools that have come up this year.

More importantly, by this time last year, the only widely known AI tools were GPT-3 and then ChatGPT. Now, you can have a hard time keeping track of all the different AI tools, chatbots, and LLMs available.

All of which fall into two categories:

Closed source (like ChatGPT and Claude)

Open source (like Falcon or Mistral)

And here is where it gets interesting.

As popular and highly polished as proprietary AI models can be, there are open source models making waves in the AI space and punching above their weight class.

That’s what we’ll be looking at in this article. With technology as revolutionary as AI, is proprietary, black-box software the way forward, or is open source a better option?

This and other questions will be answered in today’s episode.


Open or Closed? That Is the Question

To start off, the whole scientific process has been built on principles of honesty, integrity, and transparency. It involves openness, collaboration, and peer reviews to validate findings.

Many of the world’s biggest scientific advancements, like pasteurization, penicillin, and fertilizers, were possible thanks to the collaborative work of many scientists over the years.

Often, they tackled a big problem without having the resources to solve it at the time. They published their findings, and scientists many years later used that work as the foundation for a solution to the original problem, for the benefit of mankind.

And this also applies to open-source technology. The world changed when computers went from huge machines that occupied entire rooms to devices that every home could have.

The arrival of the internet was another step forward, giving many people access to technology instead of just a privileged few.

Tim Berners-Lee invented the World Wide Web in 1989 and made it freely available to everyone without any patent or royalties. This fueled the rapid growth of the internet and the many innovations that came in the next decade.

A similar story played out with operating systems (think Windows vs. Linux), and the same has happened with web technologies.

With all these previous examples, it stands to reason that a technology as transformative as AI can (and should) follow a similar path.

So, let’s look at how both sides (closed & open-source AI) have progressed this year.

The State of Proprietary AI

By now, the impact ChatGPT had when it was released last November is news to no one. And for the rest of the year, proprietary AI became the talk of the town.

In March 2023, GPT-4, the successor to GPT-3, was released. That event sparked the AI race.

Soon enough, Google joined the fray with Bard. Then came Anthropic, founded by former OpenAI researchers, releasing Claude, a contender to the popular ChatGPT.

OpenAI is, as of now, the company with the most “hits” in the market: the GPT models, the different DALL·E versions, and Whisper. Microsoft is also up there with its new and improved Bing Chat (which is based on OpenAI’s technology) and the soon-to-be-included-everywhere Copilot.

Google joined the race with Bard, an early research project that underwhelmed everyone at first and made us pay more attention to Microsoft and its initiatives.

But after that “science fair” project, Google stepped up its game and released offerings like Vertex AI, PaLM (and PaLM2), Imagen, and Codey.

And then there’s Anthropic with different versions of its powerful Claude (Claude Instant, Claude 2). The interesting part is the approach they've used to train Claude, which they call "Constitutional AI". This approach puts safety at the forefront and helps create AI that is aligned with human interests and values.

Those are great advancements in the AI field, and they are widely known thanks to the fact that they come from companies with large teams, extensive resources, and great marketing departments.

Now, let’s look at the other side of the coin.

The State of Open Source AI

Since the release of GPT-4, it's not only tech giants that have jumped into the AI race; independent projects have also emerged, made possible by open-source ML frameworks like TensorFlow and PyTorch.

Stability AI released Stable Diffusion, an alternative to DALL·E, and many tech enthusiasts have experimented so extensively with its capabilities that it raised ethical concerns about the nature of art and creativity.

Meta announced the release of a quasi-open large language model called LLaMA (with several model sizes and then a second version).

That model, along with Hugging Face services (like Gradio, Spaces, and Transformers), sparked a revolution: for the first time, people all around the world had access to open-source technology that rivaled the likes of ChatGPT or PaLM.

And do you know what happens when a group of techies, hackers, and tech enthusiasts have enough time and resources? Yeah, they can go crazy building stuff.

The niche internet forums and IRC channels of the ’90s were replaced by Hugging Face discussions, GitHub issues, and Discord servers.

Something else that contributed to open-source growth was the Pile dataset from EleutherAI. This initiative helped progress unsupervised and self-supervised learning, reducing the need for large labeled datasets.

With the large language models, the datasets to train/finetune them, and the reduced requirements for computing, a whole ecosystem of products and services soon emerged.

(When I say reduced requirements for computing, I mean that LLMs don’t need a ton of parameters to produce results of the quality generated by proprietary models; this is shown by models like LLaMA 13B and Mistral 7B.)
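To put that reduced-compute point in rough numbers, here's a back-of-envelope sketch of the memory needed just to hold a model's weights. The bytes-per-parameter figures are the standard ones for fp16 and 4-bit quantization; real footprints vary with activations, context length, and framework overhead, so treat these as ballpark estimates, not exact requirements.

```python
# Back-of-envelope estimate of memory needed for LLM weights alone.
# fp16 stores each parameter in 2 bytes; 4-bit quantization in ~0.5 bytes.

def weight_memory_gb(n_params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory (in GB) for the model weights only."""
    return n_params_billions * 1e9 * bytes_per_param / 1e9

for name, params in [("Mistral 7B", 7), ("LLaMA 13B", 13)]:
    fp16 = weight_memory_gb(params, 2.0)
    q4 = weight_memory_gb(params, 0.5)
    print(f"{name}: ~{fp16:.0f} GB at fp16, ~{q4:.1f} GB at 4-bit")
```

This is why a quantized 7B model fits on a single consumer GPU (or even a laptop), while the hundred-billion-parameter proprietary models need data-center hardware.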

There are a ton of projects, pre-trained and fine-tuned models alike, datasets, and tools in this space available for everyone who wants to get in and collaborate with others.

We now have different types of chatbots that don’t rely on GPT-3/GPT-4 to work, like Zephyr-chat, LLaMA2-chat, Mistral-instruct, and Falcon-chat.

LLMs fine-tuned for code generation and assistance, like Code-LLaMA, CodeGen, and StarCoder.

An open-access multilingual language model called Bloom.

Multimodal LLMs (that are not just text) like LLaVA and Fuyu.

A Hugging Face leaderboard that evaluates and ranks all the existing open-source models.

Several datasets for pre-training and fine-tuning LLMs like RedPajama or OpenOrca.

And most recently we have more autonomous models called “AI agents”.

The most popular ones are powered by GPT-3.5 but there are others based on LLaMA.

And it seems we’re racing to build agents that don’t get stuck in loops and can finish tasks independently, without spewing a bunch of text that looks convincing but is either inaccurate or plain wrong.
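One simple mitigation for the stuck-in-loops problem is to cap the number of steps and halt when the agent starts repeating itself. Here's a minimal, framework-agnostic sketch of that idea; the `next_action` callable is a hypothetical stand-in for whatever LLM-backed policy an agent uses, not any specific library's API.

```python
# Minimal loop guard for an agent: stop on a step budget or when the
# same action keeps repeating, instead of running forever.

def run_agent(next_action, max_steps=10, repeat_window=3):
    """Run an agent until it emits 'done', hits max_steps, or loops.

    next_action: callable taking the history (list of action strings)
    and returning the next action as a string (hypothetical interface).
    Returns (history, status).
    """
    history = []
    for _ in range(max_steps):
        action = next_action(history)
        if action == "done":
            return history, "finished"
        # Loop detection: same action repeated over the last few steps.
        if len(history) >= repeat_window - 1 and all(
            a == action for a in history[-(repeat_window - 1):]
        ):
            return history + [action], "loop detected"
        history.append(action)
    return history, "step budget exhausted"
```

For example, a policy that keeps returning `"search"` gets cut off after three identical actions instead of burning tokens indefinitely. Real agent frameworks use richer heuristics (semantic similarity of steps, reflection prompts), but the core idea is the same: bound the loop and detect repetition.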

There’s been a ton of progress in the past six months alone, and you can be sure that neither front is showing signs of slowing down.

Going Forward

Even with all the head-spinning, fast-paced progress we’ve seen in the past year, we’re still early in AI’s development. There are several things we still need to figure out and different aspects to consider, like AI privacy, ethics, built-in biases, and so on.

As with everything in life, neither side is completely right while the other is wrong. Both proprietary and open-source AI have their pros and cons.

Proprietary AI can leverage greater resources to train new and more powerful models while providing access to people at a wider scale. But these models operate like a black box, lack observability, and their makers' interests might be more aligned with big players with money than with the regular consumer.

Open-source AI, on the other hand, benefits from worldwide collaboration, transparency, and open innovation. But it lacks organization, resources for more ambitious initiatives, and is at risk if stricter regulations are established.

The question now is how we can keep the progress in AI going in a hybrid manner.

A way in which some of the brightest minds in the space can collaborate, with the necessary resources, to drive this innovation forward responsibly, putting safety and privacy at the forefront.

A way in which the interests and benefits of a few don’t trump those of the rest of us. A way in which a technology as revolutionary as AI doesn’t get privatized, restricted, or weaponized against groups of people deemed “enemies” by the greater powers.

We are in a unique moment in history where the decisions that we make and the way we handle technology will determine how the future will take shape, for the better or for worse.

Thanks for reading.

Don’t forget to subscribe on Hackernoon and don’t miss the coming articles.
