[#]: subject: "Open-Source Model 'Dolly' Claims to be a Cheaper Alternative to ChatGPT"
[#]: via: "https://news.itsfoss.com/open-source-model-dolly/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Open-Source Model 'Dolly' Claims to be a Cheaper Alternative to ChatGPT
======

An affordable alternative to ChatGPT? And open source? It looks like Databricks is joining the open-source race against ChatGPT.

![open source model dolly][1]

Databricks is a software company that has established itself in a variety of sectors, with data warehousing and AI-based solutions as its primary focus.

In recent times, we have seen the meteoric rise of ChatGPT, resulting in similar efforts from the likes of Meta, Google, and even Mozilla.

And now, Databricks is making its own attempt by open-sourcing its [large language model][3] (LLM), 'Dolly'.

Let's take a look at it.

**What is happening?:** In a recent announcement, Databricks introduced what it describes as **'a cheap-to-build'** LLM, built on an existing open-source 6-billion-parameter [model][4] by [EleutherAI][5].

The model has been slightly tweaked to give Dolly instruction-following capabilities, such as brainstorming and text generation.

Compared to the **175 billion parameters** in GPT-3, Dolly's **6 billion parameters** might seem puny.

But the folks over at Databricks were surprised to see that, even at this much smaller scale, Dolly was **able to exhibit many of the same capabilities as ChatGPT**.

Below is one of the examples they showcased:

![a screenshot of how dolly performs in an open question and answer scenario][6]

The original model used data from [Alpaca][7], the model built by Stanford using the [LLaMA][8] LLM by Meta as a base.

But, as you can see, the original model produced a very haphazard result, whereas Dolly, with its different base model and tweaks, was able to produce a far more usable answer.

> 📝 Fun Fact: The name was taken from the first cloned mammal, Dolly the sheep.

**Why now?:** Databricks thinks that **many companies would prefer to build their own model** rather than send data to a centralized provider that has locked its model behind an API.

Many companies might not be comfortable handing over their most sensitive data to a third party, and then there are the various tradeoffs in terms of model quality, cost, and desired behavior.

**Do you want to check it out?**

Sure, but there's a catch.

You will have to **use their platform to use Dolly**; they have open-sourced a [Databricks notebook][9] that will help you build it on Databricks.

Moreover, if you want access to the trained weights, you will have to contact them. I am uncertain whether they will provide access for free, though.

In a nutshell, this move to open-source their model should help companies safeguard their data, save on operating costs, and more by enabling them to create their own models.

You can check out the [announcement blog][10] to learn more about the technical details and other plans for it.

--------------------------------------------------------------------------------

via: https://news.itsfoss.com/open-source-model-dolly/

Author: [Sourav Rudra][a]
Topic selection: [lkxed][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed/
[1]: https://news.itsfoss.com/content/images/size/w1304/2023/03/opensource-ai-model-dolly.png
[2]: https://news.itsfoss.com/content/images/2023/03/linux-mega-packt.webp
[3]: https://en.wikipedia.org/wiki/Large_language_model?ref=its-foss-news
[4]: https://huggingface.co/EleutherAI/gpt-j-6B?ref=its-foss-news
[5]: https://www.eleuther.ai/?ref=its-foss-news
[6]: https://news.itsfoss.com/content/images/2023/03/Dolly_AI.jpg
[7]: https://crfm.stanford.edu/2023/03/13/alpaca.html?ref=its-foss-news
[8]: https://ai.facebook.com/blog/large-language-model-llama-meta-ai/?ref=its-foss-news
[9]: https://github.com/databrickslabs/dolly?ref=its-foss-news
[10]: https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html?ref=its-foss-news