How to create your own personalized AI assistant for free with HuggingChat

Hugging Face has unveiled its open-source and free alternative to OpenAI’s GPTs. A store lists all the assistants created by users.

The open-source community continues to chase the AI giants. After making the best open-source models from its platform available on HuggingChat, Hugging Face has announced the arrival of a free and open alternative to OpenAI’s GPTs. Announced on X on Friday, February 2 by Philipp Schmid, technical lead & LLMs director at Hugging Face, Assistants are already available in HuggingChat. The process is very simple and lets you quickly configure an assistant by customizing its prompt, all for free. Note that you need a (free) Hugging Face account and must be signed in to create your assistant.

How to create your assistant on HuggingChat?

Creating your assistant with HuggingChat is actually simpler than in the OpenAI interface. The assistant configurator is built directly into HuggingChat, Hugging Face’s open-source equivalent of the ChatGPT interface. To access it, go to huggingface.co/chat/, click on “Assistants”, then on “Create New assistant”.

The different steps to access the assistant creator. © Screenshot

The interface asks you to fill in several fields (a sample configuration is sketched just after the list):

  • Avatar: the profile image displayed for your assistant
  • Name: your assistant’s name
  • Description: a short description of the task(s) your bot performs
  • User start messages: example messages users can send to start a conversation with your assistant
  • Model: the open-source model your assistant will use
  • Instructions: the heart of the matter, where you enter your prompt
The Hugging Face assistant creator interface. © Screenshot
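To make these fields concrete, here is a hypothetical configuration for a small Excel-macro helper, written as a Python dict purely for illustration; in practice you type these values directly into the web form. The name, description and messages are invented examples, not values from Hugging Face.

```python
# Illustrative only: the values you would type into the HuggingChat assistant
# form, written as a Python dict for clarity. All names and texts are examples.
example_assistant = {
    "avatar": "excel-helper.png",                     # presentation image
    "name": "Excel Macro Helper",
    "description": "Helps write and debug VBA macros for Excel.",
    "user_start_messages": [
        "Write a macro that removes duplicate rows from the active sheet.",
    ],
    "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",  # one of the available models
    "instructions": (
        "You are a spreadsheet specialist with several years of experience. "
        "Your role is to help me write macros for Excel. Carefully analyze "
        "my requests, generate the macro code, and ask clarifying questions "
        "when needed."
    ),
}
```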

Which model to use?

The two most important steps are choosing the model and writing the instructions. For now, in February 2024, six LLMs are available:

  • Mixtral-8x7B-Instruct-v0.1: Mistral AI’s flagship model
  • Llama-2-70b-chat-hf: Meta’s open-source chat model
  • Nous-Hermes-2-Mixtral-8x7B-DPO: a model from Nous Research, fine-tuned from Mixtral-8x7B
  • CodeLlama-70b-Instruct-hf: Meta’s code model
  • Mistral-7B-Instruct-v0.2: Mistral AI’s 7-billion-parameter model
  • openchat-3.5-0106: the latest version of the open-source, chat-optimized OpenChat model, developed by a team of researchers affiliated with Tsinghua University

Although the choice may seem complex at first, picking a model is actually quite simple.

For the majority of use cases, and for use in French, we recommend Mixtral-8x7B-Instruct-v0.1. Based on a sparse mixture-of-experts (SMoE) architecture, Mixtral-8x7B is arguably the most capable and versatile general-purpose LLM in the open-source ecosystem in early 2024. Llama 2 can also be a good fit and deliver very good results, especially in English. Finally, for code-related uses, we recommend CodeLlama-70b-Instruct-hf: Meta’s latest code model handles complex tasks across a wide variety of languages (C++, PHP, JavaScript, C#…).
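Before settling on a model in the form, you can get a feel for how the candidates behave by calling them through the Hugging Face Inference API with the huggingface_hub library, which serves the same open models (this is separate from the assistant feature itself). A minimal sketch, assuming you have a personal access token and, for Llama 2, that you have accepted Meta’s license on the model page; the token and prompt are placeholders:

```python
# pip install huggingface_hub
from huggingface_hub import InferenceClient

HF_TOKEN = "hf_..."  # your personal Hugging Face access token (placeholder)

# Two of the models offered in the HuggingChat assistant creator.
candidates = [
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    "meta-llama/Llama-2-70b-chat-hf",
]

# These instruct models expect an [INST] ... [/INST] style prompt when called
# through raw text generation; this simple wrapper is good enough for a quick
# side-by-side comparison.
prompt = "[INST] Explain in two sentences what a VBA macro is. [/INST]"

for model_id in candidates:
    client = InferenceClient(model=model_id, token=HF_TOKEN)
    answer = client.text_generation(prompt, max_new_tokens=200)
    print(f"--- {model_id} ---\n{answer}\n")
```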

Instructions: how to prompt your assistant correctly?

The instructions are the heart of the assistant: they guide the model on the tasks it must perform. They should be written in clear and simple language, preferably in English. To maximize your assistant’s effectiveness and relevance, we advise giving it a specific role. For example: “You are a spreadsheet specialist with several years of experience.” Second tip: briefly describe the expected task (“Your role will be to help me write macros for Excel”). Then specify the different tasks to be performed and how to carry them out (“Carefully analyze the instructions I give you and generate the macro code. Ask me clarifying questions if necessary.”).

One additional tip: include examples of the expected output. LLMs perform better when the expected result is illustrated with an example (e.g., “Here is an example of the expected result: [INSERT RESULT]”).
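Putting these tips together (role, task, method, and an example of the expected result), here is a sketch of what a complete instruction block could look like. The scenario, wording and sample macro are hypothetical; only the text of the string would be pasted into the “Instructions” field:

```python
# A hypothetical instruction block following the role / task / method / example
# structure described above. Paste the text itself into the "Instructions" field.
INSTRUCTIONS = """\
Role: You are a spreadsheet specialist with several years of experience.

Task: Your role is to help me write VBA macros for Excel.

Method: Carefully analyze the instructions I give you, then generate the macro
code. Ask me clarifying questions if anything is ambiguous. Return the code in
a single block, followed by a short explanation.

Here is an example of the expected result:
Sub RemoveDuplicates()
    ActiveSheet.UsedRange.RemoveDuplicates Columns:=1, Header:=xlYes
End Sub
"""

print(INSTRUCTIONS)
```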

Finally, to help the AI better understand your prompt, ask the model you plan to use to rephrase the prompt in its own words (“Analyze and lengthen or rephrase this prompt to maximize its efficiency: [PROMPT]”), as sketched below.
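If you want to automate this rephrasing step, the Inference API pattern from the earlier sketch can be reused; the token and draft prompt below are placeholders:

```python
# Reuses the InferenceClient pattern from the earlier sketch to ask the model
# to rework a draft prompt before it goes into the "Instructions" field.
from huggingface_hub import InferenceClient

client = InferenceClient(model="mistralai/Mixtral-8x7B-Instruct-v0.1", token="hf_...")

draft_prompt = "You are a spreadsheet specialist. Help me write Excel macros."
request = (
    "[INST] Analyze and lengthen or rephrase this prompt to maximize its "
    f"efficiency: {draft_prompt} [/INST]"
)
print(client.text_generation(request, max_new_tokens=300))
```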

Once the configuration is complete, simply click on “Create” to publish your assistant. Note that, for now, assistants created with Hugging Face’s tool are public by default, and this setting cannot be changed. Therefore, do not include any personal or confidential information in the instructions.

An assistant store is also available

Mirroring OpenAI’s GPT Store, HuggingChat also offers a store listing the many assistants already created by users. The bots are grouped by the model they use, and there is no search feature. A few dozen assistants are already available, covering a wide variety of use cases.

For now, Hugging Face’s assistants are limited to text processing, with no integration with third-party APIs yet. However, the teams at the New York-based startup are working on a host of new features: temperature and repetition-penalty settings, an automated chat-based configurator (like OpenAI’s), external API integration, RAG support… The competition between open source and the commercial AI giants seems set to intensify further this year.
