The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, where people can easily collaborate and build ML together.

The Hugging Face Hub also offers various endpoints to build ML applications.
This example showcases how to connect to the different endpoint types.
In particular, text generation inference is powered by Text Generation Inference (TGI): a custom-built Rust, Python, and gRPC server for blazing-fast text generation inference.
To use these endpoints, you should have the `huggingface_hub` Python package installed.
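A minimal install sketch: in addition to `huggingface_hub` mentioned above, the examples here assume the `langchain-huggingface` integration package, which provides the `HuggingFaceEndpoint` class.

```shell
# huggingface_hub talks to the Hub; langchain-huggingface provides HuggingFaceEndpoint
pip install huggingface_hub langchain-huggingface
```

You will also need a Hugging Face API token, typically exposed via the `HUGGINGFACEHUB_API_TOKEN` environment variable.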
The `HuggingFaceEndpoint` class provides an integration of the serverless Inference Providers API.
The same `HuggingFaceEndpoint` class can be used with a local Hugging Face TGI instance serving the LLM. Check out the TGI repository for details on support for various hardware (GPU, TPU, Gaudi, etc.).