Hugging Face
Since Camel 4.19
Only producer is supported
The Hugging Face (HF) component provides an opinionated integration with Hugging Face models for various AI tasks, such as text classification, generation, and audio processing. It uses Deep Java Library's Python engine to run Hugging Face's Transformers library in a Python subprocess, giving easy access to HF pipelines. The Python environment needs to be set up before using the component.
This component is designed to let users hit the ground running with HF models. If high throughput/low latency is required, or running a Python subprocess is an issue, users are encouraged to use the camel-djl component for a more Java-native approach.
To use the HF component, Maven users will need to add the following dependency to their pom.xml:
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-huggingface</artifactId>
<version>x.x.x</version>
<!-- use the same version as your Camel core version -->
</dependency>
URI format
huggingface:task?modelId=model
Where model is the model name hosted on Hugging Face and task is one of the supported HF tasks listed below. For example, to use the Qwen2.5-3B-Instruct model with the chat task:
from("direct:start-chat")
.to("huggingface:chat?modelId=Qwen/Qwen2.5-3B-Instruct");
Supported Tasks
The component supports the following tasks:
| Task | Input Type | Output Type | Options | Models |
|---|---|---|---|---|
| text-classification | String (text to classify) | ai.djl.modality.Classifications | revision, device | Classification-tuned models (e.g., distilbert-base-uncased-finetuned-sst-2-english for sentiment, roberta-large for multi-label) |
| text-generation | String (prompt) | String (generated text) | revision, device, maxTokens, temperature | Generative models (e.g., gpt2 for completion, mistralai/Mistral-7B-Instruct-v0.2 for instruct-tuned generation) |
| question-answering | QAInput | String (extracted answer) | revision, device | QA-tuned models (e.g., distilbert-base-cased-distilled-squad for extractive QA, deepset/roberta-base-squad2 for robust Q&A) |
| summarization | String (text to summarize) | String (summary text) | revision, device, minLength, maxTokens, temperature | Summarization-tuned models (e.g., facebook/bart-large-cnn for abstractive summarization, t5-small for translation-like summarization) |
| zero-shot-classification | String[] or List<String> [text, label1, label2, …] | String (best label) | revision, device, multiLabel, autoSelect | NLI-based zero-shot models (e.g., facebook/bart-large-mnli for general zero-shot, joeddav/deberta-v3-large-zeroshot-v1 for multi-label) |
| sentence-embeddings | String[], List<String> or String (sentences to embed) | float[][] (2D embedding tensor: batch × dimension) | revision, device | Embedding-tuned models (e.g., sentence-transformers/all-MiniLM-L6-v2) |
| text-to-image | String (prompt) | byte[] (PNG image bytes) | revision, device | Diffusion-based generation models (e.g., CompVis/stable-diffusion-v1-4 for general images, runwayml/stable-diffusion-v1-5 for improved quality) |
| automatic-speech-recognition | Audio or byte[] (audio bytes) | String (transcribed text) | revision, device | ASR-tuned models (e.g., facebook/wav2vec2-base-960h) |
| text-to-speech | String (text prompt) | ai.djl.modality.audio | revision, device | TTS-tuned models (e.g., facebook/mms-tts-eng for English TTS, microsoft/speecht5_tts for multi-speaker) |
| chat | String (user message) | String (LLM response) | revision, device, maxTokens, temperature, systemPrompt, userRole, memoryIdHeader | Instruct-tuned/chat models (e.g., mistralai/Mistral-7B-Instruct-v0.2 for conversational, microsoft/Phi-3-mini-4k-instruct for efficient chat) |
| custom | TBD | TBD | predictorBean (required), revision, device | Any HF model compatible with the custom predictor bean (e.g., Helsinki-NLP/opus-mt-en-fr for translation, distilgpt2 for custom generation) |
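The zero-shot-classification input convention from the table (text first, then candidate labels in one array) is easy to get wrong, so here is a minimal sketch of building the payload and endpoint URI. The sample text, labels, and model choice are illustrative assumptions:

```java
// Sketch: payload and URI for the zero-shot-classification task.
// Per the table above, the message body is a String[] (or List<String>)
// whose FIRST element is the text and the REMAINING elements are labels.
String text = "Camel makes integration easy";
String[] payload = {text, "technology", "sports", "cooking"};

// Endpoint URI; facebook/bart-large-mnli is one of the example models above.
String uri = "huggingface:zero-shot-classification"
        + "?modelId=facebook/bart-large-mnli"
        + "&multiLabel=false";

// In a route the payload would be sent like:
//   from("direct:classify").to(uri);
//   String bestLabel = template.requestBody("direct:classify", payload, String.class);
```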
Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
Configuring Component Options
At the component level, you set general and shared configurations that are then inherited by the endpoints. It is the highest configuration level.
For example, a component may have security settings, credentials for authentication, URLs for network connection, and so forth.
Some components only have a few options, and others may have many. Because components typically have sensible pre-configured defaults, you often only need to configure a few options on a component, or none at all.
You can configure components using:
- the Component DSL.
- in a configuration file (application.properties, *.yaml files, etc.).
- directly in the Java code.
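As a sketch, component-level options can be set in a configuration file using Camel's `camel.component.<name>.<option>` auto-configuration convention. The keys below are assumptions derived from the option descriptions later in this page; verify the exact names against the component options table:

```properties
# Sketch: component-level defaults via Camel Main / Spring Boot auto-configuration.
# Key names are assumptions following the camel.component.huggingface.<option> pattern.
camel.component.huggingface.device = cpu
camel.component.huggingface.model-loading-timeout = 600
camel.component.huggingface.lazy-start-producer = true
```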
Configuring Endpoint Options
You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java.
A good practice when configuring options is to use Property Placeholders.
Property placeholders provide a few benefits:
- They help prevent using hardcoded URLs, port numbers, sensitive information, and other settings.
- They allow externalizing the configuration from the code.
- They help the code to become more flexible and reusable.
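A minimal sketch of a placeholder-based endpoint URI follows; the `hf.model` and `hf.maxTokens` property keys are assumptions introduced for illustration:

```java
// Sketch: property placeholders in a Hugging Face endpoint URI.
// Assumes application.properties contains, e.g.:
//   hf.model = Qwen/Qwen2.5-3B-Instruct
// {{key}} is resolved by Camel's properties component at startup;
// {{key:default}} supplies a fallback when the key is not set.
String uri = "huggingface:chat?modelId={{hf.model}}&maxTokens={{hf.maxTokens:100}}";
// from("direct:start-chat").to(uri);
```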
The following two sections list all the options, firstly for the component followed by the endpoint.
Component Options
The Hugging Face component supports 21 options, which are listed below.
| Name | Description | Default | Type |
|---|---|---|---|
|  | HF API token for private models. |  | String |
| autoSelect | If true, auto-select the best label (highest score) for zero-shot classification. | true | boolean |
| configuration | The configuration. |  | HuggingFaceConfiguration |
| device | Device for inference (cpu, gpu, auto). |  | String |
| lazyStartProducer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
| maxTokens | Max tokens for generation tasks. |  | int |
| memoryIdHeader | Header name for conversation memory ID (for multi-user chats). | CamelChatMemoryId | String |
| minLength | Min tokens for summarization tasks. |  | int |
| modelId | Required Hugging Face model ID (e.g., distilbert-base-uncased-finetuned-sst-2-english). |  | String |
| modelLoadingTimeout | Model loading timeout in seconds; if negative, the default (240 seconds) is used. |  | int |
| multiLabel | Allow multi-label classifications for zero-shot tasks. | false | boolean |
| predictorBean | Bean name of a custom TaskPredictor implementation (for tasks not covered by built-in predictors). |  | String |
| predictTimeout | Predict timeout in seconds; if negative, the default (120 seconds) is used. |  | int |
| revision | Model revision or branch (default: main). |  | String |
| systemPrompt | Initial system prompt for chat tasks (e.g., 'You are a helpful assistant named Alan.'). |  | String |
| temperature | Temperature for sampling (0.0-1.0). |  | float |
| topK | Top-k parameter for classification tasks. | 5 | int |
| userRole | Role for user messages in chat history (e.g., 'user' or 'human'). | user | String |
| autowiredEnabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
| healthCheckConsumerEnabled | Used for enabling or disabling all consumer based health checks from this component. | true | boolean |
| healthCheckProducerEnabled | Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true. | true | boolean |
Endpoint Options
The Hugging Face endpoint is configured using URI syntax:
huggingface:task
With the following path and query parameters:
Path Parameters (1 parameter)
| Name | Description | Default | Type |
|---|---|---|---|
| task | Required The Hugging Face task to perform (e.g., TEXT_CLASSIFICATION). |  | HuggingFaceTask |
Query Parameters (17 parameters)
| Name | Description | Default | Type |
|---|---|---|---|
|  | HF API token for private models. |  | String |
| autoSelect | If true, auto-select the best label (highest score) for zero-shot classification. | true | boolean |
| device | Device for inference (cpu, gpu, auto). |  | String |
| maxTokens | Max tokens for generation tasks. |  | int |
| memoryIdHeader | Header name for conversation memory ID (for multi-user chats). | CamelChatMemoryId | String |
| minLength | Min tokens for summarization tasks. |  | int |
| modelId | Required Hugging Face model ID (e.g., distilbert-base-uncased-finetuned-sst-2-english). |  | String |
| modelLoadingTimeout | Model loading timeout in seconds; if negative, the default (240 seconds) is used. |  | int |
| multiLabel | Allow multi-label classifications for zero-shot tasks. | false | boolean |
| predictorBean | Bean name of a custom TaskPredictor implementation (for tasks not covered by built-in predictors). |  | String |
| predictTimeout | Predict timeout in seconds; if negative, the default (120 seconds) is used. |  | int |
| revision | Model revision or branch (default: main). |  | String |
| systemPrompt | Initial system prompt for chat tasks (e.g., 'You are a helpful assistant named Alan.'). |  | String |
| temperature | Temperature for sampling (0.0-1.0). |  | float |
| topK | Top-k parameter for classification tasks. | 5 | int |
| userRole | Role for user messages in chat history (e.g., 'user' or 'human'). | user | String |
| lazyStartProducer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
Message Headers
The Hugging Face component supports 1 message header, which is listed below:
| Name | Description | Default | Type |
|---|---|---|---|
| CamelHuggingFaceOutput (producer) | The output from the model. |  | String |
Examples
from("direct:start")
.to("huggingface:text-classification?modelId=cardiffnlp/twitter-roberta-base-sentiment-latest&topK=2")
.to("log:result");
Input: "I love this movie!"
Output: DJL Classifications [{"className": "positive", "probability": 0.9847}, {"className": "neutral", "probability": 0.01182}]
Simple chat route with automatic history:
from("direct:start-chat")
.to("huggingface:chat?modelId=mistralai/Mistral-7B-Instruct-v0.2&systemPrompt=You are a helpful assistant&maxTokens=100&temperature=0.7")
.to("log:response");
Send multiple messages to "direct:start-chat" — history is maintained automatically.
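When several users share one chat endpoint, the conversation memory ID header keeps their histories separate. A minimal sketch, assuming the default header name from the options table:

```java
// Sketch: per-user conversation history via the chat memory ID header.
// The default header name is CamelChatMemoryId (override with memoryIdHeader).
String memoryHeader = "CamelChatMemoryId";

// Each distinct header value gets its own history, so these two users
// hold independent conversations on the same endpoint:
//   template.sendBodyAndHeader("direct:start-chat", "Hi, I'm Alice", memoryHeader, "alice");
//   template.sendBodyAndHeader("direct:start-chat", "Hi, I'm Bob",   memoryHeader, "bob");
```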
For a custom task (e.g., translation): Define a custom predictor bean in your application or test:
public class TranslationPredictor extends AbstractTaskPredictor {
// Implement
}
@Bean
public TranslationPredictor myCustomPredictor() {
return new TranslationPredictor();
}
Route:
from("direct:start-custom")
.to("huggingface:custom?modelId=Helsinki-NLP/opus-mt-en-fr&predictorBean=myCustomPredictor")
.to("log:translated");
This allows extending the component for most HF tasks.
When using a large model for the first time, downloading can take some time so make sure to set the modelLoadingTimeout option (in seconds, default is 240).
When performing a computationally expensive task, make sure to set the predictTimeout option (in seconds, default is 120).
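Both timeouts can be raised directly on the endpoint URI. A sketch follows; the task, model, and values are illustrative assumptions, not recommendations:

```java
// Sketch: raising both timeouts (in seconds) for a heavy task.
String uri = "huggingface:text-to-image"
        + "?modelId=CompVis/stable-diffusion-v1-4"
        + "&modelLoadingTimeout=600" // first run downloads the model weights
        + "&predictTimeout=300";     // diffusion inference is compute-heavy
// from("direct:generate").to(uri);
```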
For more examples, see the tests in the source code. For questions or contributions, check out the Apache Camel community.