Telegram AI Bot
Connect a Telegram bot to an AI-powered documentation assistant using Ubex workflows.
Overview
This workflow receives messages from a Telegram bot via webhook, searches a knowledge base using semantic search, generates an AI response with an LLM, converts the formatting for Telegram, and sends the reply back. It's a complete AI chatbot running entirely inside Ubex — no external bot framework needed.
This is a great starting point for building AI-powered Telegram bots for customer support, documentation assistants, or any conversational use case.
Prerequisites
- A Telegram account
- A Telegram bot created via @BotFather
- Your bot token (from BotFather)
- A datasource with documents or knowledge base content for RAG
What You'll Build
A Telegram bot that:
- Receives user messages via webhook
- Parses the Telegram update payload
- Searches your knowledge base using semantic search (RAG)
- Generates a contextual AI response using Claude
- Converts markdown to Telegram-compatible HTML
- Sends the formatted reply back to the user
Endpoint: `POST /api/v1/YOUR_ID/telegram`
Flow:
Telegram Message → API Trigger → Parse Message → Query Knowledge Base → LLM → Format for Telegram → Send Reply
Setting Up the Telegram Bot
1. Create the bot
- Open Telegram and search for @BotFather
- Send `/newbot`
- Enter a display name (e.g. "Ubex AI Assistant")
- Enter a username ending in `bot` (e.g. `ubex_ai_bot`)
- Copy the bot token — looks like `7123456789:AAH...`
2. Customize the bot (optional)
In BotFather, you can also set:
- `/setuserpic` — upload a profile picture
- `/setdescription` — text users see before starting the chat
- `/setabouttext` — short bio in the bot's profile
- `/setcommands` — add a command menu (e.g. `/help`)
3. Store the token as a secret
In your Ubex workspace, go to Settings → Vault and add:
| Key | Value |
|---|---|
| telegram_secret | Your bot token |

Never hardcode the bot token in the workflow. Use `{{secrets.telegram_secret}}` to reference it securely.
Workflow Nodes
1. Flow Start - API Endpoint
| Setting | Value |
|---|---|
| Trigger Type | API |
| Method | POST |
| Custom Path | telegram |
| Rate Limit | 5/min |
| Timeout | 60s |
| Auth | None (Telegram sends unsigned webhooks) |
Set the timeout to 60 seconds. LLM responses can take a few seconds, and Telegram retries the webhook if it doesn't get a response within ~30 seconds.
2. Code - Parse Telegram Message
Extracts the chat ID, user info, and message text from the Telegram update payload.
Output variable: codeJs
```javascript
var message = variables.message;
var result = { skip: true };
if (message && message.text) {
  result = {
    chatId: message.chat.id,
    userId: message.from.id,
    username: message.from.username || message.from.first_name,
    text: message.text,
    messageId: message.message_id
  };
}
result;
```
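Outside Ubex, the same parsing logic can be sanity-checked in plain Node.js. A minimal sketch, with a made-up sample update and the `variables` object replaced by a function parameter:

```javascript
// Hypothetical sample update, mirroring the shape Telegram sends to the webhook.
var update = {
  update_id: 123456789,
  message: {
    message_id: 1,
    from: { id: 12345, is_bot: false, first_name: "Test" },
    chat: { id: 12345, type: "private" },
    text: "What is Ubex?"
  }
};

// Same logic as the Code node, wrapped in a function for testing.
function parseTelegramMessage(message) {
  var result = { skip: true };
  if (message && message.text) {
    result = {
      chatId: message.chat.id,
      userId: message.from.id,
      username: message.from.username || message.from.first_name,
      text: message.text,
      messageId: message.message_id
    };
  }
  return result;
}

var parsed = parseTelegramMessage(update.message);
// parsed.chatId === 12345, parsed.username === "Test" (falls back to first_name)
```

Note the fallback chain: updates without a `text` field (stickers, photos, joins) produce `{ skip: true }`, which downstream nodes can use to short-circuit.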
Understanding the Telegram update

| Field | Description |
|---|---|
| message.chat.id | Unique ID for the chat — used to send replies |
| message.from.id | The user's Telegram ID |
| message.from.username | The user's @username (may be absent) |
| message.from.first_name | The user's display name |
| message.text | The actual message text |
| message.message_id | Unique ID for this specific message |

The API trigger automatically parses the JSON body, so `variables.message` gives you the Telegram message object directly.
3. Query Data - Search Knowledge Base
Searches your datasource using semantic similarity to find relevant documentation for the user's question.
| Setting | Value |
|---|---|
| Datasource | Your knowledge base datasource |
| Query | {{codeJs.text}} |
| Search Type | Similarity |
| Top K | 5 |
| Similarity Threshold | 0.7 |
Output variable: ubexDoc
The query uses the user's message text to find semantically similar content. Adjust `topK` and `similarityThreshold` based on your knowledge base size and quality.
4. LLM - Generate Response
Uses Claude to generate a response based on the retrieved documentation context.
| Setting | Value |
|---|---|
| Model | Claude 4.5 Sonnet (or any supported model) |
| Prompt | {{codeJs.text}} |
| Temperature | 0.7 |
| Max Tokens | 2048 |
System instructions should include your assistant's personality, rules, and the retrieved context:
```
You are an AI assistant. Answer questions based on the provided documentation.

TELEGRAM FORMATTING:
- Do NOT use markdown headers (# ## ###). Use bold text instead.
- Use short paragraphs separated by blank lines.
- Use • or - for bullet points, not markdown lists.
- Keep lines short — Telegram is a mobile-first app.
- Use **bold** sparingly for emphasis.
- Do not use tables — they don't render in Telegram.
- Emojis are fine but don't overdo it.

DOCUMENTATION CONTEXT:
{{ubexDoc}}
```
Output variable: model
Adding Telegram-specific formatting rules to the system instructions prevents the LLM from generating markdown that doesn't render well in Telegram.
5. Code - Format for Telegram
Converts any remaining markdown in the LLM response to Telegram-compatible HTML and builds the API payload.
Output variable: telegramPayload
```javascript
var text = variables.model || "Sorry, I couldn't generate a response.";

// Convert markdown to Telegram HTML.
// Fenced code blocks must be converted before inline code, otherwise the
// single-backtick regex mangles the ``` fences.
text = text.replace(/```[\w]*\n?([\s\S]*?)```/g, '<pre>$1</pre>');
text = text.replace(/^### (.+)$/gm, '<b>$1</b>');
text = text.replace(/^## (.+)$/gm, '<b>$1</b>');
text = text.replace(/^# (.+)$/gm, '<b>$1</b>');
text = text.replace(/\*\*(.+?)\*\*/g, '<b>$1</b>');
text = text.replace(/\*(.+?)\*/g, '<i>$1</i>');
text = text.replace(/`([^`]+)`/g, '<code>$1</code>');

var payload = {
  chat_id: variables.codeJs.chatId,
  text: text,
  parse_mode: "HTML"
};
JSON.stringify(payload);
```
Why build the payload in a Code node?
LLM responses contain newlines, quotes, and special characters that break raw JSON templates. Using JSON.stringify() handles all escaping automatically — same pattern as the Contact Form tutorial's email body.
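To see why, here is a standalone Node.js sketch (the chat ID is made up):

```javascript
// A reply containing real newlines and quotes, typical of LLM output.
var reply = 'Here is an example:\n\n<b>Step 1</b>: say "hello"';

// Splicing it into a raw JSON template produces invalid JSON:
// literal newlines and unescaped quotes break the string value.
var broken = '{"chat_id": 12345, "text": "' + reply + '"}';
var naiveFails = false;
try { JSON.parse(broken); } catch (e) { naiveFails = true; }

// JSON.stringify escapes everything correctly, so the payload
// survives a round trip intact.
var payload = JSON.stringify({ chat_id: 12345, text: reply, parse_mode: "HTML" });
var roundTrip = JSON.parse(payload);
// roundTrip.text === reply
```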
Supported Telegram HTML tags

| Tag | Renders as |
|---|---|
| `<b>text</b>` | bold |
| `<i>text</i>` | italic |
| `<code>text</code>` | inline code |
| `<pre>text</pre>` | code block |
| `<a href="url">text</a>` | hyperlink |
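One caveat with HTML parse mode: Telegram rejects messages containing unescaped `<`, `>`, or `&` in ordinary text with a 400 error. If your LLM output may mention raw HTML or code, consider escaping these characters before the tag-conversion step runs (escaping afterwards would mangle the `<b>`/`<i>` tags you just inserted). A minimal sketch; `escapeHtml` is a hypothetical helper, not part of the workflow above:

```javascript
// Escape the characters Telegram's HTML parse mode treats specially.
function escapeHtml(s) {
  return s
    .replace(/&/g, '&amp;')   // must run first, or it re-escapes the entities below
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

var risky = 'Use <stdio.h> & compile with -Wall';
var safe = escapeHtml(risky);
// safe === 'Use &lt;stdio.h&gt; &amp; compile with -Wall'
```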
6. HTTP Request - Send Reply to Telegram
Posts the formatted response back to the Telegram chat.
| Setting | Value |
|---|---|
| Method | POST |
| URL | https://api.telegram.org/bot{{secrets.telegram_secret}}/sendMessage |
Headers:
| Key | Value |
|---|---|
| Content-Type | application/json |
Body: {{telegramPayload}}
Output variable: httpRequest
Use `{{secrets.telegram_secret}}` in the URL so the bot token is never exposed in the workflow JSON.
Registering the Webhook
After deploying the workflow, tell Telegram where to send messages. Run this once:
```bash
curl -X POST "https://api.telegram.org/bot<YOUR_TOKEN>/setWebhook" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://workflow.ubex.ai/api/v1/YOUR_ID/telegram"}'
```

Expected response:

```json
{"ok": true, "result": true, "description": "Webhook was set"}
```

To verify the webhook is set:

```bash
curl "https://api.telegram.org/bot<YOUR_TOKEN>/getWebhookInfo"
```

To remove the webhook:

```bash
curl -X POST "https://api.telegram.org/bot<YOUR_TOKEN>/deleteWebhook"
```
Handling Telegram Retries
Telegram retries webhook deliveries if it doesn't receive a 200 OK response within ~30 seconds. Since LLM responses can take time, this can cause duplicate messages.
To prevent this:
- Set the API trigger timeout to 60 seconds
- Keep the rate limit low (5/min) to throttle retries
- Optionally, add deduplication logic using `message.message_id`
Deduplication with message ID (optional)
Add a Code node after the parser to check if you've already processed this message:
```javascript
// Assumes lastMessageId is persisted between runs (e.g. in workflow state);
// without persistence this check always sees 0 and never skips.
var messageId = variables.codeJs.messageId;
var lastProcessed = variables.lastMessageId || 0;
if (messageId <= lastProcessed) {
  ({ skip: true, reason: "duplicate" });
} else {
  ({ skip: false, messageId: messageId });
}
```
Then add a Condition node to skip duplicates before hitting the LLM.
Testing
Send a test message
Open your bot in Telegram and send a message. You should receive an AI-generated response within a few seconds.
Using curl to simulate a Telegram update
```bash
curl -X POST "https://workflow.ubex.ai/api/v1/YOUR_ID/telegram" \
  -H "Content-Type: application/json" \
  -d '{
    "update_id": 123456789,
    "message": {
      "message_id": 1,
      "from": {"id": 12345, "is_bot": false, "first_name": "Test"},
      "chat": {"id": 12345, "first_name": "Test", "type": "private"},
      "date": 1700000000,
      "text": "What is Ubex?"
    }
  }'
```
What to verify
| Check | Expected |
|---|---|
| Bot receives message | Workflow triggers on each Telegram message |
| Knowledge base search | Query Data returns relevant results |
| LLM generates response | Model output contains a contextual answer |
| Formatting | No raw markdown in Telegram — clean HTML rendering |
| Reply appears in chat | Bot sends the response back to the correct chat |
| Bot token not exposed | URL uses {{secrets.telegram_secret}} |
Extending the Bot
Add command handling
Check for bot commands in the parser Code node:
```javascript
var message = variables.message;
var result = { skip: true };
if (message && message.text) {
  var text = message.text;
  var isCommand = text.startsWith('/');
  result = {
    chatId: message.chat.id,
    userId: message.from.id,
    username: message.from.username || message.from.first_name,
    text: isCommand ? text.split(' ').slice(1).join(' ') || text : text,
    messageId: message.message_id,
    command: isCommand ? text.split(' ')[0].toLowerCase() : null
  };
}
result;
```
Then add a Condition node to route `/start`, `/help`, etc. to different responses.
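A lightweight alternative is to resolve known commands to canned replies directly in a Code node, letting everything else fall through to the RAG/LLM path. A sketch; the command list and reply text here are made up:

```javascript
// Map known commands to canned replies; unknown input falls through to the LLM.
var COMMANDS = {
  "/start": "Hi! Ask me anything about Ubex.",
  "/help": "Send me a question and I'll search the docs for an answer."
};

function routeCommand(command) {
  if (command && COMMANDS[command]) {
    return { handled: true, reply: COMMANDS[command] };
  }
  return { handled: false };
}

// routeCommand("/help").handled === true
// routeCommand("/ask").handled === false (goes to the LLM path)
```

This keeps command replies instant, since they skip the knowledge base search and LLM call entirely.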
Add session memory
Enable session memory on the LLM node using {{codeJs.chatId}} as the session ID. This gives each Telegram user their own conversation history.
Add inline keyboards
Include reply_markup in the payload:
```javascript
var payload = {
  chat_id: variables.codeJs.chatId,
  text: text,
  parse_mode: "HTML",
  reply_markup: {
    inline_keyboard: [[
      { text: "Visit Docs", url: "https://ubex.ai/docs" },
      { text: "Get Help", callback_data: "help" }
    ]]
  }
};
```
Security Checklist
| Control | Status |
|---|---|
| Bot token stored as secret | ✅ |
| POST only endpoint | ✅ |
| Rate limiting (5/min) | ✅ |
| 60s timeout for LLM latency | ✅ |
| No real secrets in tutorial JSON | ✅ |
| HTML parse mode (safer than Markdown) | ✅ |
| Input null-checks in parser | ✅ |