LangBot provides a series of APIs for plugins to operate various LangBot modules and control message contexts.
Request API
Operations on the user request (message) currently being processed. Only available in EventListener and Command components. Access methods:
- In event handler methods of EventListener: internal methods of the event_context: context.EventContext object
- In subcommand handler methods of Command: internal methods of the context: ExecuteContext object
Direct Reply Message
Directly reply with a message chain to the session where the current request is located.
For message chain construction methods, please refer to Message Platform Entities.
async def reply(
self, message_chain: platform_message.MessageChain, quote_origin: bool = False
):
"""Reply to the message request
Args:
message_chain (platform_message.MessageChain): LangBot message chain
quote_origin (bool): Whether to quote the original message
"""
# Usage example
await event_context.reply(
platform_message.MessageChain([
platform_message.Plain(text="Hello, world!"),
]),
)
Get Bot UUID
Get the UUID of the bot that originated the current request.
async def get_bot_uuid(self) -> str:
"""Get the bot uuid"""
# Usage example
bot_uuid = await event_context.get_bot_uuid()
Set Request Variables
Some per-request information is stored in request variables. When an external LLMOps platform such as Dify is used, these variables are passed to it explicitly.
async def set_query_var(self, key: str, value: Any):
"""Set a query variable"""
# Usage example
await event_context.set_query_var("key", "value")
Get Request Variables
Get a single request variable.
async def get_query_var(self, key: str) -> Any:
"""Get a query variable"""
# Usage example
query_var = await event_context.get_query_var("key")
Get All Request Variables
async def get_query_vars(self) -> dict[str, Any]:
"""Get all query variables"""
# Usage example
query_vars = await event_context.get_query_vars()
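The set/get pair can be wrapped for convenience when a handler needs to attach several variables at once. A minimal sketch, assuming only the set_query_var method shown above (the helper name and the example variable names are ours, not part of the LangBot API):

```python
import asyncio
from typing import Any


async def set_query_vars_bulk(ctx: Any, variables: dict[str, Any]) -> None:
    """Set several request variables in one call.

    `ctx` is the event_context (or ExecuteContext) of the current request.
    This is a hypothetical convenience helper, not a LangBot API.
    """
    for key, value in variables.items():
        await ctx.set_query_var(key, value)
```

Inside an event handler this might be called as `await set_query_vars_bulk(event_context, {"user_tier": "pro", "locale": "en"})`, after which each variable is readable via get_query_var.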
List Pipeline Knowledge Bases
Get the knowledge bases configured in the pipeline used by the current request.
async def list_pipeline_knowledge_bases(self) -> list[dict[str, Any]]:
"""List knowledge bases configured for the current pipeline"""
# Usage example
knowledge_bases = await event_context.list_pipeline_knowledge_bases()
# Return example
[
{
"uuid": "kb_uuid",
"name": "Product Docs",
"description": "Internal product documentation",
}
]
This only returns knowledge bases bound in the current pipeline’s Local Agent configuration. If the pipeline has no knowledge base configured, or is not using Local Agent, an empty list is returned.
Retrieve Knowledge from the Current Pipeline Scope
Run retrieval against the knowledge bases that are allowed for the current request’s pipeline.
async def retrieve_knowledge(
self,
kb_id: str,
query_text: str,
top_k: int = 5,
filters: dict[str, Any] | None = None,
) -> list[dict[str, Any]]:
"""Retrieve relevant documents from a knowledge base"""
# Usage example
knowledge_bases = await event_context.list_pipeline_knowledge_bases()
results = await event_context.retrieve_knowledge(
kb_id=knowledge_bases[0]["uuid"],
query_text="How do I configure the enterprise knowledge base?",
top_k=3,
filters={"document_type": {"$eq": "manual"}},
)
# Return example
[
{
"id": "chunk_0",
"content": [{"type": "text", "text": "Knowledge base configuration guide"}],
"metadata": {"document_id": "doc_001"},
"distance": 0.12,
"score": 0.88,
}
]
kb_id must come from list_pipeline_knowledge_bases and must belong to a knowledge base configured for the current pipeline. Otherwise, LangBot returns an error.
LangBot API
These APIs can be called from any plugin component. Access methods:
- In the plugin root directory's main.py: internal methods of the self object; these APIs are all provided by the plugin base class BasePlugin.
- In any plugin component class: internal methods of the self.plugin object.
Get Plugin Configuration
The plugin configuration schema is declared in manifest.yaml; users fill in values on LangBot's plugin management page according to that schema. Plugin code can then call this API to read the configured values.
def get_config(self) -> dict[str, typing.Any]:
"""Get the config of the plugin."""
# Usage example
config = self.plugin.get_config()
Get LangBot Version
Get the LangBot version number, returned as a string in the format v<major>.<minor>.<patch>.
async def get_langbot_version(self) -> str:
"""Get the langbot version"""
# Usage example
langbot_version = await self.plugin.get_langbot_version()
Get All Bots
Returns a list of all bot UUIDs.
async def get_bots(self) -> list[str]:
"""Get all bots"""
# Usage example
bots = await self.plugin.get_bots()
Get Bot Information
Get information about a bot by its UUID.
async def get_bot_info(self, bot_uuid: str) -> dict[str, Any]:
"""Get a bot info"""
# Usage example
bot_info = await self.plugin.get_bot_info("de639861-be05-4018-859b-c2e2d3e0d603")
# Return example
{
"uuid": "de639861-be05-4018-859b-c2e2d3e0d603",
"name": "aiocqhttp",
"description": "Migrated from LangBot v3",
"adapter": "aiocqhttp",
"enable": true,
"use_pipeline_name": "ChatPipeline",
"use_pipeline_uuid": "c30a1dca-e91c-452b-83ec-84d635a30028",
"created_at": "2025-05-10T13:53:08",
"updated_at": "2025-08-12T11:27:30",
"adapter_runtime_values": { # Present if the bot is currently running
"bot_account_id": 960164003 # Bot account ID
}
}
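Per the return example above, adapter_runtime_values is present only while a bot is running, so its presence can be used as a running check. A minimal sketch combining get_bots and get_bot_info (the helper name is ours; `plugin` stands for self.plugin):

```python
import asyncio
from typing import Any


async def list_running_bots(plugin: Any) -> list[dict[str, Any]]:
    """Return info dicts for bots that are currently running.

    A bot is considered running when its info contains the
    "adapter_runtime_values" key, per the return example above.
    """
    running = []
    for bot_uuid in await plugin.get_bots():
        info = await plugin.get_bot_info(bot_uuid)
        if "adapter_runtime_values" in info:
            running.append(info)
    return running
```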
Send Proactive Message
Send a proactive message by specifying a bot UUID and a target session.
For message chain construction methods, please refer to Message Platform Entities.
async def send_message(
self,
bot_uuid: str,
target_type: str,
target_id: str,
message_chain: platform_message.MessageChain,
) -> None:
"""Send a message to a session"""
# Usage example
await self.plugin.send_message(
bot_uuid="de639861-be05-4018-859b-c2e2d3e0d603",
target_type="person",
target_id="1010553892",
message_chain=platform_message.MessageChain([platform_message.Plain(text="Hello, world!")]),
)
Get All LLM Models
Returns a list of UUIDs for all configured LLM models.
async def get_llm_models(self) -> list[str]:
"""Get all LLM models"""
# Usage example
llm_models = await self.plugin.get_llm_models()
Invoke LLM Model
Invoke an LLM model and get back a single LLM message (non-streaming).
async def invoke_llm(
self,
llm_model_uuid: str,
messages: list[provider_message.Message],
funcs: list[resource_tool.LLMTool] = [],
extra_args: dict[str, Any] = {},
) -> provider_message.Message:
"""Invoke an LLM model"""
# Usage example
llm_message = await self.plugin.invoke_llm(
llm_model_uuid="llm_model_uuid",
messages=[provider_message.Message(role="user", content="Hello, world!")],
funcs=[],
extra_args={},
)
List Available Parsers
List Parser plugins currently available on the host, optionally filtered by MIME type.
async def list_parsers(self, mime_type: str | None = None) -> list[dict[str, Any]]:
"""List available Parser plugins"""
# Usage example
parsers = await self.plugin.list_parsers(mime_type="application/pdf")
# Each item includes plugin_id, plugin_author, plugin_name, name, description, supported_mime_types
Set Plugin Persistent Data
Persistently store plugin data. Data stored through this interface can only be accessed by this plugin. Values need to be converted to bytes manually.
async def set_plugin_storage(self, key: str, value: bytes) -> None:
"""Set a plugin storage value"""
# Usage example
await self.plugin.set_plugin_storage("key", b"value")
Get Plugin Persistent Data
async def get_plugin_storage(self, key: str) -> bytes:
"""Get a plugin storage value"""
# Usage example
plugin_storage = await self.plugin.get_plugin_storage("key")
Get All Plugin Persistent Data Keys
async def get_plugin_storage_keys(self) -> list[str]:
"""Get all plugin storage keys"""
# Usage example
plugin_storage_keys = await self.plugin.get_plugin_storage_keys()
Delete Plugin Persistent Data
async def delete_plugin_storage(self, key: str) -> None:
"""Delete a plugin storage value"""
# Usage example
await self.plugin.delete_plugin_storage("key")
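Because storage values are raw bytes, structured data must be serialized by the caller. A common pattern is a JSON round-trip; a minimal sketch (the helper names are ours, and the same pattern applies to workspace storage below):

```python
import json
from typing import Any


def to_storage_bytes(value: Any) -> bytes:
    """Serialize a JSON-compatible value to bytes for storage."""
    return json.dumps(value).encode("utf-8")


def from_storage_bytes(raw: bytes) -> Any:
    """Deserialize bytes read back from storage."""
    return json.loads(raw.decode("utf-8"))


# Hypothetical usage inside a plugin method:
#   await self.plugin.set_plugin_storage("settings", to_storage_bytes({"notify": True}))
#   settings = from_storage_bytes(await self.plugin.get_plugin_storage("settings"))
```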
Set Workspace Persistent Data
Data stored through this interface can be accessed by all plugins. Values need to be converted to bytes manually.
async def set_workspace_storage(self, key: str, value: bytes) -> None:
"""Set a workspace storage value"""
# Usage example
await self.plugin.set_workspace_storage("key", b"value")
Get Workspace Persistent Data
async def get_workspace_storage(self, key: str) -> bytes:
"""Get a workspace storage value"""
# Usage example
workspace_storage = await self.plugin.get_workspace_storage("key")
Get All Workspace Persistent Data Keys
async def get_workspace_storage_keys(self) -> list[str]:
"""Get all workspace storage keys"""
# Usage example
workspace_storage_keys = await self.plugin.get_workspace_storage_keys()
Delete Workspace Persistent Data
async def delete_workspace_storage(self, key: str) -> None:
"""Delete a workspace storage value"""
# Usage example
await self.plugin.delete_workspace_storage("key")
Get Plugin File-typed Config Field Data
async def get_config_file(self, file_key: str) -> bytes:
"""Get a config file value"""
# Usage example
file_bytes = await self.plugin.get_config_file("key")
Use this in conjunction with configuration fields of type file or array[file].
RAG API
These APIs are available to KnowledgeEngine components for accessing the LangBot host's embedding models, vector database, and file storage. Access method:
- In KnowledgeEngine component classes: internal methods of the self.plugin object.
Invoke Embedding Model
Generate text embeddings using the host’s configured embedding model.
async def invoke_embedding(
self,
embedding_model_uuid: str,
texts: list[str],
) -> list[list[float]]:
"""Generate embeddings using host's embedding model
Args:
embedding_model_uuid: Embedding model UUID
texts: List of texts to embed
Returns:
List of embedding vectors, one per input text
"""
# Usage example
vectors = await self.plugin.invoke_embedding("model_uuid", ["Hello", "World"])
Vector Upsert
Upsert vectors to the host’s vector database.
async def vector_upsert(
self,
collection_id: str,
vectors: list[list[float]],
ids: list[str],
metadata: list[dict[str, Any]] | None = None,
documents: list[str] | None = None,
) -> None:
"""Upsert vectors
Args:
collection_id: Target collection ID
vectors: List of vectors
ids: List of unique IDs for vectors
metadata: Optional list of metadata dicts
documents: Optional raw text documents. Required for full-text
and hybrid search in backends that support them.
"""
# Usage example
await self.plugin.vector_upsert(
collection_id="kb_uuid",
vectors=[[0.1, 0.2, ...], [0.3, 0.4, ...]],
ids=["chunk_0", "chunk_1"],
metadata=[{"document_id": "doc1"}, {"document_id": "doc1"}],
documents=["chunk text 0", "chunk text 1"],
)
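A typical ingestion step chains the two APIs above: embed the chunks, then upsert vectors with the chunk text kept in metadata (so it is available at retrieval time, as noted under Vector Search). A minimal sketch; the helper name and ID scheme are ours, and `plugin` stands for self.plugin:

```python
import asyncio
from typing import Any


async def ingest_chunks(plugin: Any, embedding_model_uuid: str,
                        collection_id: str, document_id: str,
                        chunks: list[str]) -> list[str]:
    """Embed text chunks and upsert them into the host vector database."""
    # one embedding vector per chunk
    vectors = await plugin.invoke_embedding(embedding_model_uuid, chunks)
    ids = [f"{document_id}_chunk_{i}" for i in range(len(chunks))]
    await plugin.vector_upsert(
        collection_id=collection_id,
        vectors=vectors,
        ids=ids,
        # keep the raw text in metadata so search results carry it back
        metadata=[{"document_id": document_id, "text": c} for c in chunks],
        documents=chunks,  # enables full-text/hybrid search on capable backends
    )
    return ids
```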
Vector Search
Search similar vectors in the host’s vector database.
async def vector_search(
self,
collection_id: str,
query_vector: list[float],
top_k: int = 5,
filters: dict[str, Any] | None = None,
search_type: str = "vector",
query_text: str = "",
) -> list[dict[str, Any]]:
"""Vector search
Args:
collection_id: Target collection ID
query_vector: Query vector for similarity search
top_k: Number of results to return
filters: Optional metadata filters
search_type: One of 'vector', 'full_text', 'hybrid'
query_text: Raw query text, used for full_text and hybrid search
Returns:
List of search results (dict with id, score, metadata, etc.)
"""
# Usage example
results = await self.plugin.vector_search(
collection_id="kb_uuid",
query_vector=[0.1, 0.2, ...],
top_k=5,
search_type="hybrid",
query_text="search query",
)
# Return format: [{"id": "chunk_0", "score": 0.123, "metadata": {"document_id": "doc1", ...}}, ...]
Each result returned by vector_search is a dict containing id (vector ID), score (distance score), and metadata (metadata provided during upsert). If you need text content in retrieval results, store the text in metadata during ingestion.
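The retrieval side mirrors the ingestion pattern: embed the query, search, then read the text back out of metadata. A minimal sketch, assuming the ingest step stored each chunk's text under metadata["text"] (the helper name is ours; `plugin` stands for self.plugin):

```python
import asyncio
from typing import Any


async def retrieve_texts(plugin: Any, embedding_model_uuid: str,
                         collection_id: str, query: str,
                         top_k: int = 5) -> list[str]:
    """Embed a query and return the stored chunk texts of the top matches."""
    # invoke_embedding returns one vector per input text
    [query_vector] = await plugin.invoke_embedding(embedding_model_uuid, [query])
    results = await plugin.vector_search(
        collection_id=collection_id,
        query_vector=query_vector,
        top_k=top_k,
    )
    # fall back to "" for chunks ingested without a text field
    return [r["metadata"].get("text", "") for r in results]
```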
Vector Delete
Delete vectors from the host’s vector database.
async def vector_delete(
self,
collection_id: str,
file_ids: list[str] | None = None,
filters: dict[str, Any] | None = None,
) -> int:
"""Vector delete
Args:
collection_id: Target collection ID
file_ids: File IDs whose vectors should be deleted
filters: Optional metadata filters for deletion
Returns:
Number of deleted items
"""
# Usage example
deleted = await self.plugin.vector_delete(
collection_id="kb_uuid",
file_ids=["doc_001"],
)
The filters parameter supports Chroma-style where syntax for metadata filtering. Multiple top-level keys are AND-ed. Supported operators: $eq, $ne, $gt, $gte, $lt, $lte, $in, $nin.
# Implicit $eq
results = await self.plugin.vector_search(
collection_id="kb_uuid",
query_vector=[0.1, 0.2, ...],
filters={"file_id": "abc"},
)
# Comparison operator
results = await self.plugin.vector_search(
collection_id="kb_uuid",
query_vector=[0.1, 0.2, ...],
filters={"created_at": {"$gte": 1700000000}},
)
# In-list operator
results = await self.plugin.vector_search(
collection_id="kb_uuid",
query_vector=[0.1, 0.2, ...],
filters={"file_type": {"$in": ["pdf", "docx"]}},
)
# Delete by filter
deleted = await self.plugin.vector_delete(
collection_id="kb_uuid",
filters={"file_type": {"$eq": "pdf"}},
)
Note: Chroma, Qdrant, and SeekDB store full metadata and can filter on any field. Milvus and pgvector only store text, file_id, and chunk_uuid — filters on other fields will be silently ignored.
Get Knowledge File Stream
Get uploaded file content from the host's storage.
async def get_knowledge_file_stream(self, storage_path: str) -> bytes:
"""Get file content
Args:
storage_path: File storage path (from FileObject.storage_path)
Returns:
File content as bytes
"""
# Usage example
file_bytes = await self.plugin.get_knowledge_file_stream(context.file_object.storage_path)