{
"data": [
{
"ai_agent_id": "<string>",
"agent_name": "<string>",
"llm": {
"llm_id": "<string>",
"llm_type": "<string>",
"model_provider": "openai",
"model_name": "<string>",
"system_prompt": "<string>",
"model_temperature": 0,
"llm_name": "<string>",
"llm_description": "<string>",
"base_url": "<string>",
"api_key": "<string>",
"max_tokens": 123,
"required_dynamic_data": [
"<string>"
],
"tools": [
{}
]
},
"stt": {
"provider": "<string>",
"model": "nova-2-general"
},
"tts": {
"provider": "<string>",
"voice_id": "<string>",
"voice_name": "<string>",
"model_name": "eleven_turbo_v2_5",
"voice_temperature": 0.2
},
"created_by_user_id": "<string>",
"language_code": "en-US",
"knowledge_base_id": "<string>",
"enable_user_interruptions": true,
"minimum_speech_duration_for_interruptions": 0.5,
"minimum_words_before_interruption": 0,
"wait_time_before_detecting_end_of_speech": 0.5,
"ambient_sound": "none",
"ambient_sound_volume": 1,
"webhook_url": "<string>",
"end_call_after_silence_seconds": 10,
"max_call_duration_seconds": 1800,
"welcome_message": "<string>",
"voicemail_detection_timeout_seconds": 90,
"dynamic_data_config": [
{
"url": "<string>",
"method": "<string>",
"timeout": 123,
"headers": {},
"body": {},
"query": {},
"cache": true,
"response_data": [
{
"name": "<string>",
"data": "<string>",
"context": "<string>"
}
]
}
],
"post_call_analysis": [
{
"type": "<string>",
"name": "<string>",
"description": "<string>",
"system_prompt": "<string>",
"examples": [
"<string>"
]
}
]
}
],
"next_page_token": "<string>"
}

List AI agents with pagination.
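Each response carries a next_page_token; passing it back on the next request retrieves the following page, and a null token means there are no more results. A minimal sketch of the pagination loop, assuming a fetch_page callable that wraps the actual HTTP request (the transport, endpoint path, and auth are deliberately left out of this sketch):

```python
from typing import Callable, Optional

def list_all_agents(fetch_page: Callable[[Optional[str]], dict]) -> list:
    """Collect agents across pages by following next_page_token.

    fetch_page(token) is expected to return one response body shaped
    like the example above: {"data": [...], "next_page_token": ...}.
    """
    agents = []
    token: Optional[str] = None
    while True:
        page = fetch_page(token)
        agents.extend(page.get("data", []))
        token = page.get("next_page_token")
        if not token:  # null/empty token means no more results
            break
    return agents

# Example with canned pages standing in for real HTTP calls:
_pages = {
    None: {"data": [{"ai_agent_id": "a1"}], "next_page_token": "p2"},
    "p2": {"data": [{"ai_agent_id": "a2"}], "next_page_token": None},
}

all_agents = list_all_agents(lambda tok: _pages[tok])
```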
Successful Response
List of AI agents matching the query criteria
Unique identifier for the AI agent
Name of the AI agent
Language model configuration for the AI agent
Unique identifier for the Language Learning Model (LLM)
Indicates this is a simple LLM type that uses standard model providers. Value: "simple"
The provider of the language model - OpenAI, Anthropic, Verbex, or OpenAI Compatible. Available options: openai, anthropic, verbex, openai_compatible
Name of the specific model from the provider (e.g. 'gpt-4', 'claude-2')
The base prompt that defines the LLM's behavior, role and operating parameters
Controls randomness in model outputs - 0 is focused/deterministic, 1 is more creative/random. Required range: 0 <= x <= 1
Name of the LLM to use
Description of the LLM to use
Base URL for the LLM API endpoint
API key for authentication with the LLM service
Maximum number of tokens allowed in the response
List of dynamic data fields that must be provided to this LLM during operation
List of tools/functions available to the LLM for external interactions and API calls
Speech-to-text configuration for the AI agent
Indicates Deepgram is being used as the STT provider. Value: "deepgram"
Specific Deepgram model being used for speech recognition, each optimized for different use cases. Available options: nova-2-general, nova-2-phonecall, nova-meeting, nova-2-meeting, nova-2-finance, nova-2-conversationalai, nova-2-voicemail, nova-2-video, nova-2-medical, nova-2-drivethru, nova-2-automotive, enhanced-general, enhanced-meeting, enhanced-phonecall, enhanced-finance, base, meeting, phonecall, finance, conversationalai, voicemail, video, whisper-tiny, whisper-base, whisper-small, whisper-medium, whisper-large
Text-to-speech configuration for the AI agent
Indicates ElevenLabs is being used as the TTS provider. Value: "elevenlabs"
Unique identifier for the selected ElevenLabs voice
Human-readable name of the selected voice
Specific ElevenLabs model version being used for voice synthesis. Available options: eleven_turbo_v2_5, eleven_turbo_v2, eleven_multilingual_v2, eleven_monolingual_v1
Controls variation in voice output - higher values increase variability. Required range: 0 <= x <= 1
ID of the user who created this AI agent
Language code for the AI agent in BCP-47 format
ID of the associated knowledge base, if any
Whether to allow users to interrupt the AI agent while speaking
Minimum duration of speech in seconds before interruptions are allowed. Required range: 0 <= x <= 15
Minimum number of words that must be spoken before interruption is allowed. Required range: 0 <= x <= 5
Time in seconds to wait before considering speech has ended. Required range: 0 <= x <= 15
Type of ambient background sound to use during calls. Available options: office, cafe, restaurant, park, street, home, library, airport, train_station, beach, none
Volume level for ambient sound, from 0 (muted) to 2 (loud). Required range: 0 <= x <= 2
URL to send webhook notifications about call events
Number of seconds of silence before automatically ending the call. Required range: 10 <= x <= 30
Maximum duration of a call in seconds. Required range: 30 <= x <= 3600
Custom welcome message to play at the start of calls
Time in seconds to wait for voicemail detection. Required range: x >= 0
Configuration for dynamic data sources that can be used during calls
Endpoint URL for retrieving dynamic data
HTTP method to use when calling the endpoint (GET, POST, etc)
Maximum time in seconds to wait for the endpoint response
Request body data to send to the endpoint
Whether to cache the response data for subsequent use
Configuration for parsing and using the endpoint response data
Identifier for this dynamic data field
The actual data value retrieved from the external source
Additional context or metadata about this data field
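A dynamic_data_config entry pairs an HTTP endpoint to call with a mapping of its response into named fields. A small sketch of assembling one such entry as a plain dict, following the field names documented above (the URL and field values are illustrative only, not part of the API):

```python
def make_dynamic_data_entry(url, method="GET", timeout=5, cache=True,
                            headers=None, body=None, query=None,
                            response_data=None):
    """Assemble a dynamic_data_config entry matching the documented shape."""
    return {
        "url": url,
        "method": method,
        "timeout": timeout,           # max seconds to wait for the endpoint
        "headers": headers or {},
        "body": body or {},
        "query": query or {},
        "cache": cache,               # reuse the response on later lookups
        "response_data": response_data or [],
    }

entry = make_dynamic_data_entry(
    "https://example.com/customer",   # illustrative endpoint
    response_data=[
        {"name": "customer_name",       # identifier for this field
         "data": "Jane Doe",            # illustrative retrieved value
         "context": "Caller's display name"},
    ],
)
```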
Configuration for analysis items to be processed after calls
Indicates this analysis item returns a text string. Value: "string"
Identifier for this analysis item
Detailed description of what this analysis item measures or determines
Prompt used to guide the AI in performing this analysis
Example responses to guide the analysis
Token to retrieve the next page of results. Null if there are no more results
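Several fields above carry numeric ranges (voice_temperature in [0, 1], ambient_sound_volume in [0, 2], max_call_duration_seconds in [30, 3600], and so on). A hedged sketch of a client-side check against those documented ranges before sending an agent config; the helper name is ours, not part of the API:

```python
# Documented ranges from the field reference above: (min, max) per field.
RANGES = {
    "model_temperature": (0, 1),
    "voice_temperature": (0, 1),
    "minimum_speech_duration_for_interruptions": (0, 15),
    "minimum_words_before_interruption": (0, 5),
    "wait_time_before_detecting_end_of_speech": (0, 15),
    "ambient_sound_volume": (0, 2),
    "end_call_after_silence_seconds": (10, 30),
    "max_call_duration_seconds": (30, 3600),
}

def out_of_range_fields(agent: dict) -> list:
    """Return names of fields whose values fall outside documented ranges."""
    bad = []
    for field, (lo, hi) in RANGES.items():
        value = agent.get(field)
        if value is not None and not (lo <= value <= hi):
            bad.append(field)
    return bad

problems = out_of_range_fields({
    "ambient_sound_volume": 2.5,        # above the documented max of 2
    "max_call_duration_seconds": 1800,  # within 30..3600
})
```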