Call-Emma.ai: CCS AI Solution
Configuration Options

Call-Emma.ai is a cloud-hosted or self-hosted, enterprise-grade call center software add-on designed to process call recordings, generate AI-powered insights and analysis, and deliver the results directly into your CCS platform via an API. The solution scales in the cloud, is highly configurable, multi-tenant ready, and built for usage-based billing.

API · AI Insights · CCA · Audio Recording · Containerized · Enterprise Grade · Cloud Agnostic · Scalable · Multi-Tenant · Configurable · High Volume · Easy to Integrate

We are accepting applications for preferred licensing terms for the Call-Emma AI Call Center Solution. Please contact Sales.


The Configuration Page in Emma Call Center Software puts you in control, offering a simple way to customize the platform to match your specific business processes. Whether you're ready to use Emma right out of the box or need to fine-tune it to fit your exact requirements, this page makes it easy. Built with both flexibility and ease of use in mind, it's designed for technical teams and developers alike to adapt Emma to your evolving needs—so your call center works the way you do.


Overview of Configuration Options

  1. Database Connection - Configures storage location for audio files and transcripts on a per-client basis.
  2. Call List Table Location - Specifies the location of the database table where all call information is stored on a per-client basis.
  3. Notification URL - Specifies the web URL for receiving status notifications about conversations being processed.
  4. Audio Filter Model - Specifies which noise filter model is used to process audio files. The processed audio file is saved after filtering.
  5. Audio Speed Optimization - Adjusts the phoneme rate to an optimal level for the speech-to-text model.
  6. Retain Original Audio File - Controls whether the original audio file is kept or deleted after processing.
  7. Remove Personal Information - Removes personally identifiable information (PII) from conversation transcripts.
  8. AI Summary Enabled - Controls the generation of conversation summaries from chat transcripts.
  9. AI Summary LLM Model - Specifies which large language model (LLM) is used for generating conversation summaries.
  10. AI Summary API Type - Specifies the API protocol used to communicate with the selected LLM model.
  11. AI Summary URL - Specifies the location of the LLM used for generating summaries, which may be on the local server or at a remote endpoint.
  12. AI Summary Token - Specifies the authentication token for accessing the AI model used for summaries.
  13. LLM Query Configuration - Controls which specific LLM query operations are executed on the AI model.
  14. LLM Query LLM Model - Specifies which large language model (LLM) is used for processing queries.
  15. LLM Query API Type - Specifies the API protocol used to communicate with the selected LLM model for query processing.
  16. LLM Query URL - Specifies the location of the LLM used for processing queries, which may be on the local server or at a remote endpoint.
  17. LLM Query Token - Specifies the authentication token for accessing the AI model used for query processing.
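
Together, the options above form a single per-call configuration. The sketch below assembles one as a JSON payload; the variable names come from the reference tables in this document, but the JSON envelope, example values, and token are illustrative assumptions, not documented behavior:

```python
import json

# Sketch of a per-call configuration payload. Variable names follow the
# option tables in this document; the values shown are examples only.
config = {
    "callback": "https://example.com/emma/status",  # Notification URL
    "noise_filter": "SpectralGating",               # Audio Filter Model
    "speed_filter": True,                           # Audio Speed Optimization
    "keep_audio": False,                            # Retain Original Audio File
    "remove_pii": True,                             # Remove Personal Information
    "summary": True,                                # AI Summary Enabled
    "summary_model_url": "http://localhost:8000/v1",  # AI Summary URL
    "summary_model_token": "YOUR_TOKEN",              # AI Summary Token
}

payload = json.dumps(config, indent=2)
print(payload)
```

Options omitted from the payload fall back to their documented defaults.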

Specific Configuration Details

Database Connection

Title: Database Connection
Description: Configures the storage location for audio files and transcripts on a per-client basis.
Benefit: Provides data isolation in multi-tenant deployments, enhancing security and meeting compliance requirements.
Variable name: database
Required: No
Accepted values: database connection string in the form address:port:login
Notes: Enables data separation in multi-tenant environments for security, compliance, or redundancy purposes. Particularly useful when clients require isolated data storage.
Default value: the environment file's configuration
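
The accepted value is a three-part colon-delimited string. A minimal sketch of parsing and validating it, assuming exactly the address:port:login form (the keys of the returned dict are illustrative, not a product API):

```python
def parse_database_setting(value: str) -> dict:
    """Split a `database` value of the form address:port:login.

    The three-part format follows the accepted-values row above; logins
    containing colons would need a different parsing strategy.
    """
    parts = value.split(":")
    if len(parts) != 3:
        raise ValueError("expected address:port:login")
    address, port, login = parts
    return {"address": address, "port": int(port), "login": login}

print(parse_database_setting("db.internal.example.com:5432:emma_client1"))
```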

Call List Table Location

Title: Call List Table Location
Description: Specifies the location of the database table where all call information is stored on a per-client basis.
Benefit: Enables segregation of call data for enhanced security and compliance in multi-tenant environments.
Variable name: table
Required: No
Accepted values: database location URL
Notes: Allows clients to store their call database tables in secure, isolated locations. Particularly valuable in multi-tenant deployments requiring data separation.
Default value: the environment file's configuration

Notification URL

Title: Notification URL
Description: Specifies the web URL for receiving status notifications about conversations being processed.
Benefit: Enables real-time monitoring and integration with external systems to track conversation processing status.
Variable name: callback
Required: No
Accepted values: URL string
Notes: Callback requests must include either the jobId or the call recording name as a parameter.
Default value: none
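
Per the note above, callback requests carry the jobId or recording name as a parameter. A small sketch of constructing such a callback URL, assuming the parameter is literally named jobId (the exact parameter name and casing are assumptions):

```python
from urllib.parse import urlencode

def build_callback_url(base: str, job_id: str) -> str:
    # Append the job identifier as a query parameter; urlencode handles
    # any characters that need percent-escaping.
    return f"{base}?{urlencode({'jobId': job_id})}"

url = build_callback_url("https://example.com/emma/status", "job-42")
print(url)
```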

Audio Filter Model

Title: Audio Filter Model
Description: Specifies which noise filter model is used to process audio files. The processed audio file is saved after filtering.
Benefit: Improves speech-to-text conversion accuracy by removing background noise and other audio artifacts.
Variable name: noise_filter
Required: No
Accepted values: blank (turns the feature off), or a filter model name
Notes: When enabled, the selected noise filter removes background and other noise artifacts, significantly enhancing the accuracy of speech-to-text conversion.
Default value: SpectralGating

Audio Speed Optimization

Title: Audio Speed Optimization
Description: Adjusts the phoneme rate to an optimal level for the speech-to-text model.
Benefit: Significantly enhances transcript accuracy by optimizing audio playback speed for the speech recognition engine.
Variable name: speed_filter
Required: Yes
Accepted values: true, false
Notes: When enabled, this feature modifies the audio playback rate to match the ideal phoneme processing rate of the speech-to-text model, resulting in improved transcription accuracy.
Default value: true

Retain Original Audio File

Title: Retain Original Audio File
Description: Controls whether the original audio file is kept or deleted after processing.
Benefit: Allows organizations to balance privacy requirements with record retention needs.
Variable name: keep_audio
Required: No
Accepted values: true (keep audio), false (delete audio)
Notes: Privacy regulations and storage capacity constraints may dictate whether a company retains the original audio files alongside transcript files.
Default value: false

Remove Personal Information

Title: Remove Personal Information
Description: Removes personally identifiable information (PII) from conversation transcripts.
Benefit: Enhances privacy compliance and reduces liability when processing sensitive conversations.
Variable name: remove_pii
Required: No
Accepted values: true (activate PII removal model), false (do not activate PII removal)
Notes: When enabled, PII is replaced with standardized tags based on information type (e.g., names, dates, phone numbers). The system uses the following replacement schema:
  PERSON → NAME
  GPE → GEOGRAPHIC_SUBDIVISION
  DATE → DATE
  PHONE → PHONE_NUMBER
  VEHICLE → VEHICLE_ID
  FAX → FAX_NUMBER
  DEVICE → DEVICE_IDENTIFIER
  EMAIL → EMAIL
  URL → URL
  SSN → SOCIAL_SECURITY_NUMBER
  NID → NATIONAL_ID_NUMBER
  MRN → MEDICAL_RECORD_NUMBER
  IP → IP_ADDRESS
  BIOMETRIC → BIOMETRIC_IDENTIFIER
  PHOTO → FULL_FACE_PHOTOGRAPHIC_IMAGE
  ACCOUNT → ACCOUNT_NUMBER
  CERTIFICATE → CERTIFICATE_NUMBER
  LICENSE → LICENSE_NUMBER
  OTHER → UNIQUE_IDENTIFIER
Default value: true
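
The replacement schema in the notes above can be expressed directly as a lookup table. The sketch below applies it to pre-detected entity spans; the detection step (a hard-coded list here) stands in for the PII model, and the bracketed tag format is an illustrative assumption:

```python
# Replacement schema from the notes above, expressed as a lookup table.
PII_TAGS = {
    "PERSON": "NAME", "GPE": "GEOGRAPHIC_SUBDIVISION", "DATE": "DATE",
    "PHONE": "PHONE_NUMBER", "VEHICLE": "VEHICLE_ID", "FAX": "FAX_NUMBER",
    "DEVICE": "DEVICE_IDENTIFIER", "EMAIL": "EMAIL", "URL": "URL",
    "SSN": "SOCIAL_SECURITY_NUMBER", "NID": "NATIONAL_ID_NUMBER",
    "MRN": "MEDICAL_RECORD_NUMBER", "IP": "IP_ADDRESS",
    "BIOMETRIC": "BIOMETRIC_IDENTIFIER",
    "PHOTO": "FULL_FACE_PHOTOGRAPHIC_IMAGE", "ACCOUNT": "ACCOUNT_NUMBER",
    "CERTIFICATE": "CERTIFICATE_NUMBER", "LICENSE": "LICENSE_NUMBER",
    "OTHER": "UNIQUE_IDENTIFIER",
}

def redact(text: str, entities: list) -> str:
    """Replace each (span, entity_type) pair with its schema tag.

    `entities` stands in for the PII model's detections; the bracketed
    [TAG] output format is an assumption for illustration.
    """
    for span, entity_type in entities:
        text = text.replace(span, f"[{PII_TAGS[entity_type]}]")
    return text

print(redact("Call John Smith at 555-0100",
             [("John Smith", "PERSON"), ("555-0100", "PHONE")]))
# → Call [NAME] at [PHONE_NUMBER]
```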

AI Summary Enabled

Title: AI Summary Enabled
Description: Controls the generation of conversation summaries from chat transcripts.
Benefit: Provides quick insight into conversation content without requiring full transcript review.
Variable name: summary
Required: No
Accepted values: true (generate summary), false (no summary)
Notes: This is a key feature of the system. The prompt used to generate the summary can be customized for different use cases or information requirements.
Default value: true

AI Summary LLM Model

Title: AI Summary LLM Model
Description: Specifies which large language model (LLM) is used for generating conversation summaries.
Benefit: Allows customization of summary quality, cost, and performance based on specific needs.
Variable name: summary_model_api_type
Required: Yes
Accepted values: LLM model name: (ENTER LIST OF MODELS)
Notes: This is a key feature that determines the quality and characteristics of generated summaries.
Default value: Llama3_3

AI Summary API Type

Title: AI Summary API Type
Description: Specifies the API protocol used to communicate with the selected LLM model.
Benefit: Provides flexibility to integrate with different AI service providers based on cost, performance, or feature requirements.
Variable name: summary_model_api_type
Required: Yes
Accepted values: 4 options, (David fill this in)
Notes: The LLM's URL configuration is independent of the API protocol selection.
Default value: OpenAI API

AI Summary URL

Title: AI Summary URL
Description: Specifies the location of the LLM used for generating summaries, which may be on the local server or at a remote endpoint.
Benefit: Provides flexibility to use on-premises or cloud-based LLM resources based on security, latency, or cost requirements.
Variable name: summary_model_url
Required: Yes
Accepted values: URL string
Notes: The URL configuration is independent of the API protocol selection.
Default value: none

AI Summary Token

Title: AI Summary Token
Description: Specifies the authentication token for accessing the AI model used for summaries.
Benefit: Enables secure authentication when connecting to commercial or private LLM services.
Variable name: summary_model_token
Required: No
Accepted values: token string
Notes: The authentication token is independent of the API protocol selection.
Default value: none

LLM Query Configuration

Title: LLM Query Configuration
Description: Controls which specific LLM query operations are executed on the AI model.
Benefit: Provides granular control over which AI operations are performed, optimizing resource usage and customizing functionality.
Variable name: llm_query
Required: No
Accepted values: integer or string
Notes: This setting determines which numbered LLM queries will be executed. Each query is identified by a unique number; setting this value enables only the specified query.
Default value: true
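
The select-by-number mechanism in the notes above can be sketched as a lookup into a query registry. The query numbers and prompt texts below are invented for illustration; only the resolve-by-number behavior comes from this document:

```python
# Hypothetical registry of numbered LLM queries. The numbers and prompt
# texts are illustrative placeholders, not the product's actual queries.
LLM_QUERIES = {
    1: "Classify the caller's sentiment.",
    2: "List any follow-up actions promised by the agent.",
    3: "Did the agent read the required compliance statement?",
}

def select_query(llm_query) -> str:
    """Resolve the llm_query setting (integer or string) to one prompt."""
    return LLM_QUERIES[int(llm_query)]

print(select_query("2"))
```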

LLM Query LLM Model

Title: LLM Query LLM Model
Description: Specifies which large language model (LLM) is used for processing queries.
Benefit: Allows selection of the most appropriate AI model based on performance requirements, cost considerations, and specific capabilities.
Variable name: llm_query_model_api_type
Required: Yes
Accepted values: LLM model name: (ENTER LIST OF MODELS)
Notes: This is a key feature that determines the quality and characteristics of query responses.
Default value: Llama3_3

LLM Query API Type

Title: LLM Query API Type
Description: Specifies the API protocol used to communicate with the selected LLM model for query processing.
Benefit: Enables integration with different AI service providers based on specific requirements for performance, features, or pricing.
Variable name: llm_query_model_api_type
Required: Yes
Accepted values: 4 options, (David fill this in)
Notes: The LLM's URL configuration is independent of the API protocol selection.
Default value: OpenAI API

LLM Query URL

Title: LLM Query URL
Description: Specifies the location of the LLM used for processing queries, which may be on the local server or at a remote endpoint.
Benefit: Provides flexibility to use on-premises or cloud-based LLM resources based on security, compliance, or performance requirements.
Variable name: llm_query_model_url
Required: Yes
Accepted values: URL string
Notes: The URL configuration is independent of the API protocol selection.
Default value: none

LLM Query Token

Title: LLM Query Token
Description: Specifies the authentication token for accessing the AI model used for query processing.
Benefit: Enables secure authentication when connecting to commercial or private LLM services for query operations.
Variable name: llm_query_model_token
Required: No
Accepted values: token string
Notes: The authentication token is independent of the API protocol selection.
Default value: none
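
The model, API type, URL, and token settings together describe one LLM endpoint. Assuming the default OpenAI-style API type, a request could be assembled as below; the /chat/completions path and Bearer header follow the OpenAI API convention, and whether Emma issues exactly this request shape is an assumption:

```python
import json
import urllib.request

def build_llm_request(url, token, model, prompt):
    """Assemble an OpenAI-style chat request from the endpoint settings.

    `url`, `token`, and `model` correspond to llm_query_model_url,
    llm_query_model_token, and the configured LLM model; the request
    shape is illustrative, not documented product behavior.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    headers = {"Content-Type": "application/json"}
    if token:  # the token is optional, per the table above
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(f"{url}/chat/completions",
                                  data=body, headers=headers, method="POST")

req = build_llm_request("http://localhost:8000/v1", "secret", "Llama3_3",
                        "Summarize this call transcript.")
print(req.full_url)
```

The same sketch applies to the summary endpoint by substituting the summary_model_url and summary_model_token values.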