Class: AWS.TranscribeService
- Inherits:
-
AWS.Service
- Object
- AWS.Service
- AWS.TranscribeService
- Identifier:
- transcribeservice
- API Version:
- 2017-10-26
- Defined in:
- (unknown)
Overview
Constructs a service interface object. Each API operation is exposed as a function on service.
Service Description
Operations and objects for transcribing speech to text.
Sending a Request Using TranscribeService
var transcribeservice = new AWS.TranscribeService();
transcribeservice.createCallAnalyticsCategory(params, function (err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
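Each of these operations also returns an AWS.Request object. If you prefer promises over callbacks, one alternative (a minimal sketch, assuming a Promise implementation is available, such as the native one in Node.js) is to call promise() on that request:
var transcribeservice = new AWS.TranscribeService();
// The empty params object is a placeholder; pass the operation's parameters
// as documented for each method below.
transcribeservice.listCallAnalyticsCategories({}).promise()
  .then(function(data) { console.log(data); })            // successful response
  .catch(function(err) { console.log(err, err.stack); }); // an error occurred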
Locking the API Version
To ensure that the TranscribeService object uses this specific API version, construct the object by passing the apiVersion option to the constructor:
var transcribeservice = new AWS.TranscribeService({apiVersion: '2017-10-26'});
You can also set the API version globally in AWS.config.apiVersions using the transcribeservice service identifier:
AWS.config.apiVersions = {
transcribeservice: '2017-10-26',
// other service API versions
};
var transcribeservice = new AWS.TranscribeService();
Version:
-
2017-10-26
Constructor Summary collapse
-
new AWS.TranscribeService(options = {}) ⇒ Object
constructor
Constructs a service object.
Property Summary collapse
-
endpoint ⇒ AWS.Endpoint
readwrite
An Endpoint object representing the endpoint URL for service requests.
Properties inherited from AWS.Service
Method Summary collapse
-
createCallAnalyticsCategory(params = {}, callback) ⇒ AWS.Request
Creates an analytics category.
-
createLanguageModel(params = {}, callback) ⇒ AWS.Request
Creates a new custom language model.
-
createMedicalVocabulary(params = {}, callback) ⇒ AWS.Request
Creates a new custom vocabulary that you can use to modify how Amazon Transcribe Medical transcribes your audio file.
-
createVocabulary(params = {}, callback) ⇒ AWS.Request
Creates a new custom vocabulary that you can use to change the way Amazon Transcribe handles transcription of an audio file.
-
createVocabularyFilter(params = {}, callback) ⇒ AWS.Request
Creates a new vocabulary filter that you can use to filter words, such as profane words, from the output of a transcription job.
-
deleteCallAnalyticsCategory(params = {}, callback) ⇒ AWS.Request
Deletes a call analytics category using its name.
-
deleteCallAnalyticsJob(params = {}, callback) ⇒ AWS.Request
Deletes a call analytics job using its name.
-
deleteLanguageModel(params = {}, callback) ⇒ AWS.Request
Deletes a custom language model using its name.
-
deleteMedicalTranscriptionJob(params = {}, callback) ⇒ AWS.Request
Deletes a transcription job generated by Amazon Transcribe Medical and any related information.
-
deleteMedicalVocabulary(params = {}, callback) ⇒ AWS.Request
Deletes a vocabulary from Amazon Transcribe Medical.
-
deleteTranscriptionJob(params = {}, callback) ⇒ AWS.Request
Deletes a previously submitted transcription job along with any other generated results such as the transcription, models, and so on.
-
deleteVocabulary(params = {}, callback) ⇒ AWS.Request
Deletes a vocabulary from Amazon Transcribe.
-
deleteVocabularyFilter(params = {}, callback) ⇒ AWS.Request
Removes a vocabulary filter.
-
describeLanguageModel(params = {}, callback) ⇒ AWS.Request
Gets information about a single custom language model.
-
getCallAnalyticsCategory(params = {}, callback) ⇒ AWS.Request
Retrieves information about a call analytics category.
-
getCallAnalyticsJob(params = {}, callback) ⇒ AWS.Request
Returns information about a call analytics job.
-
getMedicalTranscriptionJob(params = {}, callback) ⇒ AWS.Request
Returns information about a transcription job from Amazon Transcribe Medical.
-
getMedicalVocabulary(params = {}, callback) ⇒ AWS.Request
Retrieves information about a medical vocabulary.
-
getTranscriptionJob(params = {}, callback) ⇒ AWS.Request
Returns information about a transcription job.
-
getVocabulary(params = {}, callback) ⇒ AWS.Request
Gets information about a vocabulary.
-
getVocabularyFilter(params = {}, callback) ⇒ AWS.Request
Returns information about a vocabulary filter.
-
listCallAnalyticsCategories(params = {}, callback) ⇒ AWS.Request
Provides more information about the call analytics categories that you've created.
-
listCallAnalyticsJobs(params = {}, callback) ⇒ AWS.Request
Lists call analytics jobs with a specified status or substring that matches their names.
-
listLanguageModels(params = {}, callback) ⇒ AWS.Request
Provides more information about the custom language models you've created.
-
listMedicalTranscriptionJobs(params = {}, callback) ⇒ AWS.Request
Lists medical transcription jobs with a specified status or substring that matches their names.
-
listMedicalVocabularies(params = {}, callback) ⇒ AWS.Request
Returns a list of vocabularies that match the specified criteria.
-
listTagsForResource(params = {}, callback) ⇒ AWS.Request
Lists all tags associated with a given transcription job, vocabulary, or resource.
-
listTranscriptionJobs(params = {}, callback) ⇒ AWS.Request
Lists transcription jobs with the specified status.
-
listVocabularies(params = {}, callback) ⇒ AWS.Request
Returns a list of vocabularies that match the specified criteria.
-
listVocabularyFilters(params = {}, callback) ⇒ AWS.Request
Gets information about vocabulary filters.
-
startCallAnalyticsJob(params = {}, callback) ⇒ AWS.Request
Starts an asynchronous analytics job that not only transcribes the audio recording of a caller and agent, but also returns additional insights.
-
startMedicalTranscriptionJob(params = {}, callback) ⇒ AWS.Request
Starts a batch job to transcribe medical speech to text.
-
startTranscriptionJob(params = {}, callback) ⇒ AWS.Request
Starts an asynchronous job to transcribe speech to text.
-
tagResource(params = {}, callback) ⇒ AWS.Request
Tags an Amazon Transcribe resource with the given list of tags.
-
untagResource(params = {}, callback) ⇒ AWS.Request
Removes specified tags from a specified Amazon Transcribe resource.
-
updateCallAnalyticsCategory(params = {}, callback) ⇒ AWS.Request
Updates the call analytics category with new values.
-
updateMedicalVocabulary(params = {}, callback) ⇒ AWS.Request
Updates a vocabulary with new values that you provide in a different text file from the one you used to create the vocabulary.
-
updateVocabulary(params = {}, callback) ⇒ AWS.Request
Updates an existing vocabulary with new values.
-
updateVocabularyFilter(params = {}, callback) ⇒ AWS.Request
Updates a vocabulary filter with a new list of filtered words.
Methods inherited from AWS.Service
makeRequest, makeUnauthenticatedRequest, waitFor, setupRequestListeners, defineService
Constructor Details
new AWS.TranscribeService(options = {}) ⇒ Object
Constructs a service object. This object has one method for each API operation.
Examples:
Constructing a TranscribeService object
var transcribeservice = new AWS.TranscribeService({apiVersion: '2017-10-26'});
Options Hash (options):
-
params
(map)
—
An optional map of parameters to bind to every request sent by this service object. For more information on bound parameters, see "Working with Services" in the Getting Started Guide.
-
endpoint
(String|AWS.Endpoint)
—
The endpoint URI to send requests to. The default endpoint is built from the configured region. The endpoint should be a string like 'https://{service}.{region}.amazonaws.com' or an Endpoint object.
-
accessKeyId
(String)
—
your AWS access key ID.
-
secretAccessKey
(String)
—
your AWS secret access key.
-
sessionToken
(AWS.Credentials)
—
the optional AWS session token to sign requests with.
-
credentials
(AWS.Credentials)
—
the AWS credentials to sign requests with. You can either specify this object, or specify the accessKeyId and secretAccessKey options directly.
-
credentialProvider
(AWS.CredentialProviderChain)
—
the provider chain used to resolve credentials if no static credentials property is set.
-
region
(String)
—
the region to send service requests to. See AWS.TranscribeService.region for more information.
-
maxRetries
(Integer)
—
the maximum amount of retries to attempt with a request. See AWS.TranscribeService.maxRetries for more information.
-
maxRedirects
(Integer)
—
the maximum amount of redirects to follow with a request. See AWS.TranscribeService.maxRedirects for more information.
-
sslEnabled
(Boolean)
—
whether to enable SSL for requests.
-
paramValidation
(Boolean|map)
—
whether input parameters should be validated against the operation description before sending the request. Defaults to true. Pass a map to enable any of the following specific validation features:
- min [Boolean] — Validates that a value meets the min constraint. This is enabled by default when paramValidation is set to true.
- max [Boolean] — Validates that a value meets the max constraint.
- pattern [Boolean] — Validates that a string value matches a regular expression.
- enum [Boolean] — Validates that a string value matches one of the allowable enum values.
-
computeChecksums
(Boolean)
—
whether to compute checksums for payload bodies when the service accepts it (currently supported in S3 only)
-
convertResponseTypes
(Boolean)
—
whether types are converted when parsing response data. Currently only supported for JSON based services. Turning this off may improve performance on large response payloads. Defaults to true.
-
correctClockSkew
(Boolean)
—
whether to apply a clock skew correction and retry requests that fail because of a skewed client clock. Defaults to false.
-
s3ForcePathStyle
(Boolean)
—
whether to force path style URLs for S3 objects.
-
s3BucketEndpoint
(Boolean)
—
whether the provided endpoint addresses an individual bucket (false if it addresses the root API endpoint). Note that setting this configuration option requires an endpoint to be provided explicitly to the service constructor.
-
s3DisableBodySigning
(Boolean)
—
whether S3 body signing should be disabled when using signature version v4. Body signing can only be disabled when using https. Defaults to true.
-
s3UsEast1RegionalEndpoint
('legacy'|'regional')
—
when region is set to 'us-east-1', whether to send s3 request to global endpoints or 'us-east-1' regional endpoints. This config is only applicable to S3 client. Defaults to 'legacy'.
-
s3UseArnRegion
(Boolean)
—
whether to override the request region with the region inferred from requested resource's ARN. Only available for S3 buckets. Defaults to true.
-
retryDelayOptions
(map)
—
A set of options to configure the retry delay on retryable errors. Currently supported options are:
- base [Integer] — The base number of milliseconds to use in the exponential backoff for operation retries. Defaults to 100 ms for all services except DynamoDB, where it defaults to 50ms.
- customBackoff [function] — A custom function that accepts a retry count and error and returns the amount of time to delay in milliseconds. If the result is a non-zero negative value, no further retry attempts will be made. The base option will be ignored if this option is supplied. The function is only called for retryable errors. (A sketch combining this and several other options follows the end of this options list.)
-
httpOptions
(map)
—
A set of options to pass to the low-level HTTP request. Currently supported options are:
- proxy [String] — the URL to proxy requests through
- agent [http.Agent, https.Agent] — the Agent object to perform HTTP requests with. Used for connection pooling. Defaults to the global agent (http.globalAgent) for non-SSL connections. Note that for SSL connections, a special Agent object is used in order to enable peer certificate verification. This feature is only available in the Node.js environment.
- connectTimeout [Integer] — Sets the socket to timeout after failing to establish a connection with the server after connectTimeout milliseconds. This timeout has no effect once a socket connection has been established.
- timeout [Integer] — Sets the socket to timeout after timeout milliseconds of inactivity on the socket. Defaults to two minutes (120000).
- xhrAsync [Boolean] — Whether the SDK will send asynchronous HTTP requests. Used in the browser environment only. Set to false to send requests synchronously. Defaults to true (async on).
- xhrWithCredentials [Boolean] — Sets the "withCredentials" property of an XMLHttpRequest object. Used in the browser environment only. Defaults to false.
-
apiVersion
(String, Date)
—
a String in YYYY-MM-DD format (or a date) that represents the latest possible API version that can be used in all services (unless overridden by apiVersions). Specify 'latest' to use the latest possible version.
-
apiVersions
(map<String, String|Date>)
—
a map of service identifiers (the lowercase service class name) with the API version to use when instantiating a service. Specify 'latest' for each individual service that can use the latest available version.
-
logger
(#write, #log)
—
an object that responds to .write() (like a stream) or .log() (like the console object) in order to log information about requests
-
systemClockOffset
(Number)
—
an offset value in milliseconds to apply to all signing times. Use this to compensate for clock skew when your system may be out of sync with the service time. Note that this configuration option can only be applied to the global AWS.config object and cannot be overridden in service-specific configuration. Defaults to 0 milliseconds.
-
signatureVersion
(String)
—
the signature version to sign requests with (overriding the API configuration). Possible values are: 'v2', 'v3', 'v4'.
-
signatureCache
(Boolean)
—
whether the signature to sign requests with (overriding the API configuration) is cached. Only applies to the signature version 'v4'. Defaults to true.
-
dynamoDbCrc32
(Boolean)
—
whether to validate the CRC32 checksum of HTTP response bodies returned by DynamoDB. Default: true.
-
useAccelerateEndpoint
(Boolean)
—
Whether to use the S3 Transfer Acceleration endpoint with the S3 service. Default: false.
-
clientSideMonitoring
(Boolean)
—
whether to collect and publish this client's performance metrics of all its API requests.
-
endpointDiscoveryEnabled
(Boolean|undefined)
—
whether to call operations with endpoints given by service dynamically. Setting this config to true will enable endpoint discovery for all applicable operations. Setting this config to false will explicitly disable endpoint discovery even when an operation requires endpoint discovery. Leaving this config as undefined means the SDK only performs endpoint discovery when an operation requires it. Defaults to undefined.
-
endpointCacheSize
(Number)
—
the size of the global cache storing endpoints from endpoint discovery operations. Once endpoint cache is created, updating this setting cannot change existing cache size. Defaults to 1000
-
hostPrefixEnabled
(Boolean)
—
whether to marshal request parameters to the prefix of hostname. Defaults to true.
-
stsRegionalEndpoints
('legacy'|'regional')
—
whether to send sts request to global endpoints or regional endpoints. Defaults to 'legacy'.
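The following is a minimal sketch combining several of the options documented above: a pinned apiVersion, retry tuning with a custom backoff, keep-alive connection pooling through httpOptions, and a console logger. The specific values are illustrative placeholders, not recommendations:
var AWS = require('aws-sdk');
var https = require('https');

var transcribeservice = new AWS.TranscribeService({
  apiVersion: '2017-10-26',
  maxRetries: 5,
  retryDelayOptions: {
    // Called only for retryable errors; returns the delay in milliseconds.
    customBackoff: function(retryCount, err) {
      return 100 * Math.pow(2, retryCount); // simple exponential backoff
    }
  },
  httpOptions: {
    agent: new https.Agent({ keepAlive: true }), // connection pooling
    connectTimeout: 5000, // milliseconds allowed to establish the socket
    timeout: 120000       // milliseconds of socket inactivity before timing out
  },
  logger: console // log request information to the console
});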
Property Details
Method Details
createCallAnalyticsCategory(params = {}, callback) ⇒ AWS.Request
Creates an analytics category. Amazon Transcribe applies the conditions specified by your analytics categories to your call analytics jobs. For each analytics category, you specify one or more rules. For example, you can specify a rule that the customer sentiment was neutral or negative within that category. If you start a call analytics job, Amazon Transcribe applies the category to the analytics job that you've specified.
Service Reference:
Examples:
Calling the createCallAnalyticsCategory operation
var params = {
CategoryName: 'STRING_VALUE', /* required */
Rules: [ /* required */
{
InterruptionFilter: {
AbsoluteTimeRange: {
EndTime: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartTime: 'NUMBER_VALUE'
},
Negate: true || false,
ParticipantRole: AGENT | CUSTOMER,
RelativeTimeRange: {
EndPercentage: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartPercentage: 'NUMBER_VALUE'
},
Threshold: 'NUMBER_VALUE'
},
NonTalkTimeFilter: {
AbsoluteTimeRange: {
EndTime: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartTime: 'NUMBER_VALUE'
},
Negate: true || false,
RelativeTimeRange: {
EndPercentage: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartPercentage: 'NUMBER_VALUE'
},
Threshold: 'NUMBER_VALUE'
},
SentimentFilter: {
Sentiments: [ /* required */
POSITIVE | NEGATIVE | NEUTRAL | MIXED,
/* more items */
],
AbsoluteTimeRange: {
EndTime: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartTime: 'NUMBER_VALUE'
},
Negate: true || false,
ParticipantRole: AGENT | CUSTOMER,
RelativeTimeRange: {
EndPercentage: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartPercentage: 'NUMBER_VALUE'
}
},
TranscriptFilter: {
Targets: [ /* required */
'STRING_VALUE',
/* more items */
],
TranscriptFilterType: EXACT, /* required */
AbsoluteTimeRange: {
EndTime: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartTime: 'NUMBER_VALUE'
},
Negate: true || false,
ParticipantRole: AGENT | CUSTOMER,
RelativeTimeRange: {
EndPercentage: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartPercentage: 'NUMBER_VALUE'
}
}
},
/* more items */
]
};
transcribeservice.createCallAnalyticsCategory(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
CategoryName
— (String
)The name that you choose for your category when you create it.
Rules
— (Array<map>
)To create a category, you must specify between 1 and 20 rules. For each rule, you specify a filter to be applied to the attributes of the call. For example, you can specify a sentiment filter to detect if the customer's sentiment was negative or neutral.
NonTalkTimeFilter
— (map
)A condition for a time period when neither the customer nor the agent was talking.
Threshold
— (Integer
)The duration of the period when neither the customer nor agent was talking.
AbsoluteTimeRange
— (map
)An object you can use to specify a time range (in milliseconds) for when no one is talking. For example, you could specify a time period between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where there was silence. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
Negate
— (Boolean
)Set to
TRUE
to look for a time period when people were talking.
InterruptionFilter
— (map
)A condition for a time period when either the customer or agent was interrupting the other person.
Threshold
— (Integer
)The duration of the interruption.
ParticipantRole
— (String
)Indicates whether the caller or customer was interrupting.
Possible values include:"AGENT"
"CUSTOMER"
AbsoluteTimeRange
— (map
)An object you can use to specify a time range (in milliseconds) for when you'd want to find the interruption. For example, you could search for an interruption between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where there was an interruption. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
Negate
— (Boolean
)Set to
TRUE
to look for a time period where there was no interruption.
TranscriptFilter
— (map
)A condition that catches particular words or phrases based on an exact match. For example, if you set the phrase "I want to speak to the manager", only that exact phrase will be returned.
TranscriptFilterType
— required — (String
)Matches the phrase to the transcription output in a word-for-word fashion. For example, if you specify the phrase "I want to speak to the manager," Amazon Transcribe attempts to match that specific phrase to the transcription.
Possible values include:"EXACT"
AbsoluteTimeRange
— (map
)A time range, set in milliseconds, between two points in the call.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where you would like to apply a filter. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole
— (String
)Determines whether the customer or the agent is speaking the phrases that you've specified.
Possible values include:"AGENT"
"CUSTOMER"
Negate
— (Boolean
)If
TRUE
, the rule that you specify is applied to everything except for the phrases that you specify.Targets
— required — (Array<String>
)The phrases that you're specifying for the transcript filter to match.
SentimentFilter
— (map
)A condition that is applied to a particular customer sentiment.
Sentiments
— required — (Array<String>
)An array that enables you to specify sentiments for the customer or agent. You can specify one or more values.
AbsoluteTimeRange
— (map
)The time range, measured in milliseconds, of the sentiment.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)The time range, set in percentages, that corresponds to the proportion of the call.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole
— (String
)A value that determines whether the sentiment belongs to the customer or the agent.
Possible values include:"AGENT"
"CUSTOMER"
Negate
— (Boolean
)Set to
TRUE
to look for sentiments that weren't specified in the request.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
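As a sketch of the no-callback form (reusing the params object from the example above), you can attach listeners to the returned AWS.Request and dispatch it with send():
var request = transcribeservice.createCallAnalyticsCategory(params);
request.on('success', function(response) {
  console.log(response.data);  // successful response
}).on('error', function(err, response) {
  console.log(err, err.stack); // an error occurred
}).send();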
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:CategoryProperties
— (map
)The rules and associated metadata used to create a category.
CategoryName
— (String
)The name of the call analytics category.
Rules
— (Array<map>
)The rules used to create a call analytics category.
NonTalkTimeFilter
— (map
)A condition for a time period when neither the customer nor the agent was talking.
Threshold
— (Integer
)The duration of the period when neither the customer nor agent was talking.
AbsoluteTimeRange
— (map
)An object you can use to specify a time range (in milliseconds) for when no one is talking. For example, you could specify a time period between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where there was silence. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
Negate
— (Boolean
)Set to
TRUE
to look for a time period when people were talking.
InterruptionFilter
— (map
)A condition for a time period when either the customer or agent was interrupting the other person.
Threshold
— (Integer
)The duration of the interruption.
ParticipantRole
— (String
)Indicates whether the caller or customer was interrupting.
Possible values include:"AGENT"
"CUSTOMER"
AbsoluteTimeRange
— (map
)An object you can use to specify a time range (in milliseconds) for when you'd want to find the interruption. For example, you could search for an interruption between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where there was an interruption. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
Negate
— (Boolean
)Set to
TRUE
to look for a time period where there was no interruption.
TranscriptFilter
— (map
)A condition that catches particular words or phrases based on an exact match. For example, if you set the phrase "I want to speak to the manager", only that exact phrase will be returned.
TranscriptFilterType
— required — (String
)Matches the phrase to the transcription output in a word-for-word fashion. For example, if you specify the phrase "I want to speak to the manager," Amazon Transcribe attempts to match that specific phrase to the transcription.
Possible values include:"EXACT"
AbsoluteTimeRange
— (map
)A time range, set in milliseconds, between two points in the call.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where you would like to apply a filter. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole
— (String
)Determines whether the customer or the agent is speaking the phrases that you've specified.
Possible values include:"AGENT"
"CUSTOMER"
Negate
— (Boolean
)If
TRUE
, the rule that you specify is applied to everything except for the phrases that you specify.Targets
— required — (Array<String>
)The phrases that you're specifying for the transcript filter to match.
SentimentFilter
— (map
)A condition that is applied to a particular customer sentiment.
Sentiments
— required — (Array<String>
)An array that enables you to specify sentiments for the customer or agent. You can specify one or more values.
AbsoluteTimeRange
— (map
)The time range, measured in milliseconds, of the sentiment.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)The time range, set in percentages, that corresponds to the proportion of the call.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole
— (String
)A value that determines whether the sentiment belongs to the customer or the agent.
Possible values include:"AGENT"
"CUSTOMER"
Negate
— (Boolean
)Set to
TRUE
to look for sentiments that weren't specified in the request.
CreateTime
— (Date
)A timestamp that shows when the call analytics category was created.
LastUpdateTime
— (Date
)A timestamp that shows when the call analytics category was most recently updated.
-
(AWS.Response)
—
Returns:
createLanguageModel(params = {}, callback) ⇒ AWS.Request
Creates a new custom language model. Use Amazon S3 prefixes to provide the location of your input files. The time it takes to create your model depends on the size of your training data.
Service Reference:
Examples:
Calling the createLanguageModel operation
var params = {
BaseModelName: NarrowBand | WideBand, /* required */
InputDataConfig: { /* required */
DataAccessRoleArn: 'STRING_VALUE', /* required */
S3Uri: 'STRING_VALUE', /* required */
TuningDataS3Uri: 'STRING_VALUE'
},
LanguageCode: en-US | hi-IN | es-US | en-GB | en-AU, /* required */
ModelName: 'STRING_VALUE', /* required */
Tags: [
{
Key: 'STRING_VALUE', /* required */
Value: 'STRING_VALUE' /* required */
},
/* more items */
]
};
transcribeservice.createLanguageModel(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
LanguageCode
— (String
)The language of the input text you're using to train your custom language model.
Possible values include:"en-US"
"hi-IN"
"es-US"
"en-GB"
"en-AU"
BaseModelName
— (String
)The Amazon Transcribe standard language model, or base model used to create your custom language model.
If you want to use your custom language model to transcribe audio with a sample rate of 16,000 Hz or greater, choose Wideband.
If you want to use your custom language model to transcribe audio with a sample rate that is less than 16,000 Hz, choose Narrowband.
Possible values include:"NarrowBand"
"WideBand"
ModelName
— (String
)The name you choose for your custom language model when you create it.
InputDataConfig
— (map
)Contains the data access role and the Amazon S3 prefixes to read the required input files to create a custom language model.
S3Uri
— required — (String
)The Amazon S3 prefix you specify to access the plain text files that you use to train your custom language model.
TuningDataS3Uri
— (String
)The Amazon S3 prefix you specify to access the plain text files that you use to tune your custom language model.
DataAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) that uniquely identifies the permissions you've given Amazon Transcribe to access your Amazon S3 buckets containing your media files or text data. ARNs have the format
arn:partition:service:region:account-id:resource-type/resource-id
.
Tags
— (Array<map>
)Adds one or more tags, each in the form of a key:value pair, to a new language model at the time you create this new model.
Key
— required — (String
)The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the key is 'Department'.
Value
— required — (String
)The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the value is 'Sales'.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:LanguageCode
— (String
)The language code of the text you've used to create a custom language model.
Possible values include:"en-US"
"hi-IN"
"es-US"
"en-GB"
"en-AU"
BaseModelName
— (String
)The Amazon Transcribe standard language model, or base model you've used to create a custom language model.
Possible values include:"NarrowBand"
"WideBand"
ModelName
— (String
)The name you've chosen for your custom language model.
InputDataConfig
— (map
)The data access role and Amazon S3 prefixes you've chosen to create your custom language model.
S3Uri
— required — (String
)The Amazon S3 prefix you specify to access the plain text files that you use to train your custom language model.
TuningDataS3Uri
— (String
)The Amazon S3 prefix you specify to access the plain text files that you use to tune your custom language model.
DataAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) that uniquely identifies the permissions you've given Amazon Transcribe to access your Amazon S3 buckets containing your media files or text data. ARNs have the format
arn:partition:service:region:account-id:resource-type/resource-id
.
ModelStatus
— (String
)The status of the custom language model. When the status is COMPLETED the model is ready to use.
Possible values include:"IN_PROGRESS"
"FAILED"
"COMPLETED"
-
(AWS.Response)
—
Returns:
createMedicalVocabulary(params = {}, callback) ⇒ AWS.Request
Creates a new custom vocabulary that you can use to modify how Amazon Transcribe Medical transcribes your audio file.
Service Reference:
Examples:
Calling the createMedicalVocabulary operation
var params = {
LanguageCode: af-ZA | ar-AE | ar-SA | cy-GB | da-DK | de-CH | de-DE | en-AB | en-AU | en-GB | en-IE | en-IN | en-US | en-WL | es-ES | es-US | fa-IR | fr-CA | fr-FR | ga-IE | gd-GB | he-IL | hi-IN | id-ID | it-IT | ja-JP | ko-KR | ms-MY | nl-NL | pt-BR | pt-PT | ru-RU | ta-IN | te-IN | tr-TR | zh-CN | zh-TW | th-TH | en-ZA | en-NZ, /* required */
VocabularyFileUri: 'STRING_VALUE', /* required */
VocabularyName: 'STRING_VALUE', /* required */
Tags: [
{
Key: 'STRING_VALUE', /* required */
Value: 'STRING_VALUE' /* required */
},
/* more items */
]
};
transcribeservice.createMedicalVocabulary(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
VocabularyName
— (String
)The name of the custom vocabulary. This case-sensitive name must be unique within an Amazon Web Services account. If you try to create a vocabulary with the same name as a previous vocabulary, you get a ConflictException error.
LanguageCode
— (String
)The language code for the language used for the entries in your custom vocabulary. The language code of your custom vocabulary must match the language code of your transcription job. US English (en-US) is the only language code available for Amazon Transcribe Medical.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
VocabularyFileUri
— (String
)The location in Amazon S3 of the text file you use to define your custom vocabulary. The URI must be in the same Amazon Web Services Region as the resource that you're calling. Enter information about your VocabularyFileUri in the following format:
https://s3.<aws-region>.amazonaws.com/<bucket-name>/<keyprefix>/<objectkey>
The following is an example URI for a vocabulary file that is stored in Amazon S3:
https://s3.us-east-1.amazonaws.com/AWSDOC-EXAMPLE-BUCKET/vocab.txt
For more information about Amazon S3 object names, see Object Keys in the Amazon S3 Developer Guide.
For more information about custom vocabularies, see Medical Custom Vocabularies.
Tags
— (Array<map>
)Adds one or more tags, each in the form of a key:value pair, to a new medical vocabulary at the time you create this new vocabulary.
Key
— required — (String
)The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the key is 'Department'.
Value
— required — (String
)The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the value is 'Sales'.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:VocabularyName
— (String
)The name of the vocabulary. The name must be unique within an Amazon Web Services account and is case sensitive.
LanguageCode
— (String
)The language code for the entries in your custom vocabulary. US English (en-US) is the only valid language code for Amazon Transcribe Medical.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
VocabularyState
— (String
)The processing state of your custom vocabulary in Amazon Transcribe Medical. If the state is READY, you can use the vocabulary in a StartMedicalTranscriptionJob request.
Possible values include:"PENDING"
"READY"
"FAILED"
LastModifiedTime
— (Date
)The date and time that you created the vocabulary.
FailureReason
— (String
)If the
VocabularyState
field isFAILED
, this field contains information about why the job failed.
-
(AWS.Response)
—
Returns:
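The following is a minimal sketch of creating a medical vocabulary with the promise interface and then polling getMedicalVocabulary until the vocabulary leaves the PENDING state. The Region, bucket name, vocabulary file, and vocabulary name are hypothetical.
var AWS = require('aws-sdk');
var transcribeservice = new AWS.TranscribeService({region: 'us-east-1'});

async function createAndWaitForMedicalVocabulary() {
  await transcribeservice.createMedicalVocabulary({
    VocabularyName: 'cardiology-terms',   // hypothetical name, unique per account
    LanguageCode: 'en-US',                // the only language code Amazon Transcribe Medical accepts
    VocabularyFileUri: 'https://s3.us-east-1.amazonaws.com/my-transcribe-input/vocab.txt'
  }).promise();

  // The new vocabulary starts in PENDING; poll until it is READY or FAILED.
  for (;;) {
    var vocab = await transcribeservice.getMedicalVocabulary({
      VocabularyName: 'cardiology-terms'
    }).promise();
    if (vocab.VocabularyState !== 'PENDING') return vocab;
    await new Promise(function (resolve) { setTimeout(resolve, 10000); });
  }
}

createAndWaitForMedicalVocabulary()
  .then(function (vocab) { console.log(vocab.VocabularyState, vocab.FailureReason || ''); })
  .catch(function (err) { console.log(err, err.stack); });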
createVocabulary(params = {}, callback) ⇒ AWS.Request
Creates a new custom vocabulary that you can use to change the way Amazon Transcribe handles transcription of an audio file.
Service Reference:
Examples:
Calling the createVocabulary operation
var params = {
LanguageCode: af-ZA | ar-AE | ar-SA | cy-GB | da-DK | de-CH | de-DE | en-AB | en-AU | en-GB | en-IE | en-IN | en-US | en-WL | es-ES | es-US | fa-IR | fr-CA | fr-FR | ga-IE | gd-GB | he-IL | hi-IN | id-ID | it-IT | ja-JP | ko-KR | ms-MY | nl-NL | pt-BR | pt-PT | ru-RU | ta-IN | te-IN | tr-TR | zh-CN | zh-TW | th-TH | en-ZA | en-NZ, /* required */
VocabularyName: 'STRING_VALUE', /* required */
Phrases: [
'STRING_VALUE',
/* more items */
],
Tags: [
{
Key: 'STRING_VALUE', /* required */
Value: 'STRING_VALUE' /* required */
},
/* more items */
],
VocabularyFileUri: 'STRING_VALUE'
};
transcribeservice.createVocabulary(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
VocabularyName
— (String
)The name of the vocabulary. The name must be unique within an Amazon Web Services account. The name is case sensitive. If you try to create a vocabulary with the same name as a previous vocabulary you will receive a
ConflictException
error.LanguageCode
— (String
)The language code of the vocabulary entries. For a list of languages and their corresponding language codes, see What is Amazon Transcribe?.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
Phrases
— (Array<String>
)An array of strings that contains the vocabulary entries.
VocabularyFileUri
— (String
)The S3 location of the text file that contains the definition of the custom vocabulary. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
For more information about custom vocabularies, see Custom vocabularies.
Tags
— (Array<map>
)Adds one or more tags, each in the form of a key:value pair, to a new Amazon Transcribe vocabulary at the time you create this new vocabulary.
Key
— required — (String
)The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the key is 'Department'.
Value
— required — (String
)The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the value is 'Sales'.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:VocabularyName
— (String
)The name of the vocabulary.
LanguageCode
— (String
)The language code of the vocabulary entries.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
VocabularyState
— (String
)The processing state of the vocabulary. When the VocabularyState field contains READY, the vocabulary is ready to be used in a StartTranscriptionJob request.
Possible values include:"PENDING"
"READY"
"FAILED"
LastModifiedTime
— (Date
)The date and time that the vocabulary was created.
FailureReason
— (String
)If the
VocabularyState
field isFAILED
, this field contains information about why the job failed.
-
(AWS.Response)
—
Returns:
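A minimal sketch of creating a vocabulary from an inline Phrases array rather than a VocabularyFileUri. The vocabulary name and phrases are hypothetical; multi-word phrases are typically written with hyphens between words (see Custom vocabularies).
var AWS = require('aws-sdk');
var transcribeservice = new AWS.TranscribeService();

var params = {
  VocabularyName: 'product-names',   // hypothetical, must be unique within the account
  LanguageCode: 'en-US',
  // Phrases can be supplied inline instead of pointing at a file with VocabularyFileUri.
  Phrases: ['Los-Angeles', 'Andorra-la-Vella', 'E.V.A.']
};

transcribeservice.createVocabulary(params).promise()
  .then(function (data) { console.log(data.VocabularyName, data.VocabularyState); })
  .catch(function (err) { console.log(err, err.stack); });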
createVocabularyFilter(params = {}, callback) ⇒ AWS.Request
Creates a new vocabulary filter that you can use to filter words, such as profane words, from the output of a transcription job.
Service Reference:
Examples:
Calling the createVocabularyFilter operation
var params = {
LanguageCode: af-ZA | ar-AE | ar-SA | cy-GB | da-DK | de-CH | de-DE | en-AB | en-AU | en-GB | en-IE | en-IN | en-US | en-WL | es-ES | es-US | fa-IR | fr-CA | fr-FR | ga-IE | gd-GB | he-IL | hi-IN | id-ID | it-IT | ja-JP | ko-KR | ms-MY | nl-NL | pt-BR | pt-PT | ru-RU | ta-IN | te-IN | tr-TR | zh-CN | zh-TW | th-TH | en-ZA | en-NZ, /* required */
VocabularyFilterName: 'STRING_VALUE', /* required */
Tags: [
{
Key: 'STRING_VALUE', /* required */
Value: 'STRING_VALUE' /* required */
},
/* more items */
],
VocabularyFilterFileUri: 'STRING_VALUE',
Words: [
'STRING_VALUE',
/* more items */
]
};
transcribeservice.createVocabularyFilter(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
VocabularyFilterName
— (String
)The vocabulary filter name. The name must be unique within the account that contains it. If you try to create a vocabulary filter with the same name as another vocabulary filter, you get a
ConflictException
error.LanguageCode
— (String
)The language code of the words in the vocabulary filter. All words in the filter must be in the same language. The vocabulary filter can only be used with transcription jobs in the specified language.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
Words
— (Array<String>
)The words to use in the vocabulary filter. Only use characters from the character set defined for custom vocabularies. For a list of character sets, see Character Sets for Custom Vocabularies.
If you provide a list of words in the
Words
parameter, you can't use theVocabularyFilterFileUri
parameter.VocabularyFilterFileUri
— (String
)The Amazon S3 location of a text file used as input to create the vocabulary filter. Only use characters from the character set defined for custom vocabularies. For a list of character sets, see Character Sets for Custom Vocabularies.
The specified file must be less than 50 KB of UTF-8 characters.
If you provide the location of a list of words in the
VocabularyFilterFileUri
parameter, you can't use theWords
parameter.Tags
— (Array<map>
)Adds one or more tags, each in the form of a key:value pair, to a new Amazon Transcribe vocabulary filter at the time you create this new vocabulary filter.
Key
— required — (String
)The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the key is 'Department'.
Value
— required — (String
)The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the value is 'Sales'.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:VocabularyFilterName
— (String
)The name of the vocabulary filter.
LanguageCode
— (String
)The language code of the words in the collection.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
LastModifiedTime
— (Date
)The date and time that the vocabulary filter was modified.
-
(AWS.Response)
—
Returns:
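A minimal sketch of creating a vocabulary filter using the inline Words form. The filter name and word list are hypothetical placeholders; because Words and VocabularyFilterFileUri are mutually exclusive, only one of them is supplied here.
var AWS = require('aws-sdk');
var transcribeservice = new AWS.TranscribeService();

var params = {
  VocabularyFilterName: 'profanity-filter',  // hypothetical
  LanguageCode: 'en-US',
  Words: ['darn', 'heck']                    // placeholder entries
};

transcribeservice.createVocabularyFilter(params).promise()
  .then(function (data) { console.log('Created filter', data.VocabularyFilterName); })
  .catch(function (err) { console.log(err, err.stack); });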
deleteCallAnalyticsCategory(params = {}, callback) ⇒ AWS.Request
Deletes a call analytics category using its name.
Service Reference:
Examples:
Calling the deleteCallAnalyticsCategory operation
var params = {
CategoryName: 'STRING_VALUE' /* required */
};
transcribeservice.deleteCallAnalyticsCategory(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
CategoryName
— (String
)The name of the call analytics category that you're choosing to delete. The value is case sensitive.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs.
-
(AWS.Response)
—
Returns:
deleteCallAnalyticsJob(params = {}, callback) ⇒ AWS.Request
Deletes a call analytics job using its name.
Service Reference:
Examples:
Calling the deleteCallAnalyticsJob operation
var params = {
CallAnalyticsJobName: 'STRING_VALUE' /* required */
};
transcribeservice.deleteCallAnalyticsJob(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
CallAnalyticsJobName
— (String
)The name of the call analytics job you want to delete.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs.
-
(AWS.Response)
—
Returns:
deleteLanguageModel(params = {}, callback) ⇒ AWS.Request
Deletes a custom language model using its name.
Service Reference:
Examples:
Calling the deleteLanguageModel operation
var params = {
ModelName: 'STRING_VALUE' /* required */
};
transcribeservice.deleteLanguageModel(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ModelName
— (String
)The name of the model you're choosing to delete.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs.
-
(AWS.Response)
—
Returns:
deleteMedicalTranscriptionJob(params = {}, callback) ⇒ AWS.Request
Deletes a transcription job generated by Amazon Transcribe Medical and any related information.
Service Reference:
Examples:
Calling the deleteMedicalTranscriptionJob operation
var params = {
MedicalTranscriptionJobName: 'STRING_VALUE' /* required */
};
transcribeservice.deleteMedicalTranscriptionJob(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
MedicalTranscriptionJobName
— (String
)The name you provide to the
DeleteMedicalTranscriptionJob
object to delete a transcription job.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs.
-
(AWS.Response)
—
Returns:
deleteMedicalVocabulary(params = {}, callback) ⇒ AWS.Request
Deletes a vocabulary from Amazon Transcribe Medical.
Service Reference:
Examples:
Calling the deleteMedicalVocabulary operation
var params = {
VocabularyName: 'STRING_VALUE' /* required */
};
transcribeservice.deleteMedicalVocabulary(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
VocabularyName
— (String
)The name of the vocabulary that you want to delete.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs.
-
(AWS.Response)
—
Returns:
deleteTranscriptionJob(params = {}, callback) ⇒ AWS.Request
Deletes a previously submitted transcription job along with any other generated results such as the transcription, models, and so on.
Service Reference:
Examples:
Calling the deleteTranscriptionJob operation
var params = {
TranscriptionJobName: 'STRING_VALUE' /* required */
};
transcribeservice.deleteTranscriptionJob(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
TranscriptionJobName
— (String
)The name of the transcription job to be deleted.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs.
-
(AWS.Response)
—
Returns:
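A minimal sketch of deleting a transcription job with the promise interface. The job name is hypothetical, and the NotFoundException check is an assumption about the error code returned when no job with that name exists; on success the operation returns no data.
var AWS = require('aws-sdk');
var transcribeservice = new AWS.TranscribeService();

transcribeservice.deleteTranscriptionJob({
  TranscriptionJobName: 'my-first-transcription-job'   // hypothetical
}).promise()
  .then(function () { console.log('Job deleted (no data is returned on success)'); })
  .catch(function (err) {
    if (err.code === 'NotFoundException') console.log('No job with that name exists');
    else console.log(err, err.stack);
  });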
deleteVocabulary(params = {}, callback) ⇒ AWS.Request
Deletes a vocabulary from Amazon Transcribe.
Service Reference:
Examples:
Calling the deleteVocabulary operation
var params = {
VocabularyName: 'STRING_VALUE' /* required */
};
transcribeservice.deleteVocabulary(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
VocabularyName
— (String
)The name of the vocabulary to delete.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs.
-
(AWS.Response)
—
Returns:
deleteVocabularyFilter(params = {}, callback) ⇒ AWS.Request
Removes a vocabulary filter.
Service Reference:
Examples:
Calling the deleteVocabularyFilter operation
var params = {
VocabularyFilterName: 'STRING_VALUE' /* required */
};
transcribeservice.deleteVocabularyFilter(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
VocabularyFilterName
— (String
)The name of the vocabulary filter to remove.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs.
-
(AWS.Response)
—
Returns:
describeLanguageModel(params = {}, callback) ⇒ AWS.Request
Gets information about a single custom language model. Use this information to see details about the language model in your Amazon Web Services account. You can also see whether the base language model used to create your custom language model has been updated. If Amazon Transcribe has updated the base model, you can create a new custom language model using the updated base model. If the language model wasn't created, you can use this operation to understand why Amazon Transcribe couldn't create it.
Service Reference:
Examples:
Calling the describeLanguageModel operation
var params = {
ModelName: 'STRING_VALUE' /* required */
};
transcribeservice.describeLanguageModel(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ModelName
— (String
)The name of the custom language model you submit to get more information.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:LanguageModel
— (map
)The name of the custom language model you requested more information about.
ModelName
— (String
)The name of the custom language model.
CreateTime
— (Date
)The time the custom language model was created.
LastModifiedTime
— (Date
)The most recent time the custom language model was modified.
LanguageCode
— (String
)The language code you used to create your custom language model.
Possible values include:"en-US"
"hi-IN"
"es-US"
"en-GB"
"en-AU"
BaseModelName
— (String
)The Amazon Transcribe standard language model, or base model used to create the custom language model.
Possible values include:"NarrowBand"
"WideBand"
ModelStatus
— (String
)The creation status of a custom language model. When the status is COMPLETED, the model is ready for use.
Possible values include:"IN_PROGRESS"
"FAILED"
"COMPLETED"
UpgradeAvailability
— (Boolean
)Whether the base model used for the custom language model is up to date. If this field is
true
then you are running the most up-to-date version of the base model in your custom language model.FailureReason
— (String
)The reason why the custom language model couldn't be created.
InputDataConfig
— (map
)The data access role and Amazon S3 prefixes for the input files used to train the custom language model.
S3Uri
— required — (String
)The Amazon S3 prefix you specify to access the plain text files that you use to train your custom language model.
TuningDataS3Uri
— (String
)The Amazon S3 prefix you specify to access the plain text files that you use to tune your custom language model.
DataAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) that uniquely identifies the permissions you've given Amazon Transcribe to access your Amazon S3 buckets containing your media files or text data. ARNs have the format
arn:partition:service:region:account-id:resource-type/resource-id
.
-
(AWS.Response)
—
Returns:
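A minimal sketch that inspects the fields described above to decide whether a custom language model failed, is ready, or could be rebuilt from a newer base model. The model name is hypothetical.
var AWS = require('aws-sdk');
var transcribeservice = new AWS.TranscribeService();

transcribeservice.describeLanguageModel({ModelName: 'contact-center-model'}).promise()
  .then(function (data) {
    var model = data.LanguageModel;
    if (model.ModelStatus === 'FAILED') {
      console.log('Model creation failed:', model.FailureReason);
    } else if (model.ModelStatus === 'COMPLETED' && model.UpgradeAvailability) {
      // The base model has been updated since this custom model was built,
      // so a new custom model could be created from the newer base model.
      console.log('A newer base model is available for', model.ModelName);
    } else {
      console.log(model.ModelName, 'is', model.ModelStatus);
    }
  })
  .catch(function (err) { console.log(err, err.stack); });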
getCallAnalyticsCategory(params = {}, callback) ⇒ AWS.Request
Retrieves information about a call analytics category.
Service Reference:
Examples:
Calling the getCallAnalyticsCategory operation
var params = {
CategoryName: 'STRING_VALUE' /* required */
};
transcribeservice.getCallAnalyticsCategory(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
CategoryName
— (String
)The name of the category you want information about. This value is case sensitive.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:CategoryProperties
— (map
)The rules you've defined for a category.
CategoryName
— (String
)The name of the call analytics category.
Rules
— (Array<map>
)The rules used to create a call analytics category.
NonTalkTimeFilter
— (map
)A condition for a time period when neither the customer nor the agent was talking.
Threshold
— (Integer
)The duration of the period when neither the customer nor agent was talking.
AbsoluteTimeRange
— (map
)An object you can use to specify a time range (in milliseconds) for when no one is talking. For example, you could specify a time period between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where there was silence. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
Negate
— (Boolean
)Set to
TRUE
to look for a time period when people were talking.
InterruptionFilter
— (map
)A condition for a time period when either the customer or agent was interrupting the other person.
Threshold
— (Integer
)The duration of the interruption.
ParticipantRole
— (String
)Indicates whether the agent or the customer was interrupting.
Possible values include:"AGENT"
"CUSTOMER"
AbsoluteTimeRange
— (map
)An object you can use to specify a time range (in milliseconds) for when you'd want to find the interruption. For example, you could search for an interruption between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where there was an interruption. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
Negate
— (Boolean
)Set to
TRUE
to look for a time period where there was no interruption.
TranscriptFilter
— (map
)A condition that catches particular words or phrases based on an exact match. For example, if you set the phrase "I want to speak to the manager", only that exact phrase will be returned.
TranscriptFilterType
— required — (String
)Matches the phrase to the transcription output in a word-for-word fashion. For example, if you specify the phrase "I want to speak to the manager," Amazon Transcribe attempts to match that specific phrase to the transcription.
Possible values include:"EXACT"
AbsoluteTimeRange
— (map
)A time range, set in milliseconds, between two points in the call.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where you would like to apply a filter. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole
— (String
)Determines whether the customer or the agent is speaking the phrases that you've specified.
Possible values include:"AGENT"
"CUSTOMER"
Negate
— (Boolean
)If
TRUE
, the rule that you specify is applied to everything except for the phrases that you specify.Targets
— required — (Array<String>
)The phrases that you're specifying for the transcript filter to match.
SentimentFilter
— (map
)A condition that is applied to a particular customer sentiment.
Sentiments
— required — (Array<String>
)An array that enables you to specify sentiments for the customer or agent. You can specify one or more values.
AbsoluteTimeRange
— (map
)The time range, measured in milliseconds, of the sentiment.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)The time range, set in percentages, that corresponds to a proportion of the call.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole
— (String
)A value that determines whether the sentiment belongs to the customer or the agent.
Possible values include:"AGENT"
"CUSTOMER"
Negate
— (Boolean
)Set to
TRUE
to look for sentiments that weren't specified in the request.
CreateTime
— (Date
)A timestamp that shows when the call analytics category was created.
LastUpdateTime
— (Date
)A timestamp that shows when the call analytics category was most recently updated.
-
(AWS.Response)
—
Returns:
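A minimal sketch that lists which rule types a category contains. The category name is hypothetical, and the sketch assumes each entry in Rules populates exactly one of the filter keys (NonTalkTimeFilter, InterruptionFilter, TranscriptFilter, or SentimentFilter).
var AWS = require('aws-sdk');
var transcribeservice = new AWS.TranscribeService();

transcribeservice.getCallAnalyticsCategory({CategoryName: 'escalation-calls'}).promise()
  .then(function (data) {
    var props = data.CategoryProperties;
    console.log('Category:', props.CategoryName);
    props.Rules.forEach(function (rule, i) {
      // Report the single filter key populated on this rule.
      var type = Object.keys(rule).find(function (key) { return rule[key] !== undefined; });
      console.log('Rule', i + 1, 'uses', type);
    });
  })
  .catch(function (err) { console.log(err, err.stack); });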
getCallAnalyticsJob(params = {}, callback) ⇒ AWS.Request
Returns information about a call analytics job. To see the status of the job, check the CallAnalyticsJobStatus
field. If the status is COMPLETED
, the job is finished and you can find the results at the location specified in the TranscriptFileUri
field. If you enable personally identifiable information (PII) redaction, the redacted transcript appears in the RedactedTranscriptFileUri
field.
Service Reference:
Examples:
Calling the getCallAnalyticsJob operation
var params = {
CallAnalyticsJobName: 'STRING_VALUE' /* required */
};
transcribeservice.getCallAnalyticsJob(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
CallAnalyticsJobName
— (String
)The name of the analytics job you want information about. This value is case sensitive.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:CallAnalyticsJob
— (map
)An object that contains the results of your call analytics job.
CallAnalyticsJobName
— (String
)The name of the call analytics job.
CallAnalyticsJobStatus
— (String
)The status of the analytics job.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
LanguageCode
— (String
)If you know the language spoken between the customer and the agent, specify a language code for this field.
If you don't know the language, you can leave this field blank, and Amazon Transcribe will use machine learning to automatically identify the language. To improve the accuracy of language identification, you can provide an array containing the possible language codes for the language spoken in your audio. Refer to Supported languages and language-specific features for additional information.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
MediaSampleRateHertz
— (Integer
)The sample rate, in Hertz, of the audio.
MediaFormat
— (String
)The format of the input audio file. Note: for call analytics jobs, only the following media formats are supported: MP3, MP4, WAV, FLAC, OGG, and WebM.
Possible values include:"mp3"
"mp4"
"wav"
"flac"
"ogg"
"amr"
"webm"
Media
— (map
)Describes the input media file in a transcription request.
MediaFileUri
— (String
)The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri
— (String
)The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Transcript
— (map
)Identifies the location of a transcription.
TranscriptFileUri
— (String
)The S3 object location of the transcript.
Use this URI to access the transcript. If you specified an S3 bucket in the
OutputBucketName
field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.RedactedTranscriptFileUri
— (String
)The S3 object location of the redacted transcript.
Use this URI to access the redacted transcript. If you specified an S3 bucket in the
OutputBucketName
field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
StartTime
— (Date
)A timestamp that shows when the analytics job started processing.
CreationTime
— (Date
)A timestamp that shows when the analytics job was created.
CompletionTime
— (Date
)A timestamp that shows when the analytics job was completed.
FailureReason
— (String
)If the
AnalyticsJobStatus
isFAILED
, this field contains information about why the job failed.The
FailureReason
field can contain one of the following values:-
Unsupported media format
: The media format specified in theMediaFormat
field of the request isn't valid. See the description of theMediaFormat
field for a list of valid values. -
The media format provided does not match the detected media format
: The media format of the audio file doesn't match the format specified in theMediaFormat
field in the request. Check the media format of your media file and make sure the two values match. -
Invalid sample rate for audio file
: The sample rate specified in theMediaSampleRateHertz
of the request isn't valid. The sample rate must be between 8,000 and 48,000 Hertz. -
The sample rate provided does not match the detected sample rate
: The sample rate in the audio file doesn't match the sample rate specified in theMediaSampleRateHertz
field in the request. Check the sample rate of your media file and make sure that the two values match. -
Invalid file size: file size too large
: The size of your audio file is larger than what Amazon Transcribe Medical can process. For more information, see Guidelines and Quotas in the Amazon Transcribe Medical Guide. -
Invalid number of channels: number of channels too large
: Your audio contains more channels than Amazon Transcribe Medical is configured to process. To request additional channels, see Amazon Transcribe Medical Endpoints and Quotas in the Amazon Web Services General Reference.
-
DataAccessRoleArn
— (String
)The Amazon Resource Number (ARN) that you use to access the analytics job. ARNs have the format
arn:partition:service:region:account-id:resource-type/resource-id
.IdentifiedLanguageScore
— (Float
)A value between zero and one that Amazon Transcribe assigned to the language that it identified in the source audio. This value appears only when you don't provide a single language code. Larger values indicate that Amazon Transcribe has higher confidence in the language that it identified.
Settings
— (map
)Provides information about the settings used to run a transcription job.
VocabularyName
— (String
)The name of a vocabulary to use when processing the call analytics job.
VocabularyFilterName
— (String
)The name of the vocabulary filter to use when running a call analytics job. The filter that you specify must have the same language code as the analytics job.
VocabularyFilterMethod
— (String
)Set to mask to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
Possible values include:"remove"
"mask"
"tag"
LanguageModelName
— (String
)The name of the custom language model to use when processing the call analytics job.
ContentRedaction
— (map
)Settings for content redaction within a transcription job.
RedactionType
— required — (String
)Request parameter that defines the entities to be redacted. The only accepted value is PII.
Possible values include:"PII"
RedactionOutput
— required — (String
)The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted, Amazon Transcribe outputs only the redacted transcript. When you choose redacted_and_unredacted, Amazon Transcribe outputs both the redacted and unredacted transcripts.
Possible values include:"redacted"
"redacted_and_unredacted"
LanguageOptions
— (Array<String>
)When you run a call analytics job, you can specify the language spoken in the audio, or you can have Amazon Transcribe identify the language for you.
To specify a language, specify an array with one language code. If you don't know the language, you can leave this field blank and Amazon Transcribe will use machine learning to identify the language for you. To improve the ability of Amazon Transcribe to correctly identify the language, you can provide an array of the languages that can be present in the audio. Refer to Supported languages and language-specific features for additional information.
LanguageIdSettings
— (map<map>
)The language identification settings associated with your call analytics job. These settings include
VocabularyName
,VocabularyFilterName
, andLanguageModelName
.VocabularyName
— (String
)The name of the vocabulary you want to use when processing your transcription job. The vocabulary you specify must have the same language code as the transcription job; if the languages don't match, the vocabulary won't be applied.
VocabularyFilterName
— (String
)The name of the vocabulary filter you want to use when transcribing your audio. The filter you specify must have the same language code as the transcription job; if the languages don't match, the vocabulary filter won't be applied.
LanguageModelName
— (String
)The name of the language model you want to use when transcribing your audio. The model you specify must have the same language code as the transcription job; if the languages don't match, the language model won't be applied.
ChannelDefinitions
— (Array<map>
)Shows numeric values to indicate the channel assigned to the agent's audio and the channel assigned to the customer's audio.
ChannelId
— (Integer
)A value that indicates the audio channel.
ParticipantRole
— (String
)Indicates whether the person speaking on the audio channel is the agent or customer.
Possible values include:"AGENT"
"CUSTOMER"
-
(AWS.Response)
—
Returns:
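A minimal polling sketch, assuming a job that was already started with startCallAnalyticsJob; the job name and polling interval are hypothetical. It waits for the job to leave the QUEUED/IN_PROGRESS states and then reports the transcript locations described above.
var AWS = require('aws-sdk');
var transcribeservice = new AWS.TranscribeService();

async function waitForCallAnalyticsJob(jobName) {
  for (;;) {
    var data = await transcribeservice.getCallAnalyticsJob({
      CallAnalyticsJobName: jobName
    }).promise();
    var job = data.CallAnalyticsJob;
    if (job.CallAnalyticsJobStatus === 'COMPLETED') {
      console.log('Transcript:', job.Transcript.TranscriptFileUri);
      if (job.Transcript.RedactedTranscriptFileUri) {
        console.log('Redacted transcript:', job.Transcript.RedactedTranscriptFileUri);
      }
      return job;
    }
    if (job.CallAnalyticsJobStatus === 'FAILED') {
      throw new Error(job.FailureReason);
    }
    await new Promise(function (resolve) { setTimeout(resolve, 30000); });
  }
}

waitForCallAnalyticsJob('my-call-analytics-job')   // hypothetical job name
  .catch(function (err) { console.log(err, err.stack); });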
getMedicalTranscriptionJob(params = {}, callback) ⇒ AWS.Request
Returns information about a transcription job from Amazon Transcribe Medical. To see the status of the job, check the TranscriptionJobStatus
field. If the status is COMPLETED
, the job is finished. You find the results of the completed job in the TranscriptFileUri
field.
Service Reference:
Examples:
Calling the getMedicalTranscriptionJob operation
var params = {
MedicalTranscriptionJobName: 'STRING_VALUE' /* required */
};
transcribeservice.getMedicalTranscriptionJob(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
MedicalTranscriptionJobName
— (String
)The name of the medical transcription job.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:MedicalTranscriptionJob
— (map
)An object that contains the results of the medical transcription job.
MedicalTranscriptionJobName
— (String
)The name for a given medical transcription job.
TranscriptionJobStatus
— (String
)The completion status of a medical transcription job.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
LanguageCode
— (String
)The language code for the language spoken in the source audio file. US English (en-US) is the only supported language for medical transcriptions. Any other value you enter for language code results in a BadRequestException error.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
MediaSampleRateHertz
— (Integer
)The sample rate, in Hertz, of the source audio containing medical information.
If you don't specify the sample rate, Amazon Transcribe Medical determines it for you. If you choose to specify the sample rate, it must match the rate detected by Amazon Transcribe Medical. In most cases, you should leave the
MediaSampleRateHertz
blank and let Amazon Transcribe Medical determine the sample rate.MediaFormat
— (String
)The format of the input media file.
Possible values include:"mp3"
"mp4"
"wav"
"flac"
"ogg"
"amr"
"webm"
Media
— (map
)Describes the input media file in a transcription request.
MediaFileUri
— (String
)The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri
— (String
)The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Transcript
— (map
)An object that contains the
MedicalTranscript
. TheMedicalTranscript
contains theTranscriptFileUri
.TranscriptFileUri
— (String
)The S3 object location of the medical transcript.
Use this URI to access the medical transcript. This URI points to the S3 bucket you created to store the medical transcript.
StartTime
— (Date
)A timestamp that shows when the job started processing.
CreationTime
— (Date
)A timestamp that shows when the job was created.
CompletionTime
— (Date
)A timestamp that shows when the job was completed.
FailureReason
— (String
)If the
TranscriptionJobStatus
field isFAILED
, this field contains information about why the job failed.The
FailureReason
field contains one of the following values:-
Unsupported media format
- The media format specified in theMediaFormat
field of the request isn't valid. See the description of theMediaFormat
field for a list of valid values. -
The media format provided does not match the detected media format
- The media format of the audio file doesn't match the format specified in theMediaFormat
field in the request. Check the media format of your media file and make sure the two values match. -
Invalid sample rate for audio file
- The sample rate specified in theMediaSampleRateHertz
of the request isn't valid. The sample rate must be between 8,000 and 48,000 Hertz. -
The sample rate provided does not match the detected sample rate
- The sample rate in the audio file doesn't match the sample rate specified in theMediaSampleRateHertz
field in the request. Check the sample rate of your media file and make sure that the two values match. -
Invalid file size: file size too large
- The size of your audio file is larger than what Amazon Transcribe Medical can process. For more information, see Guidelines and Quotas in the Amazon Transcribe Medical Guide -
Invalid number of channels: number of channels too large
- Your audio contains more channels than Amazon Transcribe Medical is configured to process. To request additional channels, see Amazon Transcribe Medical Endpoints and Quotas in the Amazon Web Services General Reference
-
Settings
— (map
)Object that contains the settings used for the medical transcription job.
ShowSpeakerLabels
— (Boolean
)Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the
ShowSpeakerLabels
field to true, you must also set the maximum number of speaker labels in theMaxSpeakerLabels
field.You can't set both
ShowSpeakerLabels
andChannelIdentification
in the same request. If you set both, your request returns aBadRequestException
.MaxSpeakerLabels
— (Integer
)The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the
MaxSpeakerLabels
field, you must set theShowSpeakerLabels
field to true.ChannelIdentification
— (Boolean
)Instructs Amazon Transcribe Medical to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe Medical also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item. The alternative transcriptions also come with confidence scores provided by Amazon Transcribe Medical.
You can't set both
ShowSpeakerLabels
andChannelIdentification
in the same request. If you set both, your request returns aBadRequestException
ShowAlternatives
— (Boolean
)Determines whether alternative transcripts are generated along with the transcript that has the highest confidence. If you set
ShowAlternatives
field to true, you must also set the maximum number of alternatives to return in theMaxAlternatives
field.MaxAlternatives
— (Integer
)The maximum number of alternatives that you tell the service to return. If you specify the
MaxAlternatives
field, you must set theShowAlternatives
field to true.VocabularyName
— (String
)The name of the vocabulary to use when processing a medical transcription job.
ContentIdentificationType
— (String
)Shows the type of content that you've configured Amazon Transcribe Medical to identify in a transcription job. If the value is PHI, you've configured the job to identify personal health information (PHI) in the transcription output.
Possible values include:"PHI"
Specialty
— (String
)The medical specialty of any clinicians providing a dictation or having a conversation. Refer to Transcribing a medical conversation for a list of supported specialties.
Possible values include:"PRIMARYCARE"
Type
— (String
)The type of speech in the transcription job. CONVERSATION is generally used for patient-physician dialogues. DICTATION is the setting for physicians speaking their notes after seeing a patient. For more information, see What is Amazon Transcribe Medical?
Possible values include:"CONVERSATION"
"DICTATION"
Tags
— (Array<map>
)A key:value pair assigned to a given medical transcription job.
Key
— required — (String
)The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the key is 'Department'.
Value
— required — (String
)The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the value is 'Sales'.
-
(AWS.Response)
—
Returns:
getMedicalVocabulary(params = {}, callback) ⇒ AWS.Request
Retrieves information about a medical vocabulary.
Service Reference:
Examples:
Calling the getMedicalVocabulary operation
var params = {
VocabularyName: 'STRING_VALUE' /* required */
};
transcribeservice.getMedicalVocabulary(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
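Checking the vocabulary state and downloading its contents (sketch)
A minimal sketch of one way to use the response: if the returned VocabularyState is READY, the DownloadUri field holds a time-limited URL for the vocabulary contents, which could be fetched with Node's built-in https module. The vocabulary name below is a placeholder.
var AWS = require('aws-sdk');
var https = require('https');

var transcribeservice = new AWS.TranscribeService({apiVersion: '2017-10-26'});

transcribeservice.getMedicalVocabulary({VocabularyName: 'example-medical-vocabulary'}, function(err, data) {
  if (err) return console.log(err, err.stack);
  if (data.VocabularyState !== 'READY') {
    // PENDING or FAILED; FailureReason describes a FAILED state
    return console.log('Vocabulary not ready:', data.VocabularyState, data.FailureReason);
  }
  // DownloadUri is available for a limited time
  https.get(data.DownloadUri, function(res) {
    var body = '';
    res.on('data', function(chunk) { body += chunk; });
    res.on('end', function() { console.log(body); });
  });
});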
Parameters:
-
params
(Object)
(defaults to: {})
—
VocabularyName
— (String
)The name of the vocabulary that you want information about. The value is case sensitive.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:VocabularyName
— (String
)The name of the vocabulary returned by Amazon Transcribe Medical.
LanguageCode
— (String
)The valid language code for your vocabulary entries.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
VocabularyState
— (String
)The processing state of the vocabulary. If the VocabularyState is READY, you can use it in the StartMedicalTranscriptionJob operation.
Possible values include:"PENDING"
"READY"
"FAILED"
LastModifiedTime
— (Date
)The date and time that the vocabulary was last modified with a text file different from the one that was previously used.
FailureReason
— (String
)If the
VocabularyState
isFAILED
, this field contains information about why the job failed.DownloadUri
— (String
)The location in Amazon S3 where the vocabulary is stored. Use this URI to get the contents of the vocabulary. You can download your vocabulary from the URI for a limited time.
-
(AWS.Response)
—
Returns:
getTranscriptionJob(params = {}, callback) ⇒ AWS.Request
Returns information about a transcription job. To see the status of the job, check the TranscriptionJobStatus
field. If the status is COMPLETED
, the job is finished and you can find the results at the location specified in the TranscriptFileUri
field. If you enable content redaction, the redacted transcript appears in RedactedTranscriptFileUri
.
Service Reference:
Examples:
Calling the getTranscriptionJob operation
var params = {
TranscriptionJobName: 'STRING_VALUE' /* required */
};
transcribeservice.getTranscriptionJob(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
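Polling a transcription job until it finishes (sketch)
A transcription job passes through QUEUED and IN_PROGRESS before reaching COMPLETED or FAILED, so a common pattern is to poll getTranscriptionJob on an interval. A minimal sketch; the job name and the 30-second delay are placeholders, not service requirements.
var AWS = require('aws-sdk');

var transcribeservice = new AWS.TranscribeService({apiVersion: '2017-10-26'});

function waitForJob(jobName) {
  transcribeservice.getTranscriptionJob({TranscriptionJobName: jobName}, function(err, data) {
    if (err) return console.log(err, err.stack);
    var job = data.TranscriptionJob;
    if (job.TranscriptionJobStatus === 'COMPLETED') {
      // TranscriptFileUri (or RedactedTranscriptFileUri when content redaction is enabled)
      // points to the finished transcript
      console.log('Transcript:', job.Transcript.TranscriptFileUri);
    } else if (job.TranscriptionJobStatus === 'FAILED') {
      console.log('Job failed:', job.FailureReason);
    } else {
      // QUEUED or IN_PROGRESS: check again after a delay
      setTimeout(function() { waitForJob(jobName); }, 30000);
    }
  });
}

waitForJob('example-transcription-job');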
Parameters:
-
params
(Object)
(defaults to: {})
—
TranscriptionJobName
— (String
)The name of the job.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:TranscriptionJob
— (map
)An object that contains the results of the transcription job.
TranscriptionJobName
— (String
)The name of the transcription job.
TranscriptionJobStatus
— (String
)The status of the transcription job.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
LanguageCode
— (String
)The language code for the input speech.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
MediaSampleRateHertz
— (Integer
)The sample rate, in Hertz, of the audio track in the input media file.
MediaFormat
— (String
)The format of the input media file.
Possible values include:"mp3"
"mp4"
"wav"
"flac"
"ogg"
"amr"
"webm"
Media
— (map
)An object that describes the input media for the transcription job.
MediaFileUri
— (String
)The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri
— (String
)The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Transcript
— (map
)An object that describes the output of the transcription job.
TranscriptFileUri
— (String
)The S3 object location of the transcript.
Use this URI to access the transcript. If you specified an S3 bucket in the
OutputBucketName
field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.RedactedTranscriptFileUri
— (String
)The S3 object location of the redacted transcript.
Use this URI to access the redacted transcript. If you specified an S3 bucket in the
OutputBucketName
field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
StartTime
— (Date
)A timestamp that shows when the job started processing.
CreationTime
— (Date
)A timestamp that shows when the job was created.
CompletionTime
— (Date
)A timestamp that shows when the job completed.
FailureReason
— (String
)If the
TranscriptionJobStatus
field isFAILED
, this field contains information about why the job failed.The
FailureReason
field can contain one of the following values:-
Unsupported media format
- The media format specified in theMediaFormat
field of the request isn't valid. See the description of theMediaFormat
field for a list of valid values. -
The media format provided does not match the detected media format
- The media format of the audio file doesn't match the format specified in theMediaFormat
field in the request. Check the media format of your media file and make sure that the two values match. -
Invalid sample rate for audio file
- The sample rate specified in theMediaSampleRateHertz
of the request isn't valid. The sample rate must be between 8,000 and 48,000 Hertz. -
The sample rate provided does not match the detected sample rate
- The sample rate in the audio file doesn't match the sample rate specified in theMediaSampleRateHertz
field in the request. Check the sample rate of your media file and make sure that the two values match. -
Invalid file size: file size too large
- The size of your audio file is larger than Amazon Transcribe can process. For more information, see Limits in the Amazon Transcribe Developer Guide. -
Invalid number of channels: number of channels too large
- Your audio contains more channels than Amazon Transcribe is configured to process. To request additional channels, see Amazon Transcribe Limits in the Amazon Web Services General Reference.
-
Settings
— (map
)Optional settings for the transcription job. Use these settings to turn on speaker recognition, to set the maximum number of speakers that should be identified and to specify a custom vocabulary to use when processing the transcription job.
VocabularyName
— (String
)The name of a vocabulary to use when processing the transcription job.
ShowSpeakerLabels
— (Boolean
)Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the
ShowSpeakerLabels
field to true, you must also set the maximum number of speaker labels in theMaxSpeakerLabels
field.You can't set both
ShowSpeakerLabels
andChannelIdentification
in the same request. If you set both, your request returns aBadRequestException
.MaxSpeakerLabels
— (Integer
)The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the
MaxSpeakerLabels
field, you must set theShowSpeakerLabels
field to true.ChannelIdentification
— (Boolean
)Instructs Amazon Transcribe to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item including the confidence that Amazon Transcribe has in the transcription.
You can't set both
ShowSpeakerLabels
andChannelIdentification
in the same request. If you set both, your request returns aBadRequestException
.ShowAlternatives
— (Boolean
)Determines whether the transcription contains alternative transcriptions. If you set the
ShowAlternatives
field to true, you must also set the maximum number of alternatives to return in theMaxAlternatives
field.MaxAlternatives
— (Integer
)The number of alternative transcriptions that the service should return. If you specify the
MaxAlternatives
field, you must set theShowAlternatives
field to true.VocabularyFilterName
— (String
)The name of the vocabulary filter to use when transcribing the audio. The filter that you specify must have the same language code as the transcription job.
VocabularyFilterMethod
— (String
)Set to mask to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
Possible values include:"remove"
"mask"
"tag"
ModelSettings
— (map
)An object containing the details of your custom language model.
LanguageModelName
— (String
)The name of your custom language model.
JobExecutionSettings
— (map
)Provides information about how a transcription job is executed.
AllowDeferredExecution
— (Boolean
)Indicates whether a job should be queued by Amazon Transcribe when the concurrent execution limit is exceeded. When the
AllowDeferredExecution
field is true, jobs are queued and executed when the number of executing jobs falls below the concurrent execution limit. If the field is false, Amazon Transcribe returns aLimitExceededException
exception.Note that job queuing is enabled by default for call analytics jobs.
If you specify the
AllowDeferredExecution
field, you must specify theDataAccessRoleArn
field.DataAccessRoleArn
— (String
)The Amazon Resource Name (ARN), in the form
arn:partition:service:region:account-id:resource-type/resource-id
, of a role that has access to the S3 bucket that contains the input files. Amazon Transcribe assumes this role to read queued media files. If you have specified an output S3 bucket for the transcription results, this role should have access to the output bucket as well.If you specify the
AllowDeferredExecution
field, you must specify theDataAccessRoleArn
field.
ContentRedaction
— (map
)An object that describes content redaction settings for the transcription job.
RedactionType
— required — (String
)Request parameter that defines the entities to be redacted. The only accepted value is PII.
Possible values include:"PII"
RedactionOutput
— required — (String
)The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted, Amazon Transcribe outputs only the redacted transcript. When you choose redacted_and_unredacted, Amazon Transcribe outputs both the redacted and unredacted transcripts.
Possible values include:"redacted"
"redacted_and_unredacted"
IdentifyLanguage
— (Boolean
)A value that shows if automatic language identification was enabled for a transcription job.
LanguageOptions
— (Array<String>
)An object that shows the optional array of languages that you specified for transcription jobs with automatic language identification enabled.
IdentifiedLanguageScore
— (Float
)A value between zero and one that Amazon Transcribe assigned to the language that it identified in the source audio. Larger values indicate that Amazon Transcribe has higher confidence in the language it identified.
Tags
— (Array<map>
)A key:value pair assigned to a given transcription job.
Key
— required — (String
)The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the key is 'Department'.
Value
— required — (String
)The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the value is 'Sales'.
Subtitles
— (map
)Generate subtitles for your batch transcription job.
Formats
— (Array<String>
)Specify the output format for your subtitle file; if you select both SRT and VTT formats, two output files are generated.
SubtitleFileUris
— (Array<String>
)Choose the output location for your subtitle file. This location must be an S3 bucket.
LanguageIdSettings
— (map<map>
)Language-specific settings that can be specified when language identification is enabled for your transcription job. These settings include
VocabularyName
,VocabularyFilterName
, and LanguageModelName.
VocabularyName
— (String
)The name of the vocabulary you want to use when processing your transcription job. The vocabulary you specify must have the same language code as the transcription job; if the languages don't match, the vocabulary won't be applied.
VocabularyFilterName
— (String
)The name of the vocabulary filter you want to use when transcribing your audio. The filter you specify must have the same language code as the transcription job; if the languages don't match, the vocabulary filter won't be applied.
LanguageModelName
— (String
)The name of the language model you want to use when transcribing your audio. The model you specify must have the same language code as the transcription job; if the languages don't match, the language model won't be applied.
-
(AWS.Response)
—
Returns:
getVocabulary(params = {}, callback) ⇒ AWS.Request
Gets information about a vocabulary.
Service Reference:
Examples:
Calling the getVocabulary operation
var params = {
VocabularyName: 'STRING_VALUE' /* required */
};
transcribeservice.getVocabulary(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
VocabularyName
— (String
)The name of the vocabulary to return information about. The name is case sensitive.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:VocabularyName
— (String
)The name of the vocabulary to return.
LanguageCode
— (String
)The language code of the vocabulary entries.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
VocabularyState
— (String
)The processing state of the vocabulary.
Possible values include:"PENDING"
"READY"
"FAILED"
LastModifiedTime
— (Date
)The date and time that the vocabulary was last modified.
FailureReason
— (String
)If the
VocabularyState
field isFAILED
, this field contains information about why the job failed.DownloadUri
— (String
)The S3 location where the vocabulary is stored. Use this URI to get the contents of the vocabulary. The URI is available for a limited time.
-
(AWS.Response)
—
Returns:
getVocabularyFilter(params = {}, callback) ⇒ AWS.Request
Returns information about a vocabulary filter.
Service Reference:
Examples:
Calling the getVocabularyFilter operation
var params = {
VocabularyFilterName: 'STRING_VALUE' /* required */
};
transcribeservice.getVocabularyFilter(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
VocabularyFilterName
— (String
)The name of the vocabulary filter for which to return information.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:VocabularyFilterName
— (String
)The name of the vocabulary filter.
LanguageCode
— (String
)The language code of the words in the vocabulary filter.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
LastModifiedTime
— (Date
)The date and time that the contents of the vocabulary filter were updated.
DownloadUri
— (String
)The URI of the list of words in the vocabulary filter. You can use this URI to get the list of words.
-
(AWS.Response)
—
Returns:
listCallAnalyticsCategories(params = {}, callback) ⇒ AWS.Request
Provides more information about the call analytics categories that you've created. You can use the information in this list to find a specific category. You can then use the GetCallAnalyticsCategory operation to get more information about it.
Service Reference:
Examples:
Calling the listCallAnalyticsCategories operation
var params = {
MaxResults: 'NUMBER_VALUE',
NextToken: 'STRING_VALUE'
};
transcribeservice.listCallAnalyticsCategories(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
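Paging through all categories with NextToken (sketch)
The NextToken and MaxResults parameters work the same way across the list operations: keep resending the returned token until the response no longer includes one. A minimal sketch; the page size of 5 simply mirrors the documented default.
var AWS = require('aws-sdk');

var transcribeservice = new AWS.TranscribeService({apiVersion: '2017-10-26'});

function listAllCategories(nextToken, names) {
  var params = {MaxResults: 5};
  if (nextToken) params.NextToken = nextToken;
  transcribeservice.listCallAnalyticsCategories(params, function(err, data) {
    if (err) return console.log(err, err.stack);
    data.Categories.forEach(function(category) { names.push(category.CategoryName); });
    if (data.NextToken) {
      // More categories remain; pass the token back to fetch the next page
      listAllCategories(data.NextToken, names);
    } else {
      console.log('Categories:', names);
    }
  });
}

listAllCategories(null, []);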
Parameters:
-
params
(Object)
(defaults to: {})
—
NextToken
— (String
)When included,
NextToken
fetches the next set of categories if the result of the previous request was truncated.MaxResults
— (Integer
)The maximum number of categories to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:NextToken
— (String
)The operation returns a page of jobs at a time. The maximum size of the list is set by the
MaxResults
parameter. If there are more categories in the list than the page size, Amazon Transcribe returns theNextPage
token. Include the token in the next request to the operation to return the next page of analytics categories.Categories
— (Array<map>
)A list of objects containing information about analytics categories.
CategoryName
— (String
)The name of the call analytics category.
Rules
— (Array<map>
)The rules used to create a call analytics category.
NonTalkTimeFilter
— (map
)A condition for a time period when neither the customer nor the agent was talking.
Threshold
— (Integer
)The duration of the period when neither the customer nor agent was talking.
AbsoluteTimeRange
— (map
)An object you can use to specify a time range (in milliseconds) for when no one is talking. For example, you could specify a time period between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where there was silence. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
Negate
— (Boolean
)Set to
TRUE
to look for a time period when people were talking.
InterruptionFilter
— (map
)A condition for a time period when either the customer or agent was interrupting the other person.
Threshold
— (Integer
)The duration of the interruption.
ParticipantRole
— (String
)Indicates whether the caller or customer was interrupting.
Possible values include:"AGENT"
"CUSTOMER"
AbsoluteTimeRange
— (map
)An object you can use to specify a time range (in milliseconds) for when you'd want to find the interruption. For example, you could search for an interruption between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where there was an interruption. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
Negate
— (Boolean
)Set to
TRUE
to look for a time period where there was no interruption.
TranscriptFilter
— (map
)A condition that catches particular words or phrases based on an exact match. For example, if you set the phrase "I want to speak to the manager", only that exact phrase will be returned.
TranscriptFilterType
— required — (String
)Matches the phrase to the transcription output in a word-for-word fashion. For example, if you specify the phrase "I want to speak to the manager," Amazon Transcribe attempts to match that specific phrase to the transcription.
Possible values include:"EXACT"
AbsoluteTimeRange
— (map
)A time range, set in milliseconds, between two points in the call.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where you would like to apply a filter. For example, you can specify the first half of the call. You can also specify the period of time between halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole
— (String
)Determines whether the customer or the agent is speaking the phrases that you've specified.
Possible values include:"AGENT"
"CUSTOMER"
Negate
— (Boolean
)If
TRUE
, the rule that you specify is applied to everything except for the phrases that you specify.Targets
— required — (Array<String>
)The phrases that you're specifying for the transcript filter to match.
SentimentFilter
— (map
)A condition that is applied to a particular customer sentiment.
Sentiments
— required — (Array<String>
)An array that enables you to specify sentiments for the customer or agent. You can specify one or more values.
AbsoluteTimeRange
— (map
)The time range, measured in milliseconds, of the sentiment.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
Endtime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)The time range, set in percentages, that corresponds to the proportion of the call.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole
— (String
)A value that determines whether the sentiment belongs to the customer or the agent.
Possible values include:"AGENT"
"CUSTOMER"
Negate
— (Boolean
)Set to
TRUE
to look for sentiments that weren't specified in the request.
CreateTime
— (Date
)A timestamp that shows when the call analytics category was created.
LastUpdateTime
— (Date
)A timestamp that shows when the call analytics category was most recently updated.
-
(AWS.Response)
—
Returns:
listCallAnalyticsJobs(params = {}, callback) ⇒ AWS.Request
List call analytics jobs with a specified status or substring that matches their names.
Service Reference:
Examples:
Calling the listCallAnalyticsJobs operation
var params = {
JobNameContains: 'STRING_VALUE',
MaxResults: 'NUMBER_VALUE',
NextToken: 'STRING_VALUE',
Status: QUEUED | IN_PROGRESS | FAILED | COMPLETED
};
transcribeservice.listCallAnalyticsJobs(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
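Listing only failed call analytics jobs (sketch)
Because Status is optional, a single call can be narrowed to failed jobs so that each FailureReason can be inspected. A minimal sketch; it reads only the first page of results.
var AWS = require('aws-sdk');

var transcribeservice = new AWS.TranscribeService({apiVersion: '2017-10-26'});

transcribeservice.listCallAnalyticsJobs({Status: 'FAILED'}, function(err, data) {
  if (err) return console.log(err, err.stack);
  data.CallAnalyticsJobSummaries.forEach(function(job) {
    // FailureReason is populated when CallAnalyticsJobStatus is FAILED
    console.log(job.CallAnalyticsJobName + ': ' + job.FailureReason);
  });
});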
Parameters:
-
params
(Object)
(defaults to: {})
—
Status
— (String
)When specified, returns only call analytics jobs with the specified status. Jobs are ordered by creation date, with the most recent jobs returned first. If you don't specify a status, Amazon Transcribe returns all analytics jobs ordered by creation date.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
JobNameContains
— (String
)When specified, the jobs returned in the list are limited to jobs whose name contains the specified string.
NextToken
— (String
)If you receive a truncated result in the previous request of ListCallAnalyticsJobs, include
NextToken
to fetch the next set of jobs.MaxResults
— (Integer
)The maximum number of call analytics jobs to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Status
— (String
)When specified, returns only call analytics jobs with that status. Jobs are ordered by creation date, with the most recent jobs returned first. If you don't specify a status, Amazon Transcribe returns all transcription jobs ordered by creation date.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
NextToken
— (String
)The operation returns a page of jobs at a time. The maximum size of the page is set by the
MaxResults
parameter. If there are more jobs in the list than the page size, Amazon Transcribe returns theNextPage
token. Include the token in your next request to the operation to return the next page of jobs.CallAnalyticsJobSummaries
— (Array<map>
)A list of objects containing summary information for a transcription job.
CallAnalyticsJobName
— (String
)The name of the call analytics job.
CreationTime
— (Date
)A timestamp that shows when the call analytics job was created.
StartTime
— (Date
)A timestamp that shows when the job began processing.
CompletionTime
— (Date
)A timestamp that shows when the job was completed.
LanguageCode
— (String
)The language of the transcript in the source audio file.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
CallAnalyticsJobStatus
— (String
)The status of the call analytics job.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
FailureReason
— (String
)If the
CallAnalyticsJobStatus
isFAILED
, a description of the error.
-
(AWS.Response)
—
Returns:
listLanguageModels(params = {}, callback) ⇒ AWS.Request
Provides more information about the custom language models you've created. You can use the information in this list to find a specific custom language model. You can then use the DescribeLanguageModel operation to get more information about it.
Service Reference:
Examples:
Calling the listLanguageModels operation
var params = {
MaxResults: 'NUMBER_VALUE',
NameContains: 'STRING_VALUE',
NextToken: 'STRING_VALUE',
StatusEquals: IN_PROGRESS | FAILED | COMPLETED
};
transcribeservice.listLanguageModels(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
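Reporting base-model freshness for completed models (sketch)
Each model summary includes an UpgradeAvailability flag describing whether the base model used for the custom language model is up to date. A minimal sketch that echoes that flag for completed models; it reads only the first page of results.
var AWS = require('aws-sdk');

var transcribeservice = new AWS.TranscribeService({apiVersion: '2017-10-26'});

transcribeservice.listLanguageModels({StatusEquals: 'COMPLETED'}, function(err, data) {
  if (err) return console.log(err, err.stack);
  data.Models.forEach(function(model) {
    // UpgradeAvailability reflects whether the base model is up to date (see the field description below)
    console.log(model.ModelName + ' (' + model.BaseModelName + '): base model up to date = ' + model.UpgradeAvailability);
  });
});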
Parameters:
-
params
(Object)
(defaults to: {})
—
StatusEquals
— (String
)When specified, returns only custom language models with the specified status. Language models are ordered by creation date, with the newest models first. If you don't specify a status, Amazon Transcribe returns all custom language models ordered by date.
Possible values include:"IN_PROGRESS"
"FAILED"
"COMPLETED"
NameContains
— (String
)When specified, the custom language model names returned contain the substring you've specified.
NextToken
— (String
)When included, fetches the next set of jobs if the result of the previous request was truncated.
MaxResults
— (Integer
)The maximum number of language models to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:NextToken
— (String
)The operation returns a page of jobs at a time. The maximum size of the list is set by the MaxResults parameter. If there are more language models in the list than the page size, Amazon Transcribe returns the
NextPage
token. Include the token in the next request to the operation to return the next page of language models.Models
— (Array<map>
)A list of objects containing information about custom language models.
ModelName
— (String
)The name of the custom language model.
CreateTime
— (Date
)The time the custom language model was created.
LastModifiedTime
— (Date
)The most recent time the custom language model was modified.
LanguageCode
— (String
)The language code you used to create your custom language model.
Possible values include:"en-US"
"hi-IN"
"es-US"
"en-GB"
"en-AU"
BaseModelName
— (String
)The Amazon Transcribe standard language model, or base model used to create the custom language model.
Possible values include:"NarrowBand"
"WideBand"
ModelStatus
— (String
)The creation status of a custom language model. When the status is COMPLETED, the model is ready for use.
Possible values include:"IN_PROGRESS"
"FAILED"
"COMPLETED"
UpgradeAvailability
— (Boolean
)Whether the base model used for the custom language model is up to date. If this field is
true
then you are running the most up-to-date version of the base model in your custom language model.FailureReason
— (String
)The reason why the custom language model couldn't be created.
InputDataConfig
— (map
)The data access role and Amazon S3 prefixes for the input files used to train the custom language model.
S3Uri
— required — (String
)The Amazon S3 prefix you specify to access the plain text files that you use to train your custom language model.
TuningDataS3Uri
— (String
)The Amazon S3 prefix you specify to access the plain text files that you use to tune your custom language model.
DataAccessRoleArn
— required — (String
)The Amazon Resource Name (ARN) that uniquely identifies the permissions you've given Amazon Transcribe to access your Amazon S3 buckets containing your media files or text data. ARNs have the format
arn:partition:service:region:account-id:resource-type/resource-id
.
-
(AWS.Response)
—
Returns:
listMedicalTranscriptionJobs(params = {}, callback) ⇒ AWS.Request
Lists medical transcription jobs with a specified status or substring that matches their names.
Service Reference:
Examples:
Calling the listMedicalTranscriptionJobs operation
var params = {
JobNameContains: 'STRING_VALUE',
MaxResults: 'NUMBER_VALUE',
NextToken: 'STRING_VALUE',
Status: QUEUED | IN_PROGRESS | FAILED | COMPLETED
};
transcribeservice.listMedicalTranscriptionJobs(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
Status
— (String
)When specified, returns only medical transcription jobs with the specified status. Jobs are ordered by creation date, with the newest jobs returned first. If you don't specify a status, Amazon Transcribe Medical returns all transcription jobs ordered by creation date.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
JobNameContains
— (String
)When specified, the jobs returned in the list are limited to jobs whose name contains the specified string.
NextToken
— (String
)If you receive a truncated result in the previous request of
ListMedicalTranscriptionJobs
, includeNextToken
to fetch the next set of jobs.MaxResults
— (Integer
)The maximum number of medical transcription jobs to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Status
— (String
)The requested status of the medical transcription jobs returned.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
NextToken
— (String
)The
ListMedicalTranscriptionJobs
operation returns a page of jobs at a time. The maximum size of the page is set by theMaxResults
parameter. If the number of jobs exceeds what can fit on a page, Amazon Transcribe Medical returns theNextPage
token. Include the token in the next request to theListMedicalTranscriptionJobs
operation to return the next page of jobs.MedicalTranscriptionJobSummaries
— (Array<map>
)A list of objects containing summary information for a transcription job.
MedicalTranscriptionJobName
— (String
)The name of a medical transcription job.
CreationTime
— (Date
)A timestamp that shows when the medical transcription job was created.
StartTime
— (Date
)A timestamp that shows when the job began processing.
CompletionTime
— (Date
)A timestamp that shows when the job was completed.
LanguageCode
— (String
)The language of the transcript in the source audio file.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
TranscriptionJobStatus
— (String
)The status of the medical transcription job.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
FailureReason
— (String
)If the
TranscriptionJobStatus
field isFAILED
, a description of the error.OutputLocationType
— (String
)Indicates the location of the transcription job's output. This field must be the path of an S3 bucket; if you don't already have an S3 bucket, one is created based on the path you add.
Possible values include:"CUSTOMER_BUCKET"
"SERVICE_BUCKET"
Specialty
— (String
)The medical specialty of the transcription job. Refer to Transcribing a medical conversation for a list of supported specialties.
Possible values include:"PRIMARYCARE"
ContentIdentificationType
— (String
)Shows the type of information you've configured Amazon Transcribe Medical to identify in a transcription job. If the value is PHI, you've configured the transcription job to identify personal health information (PHI).
Possible values include:"PHI"
Type
— (String
)The speech of the clinician in the input audio.
Possible values include:"CONVERSATION"
"DICTATION"
-
(AWS.Response)
—
Returns:
listMedicalVocabularies(params = {}, callback) ⇒ AWS.Request
Returns a list of vocabularies that match the specified criteria. If you don't enter a value in any of the request parameters, returns the entire list of vocabularies.
Service Reference:
Examples:
Calling the listMedicalVocabularies operation
var params = {
MaxResults: 'NUMBER_VALUE',
NameContains: 'STRING_VALUE',
NextToken: 'STRING_VALUE',
StateEquals: PENDING | READY | FAILED
};
transcribeservice.listMedicalVocabularies(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
NextToken
— (String
)If the result of your previous request to
ListMedicalVocabularies
was truncated, include theNextToken
to fetch the next set of vocabularies.MaxResults
— (Integer
)The maximum number of vocabularies to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.
StateEquals
— (String
)When specified, returns only vocabularies with the VocabularyState equal to the specified vocabulary state. Use this field to see which vocabularies are ready for your medical transcription jobs.
Possible values include:"PENDING"
"READY"
"FAILED"
NameContains
— (String
)Returns vocabularies whose names contain the specified string. The search is not case sensitive.
ListMedicalVocabularies
returns both "vocabularyname
" and "VocabularyName
".
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Status
— (String
)The requested vocabulary state.
Possible values include:"PENDING"
"READY"
"FAILED"
NextToken
— (String
)The
ListMedicalVocabularies
operation returns a page of vocabularies at a time. You set the maximum number of vocabularies to return on a page with theMaxResults
parameter. If there are more jobs in the list than will fit on a page, Amazon Transcribe Medical returns theNextPage
token. To return the next page of vocabularies, include the token in the next request to theListMedicalVocabularies
operation.Vocabularies
— (Array<map>
)A list of objects that describe the vocabularies that match your search criteria.
VocabularyName
— (String
)The name of the vocabulary.
LanguageCode
— (String
)The language code of the vocabulary entries.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
LastModifiedTime
— (Date
)The date and time that the vocabulary was last modified.
VocabularyState
— (String
)The processing state of the vocabulary. If the state is READY, you can use the vocabulary in a StartTranscriptionJob request.
Possible values include:"PENDING"
"READY"
"FAILED"
-
(AWS.Response)
—
Returns:
listTagsForResource(params = {}, callback) ⇒ AWS.Request
Lists all tags associated with a given transcription job, vocabulary, or resource.
Service Reference:
Examples:
Calling the listTagsForResource operation
var params = {
ResourceArn: 'STRING_VALUE' /* required */
};
transcribeservice.listTagsForResource(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
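Listing tags on a transcription job by ARN (sketch)
The ResourceArn follows the arn:partition:service:region:account-id:resource-type/resource-id form described below. A minimal sketch; the region, account ID, and job name are placeholders.
var AWS = require('aws-sdk');

var transcribeservice = new AWS.TranscribeService({apiVersion: '2017-10-26'});

// Placeholder values; substitute your own region, account ID, and job name
var jobArn = 'arn:aws:transcribe:us-east-1:111122223333:transcription-job/example-transcription-job';

transcribeservice.listTagsForResource({ResourceArn: jobArn}, function(err, data) {
  if (err) return console.log(err, err.stack);
  data.Tags.forEach(function(tag) {
    console.log(tag.Key + ' = ' + tag.Value);
  });
});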
Parameters:
-
params
(Object)
(defaults to: {})
—
ResourceArn
— (String
)Lists all tags associated with a given Amazon Resource Name (ARN). ARNs have the format
arn:partition:service:region:account-id:resource-type/resource-id
(for example,arn:aws:transcribe:us-east-1:account-id:transcription-job/your-job-name
). Valid values forresource-type
are:transcription-job
,medical-transcription-job
,vocabulary
,medical-vocabulary
,vocabulary-filter
, andlanguage-model
.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:ResourceArn
— (String
)Lists all tags associated with the given Amazon Resource Name (ARN).
Tags
— (Array<map>
)Lists all tags associated with the given transcription job, vocabulary, or resource.
Key
— required — (String
)The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the key is 'Department'.
Value
— required — (String
)The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the value is 'Sales'.
-
(AWS.Response)
—
Returns:
listTranscriptionJobs(params = {}, callback) ⇒ AWS.Request
Lists transcription jobs with the specified status.
Service Reference:
Examples:
Calling the listTranscriptionJobs operation
var params = {
JobNameContains: 'STRING_VALUE',
MaxResults: 'NUMBER_VALUE',
NextToken: 'STRING_VALUE',
Status: QUEUED | IN_PROGRESS | FAILED | COMPLETED
};
transcribeservice.listTranscriptionJobs(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
Status
— (String
)When specified, returns only transcription jobs with the specified status. Jobs are ordered by creation date, with the newest jobs returned first. If you don’t specify a status, Amazon Transcribe returns all transcription jobs ordered by creation date.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
JobNameContains
— (String
)When specified, the jobs returned in the list are limited to jobs whose name contains the specified string.
NextToken
— (String
)If the result of the previous request to
ListTranscriptionJobs
is truncated, include theNextToken
to fetch the next set of jobs.MaxResults
— (Integer
)The maximum number of jobs to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Status
— (String
)The requested status of the jobs returned.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
NextToken
— (String
)The
ListTranscriptionJobs
operation returns a page of jobs at a time. The maximum size of the page is set by the MaxResults
parameter. If there are more jobs in the list than the page size, Amazon Transcribe returns the NextPage
token. Include the token in the next request to the ListTranscriptionJobs
operation to return the next page of jobs. A pagination sketch using the SDK's page iterator follows this operation's reference.
TranscriptionJobSummaries
— (Array<map>
)A list of objects containing summary information for a transcription job.
TranscriptionJobName
— (String
)The name of the transcription job.
CreationTime
— (Date
)A timestamp that shows when the job was created.
StartTime
— (Date
)A timestamp that shows when the job started processing.
CompletionTime
— (Date
)A timestamp that shows when the job was completed.
LanguageCode
— (String
)The language code for the input speech.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
TranscriptionJobStatus
— (String
)The status of the transcription job. When the status is COMPLETED, use the GetTranscriptionJob operation to get the results of the transcription.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
FailureReason
— (String
)If the
TranscriptionJobStatus
field isFAILED
, a description of the error.OutputLocationType
— (String
)Indicates the location of the output of the transcription job.
If the value is
CUSTOMER_BUCKET
then the location is the S3 bucket specified in theoutputBucketName
field when the transcription job was started with theStartTranscriptionJob
operation.If the value is
Possible values include:SERVICE_BUCKET
then the output is stored by Amazon Transcribe and can be retrieved using the URI in theGetTranscriptionJob
response'sTranscriptFileUri
field."CUSTOMER_BUCKET"
"SERVICE_BUCKET"
ContentRedaction
— (map
)The content redaction settings of the transcription job.
RedactionType
— required — (String
)Request parameter that defines the entities to be redacted. The only accepted value is PII.
Possible values include:"PII"
RedactionOutput
— required — (String
)The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted Amazon Transcribe outputs only the redacted transcript. When you choose redacted_and_unredacted Amazon Transcribe outputs both the redacted and unredacted transcripts.
Possible values include:"redacted"
"redacted_and_unredacted"
ModelSettings
— (map
)The object used to call your custom language model to your transcription job.
LanguageModelName
— (String
)The name of your custom language model.
IdentifyLanguage
— (Boolean
)Whether automatic language identification was enabled for a transcription job.
IdentifiedLanguageScore
— (Float
)A value between zero and one that Amazon Transcribe assigned to the language it identified in the source audio. A higher score indicates that Amazon Transcribe is more confident in the language it identified.
-
(AWS.Response)
—
Returns:
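Because ListTranscriptionJobs is pageable, one way to iterate its pages without managing NextToken yourself is the eachPage helper on the AWS.Request object that the SDK returns. The sketch below is a minimal, hedged example of that approach; the COMPLETED status filter and MaxResults value of 100 are illustrative, and the transcribeservice client is assumed to be constructed as shown earlier in this reference.
transcribeservice.listTranscriptionJobs({ Status: 'COMPLETED', MaxResults: 100 })
  .eachPage(function (err, data) {
    if (err) { console.log(err, err.stack); return false; } // stop on error
    if (!data) return true;                                 // no more pages
    data.TranscriptionJobSummaries.forEach(function (job) {
      console.log(job.TranscriptionJobName, job.TranscriptionJobStatus);
    });
    return true;                                            // fetch the next page
  });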
listVocabularies(params = {}, callback) ⇒ AWS.Request
Returns a list of vocabularies that match the specified criteria. If no criteria are specified, returns the entire list of vocabularies.
Service Reference:
Examples:
Calling the listVocabularies operation
var params = {
MaxResults: 'NUMBER_VALUE',
NameContains: 'STRING_VALUE',
NextToken: 'STRING_VALUE',
StateEquals: PENDING | READY | FAILED
};
transcribeservice.listVocabularies(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
NextToken
— (String
)If the result of the previous request to
ListVocabularies
was truncated, include theNextToken
to fetch the next set of jobs.MaxResults
— (Integer
)The maximum number of vocabularies to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.
StateEquals
— (String
)When specified, only returns vocabularies with the VocabularyState field equal to the specified state.
Possible values include:"PENDING"
"READY"
"FAILED"
NameContains
— (String
)When specified, the vocabularies returned in the list are limited to vocabularies whose name contains the specified string. The search is not case sensitive;
ListVocabularies
returns both "vocabularyname" and "VocabularyName" in the response list.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:Status
— (String
)The requested vocabulary state.
Possible values include:"PENDING"
"READY"
"FAILED"
NextToken
— (String
)The
ListVocabularies
operation returns a page of vocabularies at a time. The maximum size of the page is set in the MaxResults
parameter. If there are more vocabularies in the list than will fit on the page, Amazon Transcribe returns the NextPage
token. To return the next page of vocabularies, include the token in the next request to the ListVocabularies
operation. A promise-based pagination sketch follows this operation's reference.
Vocabularies
— (Array<map>
)A list of objects that describe the vocabularies that match the search criteria in the request.
VocabularyName
— (String
)The name of the vocabulary.
LanguageCode
— (String
)The language code of the vocabulary entries.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
LastModifiedTime
— (Date
)The date and time that the vocabulary was last modified.
VocabularyState
— (String
)The processing state of the vocabulary. If the state is READY you can use the vocabulary in a StartTranscriptionJob request.
Possible values include:"PENDING"
"READY"
"FAILED"
-
(AWS.Response)
—
Returns:
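The same NextToken pattern can also be written with promises, since each SDK request exposes a promise() method. The following is a minimal async/await sketch; the READY state filter, the MaxResults value, and the listReadyVocabularies helper name are illustrative, and the transcribeservice client is assumed to be constructed as shown earlier in this reference.
// Hypothetical helper (not part of the SDK): collects every READY vocabulary.
async function listReadyVocabularies() {
  var vocabularies = [];
  var nextToken;
  do {
    var params = { StateEquals: 'READY', MaxResults: 100 };
    if (nextToken) params.NextToken = nextToken;        // request the next page
    var data = await transcribeservice.listVocabularies(params).promise();
    vocabularies = vocabularies.concat(data.Vocabularies || []);
    nextToken = data.NextToken;                         // undefined on the last page
  } while (nextToken);
  return vocabularies;
}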
listVocabularyFilters(params = {}, callback) ⇒ AWS.Request
Gets information about vocabulary filters.
Service Reference:
Examples:
Calling the listVocabularyFilters operation
var params = {
MaxResults: 'NUMBER_VALUE',
NameContains: 'STRING_VALUE',
NextToken: 'STRING_VALUE'
};
transcribeservice.listVocabularyFilters(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
NextToken
— (String
)If the result of the previous request to
ListVocabularyFilters
was truncated, include theNextToken
to fetch the next set of collections.MaxResults
— (Integer
)The maximum number of filters to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you do not specify a value, the default of 5 is used.
NameContains
— (String
)Filters the response so that it only contains vocabulary filters whose name contains the specified string.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:NextToken
— (String
)The
ListVocabularyFilters
operation returns a page of collections at a time. The maximum size of the page is set by the MaxResults
parameter. If there are more filters in the list than the page size, Amazon Transcribe returns the NextPage
token. Include the token in the next request to the ListVocabularyFilters
operation to return the next page of filters.
VocabularyFilters
— (Array<map>
)The list of vocabulary filters. It contains at most
MaxResults
number of filters. If there are more filters, call theListVocabularyFilters
operation again with theNextToken
parameter in the request set to the value of theNextToken
field in the response.VocabularyFilterName
— (String
)The name of the vocabulary filter. The name must be unique in the account that holds the filter.
LanguageCode
— (String
)The language code of the words in the vocabulary filter.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
LastModifiedTime
— (Date
)The date and time that the vocabulary was last updated.
-
(AWS.Response)
—
Returns:
startCallAnalyticsJob(params = {}, callback) ⇒ AWS.Request
Starts an asynchronous analytics job that not only transcribes the audio recording of a caller and agent, but also returns additional insights. These insights include how quickly or loudly the caller or agent was speaking. To retrieve additional insights with your analytics jobs, create categories. A category is a way to classify analytics jobs based on attributes, such as a customer's sentiment or a particular phrase being used during the call. For more information, see the operation.
Service Reference:
Examples:
Calling the startCallAnalyticsJob operation
var params = {
CallAnalyticsJobName: 'STRING_VALUE', /* required */
DataAccessRoleArn: 'STRING_VALUE', /* required */
Media: { /* required */
MediaFileUri: 'STRING_VALUE',
RedactedMediaFileUri: 'STRING_VALUE'
},
ChannelDefinitions: [
{
ChannelId: 'NUMBER_VALUE',
ParticipantRole: AGENT | CUSTOMER
},
/* more items */
],
OutputEncryptionKMSKeyId: 'STRING_VALUE',
OutputLocation: 'STRING_VALUE',
Settings: {
ContentRedaction: {
RedactionOutput: redacted | redacted_and_unredacted, /* required */
RedactionType: PII /* required */
},
LanguageIdSettings: {
'<LanguageCode>': {
LanguageModelName: 'STRING_VALUE',
VocabularyFilterName: 'STRING_VALUE',
VocabularyName: 'STRING_VALUE'
},
/* '<LanguageCode>': ... */
},
LanguageModelName: 'STRING_VALUE',
LanguageOptions: [
af-ZA | ar-AE | ar-SA | cy-GB | da-DK | de-CH | de-DE | en-AB | en-AU | en-GB | en-IE | en-IN | en-US | en-WL | es-ES | es-US | fa-IR | fr-CA | fr-FR | ga-IE | gd-GB | he-IL | hi-IN | id-ID | it-IT | ja-JP | ko-KR | ms-MY | nl-NL | pt-BR | pt-PT | ru-RU | ta-IN | te-IN | tr-TR | zh-CN | zh-TW | th-TH | en-ZA | en-NZ,
/* more items */
],
VocabularyFilterMethod: remove | mask | tag,
VocabularyFilterName: 'STRING_VALUE',
VocabularyName: 'STRING_VALUE'
}
};
transcribeservice.startCallAnalyticsJob(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
CallAnalyticsJobName
— (String
)The name of the call analytics job. You can't use the strings "." or ".." by themselves as the job name. The name must also be unique within an Amazon Web Services account. If you try to create a call analytics job with the same name as a previous call analytics job, you get a
ConflictException
error.Media
— (map
)Describes the input media file in a transcription request.
MediaFileUri
— (String
)The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri
— (String
)The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
OutputLocation
— (String
)The Amazon S3 location where the output of the call analytics job is stored. You can provide the following location types to store the output of call analytics job:
-
s3://DOC-EXAMPLE-BUCKET1
If you specify a bucket, Amazon Transcribe saves the output of the analytics job as a JSON file at the root level of the bucket.
-
s3://DOC-EXAMPLE-BUCKET1/folder/
If you specify a path, Amazon Transcribe saves the output of the analytics job as s3://DOC-EXAMPLE-BUCKET1/folder/your-transcription-job-name.json
If you specify a folder, you must provide a trailing slash.
-
s3://DOC-EXAMPLE-BUCKET1/folder/filename.json
If you provide a path that has the filename specified, Amazon Transcribe saves the output of the analytics job as s3://DOC-EXAMPLE-BUCKET1/folder/filename.json
You can specify an Amazon Web Services Key Management Service (KMS) key to encrypt the output of your analytics job using the
OutputEncryptionKMSKeyId
parameter. If you don't specify a KMS key, Amazon Transcribe uses the default Amazon S3 key for server-side encryption of the analytics job output that is placed in your S3 bucket.-
OutputEncryptionKMSKeyId
— (String
)The Amazon Resource Name (ARN) of the Amazon Web Services Key Management Service key used to encrypt the output of the call analytics job. The user calling the operation must have permission to use the specified KMS key.
You use either of the following to identify an Amazon Web Services KMS key in the current account:
-
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
-
KMS Key Alias: "alias/ExampleAlias"
You can use either of the following to identify a KMS key in the current account or another account:
-
Amazon Resource Name (ARN) of a KMS key in the current account or another account: "arn:aws:kms:region:account ID:key/1234abcd-12ab-34cd-56ef1234567890ab"
-
ARN of a KMS Key Alias: "arn:aws:kms:region:account ID:alias/ExampleAlias"
If you don't specify an encryption key, the output of the call analytics job is encrypted with the default Amazon S3 key (SSE-S3).
If you specify a KMS key to encrypt your output, you must also specify an output location in the
OutputLocation
parameter.-
DataAccessRoleArn
— (String
)The Amazon Resource Name (ARN) of a role that has access to the S3 bucket that contains your input files. Amazon Transcribe assumes this role to read queued audio files. If you have specified an output S3 bucket for your transcription results, this role should have access to the output bucket as well.
Settings
— (map
)A
Settings
object that provides optional settings for a call analytics job.VocabularyName
— (String
)The name of a vocabulary to use when processing the call analytics job.
VocabularyFilterName
— (String
)The name of the vocabulary filter to use when running a call analytics job. The filter that you specify must have the same language code as the analytics job.
VocabularyFilterMethod
— (String
)Set to mask to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
Possible values include:"remove"
"mask"
"tag"
LanguageModelName
— (String
)The name of the custom language model to use when processing the call analytics job.
ContentRedaction
— (map
)Settings for content redaction within a transcription job.
RedactionType
— required — (String
)Request parameter that defines the entities to be redacted. The only accepted value is PII.
Possible values include:"PII"
RedactionOutput
— required — (String
)The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted Amazon Transcribe outputs only the redacted transcript. When you choose redacted_and_unredacted Amazon Transcribe outputs both the redacted and unredacted transcripts.
Possible values include:"redacted"
"redacted_and_unredacted"
LanguageOptions
— (Array<String>
)When you run a call analytics job, you can specify the language spoken in the audio, or you can have Amazon Transcribe identify the language for you.
To specify a language, specify an array with one language code. If you don't know the language, you can leave this field blank and Amazon Transcribe will use machine learning to identify the language for you. To improve the ability of Amazon Transcribe to correctly identify the language, you can provide an array of the languages that can be present in the audio. Refer to Supported languages and language-specific features for additional information.
LanguageIdSettings
— (map<map>
)The language identification settings associated with your call analytics job. These settings include
VocabularyName
,VocabularyFilterName
, andLanguageModelName
.VocabularyName
— (String
)The name of the vocabulary you want to use when processing your transcription job. The vocabulary you specify must have the same language code as the transcription job; if the languages don't match, the vocabulary won't be applied.
VocabularyFilterName
— (String
)The name of the vocabulary filter you want to use when transcribing your audio. The filter you specify must have the same language code as the transcription job; if the languages don't match, the vocabulary filter won't be applied.
LanguageModelName
— (String
)The name of the language model you want to use when transcribing your audio. The model you specify must have the same language code as the transcription job; if the languages don't match, the language model won't be applied.
ChannelDefinitions
— (Array<map>
)When you start a call analytics job, you must pass an array that maps the agent and the customer to specific audio channels. The values you can assign to a channel are 0 and 1. The agent and the customer must each have their own channel. You can't assign more than one channel to an agent or customer.
ChannelId
— (Integer
)A value that indicates the audio channel.
ParticipantRole
— (String
)Indicates whether the person speaking on the audio channel is the agent or customer.
Possible values include:"AGENT"
"CUSTOMER"
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:CallAnalyticsJob
— (map
)An object containing the details of the asynchronous call analytics job.
CallAnalyticsJobName
— (String
)The name of the call analytics job.
CallAnalyticsJobStatus
— (String
)The status of the analytics job.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
LanguageCode
— (String
)If you know the language spoken between the customer and the agent, specify a language code for this field.
If you don't know the language, you can leave this field blank, and Amazon Transcribe will use machine learning to automatically identify the language. To improve the accuracy of language identification, you can provide an array containing the possible language codes for the language spoken in your audio. Refer to Supported languages and language-specific features for additional information.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
MediaSampleRateHertz
— (Integer
)The sample rate, in Hertz, of the audio.
MediaFormat
— (String
)The format of the input audio file. Note: for call analytics jobs, only the following media formats are supported: MP3, MP4, WAV, FLAC, OGG, and WebM.
Possible values include:"mp3"
"mp4"
"wav"
"flac"
"ogg"
"amr"
"webm"
Media
— (map
)Describes the input media file in a transcription request.
MediaFileUri
— (String
)The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri
— (String
)The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Transcript
— (map
)Identifies the location of a transcription.
TranscriptFileUri
— (String
)The S3 object location of the transcript.
Use this URI to access the transcript. If you specified an S3 bucket in the
OutputBucketName
field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.RedactedTranscriptFileUri
— (String
)The S3 object location of the redacted transcript.
Use this URI to access the redacted transcript. If you specified an S3 bucket in the
OutputBucketName
field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
StartTime
— (Date
)A timestamp that shows when the analytics job started processing.
CreationTime
— (Date
)A timestamp that shows when the analytics job was created.
CompletionTime
— (Date
)A timestamp that shows when the analytics job was completed.
FailureReason
— (String
)If the
AnalyticsJobStatus
isFAILED
, this field contains information about why the job failed.The
FailureReason
field can contain one of the following values:-
Unsupported media format
: The media format specified in theMediaFormat
field of the request isn't valid. See the description of theMediaFormat
field for a list of valid values. -
The media format provided does not match the detected media format
: The media format of the audio file doesn't match the format specified in theMediaFormat
field in the request. Check the media format of your media file and make sure the two values match. -
Invalid sample rate for audio file
: The sample rate specified in theMediaSampleRateHertz
of the request isn't valid. The sample rate must be between 8,000 and 48,000 Hertz. -
The sample rate provided does not match the detected sample rate
: The sample rate in the audio file doesn't match the sample rate specified in theMediaSampleRateHertz
field in the request. Check the sample rate of your media file and make sure that the two values match. -
Invalid file size: file size too large
: The size of your audio file is larger than what Amazon Transcribe Medical can process. For more information, see Guidelines and Quotas in the Amazon Transcribe Medical Guide. -
Invalid number of channels: number of channels too large
: Your audio contains more channels than Amazon Transcribe Medical is configured to process. To request additional channels, see Amazon Transcribe Medical Endpoints and Quotas in the Amazon Web Services General Reference.
-
DataAccessRoleArn
— (String
)The Amazon Resource Name (ARN) that you use to access the analytics job. ARNs have the format
arn:partition:service:region:account-id:resource-type/resource-id
.IdentifiedLanguageScore
— (Float
)A value between zero and one that Amazon Transcribe assigned to the language that it identified in the source audio. This value appears only when you don't provide a single language code. Larger values indicate that Amazon Transcribe has higher confidence in the language that it identified
Settings
— (map
)Provides information about the settings used to run a transcription job.
VocabularyName
— (String
)The name of a vocabulary to use when processing the call analytics job.
VocabularyFilterName
— (String
)The name of the vocabulary filter to use when running a call analytics job. The filter that you specify must have the same language code as the analytics job.
VocabularyFilterMethod
— (String
)Set to mask to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove to remove filtered text from the transcript without using placeholder text. Set to tag to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag, the words matching your vocabulary filter are not masked or removed.
Possible values include:"remove"
"mask"
"tag"
LanguageModelName
— (String
)The name of the custom language model used when processing the call analytics job.
ContentRedaction
— (map
)Settings for content redaction within a transcription job.
RedactionType
— required — (String
)Request parameter that defines the entities to be redacted. The only accepted value is PII.
Possible values include:"PII"
RedactionOutput
— required — (String
)The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose redacted Amazon Transcribe outputs only the redacted transcript. When you choose redacted_and_unredacted Amazon Transcribe outputs both the redacted and unredacted transcripts.
Possible values include:"redacted"
"redacted_and_unredacted"
LanguageOptions
— (Array<String>
)When you run a call analytics job, you can specify the language spoken in the audio, or you can have Amazon Transcribe identify the language for you.
To specify a language, specify an array with one language code. If you don't know the language, you can leave this field blank and Amazon Transcribe will use machine learning to identify the language for you. To improve the ability of Amazon Transcribe to correctly identify the language, you can provide an array of the languages that can be present in the audio. Refer to Supported languages and language-specific features for additional information.
LanguageIdSettings
— (map<map>
)The language identification settings associated with your call analytics job. These settings include
VocabularyName
,VocabularyFilterName
, andLanguageModelName
.VocabularyName
— (String
)The name of the vocabulary you want to use when processing your transcription job. The vocabulary you specify must have the same language code as the transcription job; if the languages don't match, the vocabulary won't be applied.
VocabularyFilterName
— (String
)The name of the vocabulary filter you want to use when transcribing your audio. The filter you specify must have the same language code as the transcription job; if the languages don't match, the vocabulary filter won't be applied.
LanguageModelName
— (String
)The name of the language model you want to use when transcribing your audio. The model you specify must have the same language code as the transcription job; if the languages don't match, the language model won't be applied.
ChannelDefinitions
— (Array<map>
)Shows numeric values to indicate the channel assigned to the agent's audio and the channel assigned to the customer's audio.
ChannelId
— (Integer
)A value that indicates the audio channel.
ParticipantRole
— (String
)Indicates whether the person speaking on the audio channel is the agent or customer.
Possible values include:"AGENT"
"CUSTOMER"
-
(AWS.Response)
—
Returns:
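Putting the parameters above together, the following is a hedged sketch of a minimal call analytics request that maps the agent to channel 0 and the customer to channel 1 and writes output to a bucket prefix. The job name, IAM role ARN, and bucket name are placeholders that would need to exist in your account; the transcribeservice client is assumed to be constructed as shown earlier in this reference.
var params = {
  CallAnalyticsJobName: 'example-analytics-job',   // placeholder job name
  DataAccessRoleArn: 'arn:aws:iam::111122223333:role/ExampleTranscribeRole', // placeholder role
  Media: {
    MediaFileUri: 's3://DOC-EXAMPLE-BUCKET1/calls/example-call.wav'
  },
  OutputLocation: 's3://DOC-EXAMPLE-BUCKET1/analytics/', // trailing slash: treated as a folder
  ChannelDefinitions: [
    { ChannelId: 0, ParticipantRole: 'AGENT' },
    { ChannelId: 1, ParticipantRole: 'CUSTOMER' }
  ]
};
transcribeservice.startCallAnalyticsJob(params, function (err, data) {
  if (err) console.log(err, err.stack);                           // an error occurred
  else console.log(data.CallAnalyticsJob.CallAnalyticsJobStatus); // e.g. 'IN_PROGRESS'
});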
startMedicalTranscriptionJob(params = {}, callback) ⇒ AWS.Request
Starts a batch job to transcribe medical speech to text.
Service Reference:
Examples:
Calling the startMedicalTranscriptionJob operation
var params = {
LanguageCode: af-ZA | ar-AE | ar-SA | cy-GB | da-DK | de-CH | de-DE | en-AB | en-AU | en-GB | en-IE | en-IN | en-US | en-WL | es-ES | es-US | fa-IR | fr-CA | fr-FR | ga-IE | gd-GB | he-IL | hi-IN | id-ID | it-IT | ja-JP | ko-KR | ms-MY | nl-NL | pt-BR | pt-PT | ru-RU | ta-IN | te-IN | tr-TR | zh-CN | zh-TW | th-TH | en-ZA | en-NZ, /* required */
Media: { /* required */
MediaFileUri: 'STRING_VALUE',
RedactedMediaFileUri: 'STRING_VALUE'
},
MedicalTranscriptionJobName: 'STRING_VALUE', /* required */
OutputBucketName: 'STRING_VALUE', /* required */
Specialty: PRIMARYCARE, /* required */
Type: CONVERSATION | DICTATION, /* required */
ContentIdentificationType: PHI,
KMSEncryptionContext: {
'<NonEmptyString>': 'STRING_VALUE',
/* '<NonEmptyString>': ... */
},
MediaFormat: mp3 | mp4 | wav | flac | ogg | amr | webm,
MediaSampleRateHertz: 'NUMBER_VALUE',
OutputEncryptionKMSKeyId: 'STRING_VALUE',
OutputKey: 'STRING_VALUE',
Settings: {
ChannelIdentification: true || false,
MaxAlternatives: 'NUMBER_VALUE',
MaxSpeakerLabels: 'NUMBER_VALUE',
ShowAlternatives: true || false,
ShowSpeakerLabels: true || false,
VocabularyName: 'STRING_VALUE'
},
Tags: [
{
Key: 'STRING_VALUE', /* required */
Value: 'STRING_VALUE' /* required */
},
/* more items */
]
};
transcribeservice.startMedicalTranscriptionJob(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
MedicalTranscriptionJobName
— (String
)The name of the medical transcription job. You can't use the strings "
.
" or "..
" by themselves as the job name. The name must also be unique within an Amazon Web Services account. If you try to create a medical transcription job with the same name as a previous medical transcription job, you get aConflictException
error.LanguageCode
— (String
)The language code for the language spoken in the input media file. US English (en-US) is the only valid value for medical transcription jobs. Any other value you enter for the language code results in a BadRequestException error.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
MediaSampleRateHertz
— (Integer
)The sample rate, in Hertz, of the audio track in the input media file.
If you do not specify the media sample rate, Amazon Transcribe Medical determines the sample rate. If you specify the sample rate, it must match the rate detected by Amazon Transcribe Medical. In most cases, you should leave the
MediaSampleRateHertz
field blank and let Amazon Transcribe Medical determine the sample rate.MediaFormat
— (String
)The audio format of the input media file.
Possible values include:"mp3"
"mp4"
"wav"
"flac"
"ogg"
"amr"
"webm"
Media
— (map
)Describes the input media file in a transcription request.
MediaFileUri
— (String
)The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri
— (String
)The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
OutputBucketName
— (String
)The Amazon S3 location where the transcription is stored.
You must set
OutputBucketName
for Amazon Transcribe Medical to store the transcription results. Your transcript appears in the S3 location you specify. When you call the GetMedicalTranscriptionJob, the operation returns this location in theTranscriptFileUri
field. The S3 bucket must have permissions that allow Amazon Transcribe Medical to put files in the bucket. For more information, see Permissions Required for IAM User Roles.You can specify an Amazon Web Services Key Management Service (KMS) key to encrypt the output of your transcription using the
OutputEncryptionKMSKeyId
parameter. If you don't specify a KMS key, Amazon Transcribe Medical uses the default Amazon S3 key for server-side encryption of transcripts that are placed in your S3 bucket.OutputKey
— (String
)You can specify a location in an Amazon S3 bucket to store the output of your medical transcription job.
If you don't specify an output key, Amazon Transcribe Medical stores the output of your transcription job in the Amazon S3 bucket you specified. By default, the object key is "your-transcription-job-name.json".
You can use output keys to specify the Amazon S3 prefix and file name of the transcription output. For example, specifying the Amazon S3 prefix, "folder1/folder2/", as an output key would lead to the output being stored as "folder1/folder2/your-transcription-job-name.json". If you specify "my-other-job-name.json" as the output key, the object key is changed to "my-other-job-name.json". You can use an output key to change both the prefix and the file name, for example "folder/my-other-job-name.json".
If you specify an output key, you must also specify an S3 bucket in the
OutputBucketName
parameter.OutputEncryptionKMSKeyId
— (String
)The Amazon Resource Name (ARN) of the Amazon Web Services Key Management Service (KMS) key used to encrypt the output of the transcription job. The user calling the StartMedicalTranscriptionJob operation must have permission to use the specified KMS key.
You use either of the following to identify a KMS key in the current account:
-
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
-
KMS Key Alias: "alias/ExampleAlias"
You can use either of the following to identify a KMS key in the current account or another account:
-
Amazon Resource Name (ARN) of a KMS key in the current account or another account: "arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab"
-
ARN of a KMS Key Alias: "arn:aws:kms:region:account ID:alias/ExampleAlias"
If you don't specify an encryption key, the output of the medical transcription job is encrypted with the default Amazon S3 key (SSE-S3).
If you specify a KMS key to encrypt your output, you must also specify an output location in the
OutputBucketName
parameter.-
KMSEncryptionContext
— (map<String>
)A map of plain text, non-secret key:value pairs, known as encryption context pairs, that provide an added layer of security for your data.
Settings
— (map
)Optional settings for the medical transcription job.
ShowSpeakerLabels
— (Boolean
)Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the
ShowSpeakerLabels
field to true, you must also set the maximum number of speaker labels in theMaxSpeakerLabels
field.You can't set both
ShowSpeakerLabels
andChannelIdentification
in the same request. If you set both, your request returns aBadRequestException
.MaxSpeakerLabels
— (Integer
)The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the
MaxSpeakerLabels
field, you must set theShowSpeakerLabels
field to true.ChannelIdentification
— (Boolean
)Instructs Amazon Transcribe Medical to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe Medical also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item. The alternative transcriptions also come with confidence scores provided by Amazon Transcribe Medical.
You can't set both
ShowSpeakerLabels
andChannelIdentification
in the same request. If you set both, your request returns aBadRequestException
ShowAlternatives
— (Boolean
)Determines whether alternative transcripts are generated along with the transcript that has the highest confidence. If you set the
ShowAlternatives
field to true, you must also set the maximum number of alternatives to return in theMaxAlternatives
field.MaxAlternatives
— (Integer
)The maximum number of alternatives that you tell the service to return. If you specify the
MaxAlternatives
field, you must set theShowAlternatives
field to true.VocabularyName
— (String
)The name of the vocabulary to use when processing a medical transcription job.
ContentIdentificationType
— (String
)You can configure Amazon Transcribe Medical to label content in the transcription output. If you specify PHI, Amazon Transcribe Medical labels the personal health information (PHI) that it identifies in the transcription output.
Possible values include:"PHI"
Specialty
— (String
)The medical specialty of any clinician speaking in the input media.
Possible values include:"PRIMARYCARE"
Type
— (String
)The type of speech in the input audio. CONVERSATION refers to conversations between two or more speakers, e.g., a conversation between doctors and patients. DICTATION refers to single-speaker dictated speech, such as clinical notes.
Possible values include:"CONVERSATION"
"DICTATION"
Tags
— (Array<map>
)Add tags to an Amazon Transcribe medical transcription job.
Key
— required — (String
)The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the key is 'Department'.
Value
— required — (String
)The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the value is 'Sales'.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:MedicalTranscriptionJob
— (map
)A batch job submitted to transcribe medical speech to text.
MedicalTranscriptionJobName
— (String
)The name for a given medical transcription job.
TranscriptionJobStatus
— (String
)The completion status of a medical transcription job.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
LanguageCode
— (String
)The language code for the language spoken in the source audio file. US English (en-US) is the only supported language for medical transcriptions. Any other value you enter for language code results in a BadRequestException error.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
MediaSampleRateHertz
— (Integer
)The sample rate, in Hertz, of the source audio containing medical information.
If you don't specify the sample rate, Amazon Transcribe Medical determines it for you. If you choose to specify the sample rate, it must match the rate detected by Amazon Transcribe Medical. In most cases, you should leave the
MediaSampleRateHertz
field blank and let Amazon Transcribe Medical determine the sample rate.MediaFormat
— (String
)The format of the input media file.
Possible values include:"mp3"
"mp4"
"wav"
"flac"
"ogg"
"amr"
"webm"
Media
— (map
)Describes the input media file in a transcription request.
MediaFileUri
— (String
)The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri
— (String
)The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Transcript
— (map
)An object that contains the
MedicalTranscript
. TheMedicalTranscript
contains theTranscriptFileUri
.TranscriptFileUri
— (String
)The S3 object location of the medical transcript.
Use this URI to access the medical transcript. This URI points to the S3 bucket you created to store the medical transcript.
StartTime
— (Date
)A timestamp that shows when the job started processing.
CreationTime
— (Date
)A timestamp that shows when the job was created.
CompletionTime
— (Date
)A timestamp that shows when the job was completed.
FailureReason
— (String
)If the
TranscriptionJobStatus
field isFAILED
, this field contains information about why the job failed.The
FailureReason
field contains one of the following values:-
Unsupported media format
- The media format specified in theMediaFormat
field of the request isn't valid. See the description of theMediaFormat
field for a list of valid values. -
The media format provided does not match the detected media format
- The media format of the audio file doesn't match the format specified in theMediaFormat
field in the request. Check the media format of your media file and make sure the two values match. -
Invalid sample rate for audio file
- The sample rate specified in theMediaSampleRateHertz
of the request isn't valid. The sample rate must be between 8,000 and 48,000 Hertz. -
The sample rate provided does not match the detected sample rate
- The sample rate in the audio file doesn't match the sample rate specified in theMediaSampleRateHertz
field in the request. Check the sample rate of your media file and make sure that the two values match. -
Invalid file size: file size too large
- The size of your audio file is larger than what Amazon Transcribe Medical can process. For more information, see Guidelines and Quotas in the Amazon Transcribe Medical Guide -
Invalid number of channels: number of channels too large
- Your audio contains more channels than Amazon Transcribe Medical is configured to process. To request additional channels, see Amazon Transcribe Medical Endpoints and Quotas in the Amazon Web Services General Reference
-
Settings
— (map
)An object that contains the settings used for the medical transcription job.
ShowSpeakerLabels
— (Boolean
)Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the
ShowSpeakerLabels
field to true, you must also set the maximum number of speaker labels in theMaxSpeakerLabels
field.You can't set both
ShowSpeakerLabels
andChannelIdentification
in the same request. If you set both, your request returns aBadRequestException
.MaxSpeakerLabels
— (Integer
)The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the
MaxSpeakerLabels
field, you must set theShowSpeakerLabels
field to true.ChannelIdentification
— (Boolean
)Instructs Amazon Transcribe Medical to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe Medical also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item. The alternative transcriptions also come with confidence scores provided by Amazon Transcribe Medical.
You can't set both
ShowSpeakerLabels
andChannelIdentification
in the same request. If you set both, your request returns aBadRequestException
ShowAlternatives
— (Boolean
)Determines whether alternative transcripts are generated along with the transcript that has the highest confidence. If you set the
ShowAlternatives
field to true, you must also set the maximum number of alternatives to return in theMaxAlternatives
field.MaxAlternatives
— (Integer
)The maximum number of alternatives that you tell the service to return. If you specify the
MaxAlternatives
field, you must set theShowAlternatives
field to true.VocabularyName
— (String
)The name of the vocabulary to use when processing a medical transcription job.
ContentIdentificationType
— (String
)Shows the type of content that you've configured Amazon Transcribe Medical to identify in a transcription job. If the value is PHI, you've configured the job to identify personal health information (PHI) in the transcription output.
Possible values include:"PHI"
Specialty
— (String
)The medical specialty of any clinicians providing a dictation or having a conversation. Refer to Transcribing a medical conversation for a list of supported specialties.
Possible values include:"PRIMARYCARE"
Type
— (String
)The type of speech in the transcription job. CONVERSATION is generally used for patient-physician dialogues. DICTATION is the setting for physicians speaking their notes after seeing a patient. For more information, see What is Amazon Transcribe Medical?.
Possible values include:"CONVERSATION"
"DICTATION"
Tags
— (Array<map>
)A key:value pair assigned to a given medical transcription job.
Key
— required — (String
)The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the key is 'Department'.
Value
— required — (String
)The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the value is 'Sales'.
-
(AWS.Response)
—
Returns:
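As a hedged, concrete illustration of the parameters above, the sketch below starts a conversation-type medical transcription with speaker labels and an explicit output key. The job name, bucket name, and media URI are placeholders; en-US is required for medical jobs, and the transcribeservice client is assumed to be constructed as shown earlier in this reference.
var params = {
  MedicalTranscriptionJobName: 'example-medical-job',   // placeholder job name
  LanguageCode: 'en-US',                                // only supported language
  Media: { MediaFileUri: 's3://DOC-EXAMPLE-BUCKET1/visits/example-visit.mp3' },
  OutputBucketName: 'DOC-EXAMPLE-BUCKET1',              // placeholder bucket
  OutputKey: 'medical/example-medical-job.json',        // prefix plus file name
  Specialty: 'PRIMARYCARE',
  Type: 'CONVERSATION',
  Settings: {
    ShowSpeakerLabels: true,
    MaxSpeakerLabels: 2                                 // e.g. clinician and patient
  }
};
transcribeservice.startMedicalTranscriptionJob(params, function (err, data) {
  if (err) console.log(err, err.stack);                 // an error occurred
  else console.log(data.MedicalTranscriptionJob.TranscriptionJobStatus);
});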
startTranscriptionJob(params = {}, callback) ⇒ AWS.Request
Starts an asynchronous job to transcribe speech to text.
Service Reference:
Examples:
Calling the startTranscriptionJob operation
var params = {
Media: { /* required */
MediaFileUri: 'STRING_VALUE',
RedactedMediaFileUri: 'STRING_VALUE'
},
TranscriptionJobName: 'STRING_VALUE', /* required */
ContentRedaction: {
RedactionOutput: redacted | redacted_and_unredacted, /* required */
RedactionType: PII /* required */
},
IdentifyLanguage: true || false,
JobExecutionSettings: {
AllowDeferredExecution: true || false,
DataAccessRoleArn: 'STRING_VALUE'
},
KMSEncryptionContext: {
'<NonEmptyString>': 'STRING_VALUE',
/* '<NonEmptyString>': ... */
},
LanguageCode: af-ZA | ar-AE | ar-SA | cy-GB | da-DK | de-CH | de-DE | en-AB | en-AU | en-GB | en-IE | en-IN | en-US | en-WL | es-ES | es-US | fa-IR | fr-CA | fr-FR | ga-IE | gd-GB | he-IL | hi-IN | id-ID | it-IT | ja-JP | ko-KR | ms-MY | nl-NL | pt-BR | pt-PT | ru-RU | ta-IN | te-IN | tr-TR | zh-CN | zh-TW | th-TH | en-ZA | en-NZ,
LanguageIdSettings: {
'<LanguageCode>': {
LanguageModelName: 'STRING_VALUE',
VocabularyFilterName: 'STRING_VALUE',
VocabularyName: 'STRING_VALUE'
},
/* '<LanguageCode>': ... */
},
LanguageOptions: [
af-ZA | ar-AE | ar-SA | cy-GB | da-DK | de-CH | de-DE | en-AB | en-AU | en-GB | en-IE | en-IN | en-US | en-WL | es-ES | es-US | fa-IR | fr-CA | fr-FR | ga-IE | gd-GB | he-IL | hi-IN | id-ID | it-IT | ja-JP | ko-KR | ms-MY | nl-NL | pt-BR | pt-PT | ru-RU | ta-IN | te-IN | tr-TR | zh-CN | zh-TW | th-TH | en-ZA | en-NZ,
/* more items */
],
MediaFormat: mp3 | mp4 | wav | flac | ogg | amr | webm,
MediaSampleRateHertz: 'NUMBER_VALUE',
ModelSettings: {
LanguageModelName: 'STRING_VALUE'
},
OutputBucketName: 'STRING_VALUE',
OutputEncryptionKMSKeyId: 'STRING_VALUE',
OutputKey: 'STRING_VALUE',
Settings: {
ChannelIdentification: true || false,
MaxAlternatives: 'NUMBER_VALUE',
MaxSpeakerLabels: 'NUMBER_VALUE',
ShowAlternatives: true || false,
ShowSpeakerLabels: true || false,
VocabularyFilterMethod: remove | mask | tag,
VocabularyFilterName: 'STRING_VALUE',
VocabularyName: 'STRING_VALUE'
},
Subtitles: {
Formats: [
vtt | srt,
/* more items */
]
},
Tags: [
{
Key: 'STRING_VALUE', /* required */
Value: 'STRING_VALUE' /* required */
},
/* more items */
]
};
transcribeservice.startTranscriptionJob(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
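A more concrete, hedged sketch of the same request shape follows, with content redaction and an explicit output location; the job name, bucket name, and media URI are placeholders, and the parameter descriptions below explain each field.
var params = {
  TranscriptionJobName: 'example-transcription-job',    // placeholder job name
  LanguageCode: 'en-US',
  Media: { MediaFileUri: 's3://DOC-EXAMPLE-BUCKET1/media/example-call.mp3' },
  OutputBucketName: 'DOC-EXAMPLE-BUCKET1',              // placeholder bucket
  OutputKey: 'transcripts/example-transcription-job.json',
  ContentRedaction: {
    RedactionType: 'PII',
    RedactionOutput: 'redacted_and_unredacted'          // keep both transcripts
  }
};
transcribeservice.startTranscriptionJob(params, function (err, data) {
  if (err) console.log(err, err.stack);                          // an error occurred
  else console.log(data.TranscriptionJob.TranscriptionJobStatus); // e.g. 'IN_PROGRESS'
});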
Parameters:
-
params
(Object)
(defaults to: {})
—
TranscriptionJobName
— (String
)The name of the job. You can't use the strings "
.
" or "..
" by themselves as the job name. The name must also be unique within an Amazon Web Services account. If you try to create a transcription job with the same name as a previous transcription job, you get aConflictException
error.LanguageCode
— (String
)The language code for the language used in the input media file.
To transcribe speech in Modern Standard Arabic (ar-SA), your audio or video file must be encoded at a sample rate of 16,000 Hz or higher.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
MediaSampleRateHertz
— (Integer
)The sample rate, in Hertz, of the audio track in the input media file.
If you do not specify the media sample rate, Amazon Transcribe determines the sample rate. If you specify the sample rate, it must match the sample rate detected by Amazon Transcribe. In most cases, you should leave the
MediaSampleRateHertz
field blank and let Amazon Transcribe determine the sample rate.MediaFormat
— (String
)The format of the input media file.
Possible values include:"mp3"
"mp4"
"wav"
"flac"
"ogg"
"amr"
"webm"
Media
— (map
)An object that describes the input media for a transcription job.
MediaFileUri
— (String
)The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri
— (String
)The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
OutputBucketName
— (String
)The location where the transcription is stored.
If you set the
OutputBucketName
, Amazon Transcribe puts the transcript in the specified S3 bucket. When you call the GetTranscriptionJob operation, the operation returns this location in theTranscriptFileUri
field. If you enable content redaction, the redacted transcript appears inRedactedTranscriptFileUri
. If you enable content redaction and choose to output an unredacted transcript, that transcript's location still appears in theTranscriptFileUri
. The S3 bucket must have permissions that allow Amazon Transcribe to put files in the bucket. For more information, see Permissions Required for IAM User Roles.You can specify an Amazon Web Services Key Management Service (KMS) key to encrypt the output of your transcription using the
OutputEncryptionKMSKeyId
parameter. If you don't specify a KMS key, Amazon Transcribe uses the default Amazon S3 key for server-side encryption of transcripts that are placed in your S3 bucket.If you don't set the
OutputBucketName
, Amazon Transcribe generates a pre-signed URL, a shareable URL that provides secure access to your transcription, and returns it in theTranscriptFileUri
field. Use this URL to download the transcription.OutputKey
— (String
)You can specify a location in an Amazon S3 bucket to store the output of your transcription job.
If you don't specify an output key, Amazon Transcribe stores the output of your transcription job in the Amazon S3 bucket you specified. By default, the object key is "your-transcription-job-name.json".
You can use output keys to specify the Amazon S3 prefix and file name of the transcription output. For example, specifying the Amazon S3 prefix, "folder1/folder2/", as an output key would lead to the output being stored as "folder1/folder2/your-transcription-job-name.json". If you specify "my-other-job-name.json" as the output key, the object key is changed to "my-other-job-name.json". You can use an output key to change both the prefix and the file name, for example "folder/my-other-job-name.json".
If you specify an output key, you must also specify an S3 bucket in the
OutputBucketName
parameter.OutputEncryptionKMSKeyId
— (String
)The Amazon Resource Name (ARN) of the Amazon Web Services Key Management Service (KMS) key used to encrypt the output of the transcription job. The user calling the
StartTranscriptionJob
operation must have permission to use the specified KMS key.You can use either of the following to identify a KMS key in the current account:
-
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
-
KMS Key Alias: "alias/ExampleAlias"
You can use either of the following to identify a KMS key in the current account or another account:
-
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:region:account ID:key/1234abcd-12ab-34cd-56ef-1234567890ab"
-
ARN of a KMS Key Alias: "arn:aws:kms:region:account-ID:alias/ExampleAlias"
If you don't specify an encryption key, the output of the transcription job is encrypted with the default Amazon S3 key (SSE-S3).
If you specify a KMS key to encrypt your output, you must also specify an output location in the
OutputBucketName
parameter.-
KMSEncryptionContext
— (map<String>
)A map of plain text, non-secret key:value pairs, known as encryption context pairs, that provide an added layer of security for your data.
Settings
— (map
)A
Settings
object that provides optional settings for a transcription job.VocabularyName
— (String
)The name of a vocabulary to use when processing the transcription job.
ShowSpeakerLabels
— (Boolean
)Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the
ShowSpeakerLabels
field to true, you must also set the maximum number of speaker labels in the MaxSpeakerLabels
field.You can't set both
ShowSpeakerLabels
andChannelIdentification
in the same request. If you set both, your request returns aBadRequestException
.MaxSpeakerLabels
— (Integer
)The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the
MaxSpeakerLabels
field, you must set theShowSpeakerLabels
field to true.ChannelIdentification
— (Boolean
)Instructs Amazon Transcribe to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item including the confidence that Amazon Transcribe has in the transcription.
You can't set both
ShowSpeakerLabels
andChannelIdentification
in the same request. If you set both, your request returns aBadRequestException
.ShowAlternatives
— (Boolean
)Determines whether the transcription contains alternative transcriptions. If you set the
ShowAlternatives
field to true, you must also set the maximum number of alternatives to return in theMaxAlternatives
field.MaxAlternatives
— (Integer
)The number of alternative transcriptions that the service should return. If you specify the
MaxAlternatives
field, you must set theShowAlternatives
field to true.VocabularyFilterName
— (String
)The name of the vocabulary filter to use when transcribing the audio. The filter that you specify must have the same language code as the transcription job.
VocabularyFilterMethod
— (String
)Set to
mask
to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove
to remove filtered text from the transcript without using placeholder text. Set to tag
to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag
, the words matching your vocabulary filter are not masked or removed.
Possible values include:
"remove"
"mask"
"tag"
ModelSettings
— (map
)Choose the custom language model you use for your transcription job in this parameter.
LanguageModelName
— (String
)The name of your custom language model.
JobExecutionSettings
— (map
)Provides information about how a transcription job is executed. Use this field to indicate that the job can be queued for deferred execution if the concurrency limit is reached and there are no slots available to immediately run the job.
AllowDeferredExecution
— (Boolean
)Indicates whether a job should be queued by Amazon Transcribe when the concurrent execution limit is exceeded. When the
AllowDeferredExecution
field is true, jobs are queued and executed when the number of executing jobs falls below the concurrent execution limit. If the field is false, Amazon Transcribe returns aLimitExceededException
exception.Note that job queuing is enabled by default for call analytics jobs.
If you specify the
AllowDeferredExecution
field, you must specify theDataAccessRoleArn
field.DataAccessRoleArn
— (String
)The Amazon Resource Name (ARN), in the form
arn:partition:service:region:account-id:resource-type/resource-id
, of a role that has access to the S3 bucket that contains the input files. Amazon Transcribe assumes this role to read queued media files. If you have specified an output S3 bucket for the transcription results, this role should have access to the output bucket as well.If you specify the
AllowDeferredExecution
field, you must specify theDataAccessRoleArn
field.
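A minimal sketch of deferred execution (the job name, bucket, and role ARN are hypothetical placeholders); AllowDeferredExecution requires DataAccessRoleArn:
var params = {
  TranscriptionJobName: 'queued-job-example', /* hypothetical job name */
  LanguageCode: 'en-US',
  Media: { MediaFileUri: 's3://DOC-EXAMPLE-BUCKET/input/audio.flac' },
  JobExecutionSettings: {
    AllowDeferredExecution: true, /* queue the job instead of failing at the concurrency limit */
    DataAccessRoleArn: 'arn:aws:iam::111122223333:role/TranscribeDataAccessRole' /* hypothetical role with access to the input (and output) buckets */
  }
};
transcribeservice.startTranscriptionJob(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});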
ContentRedaction
— (map
)An object that contains the request parameters for content redaction.
RedactionType
— required — (String
)Request parameter that defines the entities to be redacted. The only accepted value is
PII
.
Possible values include:
"PII"
RedactionOutput
— required — (String
)The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose
redacted
Amazon Transcribe outputs only the redacted transcript.When you choose
redacted_and_unredacted
Amazon Transcribe outputs both the redacted and unredacted transcripts.
Possible values include:
"redacted"
"redacted_and_unredacted"
IdentifyLanguage
— (Boolean
)Set this field to
true
to enable automatic language identification. Automatic language identification is disabled by default. You receive aBadRequestException
error if you enter a value for aLanguageCode
.LanguageOptions
— (Array<String>
)An object containing a list of languages that might be present in your collection of audio files. Automatic language identification chooses a language that best matches the source audio from that list.
To transcribe speech in Modern Standard Arabic (ar-SA), your audio or video file must be encoded at a sample rate of 16,000 Hz or higher.
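A minimal automatic language identification sketch; LanguageCode is omitted because entering one alongside IdentifyLanguage returns a BadRequestException (the job name and bucket are hypothetical placeholders):
var params = {
  TranscriptionJobName: 'language-id-example', /* hypothetical job name */
  Media: { MediaFileUri: 's3://DOC-EXAMPLE-BUCKET/input/meeting.mp4' },
  IdentifyLanguage: true, /* let Amazon Transcribe detect the language */
  LanguageOptions: ['en-US', 'es-US', 'fr-CA'] /* restrict identification to these candidates */
};
transcribeservice.startTranscriptionJob(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});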
Subtitles
— (map
)Add subtitles to your batch transcription job.
Formats
— (Array<String>
)Specify the output format for your subtitle file.
Tags
— (Array<map>
)Add tags to an Amazon Transcribe transcription job.
Key
— required — (String
)The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the key is 'Department'.
Value
— required — (String
)The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the value is 'Sales'.
LanguageIdSettings
— (map<map>
)The language identification settings associated with your transcription job. These settings include
VocabularyName
,VocabularyFilterName
, andLanguageModelName
.VocabularyName
— (String
)The name of the vocabulary you want to use when processing your transcription job. The vocabulary you specify must have the same language code as the transcription job; if the languages don't match, the vocabulary won't be applied.
VocabularyFilterName
— (String
)The name of the vocabulary filter you want to use when transcribing your audio. The filter you specify must have the same language code as the transcription job; if the languages don't match, the vocabulary filter won't be applied.
LanguageModelName
— (String
)The name of the language model you want to use when transcribing your audio. The model you specify must have the same language code as the transcription job; if the languages don't match, the language model won't be applied.
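Pulling the remaining request fields together, a sketch that adds subtitles, tags, and per-language identification settings (all names are hypothetical placeholders; LanguageIdSettings is a map keyed by language code):
var params = {
  TranscriptionJobName: 'full-featured-job-example', /* hypothetical job name */
  Media: { MediaFileUri: 's3://DOC-EXAMPLE-BUCKET/input/webinar.mp4' },
  IdentifyLanguage: true,
  LanguageOptions: ['en-US', 'es-US'],
  Subtitles: { Formats: ['vtt', 'srt'] }, /* generate both subtitle formats */
  Tags: [{ Key: 'Department', Value: 'Sales' }], /* tag the transcription job */
  LanguageIdSettings: {
    'en-US': { VocabularyName: 'my-en-vocabulary' }, /* hypothetical vocabulary with a matching language code */
    'es-US': { VocabularyFilterName: 'my-es-filter' } /* hypothetical filter with a matching language code */
  }
};
transcribeservice.startTranscriptionJob(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});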
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
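For example, omitting the callback returns an AWS.Request object that you initiate yourself with send() (the job name and bucket below are hypothetical placeholders):
var params = {
  TranscriptionJobName: 'send-example-job', /* hypothetical job name */
  LanguageCode: 'en-US',
  Media: { MediaFileUri: 's3://DOC-EXAMPLE-BUCKET/input/audio.mp3' }
};
var request = transcribeservice.startTranscriptionJob(params); // no callback supplied
request.send(function(err, data) { // initiate the request
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});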
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:TranscriptionJob
— (map
)An object containing details of the asynchronous transcription job.
TranscriptionJobName
— (String
)The name of the transcription job.
TranscriptionJobStatus
— (String
)The status of the transcription job.
Possible values include:"QUEUED"
"IN_PROGRESS"
"FAILED"
"COMPLETED"
LanguageCode
— (String
)The language code for the input speech.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
MediaSampleRateHertz
— (Integer
)The sample rate, in Hertz, of the audio track in the input media file.
MediaFormat
— (String
)The format of the input media file.
Possible values include:"mp3"
"mp4"
"wav"
"flac"
"ogg"
"amr"
"webm"
Media
— (map
)An object that describes the input media for the transcription job.
MediaFileUri
— (String
)The S3 object location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is:
For example:
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
RedactedMediaFileUri
— (String
)The S3 object location for your redacted output media file. This is only supported for call analytics jobs.
Transcript
— (map
)An object that describes the output of the transcription job.
TranscriptFileUri
— (String
)The S3 object location of the transcript.
Use this URI to access the transcript. If you specified an S3 bucket in the
OutputBucketName
field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.RedactedTranscriptFileUri
— (String
)The S3 object location of the redacted transcript.
Use this URI to access the redacted transcript. If you specified an S3 bucket in the
OutputBucketName
field when you created the job, this is the URI of that bucket. If you chose to store the transcript in Amazon Transcribe, this is a shareable URL that provides secure access to that location.
StartTime
— (Date
)A timestamp that shows when the job started processing.
CreationTime
— (Date
)A timestamp that shows when the job was created.
CompletionTime
— (Date
)A timestamp that shows when the job completed.
FailureReason
— (String
)If the
TranscriptionJobStatus
field isFAILED
, this field contains information about why the job failed.The
FailureReason
field can contain one of the following values:-
Unsupported media format
- The media format specified in theMediaFormat
field of the request isn't valid. See the description of theMediaFormat
field for a list of valid values. -
The media format provided does not match the detected media format
- The media format of the audio file doesn't match the format specified in theMediaFormat
field in the request. Check the media format of your media file and make sure that the two values match. -
Invalid sample rate for audio file
- The sample rate specified in theMediaSampleRateHertz
of the request isn't valid. The sample rate must be between 8,000 and 48,000 Hertz. -
The sample rate provided does not match the detected sample rate
- The sample rate in the audio file doesn't match the sample rate specified in theMediaSampleRateHertz
field in the request. Check the sample rate of your media file and make sure that the two values match. -
Invalid file size: file size too large
- The size of your audio file is larger than Amazon Transcribe can process. For more information, see Limits in the Amazon Transcribe Developer Guide. -
Invalid number of channels: number of channels too large
- Your audio contains more channels than Amazon Transcribe is configured to process. To request additional channels, see Amazon Transcribe Limits in the Amazon Web Services General Reference.
-
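When polling for the job (for example with getTranscriptionJob), you would typically branch on TranscriptionJobStatus and read FailureReason on failure; a brief sketch using a hypothetical job name:
transcribeservice.getTranscriptionJob({ TranscriptionJobName: 'your-transcription-job-name' }, function(err, data) {
  if (err) return console.log(err, err.stack); // an error occurred
  var job = data.TranscriptionJob;
  if (job.TranscriptionJobStatus === 'FAILED') {
    console.log('Job failed:', job.FailureReason); // one of the reasons listed above
  } else if (job.TranscriptionJobStatus === 'COMPLETED') {
    console.log('Transcript at:', job.Transcript.TranscriptFileUri);
  } else {
    console.log('Job status:', job.TranscriptionJobStatus); // QUEUED or IN_PROGRESS
  }
});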
Settings
— (map
)Optional settings for the transcription job. Use these settings to turn on speaker recognition, to set the maximum number of speakers that should be identified and to specify a custom vocabulary to use when processing the transcription job.
VocabularyName
— (String
)The name of a vocabulary to use when processing the transcription job.
ShowSpeakerLabels
— (Boolean
)Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the
ShowSpeakerLabels
field to true, you must also set the maximum number of speaker labels in the MaxSpeakerLabels
field.You can't set both
ShowSpeakerLabels
andChannelIdentification
in the same request. If you set both, your request returns aBadRequestException
.MaxSpeakerLabels
— (Integer
)The maximum number of speakers to identify in the input audio. If there are more speakers in the audio than this number, multiple speakers are identified as a single speaker. If you specify the
MaxSpeakerLabels
field, you must set theShowSpeakerLabels
field to true.ChannelIdentification
— (Boolean
)Instructs Amazon Transcribe to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item including the confidence that Amazon Transcribe has in the transcription.
You can't set both
ShowSpeakerLabels
andChannelIdentification
in the same request. If you set both, your request returns aBadRequestException
.ShowAlternatives
— (Boolean
)Determines whether the transcription contains alternative transcriptions. If you set the
ShowAlternatives
field to true, you must also set the maximum number of alternatives to return in theMaxAlternatives
field.MaxAlternatives
— (Integer
)The number of alternative transcriptions that the service should return. If you specify the
MaxAlternatives
field, you must set theShowAlternatives
field to true.VocabularyFilterName
— (String
)The name of the vocabulary filter to use when transcribing the audio. The filter that you specify must have the same language code as the transcription job.
VocabularyFilterMethod
— (String
)Set to
mask
to remove filtered text from the transcript and replace it with three asterisks ("***") as placeholder text. Set to remove
to remove filtered text from the transcript without using placeholder text. Set to tag
to mark the word in the transcription output that matches the vocabulary filter. When you set the filter method to tag
, the words matching your vocabulary filter are not masked or removed.
Possible values include:
"remove"
"mask"
"tag"
ModelSettings
— (map
)An object containing the details of your custom language model.
LanguageModelName
— (String
)The name of your custom language model.
JobExecutionSettings
— (map
)Provides information about how a transcription job is executed.
AllowDeferredExecution
— (Boolean
)Indicates whether a job should be queued by Amazon Transcribe when the concurrent execution limit is exceeded. When the
AllowDeferredExecution
field is true, jobs are queued and executed when the number of executing jobs falls below the concurrent execution limit. If the field is false, Amazon Transcribe returns aLimitExceededException
exception.Note that job queuing is enabled by default for call analytics jobs.
If you specify the
AllowDeferredExecution
field, you must specify theDataAccessRoleArn
field.DataAccessRoleArn
— (String
)The Amazon Resource Name (ARN), in the form
arn:partition:service:region:account-id:resource-type/resource-id
, of a role that has access to the S3 bucket that contains the input files. Amazon Transcribe assumes this role to read queued media files. If you have specified an output S3 bucket for the transcription results, this role should have access to the output bucket as well.If you specify the
AllowDeferredExecution
field, you must specify theDataAccessRoleArn
field.
ContentRedaction
— (map
)An object that describes content redaction settings for the transcription job.
RedactionType
— required — (String
)Request parameter that defines the entities to be redacted. The only accepted value is
PII
.
Possible values include:
"PII"
RedactionOutput
— required — (String
)The output transcript file stored in either the default S3 bucket or in a bucket you specify.
When you choose
redacted
Amazon Transcribe outputs only the redacted transcript.When you choose
redacted_and_unredacted
Amazon Transcribe outputs both the redacted and unredacted transcripts.
Possible values include:
"redacted"
"redacted_and_unredacted"
IdentifyLanguage
— (Boolean
)A value that shows if automatic language identification was enabled for a transcription job.
LanguageOptions
— (Array<String>
)An object that shows the optional array of languages provided for transcription jobs with automatic language identification enabled.
IdentifiedLanguageScore
— (Float
)A value between zero and one that Amazon Transcribe assigned to the language that it identified in the source audio. Larger values indicate that Amazon Transcribe has higher confidence in the language it identified.
Tags
— (Array<map>
)A key:value pair assigned to a given transcription job.
Key
— required — (String
)The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the key is 'Department'.
Value
— required — (String
)The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the value is 'Sales'.
Subtitles
— (map
)Generate subtitles for your batch transcription job.
Formats
— (Array<String>
)Specify the output format for your subtitle file; if you select both SRT and VTT formats, two output files are generated.
SubtitleFileUris
— (Array<String>
)Choose the output location for your subtitle file. This location must be an S3 bucket.
LanguageIdSettings
— (map<map>
)Language-specific settings that can be specified when language identification is enabled for your transcription job. These settings include
VocabularyName
,VocabularyFilterName
, andLanguageModelName
.VocabularyName
— (String
)The name of the vocabulary you want to use when processing your transcription job. The vocabulary you specify must have the same language code as the transcription job; if the languages don't match, the vocabulary won't be applied.
VocabularyFilterName
— (String
)The name of the vocabulary filter you want to use when transcribing your audio. The filter you specify must have the same language code as the transcription job; if the languages don't match, the vocabulary filter won't be applied.
LanguageModelName
— (String
)The name of the language model you want to use when transcribing your audio. The model you specify must have the same language code as the transcription job; if the languages don't match, the language model won't be applied.
-
(AWS.Response)
—
Returns:
tagResource(params = {}, callback) ⇒ AWS.Request
Tags an Amazon Transcribe resource with the given list of tags.
Service Reference:
Examples:
Calling the tagResource operation
var params = {
ResourceArn: 'STRING_VALUE', /* required */
Tags: [ /* required */
{
Key: 'STRING_VALUE', /* required */
Value: 'STRING_VALUE' /* required */
},
/* more items */
]
};
transcribeservice.tagResource(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ResourceArn
— (String
)The Amazon Resource Name (ARN) of the Amazon Transcribe resource you want to tag. ARNs have the format
arn:partition:service:region:account-id:resource-type/resource-id
(for example,arn:aws:transcribe:us-east-1:account-id:transcription-job/your-job-name
). Valid values forresource-type
are:transcription-job
,medical-transcription-job
,vocabulary
,medical-vocabulary
,vocabulary-filter
, andlanguage-model
.Tags
— (Array<map>
)The tags you are assigning to a given Amazon Transcribe resource.
Key
— required — (String
)The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the key is 'Department'.
Value
— required — (String
)The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag ‘Department’:’Sales’, the value is 'Sales'.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs.
-
(AWS.Response)
—
Returns:
untagResource(params = {}, callback) ⇒ AWS.Request
Removes specified tags from a specified Amazon Transcribe resource.
Service Reference:
Examples:
Calling the untagResource operation
var params = {
ResourceArn: 'STRING_VALUE', /* required */
TagKeys: [ /* required */
'STRING_VALUE',
/* more items */
]
};
transcribeservice.untagResource(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
ResourceArn
— (String
)The Amazon Resource Name (ARN) of the Amazon Transcribe resource you want to remove tags from. ARNs have the format
arn:partition:service:region:account-id:resource-type/resource-id
(for example,arn:aws:transcribe:us-east-1:account-id:transcription-job/your-job-name
). Valid values forresource-type
are:transcription-job
,medical-transcription-job
,vocabulary
,medical-vocabulary
,vocabulary-filter
, andlanguage-model
.TagKeys
— (Array<String>
)A list of tag keys you want to remove from a specified Amazon Transcribe resource.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs.
-
(AWS.Response)
—
Returns:
updateCallAnalyticsCategory(params = {}, callback) ⇒ AWS.Request
Updates the call analytics category with new values. The UpdateCallAnalyticsCategory
operation overwrites all of the existing information with the values that you provide in the request.
Service Reference:
Examples:
Calling the updateCallAnalyticsCategory operation
var params = {
CategoryName: 'STRING_VALUE', /* required */
Rules: [ /* required */
{
InterruptionFilter: {
AbsoluteTimeRange: {
EndTime: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartTime: 'NUMBER_VALUE'
},
Negate: true || false,
ParticipantRole: AGENT | CUSTOMER,
RelativeTimeRange: {
EndPercentage: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartPercentage: 'NUMBER_VALUE'
},
Threshold: 'NUMBER_VALUE'
},
NonTalkTimeFilter: {
AbsoluteTimeRange: {
EndTime: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartTime: 'NUMBER_VALUE'
},
Negate: true || false,
RelativeTimeRange: {
EndPercentage: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartPercentage: 'NUMBER_VALUE'
},
Threshold: 'NUMBER_VALUE'
},
SentimentFilter: {
Sentiments: [ /* required */
POSITIVE | NEGATIVE | NEUTRAL | MIXED,
/* more items */
],
AbsoluteTimeRange: {
EndTime: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartTime: 'NUMBER_VALUE'
},
Negate: true || false,
ParticipantRole: AGENT | CUSTOMER,
RelativeTimeRange: {
EndPercentage: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartPercentage: 'NUMBER_VALUE'
}
},
TranscriptFilter: {
Targets: [ /* required */
'STRING_VALUE',
/* more items */
],
TranscriptFilterType: EXACT, /* required */
AbsoluteTimeRange: {
EndTime: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartTime: 'NUMBER_VALUE'
},
Negate: true || false,
ParticipantRole: AGENT | CUSTOMER,
RelativeTimeRange: {
EndPercentage: 'NUMBER_VALUE',
First: 'NUMBER_VALUE',
Last: 'NUMBER_VALUE',
StartPercentage: 'NUMBER_VALUE'
}
}
},
/* more items */
]
};
transcribeservice.updateCallAnalyticsCategory(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
CategoryName
— (String
)The name of the analytics category to update. The name is case sensitive. If you try to update a call analytics category with the same name as a previous category, you will receive a
ConflictException
error.Rules
— (Array<map>
)The rules used for the updated analytics category. The rules that you provide in this field replace the ones that are currently being used.
NonTalkTimeFilter
— (map
)A condition for a time period when neither the customer nor the agent was talking.
Threshold
— (Integer
)The duration of the period when neither the customer nor agent was talking.
AbsoluteTimeRange
— (map
)An object you can use to specify a time range (in milliseconds) for when no one is talking. For example, you could specify a time period between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where there was silence. For example, you can specify the first half of the call. You can also specify the period of time from halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
Negate
— (Boolean
)Set to
TRUE
to look for a time period when people were talking.
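As a brief sketch, a single rule that flags long silences in the first half of the call might look like the following (the category name is hypothetical, and the Threshold unit is assumed to be milliseconds to match the time range fields):
var params = {
  CategoryName: 'my-analytics-category', /* hypothetical category name */
  Rules: [
    {
      NonTalkTimeFilter: {
        Threshold: 30000, /* assumed milliseconds of non-talk time */
        RelativeTimeRange: { StartPercentage: 0, EndPercentage: 50 }, /* first half of the call */
        Negate: false
      }
    }
  ]
};
transcribeservice.updateCallAnalyticsCategory(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});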
InterruptionFilter
— (map
)A condition for a time period when either the customer or agent was interrupting the other person.
Threshold
— (Integer
)The duration of the interruption.
ParticipantRole
— (String
)Indicates whether the caller or customer was interrupting.
Possible values include:"AGENT"
"CUSTOMER"
AbsoluteTimeRange
— (map
)An object you can use to specify a time range (in milliseconds) for when you'd want to find the interruption. For example, you could search for an interruption between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where there was an interruption. For example, you can specify the first half of the call. You can also specify the period of time from halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
Negate
— (Boolean
)Set to
TRUE
to look for a time period where there was no interruption.
TranscriptFilter
— (map
)A condition that catches particular words or phrases based on an exact match. For example, if you set the phrase "I want to speak to the manager", only that exact phrase will be returned.
TranscriptFilterType
— required — (String
)Matches the phrase to the transcription output in a word-for-word fashion. For example, if you specify the phrase "I want to speak to the manager," Amazon Transcribe attempts to match that specific phrase to the transcription.
Possible values include:"EXACT"
AbsoluteTimeRange
— (map
)A time range, set in milliseconds, between two points in the call.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where you would like to apply a filter. For example, you can specify the first half of the call. You can also specify the period of time from halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole
— (String
)Determines whether the customer or the agent is speaking the phrases that you've specified.
Possible values include:"AGENT"
"CUSTOMER"
Negate
— (Boolean
)If
TRUE
, the rule that you specify is applied to everything except for the phrases that you specify.Targets
— required — (Array<String>
)The phrases that you're specifying for the transcript filter to match.
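A brief sketch of a rule built on a TranscriptFilter, matching an exact phrase spoken by the customer (the category name is hypothetical):
var params = {
  CategoryName: 'my-analytics-category', /* hypothetical category name */
  Rules: [
    {
      TranscriptFilter: {
        TranscriptFilterType: 'EXACT', /* word-for-word matching */
        Targets: ['I want to speak to the manager'], /* phrase to match */
        ParticipantRole: 'CUSTOMER', /* only match the customer's speech */
        Negate: false
      }
    }
  ]
};
transcribeservice.updateCallAnalyticsCategory(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});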
SentimentFilter
— (map
)A condition that is applied to a particular customer sentiment.
Sentiments
— required — (Array<String>
)An array that enables you to specify sentiments for the customer or agent. You can specify one or more values.
AbsoluteTimeRange
— (map
)The time range, measured in milliseconds, of the sentiment.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)The time range, set in percentages, that corresponds to the proportion of the call.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole
— (String
)A value that determines whether the sentiment belongs to the customer or the agent.
Possible values include:"AGENT"
"CUSTOMER"
Negate
— (Boolean
)Set to
TRUE
to look for sentiments that weren't specified in the request.
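A brief sketch of a rule built on a SentimentFilter, looking for negative customer sentiment in the last quarter of the call (the category name is hypothetical):
var params = {
  CategoryName: 'my-analytics-category', /* hypothetical category name */
  Rules: [
    {
      SentimentFilter: {
        Sentiments: ['NEGATIVE'], /* one or more of POSITIVE, NEGATIVE, NEUTRAL, MIXED */
        ParticipantRole: 'CUSTOMER',
        RelativeTimeRange: { StartPercentage: 75, EndPercentage: 100 } /* last quarter of the call */
      }
    }
  ]
};
transcribeservice.updateCallAnalyticsCategory(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});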
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. Thedata
object has the following properties:CategoryProperties
— (map
)The attributes describing the analytics category. You can see information such as the rules that you've used to update the category and when the category was originally created.
CategoryName
— (String
)The name of the call analytics category.
Rules
— (Array<map>
)The rules used to create a call analytics category.
NonTalkTimeFilter
— (map
)A condition for a time period when neither the customer nor the agent was talking.
Threshold
— (Integer
)The duration of the period when neither the customer nor agent was talking.
AbsoluteTimeRange
— (map
)An object you can use to specify a time range (in milliseconds) for when no one is talking. For example, you could specify a time period between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where there was silence. For example, you can specify the first half of the call. You can also specify the period of time from halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
Negate
— (Boolean
)Set to
TRUE
to look for a time period when people were talking.
InterruptionFilter
— (map
)A condition for a time period when either the customer or agent was interrupting the other person.
Threshold
— (Integer
)The duration of the interruption.
ParticipantRole
— (String
)Indicates whether the caller or customer was interrupting.
Possible values include:"AGENT"
"CUSTOMER"
AbsoluteTimeRange
— (map
)An object you can use to specify a time range (in milliseconds) for when you'd want to find the interruption. For example, you could search for an interruption between the 30,000 millisecond mark and the 45,000 millisecond mark. You could also specify the time period as the first 15,000 milliseconds or the last 15,000 milliseconds.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where there was an interruption. For example, you can specify the first half of the call. You can also specify the period of time from halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
Negate
— (Boolean
)Set to
TRUE
to look for a time period where there was no interruption.
TranscriptFilter
— (map
)A condition that catches particular words or phrases based on an exact match. For example, if you set the phrase "I want to speak to the manager", only that exact phrase will be returned.
TranscriptFilterType
— required — (String
)Matches the phrase to the transcription output in a word-for-word fashion. For example, if you specify the phrase "I want to speak to the manager," Amazon Transcribe attempts to match that specific phrase to the transcription.
Possible values include:"EXACT"
AbsoluteTimeRange
— (map
)A time range, set in milliseconds, between two points in the call.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)An object that allows percentages to specify the proportion of the call where you would like to apply a filter. For example, you can specify the first half of the call. You can also specify the period of time from halfway through to three-quarters of the way through the call. Because the length of conversation can vary between calls, you can apply relative time ranges across all calls.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole
— (String
)Determines whether the customer or the agent is speaking the phrases that you've specified.
Possible values include:"AGENT"
"CUSTOMER"
Negate
— (Boolean
)If
TRUE
, the rule that you specify is applied to everything except for the phrases that you specify.Targets
— required — (Array<String>
)The phrases that you're specifying for the transcript filter to match.
SentimentFilter
— (map
)A condition that is applied to a particular customer sentiment.
Sentiments
— required — (Array<String>
)An array that enables you to specify sentiments for the customer or agent. You can specify one or more values.
AbsoluteTimeRange
— (map
)The time range, measured in milliseconds, of the sentiment.
StartTime
— (Integer
)A value that indicates the beginning of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
EndTime
— (Integer
)A value that indicates the end of the time range in milliseconds. To set an absolute time range, you must specify a start time and an end time. For example, if you specify the following values:
-
StartTime - 10000
-
EndTime - 50000
The time range is set between 10,000 milliseconds and 50,000 milliseconds into the call.
-
First
— (Integer
)A time range from the beginning of the call to the value that you've specified. For example, if you specify 100000, the time range is set to the first 100,000 milliseconds of the call.
Last
— (Integer
)A time range from the value that you've specified to the end of the call. For example, if you specify 100000, the time range is set to the last 100,000 milliseconds of the call.
RelativeTimeRange
— (map
)The time range, set in percentages, that corresponds to the proportion of the call.
StartPercentage
— (Integer
)A value that indicates the percentage of the beginning of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
EndPercentage
— (Integer
)A value that indicates the percentage of the end of the time range. To set a relative time range, you must specify a start percentage and an end percentage. For example, if you specify the following values:
-
StartPercentage - 10
-
EndPercentage - 50
This looks at the time range starting from 10% of the way into the call to 50% of the way through the call. For a call that lasts 100,000 milliseconds, this example range would apply from the 10,000 millisecond mark to the 50,000 millisecond mark.
-
First
— (Integer
)A range that takes the portion of the call up to the time in milliseconds set by the value that you've specified. For example, if you specify
120000
, the time range is set for the first 120,000 milliseconds of the call.Last
— (Integer
)A range that takes the portion of the call from the time in milliseconds set by the value that you've specified to the end of the call. For example, if you specify
120000
, the time range is set for the last 120,000 milliseconds of the call.
ParticipantRole
— (String
)A value that determines whether the sentiment belongs to the customer or the agent.
Possible values include:"AGENT"
"CUSTOMER"
Negate
— (Boolean
)Set to
TRUE
to look for sentiments that weren't specified in the request.
CreateTime
— (Date
)A timestamp that shows when the call analytics category was created.
LastUpdateTime
— (Date
)A timestamp that shows when the call analytics category was most recently updated.
-
(AWS.Response)
—
Returns:
updateMedicalVocabulary(params = {}, callback) ⇒ AWS.Request
Updates a vocabulary with new values that you provide in a different text file from the one you used to create the vocabulary. The UpdateMedicalVocabulary
operation overwrites all of the existing information with the values that you provide in the request.
Service Reference:
Examples:
Calling the updateMedicalVocabulary operation
var params = {
LanguageCode: af-ZA | ar-AE | ar-SA | cy-GB | da-DK | de-CH | de-DE | en-AB | en-AU | en-GB | en-IE | en-IN | en-US | en-WL | es-ES | es-US | fa-IR | fr-CA | fr-FR | ga-IE | gd-GB | he-IL | hi-IN | id-ID | it-IT | ja-JP | ko-KR | ms-MY | nl-NL | pt-BR | pt-PT | ru-RU | ta-IN | te-IN | tr-TR | zh-CN | zh-TW | th-TH | en-ZA | en-NZ, /* required */
VocabularyName: 'STRING_VALUE', /* required */
VocabularyFileUri: 'STRING_VALUE'
};
transcribeservice.updateMedicalVocabulary(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
VocabularyName
— (String
)The name of the vocabulary to update. The name is case sensitive. If you try to update a vocabulary with the same name as a vocabulary you've already made, you get a
ConflictException
error.LanguageCode
— (String
)The language code of the language used for the entries in the updated vocabulary. US English (en-US) is the only valid language code in Amazon Transcribe Medical.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
VocabularyFileUri
— (String
)The location in Amazon S3 of the text file that contains your custom vocabulary. The URI must be in the same Amazon Web Services Region as the resource that you are calling. The following is the format for a URI:
https://s3.<aws-region>.amazonaws.com/<bucket-name>/<keyprefix>/<objectkey>
For example:
https://s3.us-east-1.amazonaws.com/AWSDOC-EXAMPLE-BUCKET/vocab.txt
For more information about Amazon S3 object names, see Object Keys in the Amazon S3 Developer Guide.
For more information about custom vocabularies in Amazon Transcribe Medical, see Medical Custom Vocabularies.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. The data
object has the following properties:
VocabularyName
— (String
)The name of the updated vocabulary.
LanguageCode
— (String
)The language code for the language of the text file used to update the custom vocabulary. US English (en-US) is the only language supported in Amazon Transcribe Medical.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
LastModifiedTime
— (Date
)The date and time that the vocabulary was updated.
VocabularyState
— (String
)The processing state of the update to the vocabulary. When the
VocabularyState
field is READY
, the vocabulary is ready to be used in a StartMedicalTranscriptionJob
request.
Possible values include:"PENDING"
"READY"
"FAILED"
-
(AWS.Response)
—
Returns:
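As a usage sketch (not part of the reference), the following shows one way the pieces above might fit together: upload a replacement vocabulary file to Amazon S3, call updateMedicalVocabulary, and poll getMedicalVocabulary until VocabularyState leaves PENDING. The bucket, key, and vocabulary name are hypothetical placeholders.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var transcribeservice = new AWS.TranscribeService();

// 1. Upload the replacement vocabulary text file (placeholder bucket/key).
s3.putObject({
  Bucket: 'DOC-EXAMPLE-BUCKET',
  Key: 'vocab.txt',
  Body: 'anterolateral\nbasophilic\n' // one entry per line
}, function(err) {
  if (err) return console.log(err, err.stack);

  // 2. Overwrite the existing medical vocabulary with the new file.
  transcribeservice.updateMedicalVocabulary({
    VocabularyName: 'example-medical-vocabulary', // placeholder, must already exist
    LanguageCode: 'en-US',                        // only language supported by Transcribe Medical
    VocabularyFileUri: 'https://s3.us-east-1.amazonaws.com/DOC-EXAMPLE-BUCKET/vocab.txt'
  }, function(err, data) {
    if (err) return console.log(err, err.stack);

    // 3. Poll until processing finishes; READY means the vocabulary can be
    //    used in a StartMedicalTranscriptionJob request, FAILED means it cannot.
    var timer = setInterval(function() {
      transcribeservice.getMedicalVocabulary({ VocabularyName: data.VocabularyName },
        function(err, vocab) {
          if (err) { clearInterval(timer); return console.log(err, err.stack); }
          if (vocab.VocabularyState !== 'PENDING') {
            clearInterval(timer);
            console.log('VocabularyState:', vocab.VocabularyState);
          }
        });
    }, 10000);
  });
});
A production version would add a retry limit and backoff rather than polling on a fixed interval indefinitely.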
updateVocabulary(params = {}, callback) ⇒ AWS.Request
Updates an existing vocabulary with new values. The UpdateVocabulary
operation overwrites all of the existing information with the values that you provide in the request.
Service Reference:
Examples:
Calling the updateVocabulary operation
var params = {
LanguageCode: af-ZA | ar-AE | ar-SA | cy-GB | da-DK | de-CH | de-DE | en-AB | en-AU | en-GB | en-IE | en-IN | en-US | en-WL | es-ES | es-US | fa-IR | fr-CA | fr-FR | ga-IE | gd-GB | he-IL | hi-IN | id-ID | it-IT | ja-JP | ko-KR | ms-MY | nl-NL | pt-BR | pt-PT | ru-RU | ta-IN | te-IN | tr-TR | zh-CN | zh-TW | th-TH | en-ZA | en-NZ, /* required */
VocabularyName: 'STRING_VALUE', /* required */
Phrases: [
'STRING_VALUE',
/* more items */
],
VocabularyFileUri: 'STRING_VALUE'
};
transcribeservice.updateVocabulary(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
VocabularyName
— (String
)The name of the vocabulary to update. The name is case sensitive. If you try to update a vocabulary with the same name as a previous vocabulary, you will receive a
ConflictException
error.
LanguageCode
— (String
)The language code of the vocabulary entries. For a list of languages and their corresponding language codes, see What Is Amazon Transcribe? in the Amazon Transcribe Developer Guide.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
Phrases
— (Array<String>
)An array of strings containing the vocabulary entries.
VocabularyFileUri
— (String
)The S3 location of the text file that contains the definition of the custom vocabulary. The URI must be in the same region as the API endpoint that you are calling. The general form is
https://s3.<aws-region>.amazonaws.com/<bucket-name>/<keyprefix>/<objectkey>
For example:
https://s3.us-east-1.amazonaws.com/AWSDOC-EXAMPLE-BUCKET/vocab.txt
For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.
For more information about custom vocabularies, see Custom Vocabularies.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. The data
object has the following properties:
VocabularyName
— (String
)The name of the vocabulary that was updated.
LanguageCode
— (String
)The language code of the vocabulary entries.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
LastModifiedTime
— (Date
)The date and time that the vocabulary was updated.
VocabularyState
— (String
)The processing state of the vocabulary. When the
VocabularyState
field contains READY
, the vocabulary is ready to be used in a StartTranscriptionJob
request.
Possible values include:"PENDING"
"READY"
"FAILED"
-
(AWS.Response)
—
Returns:
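A brief, hypothetical sketch of the inline alternative: for a small vocabulary you can pass the entries in Phrases instead of uploading a file (supply either Phrases or VocabularyFileUri, not both). The vocabulary name below is a placeholder, and the promise form shown is the standard AWS.Request.promise() interface.
// Hypothetical example: replace all entries of an existing custom vocabulary
// with an inline phrase list, using the promise interface instead of a callback.
transcribeservice.updateVocabulary({
  VocabularyName: 'example-vocabulary', // placeholder, must already exist
  LanguageCode: 'en-US',
  Phrases: ['Los-Angeles', 'Eva-Maria', 'A.B.C.'] // overwrites ALL existing entries
}).promise()
  .then(function(data) {
    // VocabularyState is PENDING while the update is processed.
    console.log(data.VocabularyName, data.VocabularyState);
  })
  .catch(function(err) {
    console.log(err, err.stack);
  });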
updateVocabularyFilter(params = {}, callback) ⇒ AWS.Request
Updates a vocabulary filter with a new list of filtered words.
Service Reference:
Examples:
Calling the updateVocabularyFilter operation
var params = {
VocabularyFilterName: 'STRING_VALUE', /* required */
VocabularyFilterFileUri: 'STRING_VALUE',
Words: [
'STRING_VALUE',
/* more items */
]
};
transcribeservice.updateVocabularyFilter(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
});
Parameters:
-
params
(Object)
(defaults to: {})
—
VocabularyFilterName
— (String
)The name of the vocabulary filter to update. If you try to update a vocabulary filter with the same name as another vocabulary filter, you get a
ConflictException
error.
Words
— (Array<String>
)The words to use in the vocabulary filter. Only use characters from the character set defined for custom vocabularies. For a list of character sets, see Character Sets for Custom Vocabularies.
If you provide a list of words in the
Words
parameter, you can't use the VocabularyFilterFileUri
parameter.
VocabularyFilterFileUri
— (String
)The Amazon S3 location of a text file used as input to create the vocabulary filter. Only use characters from the character set defined for custom vocabularies. For a list of character sets, see Character Sets for Custom Vocabularies.
The specified file must be less than 50 KB of UTF-8 characters.
If you provide the location of a list of words in the
VocabularyFilterFileUri
parameter, you can't use the Words
parameter.
Callback (callback):
-
function(err, data) { ... }
Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.
Context (this):
-
(AWS.Response)
—
the response object containing error, data properties, and the original request object.
Parameters:
-
err
(Error)
—
the error object returned from the request. Set to
null
if the request is successful. -
data
(Object)
—
the de-serialized data returned from the request. Set to
null
if a request error occurs. The data
object has the following properties:
VocabularyFilterName
— (String
)The name of the updated vocabulary filter.
LanguageCode
— (String
)The language code of the words in the vocabulary filter.
Possible values include:"af-ZA"
"ar-AE"
"ar-SA"
"cy-GB"
"da-DK"
"de-CH"
"de-DE"
"en-AB"
"en-AU"
"en-GB"
"en-IE"
"en-IN"
"en-US"
"en-WL"
"es-ES"
"es-US"
"fa-IR"
"fr-CA"
"fr-FR"
"ga-IE"
"gd-GB"
"he-IL"
"hi-IN"
"id-ID"
"it-IT"
"ja-JP"
"ko-KR"
"ms-MY"
"nl-NL"
"pt-BR"
"pt-PT"
"ru-RU"
"ta-IN"
"te-IN"
"tr-TR"
"zh-CN"
"zh-TW"
"th-TH"
"en-ZA"
"en-NZ"
LastModifiedTime
— (Date
)The date and time that the vocabulary filter was updated.
-
(AWS.Response)
—
Returns:
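To close out the update operations, here is a minimal, hypothetical sketch of updateVocabularyFilter with an inline word list; Words and VocabularyFilterFileUri are mutually exclusive, so only one appears. The filter name and entries are placeholders.
// Hypothetical example: replace the contents of an existing vocabulary filter.
transcribeservice.updateVocabularyFilter({
  VocabularyFilterName: 'example-filter',             // placeholder, must already exist
  Words: ['placeholder-word-1', 'placeholder-word-2'] // entries to filter from transcripts
}, function(err, data) {
  if (err) console.log(err, err.stack);               // an error occurred
  else console.log(data.VocabularyFilterName, data.LastModifiedTime);
});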